Criteria currently used to compare GP practices' annual cancer diagnosis rates are misleading and should be replaced, according to findings by researchers at the University of Aberdeen.
The research shows that most of the variation between practices’ urgent cancer referral figures is explained by differences in the types of cancer their patients present with, rather than by the poorer GP performance that is often reported.
The authors conclude that more ‘appropriate and robust’ methods need to be developed to fairly compare GP practices on their performance in diagnosing cancer.
The study, published in the British Journal of Cancer, focused on data from more than 950,000 cancer cases from 8,303 general practices in NHS England over four years. It followed on from a medical student study which found similar results in more than 10,500 cancer cases from 77 general practices in NHS Grampian in Scotland.
Currently in England, figures comparing cancer detection rates between different GP practices are released annually and feature prominently in the media, usually suggesting there is a “postcode lottery” when it comes to diagnosis. The current research shows this is almost certainly not true.
“There are a number of reasons why comparing GP practices using the current criteria is not appropriate,” said paper lead author Dr Peter Murchie, a Senior Lecturer in Primary Care from the University of Aberdeen.
“Current reporting is based on referral data from a single year. An average-sized GP practice of 6,000 patients and four GPs will have fewer than 30 new cancer cases each year, which is a small number on which to base such big comparisons. More importantly, the current method has a major flaw because it assumes that all cancers are equally easy or difficult to diagnose. We know this isn’t the case: some cancers can be straightforward to diagnose – for instance, a woman with a typical breast cancer lump – while others are much more difficult, such as when symptoms are vague or initial tests are normal.
“Current national guidelines make it clear when GPs should or should not refer a patient urgently for suspected cancer. If GPs follow these guidelines properly, how they are reported as performing will depend on which symptoms and cancers their patients have, and not on how good GPs have been at spotting cancer.”
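To see why roughly 30 cases a year leaves so much to chance, the sketch below runs a minimal toy simulation. It is illustrative only and not the authors’ analysis: it assumes every practice is equally good at spotting cancer, with a hypothetical true detection rate of 0.5 and 1,000 practices, figures chosen purely for demonstration.

# Illustrative sketch only, not the study's method: a toy model in which every
# practice has the same true urgent-referral detection rate (0.5, an assumed
# figure) and sees about 30 new cancers a year, as in the quote above.
import random

random.seed(1)

TRUE_DETECTION_RATE = 0.5   # assumed identical for every practice
CASES_PER_YEAR = 30         # roughly what an average-sized practice sees
N_PRACTICES = 1000          # illustrative number of practices

def observed_rate(n_cases: int, p: float) -> float:
    """Fraction of cancers picked up via urgent referral in one simulated year."""
    detected = sum(random.random() < p for _ in range(n_cases))
    return detected / n_cases

rates = sorted(observed_rate(CASES_PER_YEAR, TRUE_DETECTION_RATE)
               for _ in range(N_PRACTICES))

# Even though every simulated practice is equally good, the bottom tenth of
# single-year figures looks far 'worse' than the top tenth, purely because the
# case numbers are small.
print(f"bottom decile cut-off : {rates[N_PRACTICES // 10]:.2f}")
print(f"median                : {rates[N_PRACTICES // 2]:.2f}")
print(f"top decile cut-off    : {rates[9 * N_PRACTICES // 10]:.2f}")

In this toy set-up, observed single-year rates routinely swing by around ten percentage points either side of the true value even though nothing about the practices differs, which is the element of chance the researchers describe.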
The study examined year-to-year variation for practices and found that how a practice performed in referring suspected cancer in one year had little bearing on how it performed the next. The researchers then examined what happened when several years of data were pooled together. Finally, they used national data on differences between cancers to examine how likely it was that a GP practice reported as being in the bottom ten percent of practices was there because it had more difficult cases, rather than because of poor performance.
Dr Murchie added: “When we examined data over a longer period of time than one year, we found that performance differences between practices became much smaller. With only one year’s data, we estimate that four out of every five average-sized practices which are reported as performing poorly will be incorrectly labelled. Measures from single years of data are misleading and should not be publicly reported.”
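The effect of pooling that Dr Murchie describes can be illustrated by extending the same hypothetical model: comparing the chance-driven spread in observed rates when each practice contributes roughly one year of cases (30) with roughly four years pooled (120). The numbers are assumptions for illustration, not figures from the paper.

# Sketch continuing the toy model above: identical practices, observed detection
# rates computed from ~1 year (30 cases) versus ~4 years pooled (120 cases).
import random
import statistics

random.seed(2)

TRUE_DETECTION_RATE = 0.5
N_PRACTICES = 1000

def simulated_rates(n_cases: int) -> list:
    """Observed detection rates for identical practices, each seeing n_cases."""
    return [
        sum(random.random() < TRUE_DETECTION_RATE for _ in range(n_cases)) / n_cases
        for _ in range(N_PRACTICES)
    ]

for label, n_cases in [("1 year (~30 cases) ", 30), ("4 years (~120 cases)", 120)]:
    spread = statistics.stdev(simulated_rates(n_cases))
    print(f"{label}: spread (standard deviation) of observed rates = {spread:.3f}")

# In this toy model, pooling four years roughly halves the chance-driven spread,
# so fewer practices land in the 'bottom ten percent' through bad luck alone.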
Dr Murchie suggested that there are ways that future reporting could be improved. “We suggest greater emphasis on whether cases of suspected cancer were referred according to guidelines or not. While this would still need data pooled over several years for most practices, it would reduce the element of chance before publicly reporting GPs’ performance.”