Cancer screening has never been shown to “save lives” as advocates claim, experts argue in the British Medical Journal. Such claims rest on reductions in disease-specific mortality rather than overall mortality, say Dr. Vinay Prasad, an assistant professor at Oregon Health and Science University, and his colleagues. They argue that overall mortality should be the benchmark against which screening is judged, and they call for higher standards of evidence for cancer screening.
There are two chief reasons why cancer screening might reduce disease-specific mortality without significantly reducing overall mortality, write the authors.
First, studies may be underpowered to detect a small overall mortality benefit.
Second, reductions in disease-specific mortality may be offset by deaths due to the downstream effects of screening. Such “off-target deaths” are particularly likely with screening tests prone to false positive results (abnormal results that turn out to be normal) and to overdiagnosis of harmless cancers that may never have caused symptoms, they explain. For example, prostate cancer testing yields numerous false positive results, which contribute to more than 1 million prostate biopsies a year; these biopsies, in turn, have been linked to serious harms, including death.
Men diagnosed with prostate cancer are also more likely to have a heart attack or die by suicide in the year after diagnosis, or to die of complications of treatment for cancers that would never have harmed them.
Data have shown that the public overestimates the benefits of screening and underestimates its harms, the researchers write. For instance, in one study, 68% of women thought that breast screening would reduce their risk of developing breast cancer; 62% thought that screening at least halved the rate of breast cancer; and 75% thought that 10 years of screening would prevent 10 breast cancer deaths per 1,000 women.
The authors point out that the most recent Cochrane review of prostate-specific antigen (PSA) screening trials “failed to show a reduction in disease-specific death,” while their mammography review “did not show reduced breast cancer deaths when adequately randomized trials were analyzed.”
The authors say that investing in large trials that can determine overall mortality is “worth the expense compared with the continued cost of supporting widespread screening campaigns without knowing whether they truly benefit society.”
They acknowledge that political will, financial resources, and public perception “are common hurdles in building support for resource-intensive scientific endeavors, and developing consensus on these matters will take time and effort.”
They call on health care providers “to be frank about the limitations of screening,” and ask for higher standards of evidence “to enable rational, shared decision making between doctors and patients.”
In an accompanying editorial, Dr. Gerd Gigerenzer argues that “rather than pouring resources into ‘megatrials’ with a small chance of detecting a minimal overall mortality reduction, at the additional cost of harming large numbers of patients, we should invest in transparent information in the first place.”
He explains that even while the effect of screening on overall mortality remains uncertain, people can be given useful tools to support informed decision making, adding that “it is time to change communication about cancer screening from dodgy persuasion into something straightforward.”
Useful tools, such as fact boxes, can illustrate the harms associated with mammography screening, for example, by reporting all measures of mortality. “The harms are specified numerically so that an informed decision about screening is possible. Every article and pamphlet should provide a fact-box summary to facilitate informed decisions,” he concludes.
Source: Medical Xpress; January 7, 2016.