Weighing the risks and benefits of any treatment to decide whether it is worth pursuing takes a kind of knowledge that most in health care lack.
One of the most effective things that humans can do to ensure a long life at high function is prevention. Preventing a problem before it occurs is not only far more clinically effective, but also usually costs far less.
One key element in prevention, though, is detection. The aim is to identify people who are at high risk for a disease, where a small early intervention or risk modification can produce large results further down the road.
Screening in support of prevention requires two elements:
- An effective means to identify those at risk
- An effective intervention
The second point is critical. It does little good to identify future problems if you have no tools that can change the course of the illness. None of the screening exams that we use to identify early risk work perfectly. They fail in two fundamental ways:
- Sometimes they fail to identify those who do have increased risk (false negatives).
- Sometimes they flag someone as being at risk when he really is not (false positives).
This second category turns out to be really important. Nearly always, the “interventions” we apply to ameliorate early risk are themselves risky. When applied to people who never had the disease risk in the first place, they do active harm, which counts against the good that effective detection and intervention could otherwise achieve.
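To see how lopsided this can get, here is an illustration with made-up numbers (the population size, prevalence, and test accuracy are all hypothetical): even a fairly accurate screen, applied where the condition is rare, flags far more healthy people than sick ones.

```python
# Illustrative only: population size, prevalence, and test accuracy are made up.
population = 100_000
prevalence = 0.005                 # 0.5% truly have the condition
sensitivity = 0.90                 # the screen catches 90% of real cases
specificity = 0.95                 # and clears 95% of healthy people

sick = round(population * prevalence)                       # 500 true cases
true_pos = round(sensitivity * sick)                        # 450 correctly flagged
false_neg = sick - true_pos                                 # 50 missed
false_pos = round((1 - specificity) * (population - sick))  # 4,975 wrongly flagged

print(true_pos, false_neg, false_pos)
```

With these numbers, more than nine out of ten positives are false, and each of those people is now a candidate for a risky “intervention” aimed at a disease risk they never had.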
Robust body of science
Balancing failure to detect real risk against improper attribution of nonexistent risk can be difficult in the extreme. It rests on a robust body of science, but that science is not intuitive in any sense. It takes special training to do it well — training that the vast majority of physicians, nurses, and other health professionals (let alone patients, advocacy groups, etc.) lack.
In particular, it draws heavily on Bayesian statistical theory. That starts with an initial estimate of risk (the pre-test or “prior” probability that an unrecognized condition exists), then examines how the test result modifies that estimate (the post-test or “posterior” probability). Pre-test probabilities, in turn, depend very heavily on the underlying prevalence rates of the condition to be detected in the general population.
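A minimal sketch of that update, using Bayes’ rule with hypothetical test characteristics, shows how strongly the posterior depends on the prior:

```python
# A minimal sketch; the sensitivity and specificity values are hypothetical.
def posterior_positive(prior, sensitivity, specificity):
    """Post-test ("posterior") probability of disease after a positive result."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

# The same test applied at two different pre-test probabilities:
rare = posterior_positive(prior=0.01, sensitivity=0.90, specificity=0.95)
common = posterior_positive(prior=0.20, sensitivity=0.90, specificity=0.95)
print(round(rare, 2))    # about 0.15: most positives are false alarms
print(round(common, 2))  # about 0.82: the same result now means a lot
```

An identical positive result moves a 1% prior to only about 15%, but a 20% prior to over 80% — which is why pre-test probability dominates how well a screening test performs.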
Good screening is like refining gold ore: It nearly always goes through a series of refinement steps. At each step, you try not to lose too much gold (false negatives) while increasing the concentration of gold in the sample (fewer and fewer false positives left in your target population). You also consider the cost of the “refinement” process itself (the screening tools) at each step.
For example, a “rich” gold ore body may contain only 1 ounce of gold per ton of ore (1 part in 32,000). A series of very inexpensive refinement steps might increase the concentration of gold to 1 part in 100, a 320-times increase in concentration, while only losing one tenth of the original ounce of gold. At that point, a mining company can justify a very expensive refinement method to finally yield pure gold.
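The arithmetic of the analogy can be checked directly (taking one ton as 32,000 ounces):

```python
# Checking the ore analogy's numbers (1 ton = 32,000 oz).
ore_oz = 32_000                       # one ton of ore, in ounces
gold_oz = 1.0                         # "rich" ore: 1 oz of gold per ton
start_conc = gold_oz / ore_oz         # 1 part in 32,000

gold_kept = 0.9                       # cheap steps lose one tenth of the gold...
refined_conc = 1 / 100                # ...while raising concentration to 1 in 100
sample_oz = gold_kept / refined_conc  # about 90 oz of material remain

print(refined_conc / start_conc)      # roughly a 320-fold increase
```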
When managing disease risks, initial population prevalence rates correspond to the raw gold ore. Initial “screening” usually examines simple population factors such as age, gender, and family history to produce a subset of patients with a much higher concentration of disease. A series of medical tests then refines the population even further, producing tighter and tighter groups of patients with higher and higher probabilities of actual disease. With most screening tests, though, the key success factor is the prevalence (concentration) of disease in the initial population.
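The cascade above can be sketched as repeated Bayesian updates, assuming (purely for illustration) independent test results and made-up sensitivities and specificities:

```python
# Hedged sketch: a two-stage screening cascade with hypothetical numbers,
# assuming (unrealistically) that the two stages err independently.
def update(prior, sensitivity, specificity):
    """Probability of disease after a positive result (Bayes' rule)."""
    true_pos = sensitivity * prior
    false_pos = (1 - specificity) * (1 - prior)
    return true_pos / (true_pos + false_pos)

p = 0.002                     # population prevalence: the "raw ore"
p = update(p, 0.95, 0.80)     # stage 1: cheap screen (age, gender, history)
p = update(p, 0.90, 0.95)     # stage 2: a more specific medical test
print(round(p, 2))            # each positive stage concentrates the "ore"
```

Each positive result raises the “concentration” of true disease in the remaining group — from 2 in 1,000 to roughly 1 in 100 after the first stage, and to around 1 in 7 after the second — just as successive refinement steps concentrate the gold.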
The U.S. Preventive Services Task Force is a group of scientists selected specifically because they are experts in the skills required for this difficult task. They make recommendations for screening and prevention policy on behalf of the United States government that many others in the country then follow.
They constantly revisit and update their recommendations. When they publish a report, they make the reasoning behind their recommendations transparent (although you had better have at least some training to read and understand them), and they explicitly review the evidence that underlies their recommendations, with formal evaluations of that evidence. Their work represents the best that sound science has to offer, before strongly held beliefs, financial benefits, and politics kick in.