Weighing the risks and benefits of any treatment to decide whether it is worth pursuing takes a kind of knowledge that most in health care lack

Brent C. James, MD

One of the most effective things that humans can do to ensure a long life at high function is prevention. Preventing a problem before it occurs is not only far more clinically effective but also usually far less costly.

One key element in prevention, though, is detection. The aim is to identify people who are at high risk for a disease, so that a small, early risk modification or intervention can produce large results further down the road.

Screening in support of prevention requires two elements:

  • An effective means to identify those at risk
  • An effective intervention

The second point is critical. It does little good to identify future problems if you have no tools that can change the course of the illness. None of the screening exams that we use to identify early risk work perfectly. They fail in two fundamental ways:

  • Sometimes they fail to identify those who do have increased risk (false negatives).
  • Sometimes they flag someone as being at risk when that person really is not (false positives).

This second category turns out to be really important. Nearly always, the “interventions” we apply to ameliorate early risk are themselves risky. When applied to people who didn’t really have a disease risk, they do active harm, which counts against the good that effective detection and intervention could otherwise achieve.

Robust body of science

Balancing failure to detect real risk against improper attribution of nonexistent risk can be difficult in the extreme. It rests on a robust body of science, but that science is not intuitive in any sense. It takes special training to do it well — training that the vast majority of physicians, nurses, and other health professionals (let alone patients, advocacy groups, etc.) lack.

In particular, it draws heavily on Bayesian statistical theory. That starts with an initial estimate of risk (the pre-test or “prior” probability that an unrecognized condition exists), then examines how a test result modifies that estimate (the post-test or “posterior” probability). Pre-test probabilities, in turn, depend very heavily on the underlying prevalence of the condition to be detected in the general population.
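
To make this concrete, here is a minimal sketch in Python of the Bayesian post-test update, driven by a test’s sensitivity and specificity. The numbers are hypothetical, chosen only to show how low prevalence undermines even a good test.

    def post_test_probability(prior, sensitivity, specificity, positive=True):
        # prior:       pre-test probability (e.g., population prevalence)
        # sensitivity: P(test positive | disease present)
        # specificity: P(test negative | disease absent)
        if positive:
            true_pos = sensitivity * prior
            false_pos = (1 - specificity) * (1 - prior)
            return true_pos / (true_pos + false_pos)   # P(disease | positive test)
        false_neg = (1 - sensitivity) * prior
        true_neg = specificity * (1 - prior)
        return false_neg / (false_neg + true_neg)      # P(disease | negative test)

    # Hypothetical test: 90% sensitive, 95% specific, applied to a condition
    # with a 0.5% prevalence in the screened population.
    print(post_test_probability(0.005, 0.90, 0.95))    # ~0.083

Even with those strong (and hypothetical) test characteristics, a positive result raises the probability of disease only to about 8%; more than nine out of ten positives are false alarms, which is exactly the false-positive problem described above.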

Good screening is like refining gold ore: It nearly always goes through a series of refinement steps. At each step, you try not to lose too much gold (false negatives) while increasing the concentration of gold in the sample (fewer and fewer false positives left in your target population). You also consider the cost of the “refinement” process itself (the screening tools) at each step.

For example, a “rich” gold ore body may contain only 1 ounce of gold per ton of ore (1 part in 32,000). A series of very inexpensive refinement steps might increase the concentration of gold to 1 part in 100, a 320-fold increase in concentration, while losing only one tenth of the original ounce of gold. At that point, a mining company can justify a very expensive refinement method to finally yield pure gold.

When managing disease risks, initial population prevalence rates correspond to the raw gold ore. Initial “screening” usually examines simple population factors such as age, gender, and family history to produce a subset of patients with a much higher concentration of disease. A series of medical tests then refines the population even further, producing tighter and tighter groups of patients with higher and higher probabilities of actual disease. With most screening tests, though, the key success factor is the prevalence (concentration) of disease in the initial population.
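
The refinement cascade can likewise be sketched in a few lines of Python. Each step’s post-test probability becomes the next step’s pre-test probability; the step names, sensitivities, and specificities below are illustrative assumptions, not real test characteristics.

    def bayes_positive(prior, sens, spec):
        # P(disease | positive result), by Bayes' rule
        true_pos = sens * prior
        false_pos = (1 - spec) * (1 - prior)
        return true_pos / (true_pos + false_pos)

    # Hypothetical three-step screening cascade
    steps = [
        ("demographic risk screen", 0.80, 0.70),
        ("inexpensive lab test",    0.90, 0.90),
        ("definitive workup",       0.95, 0.99),
    ]

    p = 0.005  # starting prevalence: 1 case per 200 people
    for name, sens, spec in steps:
        p = bayes_positive(p, sens, spec)  # posterior becomes the next prior
        print(f"after {name}: P(disease) = {p:.3f}")

Run from a starting prevalence of 1 in 200, this toy cascade ends near a 92% probability of disease; start it at 1 in 2,000 and the identical three tests end near 53%, which is why initial prevalence is the key success factor.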

The U.S. Preventive Services Task Force is a group of scientists selected specifically because they are experts in the skills required for this difficult task. They make recommendations for screening and prevention policy on behalf of the United States government that many others in the country then follow.

They constantly revisit and update their recommendations. When they publish a report, they make the reasoning behind their recommendations transparent (although it takes at least some training even to read and understand them), and they explicitly review the evidence that underlies their recommendations, with formal evaluations of that evidence. Their work represents the best that sound science has to offer, before strongly held beliefs, financial interests, and politics kick in.

Brent C. James, MD, is chief quality officer and executive director of the Institute for Health Care Delivery Research at Intermountain Healthcare.

Managed Care’s Top Ten Articles of 2016

There’s a lot more going on in health care than mergers (Aetna-Humana, Anthem-Cigna) creating huge players. Hundreds of insurers operate in 50 different states. Self-insured employers, ACA public exchanges, Medicare Advantage, and Medicaid managed care plans crowd an increasingly complex market.

Major health care players are determined to make health information exchanges (HIEs) work. The push toward value-based payment alone almost guarantees that HIEs will be tweaked, poked, prodded, and overhauled until they deliver on their promise. The goal: straight talk from and among tech systems.

A new generation of data-driven MDs brings a different mindset. They’re willing to work in teams and to focus on the sort of evidence-based medicine that can guide health care’s transformation into a system based on value. One question: How well will they deal with patients?

The surge of new MS treatments has been for the relapsing-remitting form of the disease. Now there’s hope for sufferers of a different form of MS. By homing in on CD20-positive B cells, ocrelizumab knocks out these and other aberrant B cells circulating in the bloodstream.

A flood of tests has insurers ramping up prior authorization and utilization review. Information overload is a problem. As doctors struggle to keep up, health plans need to get ahead of the technology in order to manage genetic testing appropriately.

Having the data is one thing. Knowing how to use it is another. Applying its computational power to the data, a company called RowdMap puts providers into high-, medium-, and low-value buckets compared with peers in their markets, using specific benchmarks to show why outliers differ from the norm.

Competition among manufacturers, industry consolidation, and capitalization on me-too drugs are cranking up generic and branded drug prices. This increase has compelled PBMs, health plan sponsors, and retail pharmacies to find novel ways to turn a profit, often at the expense of the consumer.

The development of recombinant DNA and other technologies has added a new dimension to care. These medications have revolutionized the treatment of rheumatoid arthritis and many of the other 80 or so autoimmune diseases. But they can be budget busters and have a tricky side effect profile.

Hub programs have emerged as a profitable new line of business on the sales and distribution side of the pharmaceutical industry, a corner of the business with more than its fair share of wheeling and dealing. But they spell trouble if they spark collusion, threaten patients, or waste federal dollars.

More companies are self-insuring—and it’s not just large employers that are striking out on their own. The percentage of employers who fully self-insure increased from 44% in 1999 to 63% in 2015. Self-insurance may give employers more control over benefit packages, and stop-loss coverage protects them against uncapped liability.