Cover Story

Health Care Quality: It’s Motherhood and Apple Pie. Until You Start To Measure It.

How could anybody be against improving the quality of health care? Yet nobody seems to like the current ways of measuring it or the attempts to make it better. MIPS may make a bad situation worse.

Lola Butcher

More than 60% of measures pertinent to ambulatory internal medicine are invalid or of uncertain validity, says Catherine H. MacLean, MD, immediate past chair of the ACP Physicians Performance Measurement Committee.

When he’s using microsurgical techniques to treat unbearable facial pain, neurosurgeon Richard Zimmerman, MD, values precision above all. But as the chair of quality outcomes at Mayo Clinic in Arizona, he has come to accept that the government’s system for measuring health care quality is less than precise.

“If you’re a hematologist–oncologist, the survival rate of cancer patients might be a better indication of quality than how often you document that you have screened for depression,” he says.

But screening a patient for depression—or, more accurately, documenting that you have screened for depression, regardless of whether you actually remembered to do so—leads to higher pay from the Medicare program. Nobody’s paying more for high cancer-survival rates.

Welcome to health care’s pay-for-value movement, in which public and private payers want to reward—and penalize—physicians based on the quality of care they provide.

It’s a good idea with a big problem: Physicians don’t believe in it.

Sure, they believe the quality and safety of patient care should be improved. But many think the current batch of quality measures doesn’t really measure the quality of their work; that payers reward documentation rather than better outcomes; and that financial incentives only exacerbate the problems. In some cases, many physicians say, quality measures have the unintended effect of worsening health care.

[Chart: Projected cost savings attributable to changes in rates of hospital-acquired conditions, in billions. Source: AHRQ, National Scorecard on Hospital-Acquired Conditions, June 2018]

Last year, Medicare began paying most physicians under its Merit-based Incentive Payment System (MIPS), one of two tracks in the Quality Payment Program created by the 2015 MACRA legislation. Every participating physician earns an “upward adjustment” or a “downward adjustment” in pay, based on quality measures and other performance factors.

[Chart: MIPS performance categories for 2018. Source: CMS, 2018 MIPS Quality Performance Fact Sheet]

Nobody wants to be a loser, so physicians need to embrace the program even if they don’t think it rewards high-quality care, says John Khoury, MD, a neurologist at Abington Neurological Associates outside Philadelphia.

“All I’m telling my docs to do is make sure this box is clicked or make sure this field is filled out,” he says. “They are great doctors and they all practice good medicine, but none of these things is really improving patient outcomes.”

Out of alignment

It was a distressing number—as many as 98,000 dead hospital patients each year because of medical errors, according to the Institute of Medicine’s 1999 To Err Is Human report—that riveted attention on improving the safety and quality of U.S. health care.

An era of self-reflection and external scrutiny—from payers, regulators, professional societies, and others—ensued, with management guru Peter Drucker’s dictum as its mantra: “You can’t manage what you can’t measure.” In the past two decades, tracking and reporting performance measures has become ubiquitous in health care.

A 2014 survey of physician practices in four specialties—internal medicine, family medicine, cardiology, and orthopedics—found that they spend, on average, 785 hours per physician per year collecting and reporting quality measures. That works out to an average of more than $40,000 per physician per year—or a combined total of $15.4 billion annually for the practices in those four specialties alone, according to Lawrence Casalino and his coauthors.
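Those figures hang together on simple arithmetic. As a rough sanity check, using the article’s rounded numbers rather than the study’s exact inputs, a few lines of Python show how the per-physician cost and the national total relate:

```python
# Back-of-envelope check of the Casalino survey figures.
# These are the article's rounded numbers, not the study's exact inputs.
hours_per_physician = 785      # hours per physician per year on quality reporting
cost_per_physician = 40_000    # dollars per physician per year (approximate)
total_annual_cost = 15.4e9     # dollars per year across the four specialties

implied_physicians = total_annual_cost / cost_per_physician
implied_hourly_rate = cost_per_physician / hours_per_physician

print(f"Implied physician count: {implied_physicians:,.0f}")       # ~385,000
print(f"Implied blended hourly cost: ${implied_hourly_rate:.0f}")  # ~$51
```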

Sean C. Blackwell, MD, isn’t surprised. The chair of the obstetrics, gynecology, and reproductive sciences department at McGovern Medical School at UTHealth in Houston says that when his department contracted with a single health plan to test bundled payments for maternity care, it took 1.5 FTEs just to track and document the requisite quality measures.

Health plans often pick measures based on their knowledge as payers, rather than on what physicians know, says Sean C. Blackwell, MD, of UTHealth in Houston.

The effort would be well spent if the data informed quality improvement, but here’s the thing: Only 27% of the practice leaders surveyed by Casalino and colleagues believed that measures were “moderately or very representative” of the quality of care provided. Just 28% used their quality scores to focus on ways to improve the quality of the care they delivered, they noted in their Health Affairs article.

Blackwell could have told you so. Based on his own experience, health plans pick measures based on their knowledge as payers, rather than on what physicians know; on what consultants tell them is important; and on what is feasible to collect from administrative data. “None of these factors optimally align with what will help a patient or improve care,” he says.

Even Donald Berwick, MD, coiner of the Triple Aim and CMS administrator when quality measurement was gathering steam, thinks the movement has run amok. In 2016, he called for CMS, commercial insurers, and regulators to reduce—by 75% in six years—the volume and total cost of measurement currently being used.

“Intemperate measurement is as unwise and irresponsible as intemperate health care,” Berwick wrote in JAMA two years ago.

Measurement, Medicare-style

CMS tiptoed into the quality measurement movement in 2006 when it introduced the voluntary Physician Quality Reporting Initiative (PQRI), which paid small bonuses to physicians who submitted certain quality measures. By then, most physicians were engaged in quality improvement programs of some sort, but they were lukewarm to PQRI and its successor, the Physician Quality Reporting System, which became mandatory a few years later.

Participation grew over time. Even so, by 2015, more than a third of physicians and other eligible providers were still sitting on the sidelines, choosing to take a 2% cut of their total Medicare pay rather than submit data.

That year, CMS announced its plan to go big on value, with a goal of tying 90% of Medicare fee-for-service physician payments to quality and other performance measures by 2018.

MIPS—the new payment system—is a big step toward that goal. Physicians are rewarded or penalized financially on the basis of performance in several domains. The quality domain carries the most weight.
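As a minimal sketch of how that weighting works, the example below uses the 2018 category weights (quality 50%, cost 10%, advancing care information 25%, improvement activities 15%); the individual category scores are invented purely for illustration, and the real program layers measure-level points and bonuses on top of this:

```python
# Simplified sketch of MIPS final-score weighting, using the 2018
# category weights. The category scores below are hypothetical.
weights = {
    "quality": 0.50,                      # the most heavily weighted domain
    "cost": 0.10,
    "advancing_care_information": 0.25,
    "improvement_activities": 0.15,
}

scores = {                                # hypothetical 0-100 category scores
    "quality": 80,
    "cost": 60,
    "advancing_care_information": 90,
    "improvement_activities": 100,
}

final_score = sum(weights[k] * scores[k] for k in weights)
print(f"Final score: {final_score:.1f} / 100")  # 83.5

# In 2018, a final score above CMS's 15-point performance threshold earned
# an upward payment adjustment; a score below it earned a downward one.
```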

PQRS was folded into MIPS as its way of measuring quality. To participate in the program, physicians must report six quality measures chosen from CMS’s list of more than 270. That sounds like a lot of choices, but for some specialties, there are few or no specialty-specific measures. And many practices struggle to find six measures that are both meaningful to individual physicians and feasible from a data-collection perspective.

“The emphasis on quality certainly is great—I don’t have any problems with that at all,” says Zimmerman, the Mayo neurosurgeon. “But there’s a real challenge in how to look at these measures as proxies for outcomes.”

That’s because, in some cases, the measures do not appear to make sense. A committee of the American College of Physicians (ACP) assessed the validity of 86 MIPS quality measures it deemed relevant to ambulatory general internal medicine. Here is what they came up with:

  • 37% of the measures (32 of 86) were valid
  • 35% (30 measures) were invalid
  • 28% (24 measures) were of uncertain validity

To conduct their assessment, the reviewers adapted a method developed at RAND and UCLA for evaluating the benefits and harms of a medical intervention. Validity was assessed in five areas: importance, appropriateness, clinical evidence, specifications, and feasibility/applicability.

Of the 30 measures the ACP committee deemed invalid, 19 were put in that category because of a dearth of supporting evidence.

For example, one MIPS measure on elder maltreatment requires that all patients 65 years of age and older be screened and that a follow-up plan be documented. However, the U.S. Preventive Services Task Force does not recommend routine screening. The MIPS measure did not necessarily support high-quality care, argued Catherine H. MacLean, MD, immediate past chair of the ACP Physicians Performance Measurement Committee, and her coauthors—a fellow committee member and an ACP staffer—in a perspective piece published earlier this year in the New England Journal of Medicine.

“We believe the substantial resources required to screen large populations for maltreatment and to track follow-up would be better directed at care processes whose link to improved health is supported by more robust evidence,” they wrote.

One MIPS measure requires that patients’ blood pressure be controlled to ≤140/90 mmHg and that the reading be taken in the clinic setting. The measure does not specify any exclusions, prompting the ACP committee to toss it into the “not valid” pile.

“Forcing blood pressure down to this threshold could harm frail elderly adults and patients with certain coexisting conditions,” according to MacLean and her colleagues.

MacLean and her coauthors acknowledge that there may be a logical explanation for why CMS chose quality measures that don’t pass muster with the ACP. ACP uses a method that sets a higher standard for validity. Also, the people doing the assessments for CMS are a varied, multistakeholder group. In contrast, ACP relies solely on internal medicine specialists.

Regardless of who is right, if the physicians don’t have confidence in the quality-measurement approach, the whole exercise is a problem.

What’s not to like

The MIPS program is important because Medicare is such a big payer, but the payer–provider disconnect around quality measurement is widespread.

At UTHealth in Houston, the bundled-payment contract for maternity care assesses physicians’ performance, in part, on measures outside their control, Blackwell says. One quality measure is “onset of prenatal care,” the timing of which is determined by the patient. Another is “vaccine compliance for newborns,” although obstetricians do not take care of babies.

Another measure—how often patient counseling is documented in the medical record—is within the physician’s purview, but what the counseling is supposed to be is not defined. “Its utility as a measure to drive better care is limited,” Blackwell says.

At Mayo, Zimmerman tries to find ways that the MIPS quality measures have relevance for a wide range of specialists—and with some effort and imagination, he can identify some. When oncologists grumble that a certain MIPS measure—the percentage of patients who had a risk assessment for falls completed within a year—is not pertinent to cancer care, he reminds them that patients on chemotherapy often get dehydrated, which puts them at risk for fainting and falling.

He pushes it because he believes physicians have a desire to improve the quality of the care they deliver even if they find the measures annoying—or worse. But he sees the limitations. “If I screen for falls and I tell patients that they’re at risk for falling, does that reduce their incidence of falls?” he says. “I’m not so sure that there is a direct correlation.”

Screening for falls does, however, correlate directly with a physician’s MIPS score and, by extension, his or her pay. More accurately, it correlates with pay if the physician documents an activity that corresponds to a quality measure.

That’s why many health systems create electronic medical record “templates” that document MIPS measure activities as a default. For example, if patients are supposed to be routinely screened for depression, the template autofills to indicate the screening occurred; if it did not, the clinician is supposed to remember to manually override the default.

“The templating seems to game the system,” Zimmerman says. “If it’s not documented, it didn’t happen, but just because it was documented doesn’t mean it necessarily always happens.”
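To make the mechanism concrete, here is a minimal, hypothetical sketch of such a template default. The field name and logic are invented for illustration and are not drawn from any particular EHR product.

```python
# Hypothetical sketch of the "templating" problem Zimmerman describes.
# The field name and default are invented; real EHR templates vary by
# vendor and configuration.
from dataclasses import dataclass

@dataclass
class VisitNote:
    # The template pre-fills the quality-measure field as satisfied,
    # so the default earns MIPS credit whether or not anything was done.
    depression_screening_documented: bool = True

note = VisitNote()
print(note.depression_screening_documented)  # True, straight out of the template

# Only a deliberate manual override records a missed screening.
note.depression_screening_documented = False
print(note.depression_screening_documented)  # False, after the override
```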

It’s an unintended consequence of tying financial incentives to quality measurement, but probably not the worst one. That spot might be reserved for the Joint Commission’s requirement, set out in 2001, that hospitals ask every patient about their pain level.

Hospitals must be accredited by the Joint Commission to receive Medicare and Medicaid payments, so they instructed clinicians to step up their efforts to assess pain. And not just assess pain, but control it because CMS’s hospital patient-satisfaction survey includes pain questions—and those survey scores often figure into pay-for-value formulas. Fifteen years later, the Joint Commission and its pain control requirements have been criticized for contributing to the nation’s opioid epidemic.

Where next?

The quality movement, without question, has improved patient care in many ways. The rate of hospital-acquired conditions decreased by 17% from 2010 to 2014 and fell another 8% by 2016, according to the Agency for Healthcare Research and Quality.

Still, this is not a mission accomplished. To cite just one example, AHRQ reports that mortality associated with ventilator-associated pneumonia was estimated at 14% in 2010—and the same in 2017. But, on average, the extra cost associated with ventilator-associated pneumonia increased from $23,000 to $47,000 during that time.

There is not yet any consensus on the way forward. MacLean and the ACP committee—who believe more than 60% of measures pertinent to ambulatory internal medicine are invalid or of uncertain validity—just want the government to pay attention to their research. “It is our hope that CMS will consider the findings and incorporate them into their plans to develop and assess future measures,” MacLean said in an email.

At the other end of the spectrum, the Institute for Healthcare Improvement (Berwick is its cofounder) is trying to reinvigorate the push for health care quality and safety. Recently merged with the National Patient Safety Foundation, the institute convened a National Steering Committee for Patient Safety. One notion is to replace all the measuring with a systems approach.

Meanwhile, Zimmerman, who received Mayo Clinic’s Lifetime Achievement Award for his career-long dedication to quality improvement and patient safety, has come to believe that quality measurement should be used primarily for internal analysis over time. Does a hospital’s ventilator-associated pneumonia rate fall from one year to the next? How does a physician’s opioid-prescribing pattern compare with last year’s? Getting answers to those questions, he argues, could make a substantive difference that the current check-the-box mentality does not.

When payers financially reward or penalize physicians based on quality metrics, they may cause more problems than they solve, Zimmerman says. “Outside influence has unintended consequences,” he says. “Please recognize that and consider what you are asking people to do.”

Lola Butcher, a regular contributor to Managed Care, is an independent journalist in Springfield, Mo., who writes about health care.
