It was the first pay-for-performance program launched by the ACA and the first step in transforming Medicare from a passive buyer of health care into an agent of change. The Hospital Value-Based Purchasing (VBP) Program signaled that CMS would no longer foot nearly a quarter of the nation’s health care tab without demanding some accountability from hospitals in return.
As VBP’s implementation approached, it kept a lot of hospital executives up at night. Today, it barely merits a shrug.
What happened between then and now? You’ll get a lot of opinions, but nearly everyone seems to agree that the money at stake isn’t worth the effort. The program withholds 2% of hospitals’ Medicare pay and redistributes most of it to high-performing hospitals.
But 2% is the theoretical maximum penalty. The average bonus or penalty for high- and low-performing hospitals is much smaller (see table below). In absolute dollars, that worked out to an average bonus of $213,000 or an average penalty of $1.2 million in 2015, according to an analysis by the Advisory Board.
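To make the arithmetic concrete, here is a rough sketch of how the withhold nets out. The figures below are hypothetical, chosen only to show the scale; the actual adjustment each hospital receives is set by CMS from its performance score.

```python
# Illustrative sketch with hypothetical figures -- NOT CMS's actual
# adjustment formula. Shows how a 2% withhold nets out against what a
# hospital earns back from the redistribution pool.

def net_vbp_adjustment(medicare_revenue, withhold_rate, payback_rate):
    """Net dollar change: amount earned back minus amount withheld."""
    withheld = medicare_revenue * withhold_rate
    earned_back = medicare_revenue * payback_rate
    return earned_back - withheld

# A hospital with $100 million in Medicare revenue has $2 million at
# risk, but the net swing typically lands well under half a percent.
print(net_vbp_adjustment(100_000_000, 0.02, 0.023))  # modest bonus
print(net_vbp_adjustment(100_000_000, 0.02, 0.016))  # modest penalty
```

The point of the sketch: even with 2% nominally at risk, the net movement either way is a small fraction of revenue, which is why the stakes feel low to hospital executives.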
“You’ve got a relatively small percentage of providers in the winning circle, a relatively low or equal number of providers in a loser’s circle, and anyone else is more or less in the middle with indistinguishable performance,” says François de Brantes, vice president and director of the Center for Payment Innovation at Altarum Institute in Ann Arbor, Mich. The center was formerly known as the Healthcare Incentives Improvement Institute, or HCI3.
After five years, rewards and penalties to hospitals under the Value-based Purchasing Program suggest no discernible trends in overall performance.
In fact, most bonuses and penalties fall below one half of one percent. When you factor in payer mix, the net effect on a hospital’s bottom line is even smaller, according to a report by Leavitt Partners. That’s hardly enough to justify the expense and effort of redesigning care processes and information systems to meet the program’s demands.
VBP bases bonuses or penalties on a hospital’s total performance score across four domains: clinical care, which includes a mix of process and mortality measures; efficiency, which tracks Medicare spending per beneficiary; safety, which includes hospital-acquired infections; and patient experience of care, incorporating eight Hospital Consumer Assessment of Healthcare Providers and Systems measures. In all, there are 21 measures. Improvement in any one domain requires significant investments of time and money, according to the Leavitt report, and improvement on a single measure is unlikely to produce meaningful change in a hospital’s overall score.
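The domain-weighted scoring described above can be sketched in a few lines. The weights and scores below are assumptions for illustration only; CMS sets the actual domain weights by fiscal year in rulemaking.

```python
# Illustrative sketch of a domain-weighted total performance score.
# The weights here are assumed for illustration; CMS publishes the
# actual weights for each fiscal year.

ASSUMED_WEIGHTS = {
    "clinical_care": 0.30,
    "efficiency": 0.25,
    "safety": 0.20,
    "patient_experience": 0.25,
}

def total_performance_score(domain_scores, weights=ASSUMED_WEIGHTS):
    """Weighted sum of 0-100 domain scores, yielding a 0-100 total."""
    return sum(weights[d] * domain_scores[d] for d in weights)

scores = {"clinical_care": 70, "efficiency": 40,
          "safety": 85, "patient_experience": 60}
print(total_performance_score(scores))  # 63.0
```

Because each of the 21 measures contributes only a sliver to one domain, and each domain only a fraction to the total, large gains on a single measure barely move the overall score, which is the Leavitt report’s point.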
Moreover, says de Brantes, the structure of the program—designed to keep it budget-neutral—makes the return on investment uncertain.
“You don’t get to know what your performance is until the end of the performance period because it’s a tournament-style program,” he says. Rather than allowing hospitals to establish benchmarks on which to base improvement goals, VBP pits hospitals against one another, diluting the reward pool. “When you’ve got a low probability of knowing what the outcome of your work is and the prize is relatively weak, you’re not going to get a whole bunch of people super excited about focusing a lot of energy on it.”
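The tournament dynamic de Brantes describes can be sketched simply. The mechanics below are a simplified assumption, not CMS’s actual payout formula: in a budget-neutral pool, a hospital’s payout depends on everyone else’s scores, so it cannot be known until the period ends.

```python
# Simplified, assumed mechanics of a budget-neutral "tournament" pool
# (not CMS's actual formula): a fixed pool is split in proportion to
# each hospital's score, so one hospital's payout depends on the rest.

def budget_neutral_payouts(scores, pool):
    """Distribute a fixed pool in proportion to each hospital's score."""
    total = sum(scores.values())
    return {h: pool * s / total for h, s in scores.items()}

# The same score of 60 earns less when competitors improve:
print(budget_neutral_payouts({"A": 60, "B": 50, "C": 40}, 3_000_000)["A"])
print(budget_neutral_payouts({"A": 60, "B": 70, "C": 80}, 3_000_000)["A"])
```

Hospital A’s performance is identical in both runs, yet its reward shrinks when the field improves, illustrating why a hospital cannot set a fixed benchmark and know in advance what its investment will return.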
A financial lever based on a hospital’s overall quality score neither helps hospitals prioritize areas for improvement nor makes the program easy to understand. In an article published last year in the Journal of Ambulatory Care Management, Richard Averill and colleagues at 3M Health Information Systems argued that mixing process and outcomes creates a bland cocktail that “is not an effective way of measuring value or controlling expenditures.”
The use of process measures for payment purposes, they wrote, creates an administrative burden that takes the focus off the objectives of the effort. “With process measures that range from clinically significant to micro administrative, the inevitable result is a composite score derived from arbitrary and complex rules that are difficult for health care delivery organizations to understand and use for real quality-improvement efforts,” they wrote.
Perhaps more important, there’s no evidence that this formula has improved mortality outcomes, according to a study published in BMJ last year. In an analysis of CMS data, Jose Figueroa and colleagues at Harvard’s T. H. Chan School of Public Health found that three years into VBP, 30-day, risk-adjusted mortality rates for acute myocardial infarction, heart failure, and pneumonia—the only three outcome measures in the VBP program—had not changed significantly. What’s more, declines in mortality rates slowed after VBP’s initiation.
The evidence is discouraging to those who have been advocates of pay for performance. One of those, Ashish Jha, MD, a co-author of the BMJ study, wrote on JAMA Forum earlier this year that VBP was destined to fail because it lacks all of the key elements of a successful pay-for-performance program: a simple design, a focus on a small number of high-value measures, and “incentives that are large enough to motivate hospitals to make sizable investments in improving care.”
If that’s not enough, VBP was based on what Jha believes was a flawed pilot. VBP was modeled on the Premier Hospital Quality Incentive Demonstration, which ran from 2003 to 2009 and included more than 200 hospitals. CMS handed out more than $60 million in incentive payments for hitting quality marks, but there was no effect on patient outcomes. What became clear, Jha wrote, was that the Premier demonstration essentially rewarded hospitals for doing what they already did well but otherwise did little to drive change. An October 2015 U.S. Government Accountability Office report analyzing results of the first three years of VBP reached essentially the same conclusion.
But the ACA required Medicare to put some teeth into purchasing, and the Premier demo—which at least didn’t worsen things—gave CMS something to try. “I think a big challenge for CMS is that you start with a demo, then you do an evaluation based on the results, and something goes through in some omnibus legislation,” says de Brantes. “But between the beginning of this process and its generalization through legislation, you are stuck with something CMS has to implement,” which may or may not be relevant to what is happening in the current environment.
CMS may simply be trying to do too many things at once, argues de Brantes, who ticks off a list of competing interests for hospitals: the Medicare Shared Savings Program, mandatory bundled payments, and commercial payer programs focused on total cost of care. The result is a mix of incentives that compete for staff and financial resources.
“It’s time to bring the policy people, physicians, hospitals, and seniors into a single meeting place and make some decisions about what the next round of payments for traditional Medicare will look like,” Gail Wilensky, former HCFA administrator and the first chair of MedPAC, told a forum at Harvard’s Kennedy School of Government in February. There are so many alternative payment model demonstrations going on at once, she said, “it has to make it confusing for everybody.”
De Brantes, whose Center for Payment Innovation is a leading adviser on bundled payment arrangements, says that any program designed to drive quality improvement should include just a handful of measures that are tightly related to what the provider can control. In VBP, that took the form of process measures, which are easy to track and may have been an easy default in the wake of provider outcry about the burden of measurement and risk adjustment. But, he says, “that put the kibosh on outcomes measures, which are what really matter.”
At the episode-of-care level, however, patient-reported outcomes become relevant, says de Brantes. “Patient-reported outcomes for a hospitalization is an oxymoron. But a patient-reported outcome for the management of a chronic condition or a specific treatment or a procedure—that means something. That’s a unit of measurement.”
De Brantes adds, “What matters to patients—which is, ‘How I was treated for a specific health care need’—also matters to the frontline clinician. That is what we should be getting to.”
Where we wind up, of course, depends largely on the fate of the ACA. Wholesale repeal of VBP’s enabling legislation would have effectively ended the program, along with the Bundled Payment for Care Improvement Initiative, the Readmissions Reduction Program, and health systems’ financial responsibility for hospital-acquired infections. With the failure of the House Republicans’ American Health Care Act, it’s unclear whether the Trump administration will prop up the ACA or undo it. If the ACA dies a death by a thousand cuts, then any of those programs could be shelved, modified, or combined à la the physician programs that were rolled into MACRA.