Say what you will about the CMS hospital star ratings—and much has been said about them, mostly disparaging—their release accomplished something that’s a rarity in Washington these days: Republicans and Democrats in Congress came together to block them. Last April, 225 House members signed a bipartisan letter asking acting CMS Administrator Andy Slavitt to delay the release of the ratings. And CMS did delay the release—but only until July.
The ratings are meant to sum up performance on 64 quality measures in a single rating of one to five stars. That is supposed to give consumers a quick and easy way to assess hospitals on CMS’s “Hospital Compare” website, much as reviewers use stars to rate movies or restaurants. CMS also uses the star system to rate Medicare Advantage plans and nursing homes, but the hospital ratings provoked a backlash that the other ratings haven’t—evidence of, among other things, the clout of the hospital lobby. When it finally released the updated ratings last July, CMS said that the methodology had been reviewed “after substantive discussions with hospitals and other stakeholders.” But not many of those stakeholders have been happy with the end result.
The problem, say hospitals and some analysts, is that rating hospital quality is not so straightforward. How a hospital delivers care is multifactorial and complex, they argue, so trying to cram all that into a single score is misleading and can end up like rating a restaurant as much on its parking and signage as on its food and service.
In an opinion piece in the Nov. 1, 2016, JAMA, Northwestern University researchers Karl Bilimoria, MD, and Cynthia Barnard listed seven concerns hospitals have with the star ratings, from a lack of transparency to a lack of risk adjustment to giving equal weight to elements (such as mortality and readmissions) with “dissimilar clinical significance.”
Jonathan Burroughs, MD, who runs a hospital consultancy in New Hampshire, calls the star rating system “a good start” but says CMS still has work to do. The agency has been working with the National Quality Forum and the Agency for Healthcare Research and Quality on the methodology. “Now they need to shore up their methodology so it’s accurate,” he says.
Therein lies one major complaint about the current version of the star ratings. Burroughs and other critics say they are misleading because they do not take into account the severity or complexity of the case mix a hospital handles. Socioeconomic factors, which have a pronounced effect on adherence and on other determinants of outcomes, are also missing from CMS’s quality calculus. CMS already incorporates socioeconomic factors into the Medicare Advantage star ratings. Says J.B. Silvers, a professor of health care finance at Cleveland’s Case Western Reserve University and a former member of the Joint Commission, “There are some adjustments made, so if you have a riskier population you’re weighted differently.” Accounting for socioeconomic status more transparently and giving it appropriate weighting in the hospital quality calculation could be steps toward resolving these inequities.
The Philadelphia story
These blind spots mean academic medical centers and safety net hospitals fare poorly under the star system, while hospitals in more affluent areas come out looking good, critics say. Philadelphia is a good example. Paoli Memorial Hospital in the city’s tony Main Line suburbs got a five-star rating, while about 20 miles away in the western part of the city, the Hospital of the University of Pennsylvania garnered only three stars—even though it is a highly regarded academic medical center that shows up on lists of the best hospitals in the country.
Burroughs worries the current star system will have the unintended consequence of discouraging hospitals from caring for poor or sick people because that will hurt their star rating, and that, he says, is counter to the principles of population health.
In an analysis for the American Hospital Association, Francis Vella, chair of economics at Georgetown University, assailed the CMS methodology, saying that while it gives the impression of being rigorous and objective, it depends too much on the choice of measures and the weighting is subjective. And the absence of socioeconomic factors is glaring. “Two (or more) identical hospitals could have very different outcomes depending on the type of patient they have, where they are located, the type of health issues they typically face, and multiple other factors,” Vella’s analysis says.
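Vella’s objection about subjective weighting can be made concrete with a toy calculation. The sketch below is purely illustrative—the hospitals, measure groups, scores, and weights are all invented, and this is not CMS’s actual star-rating model—but it shows how the choice of weights alone can flip which of two hospitals comes out ahead:

```python
# Illustrative only: how the choice of group weights can reverse a
# composite quality ranking. All numbers are made up for demonstration.

# Per-group scores (0-100) for two hypothetical hospitals.
# Hospital A is strong on mortality but weaker on readmissions,
# a pattern critics associate with complex, high-risk case mixes.
hospital_a = {"mortality": 90, "readmissions": 60, "patient_experience": 70}
hospital_b = {"mortality": 70, "readmissions": 85, "patient_experience": 80}

def composite(scores, weights):
    """Weighted average of group scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[group] * w for group, w in weights.items())

# Two defensible-sounding weighting schemes.
equal = {"mortality": 1/3, "readmissions": 1/3, "patient_experience": 1/3}
mortality_heavy = {"mortality": 0.6, "readmissions": 0.2, "patient_experience": 0.2}

print(composite(hospital_a, equal))            # about 73.3 -> B ranks higher
print(composite(hospital_b, equal))            # about 78.3
print(composite(hospital_a, mortality_heavy))  # 80.0 -> A ranks higher
print(composite(hospital_b, mortality_heavy))  # 75.0
```

Under equal weighting, hospital B wins; weight mortality more heavily and hospital A wins, even though neither hospital’s care changed. That is the sense in which critics call the composite subjective.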
Part of the problem with the CMS ratings is that they depend on claims data, says David Baker, MD, executive vice president at the Joint Commission, which does its own rankings of hospitals within service lines like oncology and pediatrics. He uses lung disease as an example. “There’s a variety of different tests to measure the severity of the lung disease, and you’d want to adjust for those things, but none of that information is in claims data,” he says. Other kinds of important information are also missing—in chronic emphysema and bronchitis, for example, one key factor in determining treatment and outcome is whether the patient is a current smoker. “Many of them are,” says Baker, “but we don’t have the data to adjust for that.”
One way to level the playing field for safety-net organizations would be to factor “nonclinical determinants”—socioeconomic, genetic, environmental, and behavioral factors—into quality metrics and outcomes. “They really are the key determinants on whether someone is going to bounce back into the hospital after discharge or not,” Burroughs says. “It has very little to do with what actually happens in the hospital or what’s done by the hospital.”
Besides employing the risk adjustment that Medicare Advantage ratings use, says Burroughs, “CMS needs to do what health plans do all the time, which is to risk-stratify their populations, identify those in greatest need and at greatest risk and give credit to organizations that care for those high-risk, complex populations.”
Silvers at Case Western Reserve calls these “socio-demographic adjustments” and says they are pivotal. Many quality measures are affected by sociodemographic factors such as living conditions and stress, he notes, and that has been clear for decades. Why would we not expect them to affect CMS quality metrics and the star ratings?
Pulling back the curtain
Another impediment to getting a clearer picture of how hospitals are doing is the hospitals themselves. Historically, hospitals have wanted to obfuscate what they’re doing. They are becoming more transparent, but it’s a slow process—and tools like the five-star ratings system can make them more skittish about pulling back the curtain.
“Zero transparency is the tradition, and traditions take a long time to change,” says Leah Binder, president and CEO of the Leapfrog Group, which reports on hospital quality for consumers and payers. Years ago hospitals had a lot of excuses for their reluctance to open up. One that carried weight was that there were few measures, and those were not very good, Binder says.
But hospital measures started to advance as the internet boomed, changing how consumers shop. “We had a different level of expectation from the public about transparency and the ability to compare providers of services and products, and hospitals were swept up in that,” Binder says. “The combination of having these better measures and having these expectations from consumers gave us a much more robust level of transparency, but we’re still not where we should be.”
To be more forthcoming, hospitals also need to examine their own systems for internal and external reporting. Hospitals frequently report masked outcomes to registries like the American College of Cardiology AFib Ablation Registry and the National Cancer Institute’s Surveillance, Epidemiology and End Results Program. “Is it absolutely necessary that everything in a registry be kept confidential?” Binder asks. “We don’t think so.” Such data could be used to inform other outcomes tracking tools, and, Binder says, can at least be made available internally to improve quality.
Hospitals can also use validated survey instruments, such as those from the Agency for Healthcare Research and Quality and the Culture of Safety Survey that Leapfrog administers. “There are a lot of ways hospitals can be more transparent, and they should be looking for those opportunities because that’s the next generation of transparency that’s coming, and it’s not going to be optional much longer,” Binder says.
A cacophony of appraisals
Robert Wachter, MD, author of the bestseller The Digital Doctor and professor at the University of California–San Francisco, has long called for improved hospital transparency. He is a member of the Lucian Leape Institute, the think tank of the National Patient Safety Foundation (NPSF). In a 2009 paper, the institute members called transparency “the most important single attribute of a culture of safety.” Another report in 2015, Shining a Light: Safer Health Care Through Transparency, offers more than three dozen recommendations for improving transparency.
Greater transparency may also be a matter of time as electronic medical records become more ubiquitous. “Once all the data are digital, then big data can kick in and do the kind of analyses that are not all that different from what Amazon does that tell you how many other people liked the same kind of book,” Wachter says. “We just haven’t had the data sets to be able to do that work in the past.”
One relatively new problem with measuring and rating hospitals is the sheer number of organizations doing it. Besides CMS, the Joint Commission, and Leapfrog, there are Truven, US News & World Report, and Consumer Reports. Each has a different way of gauging hospitals, so the public faces a cacophony of appraisals.
And now Yelp is getting into the game. Last year, researchers from the University of Pennsylvania reported in Health Affairs on the role Yelp hospital reviews play in consumer choice. Maybe CMS could learn something from Yelp. The Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey is one of seven categories in which CMS evaluates hospitals for the star rating, but the Penn researchers found that HCAHPS doesn’t measure or report the things that Yelp reviewers find most relevant to hospital reviews.
“I don’t read restaurant reviews anymore; I read Yelp,” Wachter says. “In some ways it’s going to become more of a Wild West, not less, as ratings of hospital quality become more democratized and more web-inized and more consumer-focused than what we’ve traditionally had.”
Audit like the IRS
Another way to improve transparency and the quality of data hospitals report is to take a page out of the IRS playbook by doing an audit—in this case, of quality data collection and performance, not finances.
“There are certainly ways to audit those data,” says Northwestern’s Bilimoria. “Other quality reporting programs have instituted audit systems without any issue.” These systems can combine paper audits, remote audits, and targeted audits. “A system of audits, both random and targeted, would certainly improve the quality of the data,” he says.
Baker of the Joint Commission says that to accurately adjust for differences in patients’ severity of illness across the hospital, chart abstraction is a necessity, and hospitals need to have a system for that. He holds up the Society of Thoracic Surgeons as a model for an audit strategy. “They train chart abstractors, and they audit 10% of participating sites,” he says. “So we know how to do this, but it’s hard because it’s expensive. Medicare doesn’t have the ability to spend all the money on all this. But without the hammer of the audit, getting valid, reliable data will be difficult.”
It could be that the art and science of evaluating hospital quality just needs to grow up. “The problem is that the field of quality measurement is still such a young and immature field that there’s that fear that they will be wrong or misleading,” Wachter says.
A big step in the maturation process will depend on some standardization of definitions for quality metrics. Adds Wachter: “In some ways the National Quality Forum was organized around the recognized need for that capacity in the health system: a trusted, arm’s-length party not dependent on funding from anyone with a vested interest to look at potential quality measures and judge them against the evidence. That sounds good. Turns out it’s also really hard to do.”
The result, in Wachter’s estimation, has been “tremendous disagreement among well-meaning people.” Until they reach a consensus on meaningful measures, hospital quality will be hard to pin down with stars.