An expert in improving safety in hospitals sees a need to apply more scientific rigor to outcomes data and quality improvement efforts
Bloodstream infection rates are down, hospitals are saving millions of dollars, and overall mortality rates for Medicare patients have decreased in at least one state, all thanks to a safety protocol project for intensive care units. Developed at Johns Hopkins University, the program involves a simplified checklist of safety procedures and requires collaboration among clinicians, hospital administration, and state organizations. While the research results from the first wave of states to adopt the program are still being published, a total of 44 states have rolled it out, with the rest to follow soon. “It is the first national success story for quality and safety improvement,” says Peter J. Pronovost, MD, PhD, medical director for the Center for Innovation in Quality Patient Care and a professor in the Johns Hopkins University departments of medicine, nursing, and public health.
Pronovost hopes to use the model to tackle other areas where preventable deaths are highest, but big gaps in the science of measuring outcomes could slow him down. Insurers, who have played a central role in the implementation of the ICU project, can continue to help, he says.
Pronovost is co-author of the book Safe Patients, Smart Hospitals: How One Doctor’s Checklist Can Help Us Change Health Care from the Inside Out, which was published last year. He chairs the Joint Commission’s ICU Advisory Panel for Quality Measures and the ICU physician staffing committee for the Leapfrog Group. He is part of the Quality Measures Work Group of the National Quality Forum and is an adviser to the World Health Organization’s World Alliance for Patient Safety.
Pronovost earned his bachelor’s degree at Fairfield University and his medical degree and a master’s in public health from Johns Hopkins University. He also completed an internship in emergency medicine, a residency in anesthesiology and critical care, and a fellowship in critical care medicine, all at Johns Hopkins. He spoke recently with MANAGED CARE editor John Marcille.
MANAGED CARE: We interviewed Allan M. Korn, MD, chief medical officer and senior vice president for clinical affairs for the Blue Cross & Blue Shield Association, and he called your program to reduce bloodstream infections in intensive care units “sticky.” Why does it work?
PETER J. PRONOVOST, MD, PhD: It works because we score with measures that clinicians believe are valid. They are the ones who have to do the work of improving care, so they have to drive this process. We were also guided by science. We used research about how to reduce these infections, and we were committed to collaboration. No group is going to solve this problem alone. It’s not going to be insurers, consumers, or clinicians alone. It’s going to be all of us working together, drawing from our own strengths, pulling different levers, and driving a common measure toward improvement.
MC: Tell us how this got started.
PRONOVOST: Bloodstream infections kill as many people in the United States as breast cancer, around 31,000. It is an enormous public health problem, and for decades we thought it was inevitable that sick people got these infections. We first put in a program at Hopkins to eliminate them, and it worked. We then put it in hospitals in Michigan and not only showed that we could virtually eliminate them, but showed that those reductions could be sustained — for over four years now. We also showed that the mortality of all Medicare patients in the state was reduced by 10 percent. We moved mortality in a whole state.
MC: Your mission is to focus on improvements at the bedside. Can you explain?
PRONOVOST: When we developed our checklist, we started with a standard-of-care guideline from the Centers for Disease Control and Prevention. Like most guidelines, it was used about 30 percent of the time. The reason it wasn’t being used more often is that it was elegant, it was scholarly, but it was nearly useless at the bedside. It is 200 pages long and tells me to do 90 things without prioritizing them. So we focused on five simple things that were most important. This kind of effort is an example of taking science and making it feasible and simple.
MC: You also write quite a bit about the need to improve the science of measuring health care.
PRONOVOST: If you look at the last 50 years, the advances of medicine are pretty breathtaking. Life expectancy has increased. Many cancers, especially childhood cancers, are now curable. AIDS is a chronic disease. And yet, over the last decade, we’ve made virtually no progress in reducing patient harm, despite a lot of press about the need to focus on that. The main reason we haven’t made progress is because much of quality improvement has run away from, rather than embraced, science. For example, there is a whole profession called human factors engineers, and I have three of them on my research team. Most hospitals don’t have any. Nurses and lawyers are largely the ones investigating mistakes. Why on earth would we think in health care that we don’t need technical experts — these systems engineers who look at human factors? Every other field that has become safe employs them widely. We need to define the science of health care delivery and then apply it. But, again, it has to be practical.
MC: What’s not working, what’s not practical?
PRONOVOST: One example is quality measures. Managed care plans collect outcomes measures using administrative data that they say are “good enough.” Physicians push back and say the data are not valid, so they can’t use them. But what we haven’t done is say, How good are they and how good do they need to be before we’re comfortable using them? That’s a pretty basic clinical research method. In other words, if I am using discharge data to measure pneumonia right now, we have no idea whether that measurement gets it right 5 percent or 95 percent of the time. Frankly, physicians don’t often look at them because they most likely aren’t valid.
MC: It’s hard to believe that large managed care companies would be relying on information that does not represent the actual situation.
PRONOVOST: If you look at the core measures that are collected and the outcomes data that we have, it is shocking that we have no idea how accurate or inaccurate they are. For bloodstream infections, there are about four studies showing that discharge data get it right one in four times — one in four! For the other measures, we don’t know, but they are probably at or below that level of accuracy.
MC: And yet physicians are sometimes paid or are put in tiers based on this information.
PRONOVOST: I am all for measuring outcomes, but it seems unjust and unwise to sanction, publicly report on, or withhold payment from people based on a measure that is almost certainly highly inaccurate. Rather, we should find out how accurate the data are or move toward more clinically relevant data.
MC: How could we improve data integrity, and how can insurers contribute?
PRONOVOST: Three things need to happen, and insurers can be of enormous help with all three. The first is that someone has to write the definitions. That might sound simple, but it’s a chunk of work, and doctors need to meet insurers at the table. How do we define renal failure after surgery as a complication? Second, someone has to have a data source from which to pull those measures. Insurers have those data, but we haven’t taken that first step of writing the specifications. And third, we need to get reports back to clinicians that give them feedback on how well they are doing. Insurers have data that extend beyond the hospital and cover episodes of care, so they are uniquely able to measure and report on population measures of quality and safety. But we haven’t invested in writing the specifications, and we haven’t invested in the data infrastructure.
MC: What is the role of clinicians in designing these better outcomes recognition systems?
PRONOVOST: They have to be the ones driving it. It could be coordinated or facilitated by an insurer or the government, but at the end of the day, if doctors don’t believe the measures are valid, they won’t use them to improve care. It won’t work. We could publicly report until the cows come home, but if I am reporting something that doesn’t have credibility with the doctors, they are never going to use it to improve. That’s what we’ve done for the last decade, and it hasn’t worked very well.
That’s because this validity check has to happen at multiple points. The way to develop these measures is to gain consensus on which topics are most important. Say we decide renal failure is important. How do I define renal failure? What are my data sources? Are they good sources? Are the data useful and valid? And what do the reports look like? Are they informative? That kind of iterative approach requires close collaboration between clinicians and insurers, because insurers have much of the technical expertise to make this happen.
MC: Physicians need to be involved at every step?
PRONOVOST: The challenge is that to date, doctors haven’t been at the table. That might be because they weren’t invited, or it might be because they didn’t want to come. Kind of like the Republicans saying they weren’t invited for health care reform.
What it takes is some leadership to say, OK, we need a partnership. We need the clinicians to write and define outcomes that are meaningful for them. Perhaps patients could be there, too, to say what measures are important for patients. Then we need a way to get that data. It’s all labor-intensive, so it would be inefficient for every group to do it themselves. It might be coordinated through a trade association or on a national level.
MC: Will the Affordable Care Act change things?
PRONOVOST: Nearly all of health reform is posited on improving patient outcomes, and there is bipartisan support. It is non-controversial that we have to start paying for value rather than volume. But there is no mechanism in this country to produce the thousands of outcomes measures that are going to be needed to make that possible.
Consumers have woefully inadequate information about clinicians’ performance, but health reform blindly assumes that these measures exist. It’s frightening in a way, because they don’t exist. It is going to take some focused investment and effort to produce them. I have been strongly advocating that we have an organization in health care similar to the Securities and Exchange Commission. Although the SEC performs poorly as a regulator, it performs exceedingly well as a transparency agency. I could look at the financials of any company on the Web right now and have reasonably good faith that they are accurate. Certainly there are bad examples like Enron, but for the most part, our whole business world relies on the accuracy of that data.
MC: So this entity would ensure the accuracy and validity of health care outcomes data?
PRONOVOST: It would be in charge of overseeing health care measures. You would have the private sector making the rules, as you do in the Federal Trade Commission, and have public sector transparency in auditing. Then you would have private sector re-analysis of the data, kind of like medical journalism, which is what Bloomberg and Reuters do. They take that raw data and make it more meaningful to consumers. The raw data would also be available for verification.
MC: Physicians could then be sure it was valid.
PRONOVOST: Insurers are using proprietary algorithms for many outcomes measures, which to me is just unconscionable. I don’t know of any other industry that would tolerate being paid or not paid or having reports published about them without knowing how someone is calculating the score. It’s a black box. Imagine us evaluating writers or editors and saying, “Here’s your good writing score. We’re going to pay you based on this, but we are not telling you how we are calculating it.” You would go ballistic. And yet somehow we tolerate that in health care.
MC: Are there any areas where this is being done well now?
PRONOVOST: At Intermountain Health, they get physicians to define outcomes, and they measure them. They have been successful in a hospital-based system, and the approach could be broadly expanded. It is exceedingly inefficient and ineffective for every hospital to make these rules itself. It takes a fair bit of effort to do this kind of work.
MC: How has it worked out in terms of costs?
PRONOVOST: There are huge economies of scale. Once we developed the program in Michigan and made it Web-based, the marginal costs of doing it in other states were exceedingly small. It’s a turnkey program. We go in, we invite all the hospitals to participate, and we implement it. There are some costs for hosting a meeting in a state and for my team to go there for a day, but they are pretty limited.
In about 15 of the states, the insurers help support the state hospital association, which runs the program. It runs about $50,000 to $100,000 for the infrastructure. We have had some private philanthropy as well, which initially allowed us to get several states going. My time is funded by the federal government through a research grant from AHRQ.
MC: The costs to the hospital are relatively small?
PRONOVOST: Yes. There are really no marginal costs. There is some nurse and staff time, but nobody has hired a new FTE. We just finished a cost analysis that is going to be published, and when we looked at all of the staff time, it cost the hospital about $3,000 per infection avoided. Each of these infections costs about $36,000 to treat, on average. The average hospital saved about $1 million per year. That’s a pretty damn good investment.
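The figures above can be checked with simple arithmetic. The sketch below uses only the dollar amounts cited in the interview; the figure of roughly 30 infections avoided per hospital per year is an illustrative assumption, not a number from the study, chosen because it is consistent with the ~$1 million annual savings mentioned.

```python
# Back-of-the-envelope check of the cost figures cited above.
# Dollar amounts are the approximate values from the interview;
# INFECTIONS_AVOIDED_PER_YEAR is a hypothetical illustration.
COST_PER_INFECTION_AVOIDED = 3_000     # staff time per infection avoided
TREATMENT_COST_PER_INFECTION = 36_000  # average cost to treat one infection
INFECTIONS_AVOIDED_PER_YEAR = 30       # assumed, for illustration only

def net_savings(infections_avoided: int) -> int:
    """Treatment costs avoided minus the program's staff-time cost."""
    return infections_avoided * (TREATMENT_COST_PER_INFECTION
                                 - COST_PER_INFECTION_AVOIDED)

# Each avoided infection nets about $33,000, so ~30 avoided infections
# per year lands near the ~$1 million annual savings cited.
print(net_savings(INFECTIONS_AVOIDED_PER_YEAR))  # 990000
```

On these assumptions, each avoided infection nets roughly $33,000, which is why even modest infection counts translate into seven-figure annual savings.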
MC: What’s next?
PRONOVOST: We’ve developed a model much like the model for drug development, producing these programs in phases. Phase 1 is developing the measures and summarizing the evidence. That takes some resources and time. Phase 2 is pilot testing the program to see if it really works. Phase 3 is scaling it nationally. We have already done the same thing for pneumonia in Michigan. We reduced ventilator-associated pneumonia by 70 percent — remarkable performance. The model works beautifully. We would love to have a partnership with insurers to help create a pipeline to produce these. They save an awful lot of money, because these complications are expensive. We would love to do one for deep venous thrombosis, for example. The bloodstream infection program was successful, and what we need to figure out is how to replicate it. What is killing people, and what can we do to fix that?
MC: Thank you.