A Conversation with Mary Barton, MD, MPP: Keeping Measurement Relevant
MANAGED CARE November 2011. ©MediMedia USA
With national attention on health care reform and new models of care, the NCQA’s VP for performance measurement looks at how quality improvement can stay at the forefront
Working as a physician at Harvard Community Health Plan and as scientific director for the U.S. Preventive Services Task Force has given Mary Barton, MD, MPP, a broad perspective on the use of data to improve quality of care. Just months into her role as vice president for performance measurement at the National Committee for Quality Assurance, she’s using that perspective to help the organization understand new markets, work with the government on health reform, and think about how nuanced performance measures may have to become to stay relevant. “What we need to do next is figure out the locus of measurement for every kind of health care organization,” she says. “What are the measures that will enable them to improve the health care of the people who are entrusted to them?” At the Agency for Healthcare Research and Quality, Barton led the clinical services team that made the U.S. Preventive Services Task Force’s recommendations to health care providers. She previously served on the faculty of Harvard Medical School. Barton earned her medical degree at Harvard and completed a master’s degree in public policy, a residency in internal medicine, and a fellowship in general medicine. She is a member of the American College of Physicians and the Society of General Internal Medicine. She spoke recently with MANAGED CARE Editor John Marcille.
MANAGED CARE: Will your experience at Harvard help you in your new role at the NCQA?
MARY BARTON, MD, MPP: When I trained at Harvard, there was a fairly stark contrast between the specialty orientation of most of the tertiary medical centers in Boston and the growing attention to primary care. I had the privilege of doing my primary care training within Harvard Community Health Plan, so my critical outpatient experience was shaped by the challenge of simultaneously thinking about population health along with taking care of one person at a time.
MC: Was Harvard Community Health Plan measuring performance to influence clinical practice then?
BARTON: Absolutely. Even before I had my medical degree, as a first-year medical student I worked with several of the medical directors at Harvard Community Health Plan on a research evaluation of an influenza vaccination program, which used a peer-comparison feedback model to tell physicians how many of their eligible population had been vaccinated. My mind was open to the idea of using measurements to drive improvements in quality of care from the very beginnings of my medical education.
MC: Why did you leave Harvard to go to AHRQ?
BARTON: Academic research is challenging and intellectually satisfying, but you are on a trajectory that almost always leads you to define your area of work ever more narrowly. Moving from academia to my position at the AHRQ was an opportunity to jump from the narrow and increasingly specialized research world out to a true generalist position where I would look at issues of effectiveness of care across the age spectrum, across populations, and across a wide spectrum of interventions. It was also very appealing to move from clinical practice to an environment where the opportunities to influence policy were much greater.
MC: Is that a move you would recommend for other physicians?
BARTON: I would encourage physicians who think about policy to consider whether government service could advance their goals. There is a lot of important work to be done, especially with the Department of Health and Human Services implementing the Affordable Care Act.
For the first 15 years, the NCQA took all of the low-hanging fruit, what they knew worked, and made sure that it was put into the language of measurement.
MC: What was it about your role at AHRQ that created a good fit with the NCQA?
BARTON: I think the appeal for the NCQA in recruiting me to be part of this team was that my affiliation with the U.S. Preventive Services Task Force signaled fidelity to evidence-based medicine. The performance-measurement world has always held hewing closely to evidence-based medicine as one goal, but out of necessity it has sometimes been closer to that goal and sometimes further from it.
MC: How do you view the evolution of evidence-based medicine, and where is that heading today?
BARTON: About the same time that employers were saying they didn’t know which health plan to offer their employees — which led to the establishment of the NCQA — the medical professions were also coming to grips with the need to organize the resources available to physicians to maximize outcomes. We had the American Heart Association and the American College of Cardiology collaborations on guidelines for the care of someone with a heart attack. There were forays from within medicine to say, “We actually know what works, so we should write it down and disseminate it, and let people know what they should be doing in certain situations.”
MC: Performance measurement was evolving at the same time?
BARTON: For the first 15 years, the NCQA took all of the low-hanging fruit, what they knew worked, and made sure that it was put into the language of measurement so that plans could show how many men and women over the age of 50 had received colorectal cancer screening or how many diabetics had received flu shots. The first measures tended to be structural, and they grew to include process. Part of what this decade has been about is moving into outcomes.
MC: Is the use of evidence-based medicine the main focus today?
BARTON: Evidence-based medicine is a crucial tool for all clinical measures, but it is less relevant to some structural measures. For example, you don’t need a randomized controlled trial to tell you that you need to have clear and well-written patient beneficiary pamphlets. If you are in an area that has clear evidence, it’s crucial that you use it and don’t ignore it. But you can’t say that all performance measurement is going to be driven by evidence-based medicine.
MC: How do you define what types of measures are relevant?
BARTON: We are in a time of great changes. For example, there has never been something called an accountable care organization, but now people are creating them and the NCQA is accrediting them. Health insurance exchanges have never really existed. At the same time, primary care practitioners across the country are seeking ways to move toward process improvement, most commonly along the pathway of patient-centered medical homes. So you have that groundswell from providers and, at the same time, the possibility of these big organizational changes. The first audience for the NCQA was the integrated health plans, for which it was easiest to see the value of taking care of a population. Then it included all of the big insurers that have PPO networks. Now we have to consider how to move toward measuring care for the rest of the country.
MC: Is the NCQA going to have difficulty making an adjustment between the HMO, PPO, TPA world and the accountable care world?
BARTON: I feel very confident that the NCQA has everything it needs right now to be flexible and to continue its leadership in performance measurement across these different entities. But 20 years ago, the NCQA was the only one in the game. Now it’s a very crowded field. To play in the new arena requires partnerships and collaborations, and that is appropriate. No entity can handle all of the needs. The agenda of improving the quality of health care in this country is enormous.
MC: How do you see measures evolving?
BARTON: Similar to the history of measures at NCQA, the U.S. Preventive Services Task Force tackled the low-hanging fruit first. They were making recommendations that were relatively easy to get our hands around. Cancer-screening topics, where you have a big benefit in life years if you save middle-aged people, were important. The cutting edge now for preventive guidelines is to ask, “How do we deal with the extremes of age, where mortality is not the most relevant outcome? What kind of care should 85-year-olds be getting to maximize function or to maximize outcomes that are important to them?” There may be outcomes that are equally or even more important to them than survival. At the same time, we can look at the child end of the spectrum. Very few things threaten survival in young children, and we’ve already taken care of all the main ones we know about, through immunizations, for example. But that doesn’t mean that there aren’t other important outcomes for children. Improving the trajectory of growth and development to permit better social development, better school attainment, and better communication skills are important outcomes for this age group. They aren’t more important than death, but they are important outcomes. Those are two areas where the onus is on the measure developers to get more subtle and more relevant and to take the understanding of these clinical situations into account.
MC: Will the NCQA replace the measures that tackle low-hanging fruit as HEDIS measures evolve?
BARTON: I do think about what we are going to do with this long list of measures. The challenge, besides measures needing to get more subtle and appropriate for both children and for older adults, is also to figure out how to make measures more appropriate to the locus of application. There could be a library of measurement sets that are used by entities that tailor the options to drive their quality improvement.
MC: Tailoring will also have to be done for different types of organizations?
BARTON: We cannot expect the same criteria to apply to every part of the American health care system, so there has to be some effort to meet people where they are and inspire them to improve, to become the best that they can be. That may mean that we have to sell a lot more flavors of accreditation or collaborate in the future with partners we haven’t even dreamed of yet. The Affordable Care Act described a large number of innovative programs, some of which must wait to be funded before they can be developed. There is a description of a program to create a primary care extension service, to provide facilitation to practices to become reflective, learning systems. They could examine their practices and use that information to improve. That’s not on the ground yet; it’s an idea. But we have to think about how the NCQA and other members of the quality improvement sector can help support quality improvement on a scale like that, not just in big health plans.
MC: Is this the direction the NCQA is heading?
BARTON: It’s part of our long-term planning. I’m trying to convey a sense of why I think it’s an appropriate time to look at the world of performance measures and try to make sure that we are moving ahead with relevant and powerful tools for quality improvement.
MC: It’s a broad view.
BARTON: The NCQA has a lot of experience to bring to bear on questions of how to use information and measurement to improve quality. In addition to all of the health plan and PPO experience, the NCQA has been collaborating with the AMA on measures for individual clinicians.
The cutting edge now for preventive guidelines is to ask, “How do we deal with the extremes of age, where mortality is not the most relevant outcome?”
MC: What is the status of performance measurement for individual physicians?
BARTON: The theoretical basis for measuring care on the individual physician level has much in common with the framework that has been used for measuring performance on health plans. But the extra element for physician measures comes in thinking about what is important to measure, and what represents a practice improvement opportunity for clinicians. This field is really growing quickly, and our understanding of it is getting more and more sophisticated. Patient-reported outcomes, for example, are an important building block for the future of understanding clinician performance.
MC: Why are they so important, and are they related to patient satisfaction?
BARTON: We are at an elementary level of understanding of the meaning of satisfaction surveys and of what a clinician should be aiming for. But the clinical quality of care is a little bit different. If I were being held to how many of my hypertensive patients were adequately controlled and how well they understood their medications, that would be a fair way to compare me to other docs. And that requires asking patients to tell us whether and how much they understood.
MC: That would be a mixture of process and outcomes, then.
BARTON: It would be.
MC: But it wouldn’t have anything to do with whether the patient got an appointment quickly.
BARTON: No. There’s a mixture of things that one would want to measure with patient-reported outcomes. Just imagine that we want to improve end-of-life care. We are not going to measure death rates — that’s ludicrous — but we need to somehow find out from patients and from families about the experience the patient had. If we are talking about, for example, post hip-reconstruction surgery and post-rehab, there’s not going to be any way to find out about someone’s function unless you ask the patient. There’s no administrative or clinical database that will tell you whether this group of patients is able to climb stairs or not.
MC: Are patient-reported outcomes important only at the physician level?
BARTON: There are people in the insurance industry who would say that health plans should be graded on things that they can affect and shouldn’t be graded on things that they really have no prayer of having an impact on.
MC: For example?
BARTON: Current questions in the health outcomes survey that the NCQA administered for CMS ask whether, if the patient had urinary incontinence, the doctor talked about options for treatment. On one hand, a payer has a lot of responsibilities for assuring the quality of care that patients get, but in terms of realistic quality improvement, the way that we improve the quality of care for older women with urinary incontinence might be more powerful if it were directed at a lower level than the health plan. It might be; I don’t know because we have a somewhat elementary level of understanding about how to apply measures at the right locus to leverage improvement.
MC: Thank you.