Evidence-Based Medicine Is Not Enough
MANAGED CARE August 2007. ©MediMedia USA
Although EBM is only just getting started, many people are looking beyond evaluating treatments head to head and considering cost as well
Even as demand for evidence-based medicine reaches a crescendo, a chorus of calls for comparative-effectiveness research is emerging to overwhelm it.
Payers, politicians, and policymakers agree that the "E" of EBM needs work, and that it would be a great idea to create a new entity to compare the merits of alternative therapies, devices, and procedures.
Proponents believe that such an entity is key to reducing the high rate of uninsurance in America, and they draw the connection in just two steps.
"When you start looking at the uninsured, that immediately leads you to the affordability of coverage," says Christina Nyquist, director of policy analysis for Blue Cross & Blue Shield Association. "And when you're at affordability, one of the things staring at you is comparative effectiveness."
Indeed, the BCBSA and America's Health Insurance Plans are so enthusiastic about comparative effectiveness research that they have offered to pony up for a new organization that would carry out the research agenda. They expect the federal government and possibly other industry segments to be in the mix — and they want to influence the way the research infrastructure works.
"The health plans are ready to step up to the plate," says Steven Pearson, MD, a senior fellow at AHIP. "The conversations are ongoing about the best way to structure it, and the fairest way, the way that makes the best sense, to share this process."
Nice to be popular
The lack of information about how one treatment compares to its alternatives has frustrated providers, patients, and payers for decades. But over a period of just several months, that frustration has turned into energy to make such research happen.
Pearson attributes the enthusiasm to a confluence of factors: rising health care costs; societal demands for innovation and the resulting questions about whether expensive innovation is better than its predecessor; and a wave of comparative-effectiveness research and policymaking in other countries. Add to that the growing conviction that the Medicare Trust Fund is dancing its merry way toward depletion, and consensus on CE, as comparative effectiveness is sometimes called (not to be confused with continuing education), came quickly.
"People are saying, 'We spend a ridiculously small amount trying to evaluate the comparative effectiveness of what we do in our health care system, and we're heading toward a fiscal cliff,'" Pearson says. "Wouldn't it make more sense to provide good, solid evidence for decision-makers up and down the line as a smart way to attack multiple problems, including quality, safety, and cost issues?"
The first legislative proposal — the Enhanced Health Care Value for All Act, introduced in May by Rep. Tom Allen, a Maine Democrat and Rep. Jo Ann Emerson, a Missouri Republican — won public support from many organizations that count: AARP, American Academy of Family Physicians, Consumers Union, National Business Group on Health, Aetna, UnitedHealth Group, and BCBSA.
Separately, three presidential candidates — Hillary Clinton, Barack Obama, and John Edwards — have each called for creation of a national institute to oversee comparative-effectiveness research. Meanwhile, various legislators and other influential folk — for example, Sen. Max Baucus, a Montana Democrat, and Newt Gingrich, former Republican speaker of the House — lined up to support the BCBSA's proposal, suggesting bipartisan consensus that the time is right.
"This actually could happen," says Gail Wilensky, PhD, the former administrator of the Health Care Financing Administration and chairwoman of the Medicare Payment Advisory Commission.
An economist at Project Hope and a leading player in the comparative-effectiveness movement, Wilensky says CE appeals because it addresses both health care quality and efficiency in one sweep.
"Not only do we not have great outcome statistics in general, but your likelihood of getting the right things done if you go to a hospital are not so hot — and that's pretty appalling," she says. "A lot of people see comparative effectiveness research as a building block for how to spend smarter, which is how I view it."
What of today's evidence?
The CE movement builds as physicians and their patients are being asked to make treatment decisions based on evidence. Dozens of P4P and other quality initiatives already have emerged to steer medical practice toward evidence-based medicine, and more are being prepared.
"How can we be paying for performance when so many questions about what we don't know are being raised?" Wilensky asks.
The short answer, which Wilensky herself provides, is that most P4P programs currently in place use basic measures for which evidence is clear-cut. At the same time, those programs are creating a new appreciation for evidence and the infrastructure for rewarding the use of evidence in treatment decisions.
"The more you start to base your practice and conversations with patients on good evidence, the more you realize that we could use more objective, transparent evidence to help us," Pearson says.
Specifically, Wilensky is looking for a new level of sophistication from the future research findings. She believes that researchers can identify which treatments are most appropriate for individual patients.
"We have tended to look too much in a binary way at the world, saying something is effective or it's not effective," she says. "We really need to have a better understanding about when we're likely to have a substantially good clinical outcome with a particular intervention for somebody with a set of conditions — and when it's either much less clear or when it may not be clear at all."
Where agreement fades
Although the groundswell of support for the concept of comparative effectiveness research is impressive, all advocates know where the happy whistling ends: Should cost-effectiveness be an element of comparative-effectiveness decisions?
In a Health Affairs interview, David Eddy, MD, PhD, said America's refusal to think about costs got us where we are today.
"Our failure to explicitly consider costs in medical decision-making is the single greatest flaw in our health care system," he says. "It is not only an omission; it is actively causing great harm. It is just plain nonsensical for a society to go off and buy expensive things without having any concern at all for what they cost."
Eddy, senior adviser for health policy and management at Kaiser Permanente, founded the KP-sponsored Archimedes simulation model for health care decision-making. He says he introduced the term "evidence-based" into the health care vernacular.
His interviewer was Sean Tunis, MD, a former top CMS official who is director of the Center for Medical Technology Policy. Tunis agreed with Eddy's criticism, but reminded him of how antithetical that idea is to American thinking about health care. The nation's largest health care payer — CMS — is forbidden to consider cost in its coverage decisions.
"When you weigh costs against benefits in health care, it could reasonably be called rationing...." Tunis said in the interview. "It's still hard to imagine saying to Mrs. Jones, 'There is a reasonable chance that this would help your mom, but we just think it's too expensive and there are better ways we could use the money.'"
That difficulty, Wilensky says, is why any document with a dollar sign needs to be left out of the new comparative-effectiveness institute or center.
"I am pleading not to bring cost and cost-effectiveness into the center, either right now or at all, for both political and technical reasons," she says.
In her view, cost-effectiveness research and cost considerations in decision-making are appropriate, but not to be tied to comparisons of clinical effectiveness. She says that costs should be considered separately, after comparative-effectiveness findings identify the best treatment option.
Not worth the risk
Making cost-effectiveness a component of comparative-effectiveness analysis is not worth the risk, she says. It complicates the decision-making and increases the likelihood that political opposition would scuttle the comparative-effectiveness center before it got off the ground.
"You will have far less lobbying effort on the part of industry and the patient advocacy community if you don't have it in there," Wilensky says. "Those groups are very uneasy about having cost-effectiveness directly in."
Jennifer Bright, convener of the National Working Group on Evidence-Based Health Care, a coalition of more than 30 consumer, caregiver, and research organizations, backs that up.
"Too often, the assumption is that we oppose the role of cost as a factor in evaluating options, and I would say that's not true," she said at a conference on comparative-effectiveness research. "Cost is a factor, but in a later stage of the decision-making."
Health plans beg to differ
Regardless of where other stakeholders line up politically, the private payer community wants cost-effectiveness to be up front and center.
"It can ultimately be a mistake to separate out those two processes," Pearson says. "We do believe that it can be done linked within a federal enterprise with the rigor and transparency that's necessary to actually make that kind of information legitimate."
The decision models needed for comparative-effectiveness analysis are based on those used in cost-effectiveness studies. Including clinical effectiveness and cost-effectiveness in the same analysis allows a study to consider downstream effects on utilization and care, clarifying the full implications of a decision.
That said, Pearson agrees with Wilensky that putting cost-effectiveness into the mix will make comparative-effectiveness a more difficult sell.
"From the purely pragmatic political perspective, the words cost effectiveness often cause concerns and the easiest political pathway toward comparative effectiveness would be to leave cost and cost-effectiveness out," he says. "But the easy pathway is often not the best. That's why a lot of people are continuing to talk about the value that would be lost if we leave cost and cost-effectiveness on the outside."
Line in the sand?
Wilensky says putting them on the inside jeopardizes the whole idea. Whether AHIP or BCBSA would withdraw their financial support if cost does not stay in the equation remains to be seen.
Wilensky is the only one saying the issue is a line in the sand.
"It is political death to put it in," she says. "Would you rather not have this concept at all? I think that's a really bad idea for them. The payers are the first and foremost gainers of comparative clinical effectiveness."
User guide for health plans
Comparative-effectiveness information could help payers — private and public — make better policies, says MedPAC Executive Director Mark Miller.
Among his ideas:
- Creating a tiered payment structure that encourages providers to focus on high-value services
- Prioritizing pay-for-performance measures
- Creating a tiered cost-sharing structure that requires lower copayments or coinsurance for services that show more value
- Not paying the additional cost of a more expensive service if evidence shows that it is clinically comparable to its alternatives
- Requiring manufacturers to give a rebate to the payer for services that do not yield expected outcomes for the patient
- Prioritizing disease management initiatives
What health plans will pay for
The proposal currently before Congress would make $3 billion over the next five years available to the Agency for Healthcare Research and Quality to kick its now-modest level of comparative-effectiveness research into high gear.
That amount includes $1 billion from Medicare with the remainder coming from health plans and self-funded employers.
Justifying the cost
Advocates express no worries about the wisdom of investing in the research enterprise.
Steven Pearson, MD, a senior fellow at America's Health Insurance Plans and former Medicare adviser, points to a $70 million study, funded in part by Medicare, that weighed the risks and benefits of lung-volume reduction surgery. Patients and providers used the findings in a rational way.
"Without Medicare lifting a finger, the use of that surgery dropped tremendously and the federal government has saved hundreds of millions of dollars since — just by funding a single study," he says.
The Enhanced Health Care Value for All Act, proposed by Rep. Tom Allen, a Maine Democrat, and Rep. Jo Ann Emerson, a Missouri Republican, calls for an independent advisory board that would recommend whether to establish one or more federally funded research and development centers to oversee the research agenda.
Several other governance models have been suggested. One of the key principles of BCBSA's proposal, for example, is that the institute should be chartered as an independent, private entity.
Scope of work
Gail Wilensky, PhD, at Project Hope, says randomized clinical trials that pit one treatment against another would be just one of many research methods. She also hopes to see "real world" studies that include people with a range of comorbidities and demographic attributes in the study populations, as well as observational analyses using payers' databases.
All data sources — including clinical trials that eliminate people with certain characteristics — have limitations that need to be explained so that research findings can be used in helpful ways.
"All of this is really trying to refine and narrow the range of ignorance and to recognize the likelihood that somebody with a certain set of conditions and symptoms will gain clinically from a particular intervention," she says.
For further reading
"Research on the Comparative Effectiveness of Medical Treatments: Options for an Expanded Federal Role," Congressional Budget Office Testimony, June 12, 2007.
"Producing Comparative Effectiveness Information," Medicare Payment Advisory Commission Testimony, June 12, 2007.
"Reflections on Science, Judgment, and Value in Evidence-Based Decision Making: A Conversation with David Eddy," by Sean R. Tunis. Available online at «http://content.healthaffairs.org/cgi/content/abstract/hlthaff.26.4.w500v1»