Even as demand for evidence-based medicine reaches a crescendo, a chorus of calls for comparative-effectiveness research is rising to drown it out.
Payers, politicians, and policymakers agree that the "E" of EBM needs work, and that it would be a great idea to create a new entity to compare the merits of alternative therapies, devices, and procedures.
Proponents believe that such an entity is key to reducing the high rate of uninsurance in America, and they draw the connection in just two steps.
"When you start looking at the uninsured, that immediately leads you to the affordability of coverage," says Christina Nyquist, director of policy analysis for Blue Cross & Blue Shield Association. "And when you're at affordability, one of the things staring at you is comparative effectiveness."
Indeed, the BCBSA and America's Health Insurance Plans are so enthusiastic about comparative effectiveness research that they have offered to pony up for a new organization that would carry out the research agenda. They expect the federal government and possibly other industry segments to be in the mix — and they want to influence the way the research infrastructure works.
"The health plans are ready to step up to the plate," says Steven Pearson, MD, a senior fellow at AHIP. "The conversations are ongoing about the best way to structure it, and the fairest way, the way that makes the best sense, to share this process."
The lack of information about how one treatment compares to its alternatives has frustrated providers, patients, and payers for decades. But in just the past several months, that frustration has turned into momentum to make such research happen.
Pearson attributes the enthusiasm to a confluence of factors: rising health care costs; societal demand for innovation and the resulting questions about whether each expensive innovation is really better than its predecessor; and a wave of comparative-effectiveness research and policymaking in other countries. Add to that the growing conviction that the Medicare Trust Fund is dancing its merry way toward depletion, and consensus on CE, as comparative effectiveness is sometimes called (not to be confused with continuing education), came quickly.
"People are saying, 'We spend a ridiculously small amount trying to evaluate the comparative effectiveness of what we do in our health care system, and we're heading toward a fiscal cliff,'" Pearson says. "Wouldn't it make more sense to provide good, solid evidence for decision-makers up and down the line as a smart way to attack multiple problems, including quality, safety, and cost issues?"
The first legislative proposal — the Enhanced Health Care Value for All Act, introduced in May by Rep. Tom Allen, a Maine Democrat, and Rep. Jo Ann Emerson, a Missouri Republican — won public support from many organizations that count: AARP, American Academy of Family Physicians, Consumers Union, National Business Group on Health, Aetna, UnitedHealth Group, and BCBSA.
Separately, three presidential candidates — Hillary Clinton, Barack Obama, and John Edwards — have each called for creation of a national institute to oversee comparative-effectiveness research. Meanwhile, various legislators and other influential folk — for example, Sen. Max Baucus, a Montana Democrat, and Newt Gingrich, former Republican speaker of the House — lined up to support the BCBSA's proposal, suggesting bipartisan consensus that the time is right.
"This actually could happen," says Gail Wilensky, PhD, the former administrator of the Health Care Finance Administration and chairwoman of the Medicare Payment Advisory Committee.
An economist at Project Hope and a leading player in the comparative-effectiveness movement, Wilensky says CE appeals because it addresses both health care quality and efficiency in one sweep.
"Not only do we not have great outcome statistics in general, but your likelihood of getting the right things done if you go to a hospital are not so hot — and that's pretty appalling," she says. "A lot of people see comparative effectiveness research as a building block for how to spend smarter, which is how I view it."
The CE movement builds as physicians and their patients are being asked to make treatment decisions based on evidence. Dozens of pay-for-performance (P4P) and other quality initiatives have already emerged to steer medical practice toward evidence-based medicine, and more are being prepared.
"How can we be paying for performance when so many questions about what we don't know are being raised?" Wilensky asks.
The short answer, which Wilensky herself provides, is that most P4P programs currently in place use basic measures for which evidence is clear-cut. At the same time, those programs are creating a new appreciation for evidence and the infrastructure for rewarding the use of evidence in treatment decisions.
"The more you start to base your practice and conversations with patients on good evidence, the more you realize that we could use more objective, transparent evidence to help us," Pearson says.
Specifically, Wilensky is looking for a new level of sophistication from the future research findings. She believes that researchers can identify which treatments are most appropriate for individual patients.
"We have tended to look too much in a binary way at the world, saying something is effective or it's not effective," she says. "We really need to have a better understanding about when we're likely to have a substantially good clinical outcome with a particular intervention for somebody with a set of conditions — and when it's either much less clear or when it may not be clear at all."
Although the groundswell of support for the concept of comparative effectiveness research is impressive, all advocates know where the happy whistling ends: Should cost-effectiveness be an element of comparative-effectiveness decisions?
In a Health Affairs interview, David Eddy, MD, PhD, said America's refusal to think about costs got us where we are today.
"Our failure to explicitly consider costs in medical decision-making is the single greatest flaw in our health care system," he says. "It is not only an omission; it is actively causing great harm. It is just plain nonsensical for a society to go off and buy expensive things without having any concern at all for what they cost."
Eddy, senior adviser for health policy and management at Kaiser Permanente, founded the KP-sponsored Archimedes simulation model for health care decision-making. He says he introduced the term "evidence-based" into the health care vernacular.
His interviewer was Sean Tunis, MD, a former top CMS official who is director of the Center for Medical Technology Policy. Tunis agreed with Eddy's criticism, but reminded him of how antithetical that idea is to American thinking about health care. The nation's largest health care payer — CMS — is forbidden to consider cost in its coverage decisions.
"When you weigh costs against benefits in health care, it could reasonably be called rationing...." Tunis said in the interview. "It's still hard to imagine saying to Mrs. Jones, 'There is a reasonable chance that this would help your mom, but we just think it's too expensive and there are better ways we could use the money.'"
That difficulty, Wilensky says, is why anything bearing a dollar sign needs to be kept out of the new comparative-effectiveness institute or center.
"I am pleading not to bring cost and cost-effectiveness into the center, either right now or at all, for both political and technical reasons," she says.
In her view, cost-effectiveness research and cost considerations in decision-making are appropriate, but not to be tied to comparisons of clinical effectiveness. She says that costs should be considered separately, after comparative-effectiveness findings identify the best treatment option.
Making cost-effectiveness a component of comparative-effectiveness analysis is not worth the risk, she says. It complicates the decision-making and increases the likelihood that political opposition would scuttle the comparative-effectiveness center before it got off the ground.
"You will have far less lobbying effort on the part of industry and the patient advocacy community if you don't have it in there," Wilensky says. "Those groups are very uneasy about having cost-effectiveness directly in."
Jennifer Bright, convener of the National Working Group on Evidence-Based Health Care, a coalition of more than 30 consumer, caregiver, and research organizations, backs that up.
"Too often, the assumption is that we oppose the role of cost as a factor in evaluating options, and I would say that's not true," she said at a conference on comparative-effectiveness research. "Cost is a factor, but in a later stage of the decision-making."
Regardless of where other stakeholders line up politically, the private payer community wants cost-effectiveness front and center.
"It can ultimately be a mistake to separate out those two processes," Pearson says. "We do believe that it can be done linked within a federal enterprise with the rigor and transparency that's necessary to actually make that kind of information legitimate."
The decision models needed for comparative-effectiveness analysis are based on those used in cost-effectiveness studies. Including clinical effectiveness and cost-effectiveness in the same analysis lets a study account for downstream effects on utilization and care, explaining the full implications of a decision.
That said, Pearson agrees with Wilensky that putting cost-effectiveness into the mix will make comparative-effectiveness a more difficult sell.
"From the purely pragmatic political perspective, the words cost effectiveness often cause concerns and the easiest political pathway toward comparative effectiveness would be to leave cost and cost-effectiveness out," he says. "But the easy pathway is often not the best. That's why a lot of people are continuing to talk about the value that would be lost if we leave cost and cost-effectiveness on the outside."
Wilensky says putting them on the inside jeopardizes the whole idea. Whether AHIP or BCBSA would withdraw their financial support if cost does not stay in the equation remains to be seen.
Wilensky is the only one saying the issue is a line in the sand.
"It is political death to put it in," she says. "Would you rather not have this concept at all? I think that's a really bad idea for them. The payers are the first and foremost gainers of comparative clinical effectiveness."