Comparative Effectiveness: An Idea Whose Time Has Finally Come

It is exactly what managed care has been promoting for years, but some people worry that the information it creates could be misused

Martin Sipkoff

Contributing Editor

Federally mandated comparative effectiveness research — a $1.1 billion allocation in the American Recovery and Reinvestment Act of 2009, known to most people as the “stimulus bill” — is either a godsend to payers and to doctors and their patients or a first step toward federally mandated euthanasia. Depends on whom you listen to.

“Some people are afraid of government,” says Brent James, MD, executive director of Intermountain Healthcare’s Institute for Health Care Delivery Research in Salt Lake City. “They should be more afraid of their doctors’ lack of information.”

The purpose of comparative effectiveness research (CER) is to get data into the hands of doctors — information that reflects which treatment, device, or drug is best suited to the patient. “This is information they now lack,” says Mark Fendrick, MD, a professor in the department of internal medicine at the University of Michigan medical school and a strong advocate of CER. “What people tend not to talk about is how bad the status quo is.”

Both James and Fendrick are firm in their faith in physicians’ desire to do the right thing. But in spite of recent efforts by health plans and federal organizations such as the Agency for Healthcare Research and Quality (AHRQ) to get evidence-based medicine protocols into doctors’ hands, too little information exists about which drug, device, treatment, or diagnostic tool is better than another for the same medical condition.

The purpose of CER is to correct that deficit, according to the legislation. The stimulus bill authorized a council of 15 federal employees to coordinate research and advise President Obama and Congress on how to make the best use of the $1.1 billion. It was created March 19 by the Department of Health and Human Services.

The $1.1 billion is divided among AHRQ ($300 million), the National Institutes of Health ($400 million), and the Department of Health and Human Services ($400 million). AHRQ is giving $1.5 million to the Institute of Medicine to make a recommendation report by June 30.

“The majority of its members are clinicians,” says Carolyn Clancy, MD, director of AHRQ and a member of the council, which is named the Federal Coordinating Council for Comparative Effectiveness Research. “It is designed to include subpopulations of interested parties, including mental health experts, people with disabilities, and minorities.”

The composition reflects the concerns expressed by several advocacy organizations that CER must be designed to consider the needs of a diverse national population. But notwithstanding those concerns — and an energetic vocal opposition that worries about government-mandated treatment restrictions — CER appears to be an idea powered by necessity, with enough support to eventually make it a standard for quality of care.

For example, HealthPartners, a not-for-profit integrated health care organization in Bloomington, Minn., has long used CER data available from a variety of sources, says associate medical director Patrick Courneya, MD.

“Unfortunately, the amount of evidence is pretty limited,” says Courneya, a practicing family physician. “The private sector hasn’t come up with a solution that works very well. We need all the help we can possibly get.”

Health plan support

Health care officials, including health plan medical directors, appear to universally favor CER. So do health care trade organizations, such as the Pharmaceutical Care Management Association (PCMA, the PBM lobby), America’s Health Insurance Plans (AHIP, the HMO lobby), and the Pharmaceutical Research and Manufacturers of America (PhRMA) — a group whose drug company members could be expected to have reservations about competitive research. Most consumer groups, such as AARP, are unstinting in their support.

AARP, in a statement of support, in fact accused opponents of the CER provision of using scare tactics and false reports to perpetuate lack of adherence to quality standards.

“AARP strongly opposes any attempts that would limit doctors and hospitals from providing the best possible care to their patients,” said CEO Bill Novelli in a prepared statement.

“This is really a no-brainer,” says Mark Merritt, president and CEO of PCMA. “It makes no sense to oppose these tools. Actually, people are demanding this kind of information. The idea that it will lead to government mandates is a red herring. People are demanding to know the value of what they are paying for.”

“We strongly support providers and patients knowing which treatments are the most effective,” says AHIP spokesman Robert Zirkelbach. “The more evidence we have, the better off the entire health system will be. Incorporating CER with formulary tiers, step therapy, and other tools will enhance the ability of our members to make decisions. That is the point of this, to provide information, not to set coverage decisions.”

Zirkelbach, James, Fendrick, and many others say the situation must improve, and quickly. Quality of care is suffering for lack of CER. They point to a seminal Rand study titled “The Quality of Health Care Delivered to Adults in the United States,” published in the New England Journal of Medicine in June 2003, that found that only 54.9 percent of patients receive care recommended by evidence-based guidelines. CER takes evidence-based medicine one step further by making direct comparisons between medical options, not simply prescribing approved therapies.

In 2004, Paul Keckley, PhD, then executive director of the Vanderbilt Center for Evidence-Based Medicine in Nashville and now director of the Deloitte Center for Health Solutions in Washington, D.C., concluded in an article titled “Evidence-Based Medicine in Managed Care: A Survey of Current and Emerging Strategies,” that “there remain significant challenges in the implementation of evidence-based care management by plans, including . . . substantial distrust among providers.”

“We don’t believe that situation has improved much, and more concrete data, disseminated by health plans, will make the situation better,” says Zirkelbach.

The PhRMA statement mentions the AARP stance: “Simply put, we agree with AARP in advocating on behalf of research that ‘could save your life by giving your doctors better information so they can prescribe the best treatments available to you.’ Such research must focus on medical outcomes, rather than cost-effectiveness analysis that has a long track record of being used to deny patients needed care.”

BIO, the Biotechnology Industry Organization, supports CER, but is concerned that the information may end up being used only to contain costs. BIO also urges that research methodologies be consistent.

Effectiveness and efficiency

PhRMA’s statement that CER should focus on “medical outcomes rather than cost-effectiveness” points to a major point of contention related to CER. In fact, that issue ended up dictating how the Recovery Act money can be spent. Congressional skeptics about the allocation were successful in removing cost-effectiveness as a consideration in any CER studies conducted using stimulus bill money. The law states that value, i.e., the cost of care, may not be considered in comparative effectiveness research paid for with this federal money.

It states that only comparative effectiveness, not efficiency, may be studied. Its wording specifically separates the federal coordinating council’s research findings from coverage decisions: “Nothing in this section [on coverage rules] shall be construed to permit the council to mandate coverage, reimbursement, or other policies for any public or private payer.”

That provision does not concern HealthPartners’ Courneya, who says his plan can decide for itself what it wants to do with the data. “According to the legislation, it will be the federal role to produce comparative effectiveness information with no mandate about what is done with it when it is available,” he says. “If anything — any process — improves the information available to help patients make good choices, it is constructive.”

“We shouldn’t have to figure out what is best all by ourselves,” says Brent James. “That information is expensive. Good doctors are doing their level best. It’s not their fault; it’s a failure of the current system.”

Although Mark Fendrick believes payers will use the collected data to make cost-effectiveness decisions, he strongly disagrees that CER should not reflect value. He points to the fact that spending on health care totaled $2.2 trillion, or 16 percent of the nation’s gross domestic product, in 2007, and that the Congressional Budget Office estimates that without any changes in federal law, it will rise to 25 percent of GDP in 2025 — a far larger share than in any other country.

“I believe that we do not spend enough money on health care,” says Fendrick. “But we spend unwisely, without sufficient evidence of effectiveness. The overutilization of unnecessary or unproven medical services is rampant, but the current discussion has nothing to do with value, the cost component. The losers in this scenario are the patients. The winners are the people making money but not producing health.”

Current plan efforts

Notwithstanding the current debate, CER is not a new concept to health plans. Although plans are calling for significant expansion of CER through their trade organization and individually, several plans point to the CER-related work they are doing right now. Cigna Healthcare’s program is a good example.

“We have made extensive use of CER for some time,” says Douglas Hadley, MD, a Cigna medical director. He is a member of the company’s coverage policy unit, and his responsibilities include an assessment of the comparative effectiveness of new health technologies.

Hadley says that the company uses published evidence from several available sources to determine whether an emerging drug, device, protocol, biologic, or diagnostic tool is “either superior, inferior, or neutral.” And, he says, cost plays a role in that decision. “It is when the determination is ‘neutral’ that cost becomes important,” he says.

Cigna uses several different sources for its CER data (see “Current Research Efforts,” page 20). According to Hadley, when evidence exists, the company decides which technology has superior outcomes. “In some benefit plans administered by Cigna, only the lowest-cost alternative, where such alternatives are clinically equivalent, is covered; in other benefit plans, the consumer may choose either technology but may have higher copayments or coinsurance for the more expensive of the two competing, but clinically equivalent, health care technologies.”

In determining the cost of a particular technology, Hadley says, Cigna considers the total associated cost, including the direct and indirect costs, such as laboratory monitoring, average length of stay during hospitalizations, and so forth.

“We wholly support this CER legislation,” says Hadley, “and we will participate with other health insurance industry representatives, AHIP, consumer agencies, private employers, and governmental agencies in advising the administration on which projects will probably provide the most effective use of the funds for CER studies.”

He adds that value should be part of the equation. “Cigna believes that CER should include a comparison of total cost associated with that technology. It is an important component of health care reform, improving the quality of care given to consumers and controlling costs.”

And, say Hadley and others, it is not new to the health systems of other nations. Britain, France, and several other countries have government-sponsored CER programs.

Also, for several years, CER-oriented programs — both privately and publicly funded — have existed in this country. In fact, AHRQ has had a program in place since the Medicare Modernization Act of 2003.

It is really very simple, says PCMA’s Merritt. “What is needed is knowing what works and what doesn’t,” he says.

“More information is always better than less information,” says Fendrick. “To do less than that is unconscionable.”

Defining comparative effectiveness research

The American Recovery and Reinvestment Act of 2009 states that the $1.1 billion CER allocation “shall be used to accelerate the development and dissemination of research assessing the comparative effectiveness of health care treatments and strategies, through efforts that: (1) conduct, support, or synthesize research that compares the clinical outcomes, effectiveness, and appropriateness of items, services, and procedures that are used to prevent, diagnose, or treat diseases, disorders, and other health conditions; and (2) encourage the development and use of clinical registries, clinical data networks, and other forms of electronic health data that can be used to generate or obtain outcomes data.”

The legislation is available online; look for H.R. 1, version 8.

Support For Stimulus Bill Meets Detractors

Opposition to the $1.1 billion stimulus bill allocation for comparative effectiveness research is, in some quarters, virulent, but not across the board. Some are more resistant than opposed.

Although most trade organizations, such as PhRMA, publicly support CER, somewhat muted resistance is taking place behind the scenes. In November 2008, lobbying organizations representing the drug, device, and biotechnology industries, as well as some patient-advocacy groups and professional medical societies, formed a coalition named the Partnership to Improve Patient Care (PIPC). According to its spokesman, PIPC wants to make sure that CER is not used “in an inappropriate manner that may limit treatment options for patients.”

PIPC members include Easter Seals, Friends of Cancer Research, and the Alliance for Aging Research. All have expressed concern that people with disabilities or particularly costly medical needs could find themselves being denied care if CER takes cost into consideration.

According to an analysis by the Wall Street Journal, “a major goal [of PIPC] is to give industry a seat at the table when federal officials decide what to research with the $1.1 billion.” PIPC’s new chairman is Tony Coelho, a California Democrat and former congressional powerhouse. He suffers from epilepsy and was a key author of the 1990 Americans With Disabilities Act.

In a prepared statement, he said, “I will work to make sure CER achieves [its] potential, and does not become a basis for denying patients access to the care they need.”

Some manufacturers whose products could be affected by CER have shown a mixed response and express relief that value is not a consideration in the new federal program. Quoted in the New York Times, Andrew Witty, chief executive of the pharmaceutical company GlaxoSmithKline, pointed to European efforts at CER and called the results mixed.

“Comparative effectiveness is a useful tool in the tool kit, but it is not the answer to anything,” said Witty. “Other countries have fallen in love with the concept, then spent years figuring out how on earth to make it work.”

Mark Merrill, a resident scholar at the Institute for Policy Innovation, said in a March 12 commentary in the Detroit News that there is nothing wrong with the effort “to establish which drugs or devices work best for given medical problems, on certain patients, and under specific conditions. But there is a dirty little secret to CER when it comes to prescription drugs. When the government sets up a process to do the research, there may be an ulterior motive: to save money, even at the expense of patient health and satisfaction.”

Some commentators are more reactionary, becoming apoplectic at the idea of the federal government spending money on CER.

The conservative commentator Rush Limbaugh and the Fox News anchor Bill Hemmer both referred to the CER allocation as a possible first step toward federally mandated denial of care, especially for the very sick.

The Washington Times published an editorial titled “Health ‘efficiency’ can be deadly” next to a photo of Adolf Hitler:

There is no telling what metrics will be used to define the efficiencies, but it is clear who will bear the brunt of these decisions. Those suffering the infirmities of age, surely, and the physically and mentally disabled, whose health costs are great and whose ability to work productively in the future are low. . . . This notion is fully in the spirit of the partisans of efficiency but came from a program instituted in Hitler’s Germany. . . . Under this program, elderly people with incurable diseases, young children who were critically disabled, and others who were deemed non-productive, were euthanized. This was the Nazi version of efficiency. . . .

A more reasoned argument is posed by the economist Gail Wilensky, PhD, who formerly headed Medicare. She has long been a public and vocal supporter of the development of a national center for comparative research information. But in a June 2008 editorial in the Annals of Internal Medicine titled “Cost-Effectiveness Information: Yes, It’s Important, But Keep It Separate, Please!” she said, “I think it is vitally important to keep comparative clinical effectiveness analysis and cost-effectiveness analysis separate from each other. Payers, not a national clinical comparativeness program, should do cost-effectiveness analyses — and act on them.”

Current Research Efforts

Comparative effectiveness research is being conducted now by several public and private organizations.

  • ICER, the Institute for Clinical and Economic Review, launched in 2006, is an academic technology assessment initiative based at the Massachusetts General Hospital’s Institute for Technology Assessment (ITA).
  • Hayes Inc. is an independent health technology research and consulting company. It performs evidence-based health care technology assessments of the safety and efficacy of new, emerging, and controversial health technologies, and evaluates the effect of these technologies on health care quality, utilization, and cost. Hayes’ clients include hospitals, health care systems, government agencies, employers, and managed care organizations.
  • The Cochrane Library contains independent evidence to inform health care decision-making. It includes reliable evidence from the Cochrane Collaboration and other systematic reviews and clinical trials. Cochrane reviews combine results of worldwide medical research studies and are recognized as a standard in evidence-based health care.
  • ECRI Institute, a not-for-profit organization, uses applied scientific research to study and report which medical procedures, devices, drugs, and processes are the most effective.
  • Drug Effectiveness Review Project (DERP) is a collaboration of organizations that have joined to obtain the best available evidence on effectiveness and safety comparisons between drugs in the same class, and to apply the information to public policy and decision making in local settings.
  • Blue Cross and Blue Shield Association’s Technology Evaluation Center, founded in 1985, provides health care decision makers with objective and scientifically rigorous assessments synthesizing available evidence on the diagnosis, treatment, management, and prevention of disease.
  • AHRQ Effective Health Care, funded by the Medicare Prescription Drug, Improvement, and Modernization Act (MMA) of 2003, maintains a web site that offers comparative effectiveness guidelines on several subjects, such as oral medications for diabetes and the treatment of GERD and prostate cancer.
  • National Institute for Health and Clinical Excellence (NICE), in the United Kingdom, is an independent organization responsible for providing national guidance on promoting good health and preventing and treating ill health. Its data are used by many U.S. health plans.

IOM Focuses On Comparative Effectiveness

In recent years, the Institute of Medicine conducted a series of roundtables to discuss comparative effectiveness research. It published a report in September 2007 titled “Learning What Works Best: The Nation’s Need for Evidence on Comparative Effectiveness in Health Care.”

The IOM concluded that “current practices are not meeting what might be expected for a high value health system.”

To create that value, IOM identified the best standards for determining clinical effectiveness and communicating those results to health plans, doctors, and patients. These include:

Quantity of evidence — The need to develop a robust system for capturing evidence from everyday care

Observational data — The need for quality standards, the right analytics and tools, and an understanding of the circumstances under which observational data should be considered the standard

Planning for evidence development — The need for more coordinated setting of priorities and planning for the most appropriate life cycle of the research enterprise

Quality of evidence — The need to tailor trials to answer specific questions with implications for policy and practice, and the need for quality standards for observational data

Common taxonomy — The need to develop a more common understanding of the evolving concepts and terminology

Interpretation of evidence — The need to develop a common understanding of how to evaluate a body of evidence with consistency

Monitoring results — The need for a better mechanism to understand effectiveness across populations and time

Application of evidence — The need to develop translational rules to improve the consistency with which different forms of evidence are used

Communicating evidence — The need to better understand how to communicate the nature of evidence to the public as well as health care professionals

Incorporating patient preferences — The need to include patient preferences in evidence-based systems

Data as a common resource — The need for collaborative sharing of data as a common resource for the improvement of health and health care

The full report is available from the Institute of Medicine.

“I have a hard time imagining why anyone has even the slightest problem with it,” says Brent James, MD, a key researcher at Intermountain Healthcare. “Doctors make most decisions based on subjective recall, and… their estimates are often way off.”

“The private sector hasn’t come up with a solution that works very well,” says Patrick Courneya, MD, an associate medical director at HealthPartners.

“Cigna believes that CER should include a comparison of total cost associated with that technology,” says Douglas Hadley, MD, who runs that insurer’s program on comparative effectiveness.

“What people tend not to talk about is how bad the status quo is,” says Mark Fendrick, MD, professor at the University of Michigan Medical School.