Research Topics Underpin Comparative Effectiveness
MANAGED CARE November 2009. ©MediMedia USA
The government committee charged with helping health plans and providers choose best treatments suggests 100 areas of interest
One could say that the government, in its backing of comparative effectiveness research, wants to compare apples and oranges. The Committee on Comparative Effectiveness Research Prioritization released 100 topics that it says should be the focus of research. The very first topic illustrates just what those behind the CER push hope to accomplish. “Compare the effectiveness of treatment strategies for atrial fibrillation including surgery, catheter ablation, and pharmacologic treatment.”
“What this means,” says I. Steven Udvarhelyi, MD, senior vice president and chief medical officer at Independence Blue Cross, “is that the experts that sat on the panel with me looking at the available research said that we do not have enough information that tells us the relative effectiveness. Do we know that surgery works? Yes we do. Do we know that catheter ablation works? Yes we do. Do we know that drug treatment works? Yes. But do we know which subset of patients each one of those is best suited to, and what is the differential effectiveness? No, we don’t.”
Udvarhelyi was one of two health plan executives (the other was George J. Isham, MD, medical director and chief health officer at HealthPartners) who sat on the committee, which seeks to allocate the $1.1 billion the government set aside for CER under the American Recovery and Reinvestment Act of 2009, a.k.a. the stimulus program.
The idea is for all stakeholders to know what works better. “It could be a drug versus another drug,” says Udvarhelyi (pronounced “ewd ver hi”). “It could be a drug versus a non-drug intervention. It could be two surgical interventions. It could be a medical intervention versus a surgical intervention. It could be looking at different ways to disseminate information and knowledge to patients. How do you make care more effective? It’s not that it’s not happening today. It’s just not happening on a broad enough scale.”
The committee, under the auspices of the Institute of Medicine, defines CER as “the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.”
The topics were divided into quartiles, with those in the first quartile given higher priority than the rest. It comes down to knowing what works.
“...[I]nnumerable practical decisions facing patients and doctors every day do not rest on a solid foundation of knowledge about what constitutes the best choice of care,” says the committee report, Initial National Priorities for Comparative Effectiveness Research. “One consequence of this uncertainty is that highly similar patients experience widely varying treatment in different settings, and these patients cannot all be receiving the best care.” (For the report, go to http://www.nap.edu/catalog.php?record_id=12648.)
Take this topic, for instance: “Compare the effectiveness of management strategies for localized prostate cancer (e.g., active surveillance, radical prostatectomy [conventional, robotic, and laparoscopic], and radiotherapy [conformal, brachytherapy, proton-beam, and intensity-modulated radiotherapy]) on survival, recurrence, side effects, quality of life, and costs.”
Sheldon Greenfield, MD, former editor of Annals of Internal Medicine and cochairman of the committee, says, “there are five or six major modalities for prostate cancer out there. There has never been a head-to-head comparison. The primary care physicians might tell you the treatment choices, but without any data to support it.”
The committee wants the Department of Health and Human Services to create an advisory board that will oversee a public-private enterprise. The study states: “To ensure research activities that truly embrace the definition of CER, the ARRA funds — and subsequent funding to support CER — should flow through a CER coordinating authority directly to grantees, through federal agencies, or both.”
Greenfield: “No one government agency really has the breadth of study designs or approaches that is needed to cover these topics. But on the other hand, CMS couldn’t make a recommendation about whether there should be a separate independent group, although many people felt that way, or whether it should be housed in AHRQ or some other place.
“Without sort of a central oversight group at the federal level, it is uncertain how these things are going to proceed. Right now they are proceeding through each individual agency. AHRQ specializes in systematic review of the literature and big databases, whereas NIH does trials. So each government agency does its own thing.”
Research organizations that are funded by health insurers, such as the Blue Cross & Blue Shield Association’s Technology Evaluation Center in Chicago, should certainly consider tapping some of the federal funding, says Udvarhelyi. Generally, though, health plans will most likely want to be the happy end-users of the data. “I think health plans have a vested interest in seeing this expanded research be successful.”
Info source matters
Greenfield says that health plan exuberance will depend somewhat on where the data are mined. “Do the HMOs want to participate?” says Greenfield. “Well, maybe they do and maybe they don’t. Let’s say a bunch of HMOs got together, United and a bunch of the Blues, and Kaiser and so forth, and they say, We’ll participate with researchers who do this from our own data and we’ll make sure that it is done right, which is critical. Also, since some of it came from our own kinds of patients, we’ll know more about them.”
Greenfield cites the California Health Benefits Review Program (CHBRP), an organization established under the University of California by the California State Legislature in 2002. By law it is funded by health insurance plans. The money is collected by the two regulatory bodies, the Department of Insurance and the Department of Managed Health Care, and then goes through the Comptroller’s Office to the University of California. The funding primarily goes to a task force made up of faculty from the University of California campuses that have medical centers and schools of public health. The CHBRP effort also includes the three private universities in California with medical centers.
“It happens to be about one subcategory, which is legislative mandates. But the principle is there.” In other words, says Greenfield, HMOs have long recognized that it is in their interest that an unbiased group examines, compares, and then rates different treatments.
“They get better data, which is the critical point,” says Greenfield. “It’s an unbiased group with the best kind of study designs and the best kind of expertise. It just helps the plans make decisions. They don’t necessarily have to make them all themselves based on just about nothing.” (For more on how managed care reacts to CER, see our April cover story at http://www.managedcaremag.com/s/cer).
The memories of backlash over denials of care are still fresh for many health plan medical directors. Conceivably, comparative effectiveness research will take much of that onus off them.
“It can’t help but help medical directors because it will provide the kind of information that will preempt them getting clobbered,” says Greenfield. They can point to the fact that the data come from their own patients, much as plans such as Kaiser Permanente and Geisinger Health Plan do now in their efforts. “They point out that the data doesn’t come from the VA or any other population far removed from the people they serve. The people they serve are part of the data pool.”
Covering the bases?
The Committee on Comparative Effectiveness Research Prioritization had thousands of potential topics to choose from in coming up with the final 100, an arbitrary number. “It actually started out at 50 and we thought that wasn’t quite fair,” says Greenfield. “We wanted to make sure that a lot of bases were covered.”
The topics answer a fundamental question, says Udvarhelyi. “If you are at the clinical level thinking about which of a variety of options are best for patients, do you have the information available to you and the patient to understand the pros and cons and, literally, the comparative effectiveness of each of those options to make that decision?”