Payers Step in With ‘Real-World’ Comparative Effectiveness Research
MANAGED CARE June 2011. ©MediMedia USA
With access to large patient populations, health plans are in a position to mine a fresh set of data, but concerns about bias remain
Every new therapeutic application submitted to the FDA comes packaged with a full set of efficacy and safety data that can take up to 10 years and hundreds of millions of dollars to collect. And, if they are approved, those new therapies frequently arrive on the market with head-to-head data comparing the manufacturer’s new drug with at least one standard of care, according to one study in the May 4, 2011, issue of the Journal of the American Medical Association. (“Availability of Comparative Efficacy Data at the Time of Drug Approval in the United States.” Pages 1786–1789.)
In that study, investigators sifted through NDAs filed over the past decade and determined that 7 out of every 10 new drugs faced with rival treatments already in use — essentially every therapy not stamped with an orphan indication — were filed with head-to-head comparative data attached. For biotech and pharma companies, the ideal new drug debuts on the market fully prepared to compete for market share and angling for favorable placement on payers’ formularies.
But comparing treatments in clinical trials typically relies on the drug developers’ ability to recruit a sampling of patients — often just a slice of the patient population that awaits the new drug. Looking for a better cross section of patients spread across genders, ages, and ethnicities, payers are stepping up to fill in the gaps with their own comparative effectiveness research work.
“If we do it right,” says Felix Frueh, PhD, who oversees Medco’s expanding research efforts in personalized medicine, “we can learn a lot about how drugs work and for whom, in what I believe is a much more realistic real-world environment.”
“Historically, comparative effectiveness research has been done, but the availability of data relevant to health care decision-makers has been limited,” says T. Jeffrey White, PharmD, director of drug evaluation and clinical analytics at WellPoint. “It’s unlikely that all pharmaceutical manufacturers will fully embrace comparative effectiveness research comparing drug A to drug B to drug C since there may be some risk in that one drug may result in better health compared to other drugs. Additionally, they may not have large-enough patient populations within a clinical trial setting to include several comparators. But WellPoint has one of the largest data pools and can be a valuable source of patient information.”
To get that “real-world” look at how drugs compare on efficacy and cost, several insurers are systematically reviewing all the available data and studies in the literature, recruiting patients at the pharmacy for their own studies, and mining mountains of pharmacy and medical payment claims databases to come up with a fresh set of numbers to ponder.
Every quarter now, WellPoint completes two or three comparative effectiveness studies, examining how treatments in a particular disease category stack up on effectiveness and cost, says White. Each one takes from one to two months to complete.
“We start by carefully reviewing the clinical data,” says White. “We don’t consider costs until later in the process. High-quality clinical trial data are used to make a determination of which drug will likely result in the best health. We will limit the use of studies with potential biases. Then we conduct comparative effectiveness research analysis to determine which therapies are resulting in the best health in a ‘real-world’ population. Then we look at the market share, drug trends, and costs through an actuarial review. The final process is a value assessment using clinical efficacy, effectiveness, real-world outcomes data, member impact data, and cost to determine the best therapies. We will make a final tier placement or edit determination to encourage utilization of the best therapies.”
That’s the approach WellPoint used on one drug (drug A), tracking several years of data on the experiences of 25,000 members suffering from osteoporosis and comparing outcomes with groups prescribed the competing Fosamax and Actonel.
“We followed new starts [newly prescribed patients] over three years and looked at pharmacy use and medication adherence,” says the pharmacist, as he recounted one of the highest-profile studies the company has undertaken. “Is drug A at once monthly better for adherence? And what was the difference in fracture rates? If one medication is better than another, we want to encourage members to use medications with lower fractures, which should also result in lower medical costs.”
Both Fosamax and Actonel scored better than drug A on key factors. Actonel scored high on compliance and efficacy for patients taking that drug, while members taking Fosamax suffered fewer fractures, preventing the need for expensive interventions. Drug A wound up on the health plan’s tier for non-preferred branded drugs. And Aetna followed suit.
“The primary goal is to improve the health of our members,” says White. “We want to reduce cardiovascular events, prevent asthma exacerbations, fractures and so on. If we improve the health of our members, we should reduce health costs overall. We also see where more expensive medications could be less expensive overall.
“We’ve seen this in COPD,” adds White. “We looked at three common inhalers for COPD and found that the older medication, which was less expensive on the pharmacy side, was more expensive in terms of higher hospitalizations and ER visits compared to the more expensive drug. The higher cost was offset.” As a result, Spiriva is on WellPoint’s second tier — preferred branded therapies.
“Health plans that offer pharmacy benefits are best positioned to do more and more of this type of analysis,” says White. “And we will see more and more of this work being done.”
That same approach to cost-effectiveness led RegenceRx — a PBM of the Regence Group — to embrace Herceptin, the breast cancer drug that is many times more expensive than competing therapeutic options.
The health plan’s approach to comparative effectiveness research has evolved, says Lynn Nishida, RPh, director of clinical pharmacy services at RegenceRx. Over the last four or five years, the plan began to look for more data that compared drugs used for the same indication. At best, it was offered head-to-head data from some pharma companies.
Now RegenceRx tries to extract the most relevant data available from clinical trials. Then the company will use its own claims data or the results of other studies to try for a real-world look at what works best, where the best compliance rates are found, and how generics fare against branded therapeutics.
It doesn’t happen overnight.
“For this kind of real-world analysis, it takes time for data to accumulate,” says Nishida.
“We’re really looking at strategies to manage a disease or a condition,” says Naomi Aronson, who has been helping shape the comparative effectiveness field over the 26 years she’s been working at the BlueCross BlueShield Association’s Technology Evaluation Center. “We’re looking at a condition in its entirety.” Plenty of work is required to achieve the kind of goals longtime adherents of the field are pursuing.
In any drug study, researchers are “looking for a clean result,” says Aronson, who relies heavily on a systematic review of existing studies to compare treatments. But payers are acutely aware that some patients do better than others, and health plans have the opportunity to explore which subgroups of patients — by age, sex, or ethnicity — will respond best.
“Think how few studies include patients with comorbidities,” says Aronson. “In clinical trials, people with other diseases and conditions are often excluded” for the obvious reason that they would probably skew the outcome.
“In the real world, especially when you are addressing chronic conditions that afflict people in midlife and later, you will see comorbidities,” she adds. “So we need to understand different subsets of patients. Think of age. In the very young, we often know little about pediatric uses. And, at the other end, how little we know about how medications act in the elderly, the old-old. There are other elements. Women are often under-represented in clinical trials. We want to know about sex, ethnicity, and genetic predispositions. Those are not well-understood.
“We also know there are issues of adherence. Taking medications in the real world is complicated by adverse effects. And some are more mundane effects, such as becoming nauseated or dizzy or whatever. Comparative effectiveness is putting all of these pieces together. We want to get a full picture and then also understand patient preferences and what’s important to them,” says Aronson. “This could include side effects on libido and more. This is the big vision.”
Seeing where gaps remain will help Aronson and her colleagues charged with setting priorities on the methodological standards for comparative clinical effectiveness research that will be pursued by the Patient-Centered Outcomes Research Institute (PCORI), a new $1.1 billion CER initiative established by the Affordable Care Act.
Even after PCORI is up and running, insurers will be left to their own devices. The public initiative was barred by law from comparing cost-effectiveness after several legislators voiced concern that its studies could be used to restrict care. But despite the U.S. government’s refusal to follow in the footsteps of such groups as the UK’s National Institute for Health and Clinical Excellence (NICE), drug makers are well aware of the bar being raised by payers.
Listening to customers
“We’ve really been listening to our payer customers,” asserts Brian Sweet, executive director of development for AstraZeneca, which recently teamed up with WellPoint’s HealthCore subsidiary to conduct comparative effectiveness research on drugs used for chronic diseases. “There’s a lot of pressure related to shrinking the health care dollar. Economic pressures are not just affecting insurance payers but even government payers. State budgets are under water. AstraZeneca’s vision is to find out how we can improve patient health while lowering the overall cost of care.”
That will take real-world evidence about the comparative effectiveness of drugs that goes well beyond the limits of the efficacy and safety data needed to win regulatory approval for marketing a drug — evidence such as the effect a drug can have on hospital readmissions. “That’s evidence that matters to people,” says Sweet.
“We’re asking for evidence that matters way beyond what the FDA requires,” adds Sweet. “We’re actually looking to use a real-world evidence approach to measuring cost of care while drugs are in development, even in discovery. There are things we can answer from longitudinal patient data that give us insights into the disease.”
The American College of Physicians has long asserted the need for more of this kind of comparative analytics. But the ACP is sensitive to the potential for an appearance of a conflict when insurers start considering relative costs.
“Health plans have been doing comparative effectiveness for a while,” notes Neil Kirschner, senior associate for regulatory and insurer affairs with the ACP. “Blue Cross Blue Shield has a very well known comparative effectiveness center, a respected one. So the fact that these private entities are doing it is nothing new. But there’s always a concern about bias.”
When a study is funded by a managed care company, it’s natural to wonder if a company simply wants to steer more patients to the least expensive therapy. That’s why the ACP wants insurers to present comparative data on drugs without handcuffing providers on which drugs to prescribe.
Insurers say they too are sensitive to the concern. “There is a stigma that health plans’ research is based on drug costs only, so we recognize there’s a need to reach out to physicians,” says White.
But there’s no backing away at this point.
“Efficacy rates of drugs can range from 20 percent in some areas in oncology to 70 percent in pain management,” says Frueh. “The point is that it’s 50-50 overall whether a drug works for you. What does it mean when you bring in a new drug that works in 60 percent of the patients? Does that mean it’s better than a drug that works in 20 percent?”
The answer is yes if the new drug is helping more of the patients who aren’t already being treated effectively, no if it’s aiming at the patients who can already be treated with existing drugs.
“Personalized data are critically important,” Frueh sums up. “Without these studies — who responds and who does not — we’re not really making a lot of progress.”
The personalized approach
At the big pharmaceutical benefit manager Medco, the discussion about personalized medicine — using diagnostic tests to identify which patients would be most likely to respond to a standard therapy like warfarin and which might suffer serious side effects — helped trigger the move toward comparative effectiveness research five years ago.
“The Medco board wanted to see more engagement of Medco in the research environment,” says Frueh. “How can we amplify that message — bring new technology to patients in such a way that they benefit from advances? That’s when we started to create a more formal R&D environment. We engaged in a number of different research activities: personalized medicine, pharmacy research, and so on. That’s where the comparative effectiveness piece came in strongly.”
Medco, which helps manage the pharmacy benefit for 65 million members, keeps both a clinical and a pharmacoeconomic perspective in mind.
Back in the fall of 2009, Medco outlined an ambitious study, stretching out more than two years, to see how Plavix, which is facing an onslaught of generic competition once it loses patent protection in 2012, fared against the then newly approved Effient. The PBM’s investigators are using diagnostic tools to identify the 25 percent to 30 percent of patients who have a genetic mutation that leaves them unable to metabolize Plavix properly.
Medco’s studies identified “70 percent of the population we believe respond well to clopidogrel [Plavix, which earned $6.1 billion in the U.S. in 2010 for the Bristol-Myers Squibb/Sanofi Pharmaceuticals Partnership],” says Frueh. If they compare equally to the patients on prasugrel (Eli Lilly’s Effient), then payers will be able to steer members to the less-expensive generic, assured that they are getting comparable care at a greatly reduced cost.
“Millions of patients are on the drug, and that translates into an enormous savings,” says Frueh. “But that only works if the clinical outcome is as good or better. If you were in a situation where you increased the number of cardiovascular events, you couldn’t ignore that additional cost. And it’s wrong for patients. We’re hoping that, by identifying patients who benefit from Plavix, we can reduce adverse events.”
That study, which involves some 14,000 patients, will be finished late next year. No study centers or trial sites are needed for the CER work.
“We can enroll patients at the time the patient comes to the pharmacy for a prescription,” says Frueh. By tracking claims data, investigators also can track outcomes and costs.
ECRI opens a portal on comparative effectiveness
Looking for some in-depth background on the comparative effectiveness discussion?
The not-for-profit ECRI Institute, which routinely reviews health care technologies in the marketplace, put together a special site on its home page to track the topic, with a special focus on the government’s new Patient-Centered Outcomes Research Institute (PCORI) created by the Affordable Care Act.
ECRI’s analysis includes recordings of national policy conferences, expert perspectives, and links to some of the top groups weighing in on how comparative effectiveness studies need to be done. It includes statements from such groups as AHRQ, AHIP, AdvaMed, and the AMA — even congressional testimony dating back to 2007. And it includes samples of the systematic review work the institute has been doing on various treatment methods over the years.
“We’ve been doing comparative effectiveness studies since before it became a popular buzz term,” says Laurie Menyo, the spokeswoman for ECRI. These days, the preferred phrase at ECRI is patient-centered outcomes research, which ties more closely to the new PCORI initiative.
One of its most recent reports delves into various treatments for bulimia nervosa, comparing cognitive behavioral therapy with the pharmaceutical treatments and other psychotherapies now in use. The institute’s investigators also scoured the Web sites of 19 health plans to get a better idea of how this condition is covered. In addition to the reports styled more for payers and providers, ECRI created an easy-to-use guide for patients and their families at bulimiaguide.org.
Go to ecri.org/ce to find out more.