Real-World Evidence Not Quite Believable Enough

In theory, this approach could help untangle some knotty cost and quality concerns about medications as they move from clinical trials and into clinical use. But there’s that credibility issue.

First of two parts

The idea of using real-world evidence (RWE) to improve health care is just plain common sense. Clinical guidelines to aid practitioners’ everyday decisions aren’t available for 85% of services, and there is a constant onslaught of new procedures and treatment options that need to be evaluated as they are used in clinical practice, not in the controlled and fundamentally artificial context of a clinical trial.

Real-world evidence is in an ideal position to fill the gap. Unfortunately, from the start, it has struggled with a credibility issue, and it continues to be viewed as a distant third, fourth, or fifth behind the randomized controlled trial (RCT), which is still the gold standard for producing reliable data. One indication of how RWE studies are viewed is that they are rarely used by the organizations that develop clinical guidelines.

RWE’s shortcomings range from imprecise data fields to inconsistent data reporting to poor analysis. A huge amount of effort has gone into overcoming these problems, but most of the focus has been on technical issues like data quality and fixing study designs. Public investment to enhance RWE has come from several groups, including the Patient Centered Outcomes Research Institute (PCORI), the National Patient-Centered Clinical Research Network, and the National Institutes of Health Precision Medicine Cohort, but most of that money has gone toward establishing an elaborate data collection infrastructure.

Credibility gap

But a more fundamental problem plaguing RWE is the perception that much of the research is biased and that the studies are designed so the results serve the interests of the sponsors. Statistical fixes, such as the use of propensity-score matching to offset selection bias, help, but only so much. Bias continues to cast a shadow over RWE research, and those suspicions undercut claims by RWE proponents that it supplies valuable missing information.
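For readers unfamiliar with the technique, propensity-score matching pairs each treated patient with an untreated patient who had a similar estimated probability of receiving the treatment, so that outcomes are compared between like patients rather than between dissimilar groups. The sketch below is a minimal, hypothetical illustration, not any particular study’s method; it assumes the propensity scores were already estimated by an earlier step (for example, a logistic regression of treatment status on observed covariates):

```python
# Illustrative sketch of 1:1 nearest-neighbor propensity-score matching.
# Assumes each unit's propensity score has already been estimated
# (e.g., by logistic regression of treatment on observed covariates).

def match_nearest(treated, controls):
    """Pair each treated unit with the closest control by propensity score.

    treated, controls: lists of (propensity_score, outcome) tuples.
    Controls are matched greedily, without replacement.
    Returns a list of (treated_outcome, matched_control_outcome) pairs.
    """
    available = list(controls)
    pairs = []
    for score, outcome in treated:
        # Greedy nearest-neighbor match on the propensity score.
        best = min(available, key=lambda c: abs(c[0] - score))
        available.remove(best)
        pairs.append((outcome, best[1]))
    return pairs

def average_treatment_effect(pairs):
    """Mean outcome difference across matched pairs."""
    return sum(t - c for t, c in pairs) / len(pairs)

if __name__ == "__main__":
    # Hypothetical (score, outcome) data for treated and control patients.
    treated = [(0.8, 12.0), (0.6, 10.0), (0.4, 9.0)]
    controls = [(0.79, 10.0), (0.55, 9.0), (0.41, 8.5), (0.1, 5.0)]
    pairs = match_nearest(treated, controls)
    print(round(average_treatment_effect(pairs), 2))
```

Matching on the score balances only the covariates that went into the model, which is exactly why the technique helps "but only so much": unmeasured confounders remain unmatched, and the suspicion of bias persists.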

More RWE studies are done on medications than any other type of treatment. Robert Dubois, MD, chief science officer at the National Pharmaceutical Council (NPC), a pharma-sponsored health policy research organization, says RWE studies that are sponsored by drug companies are widely seen as tainted by business and profit motives.

“Payers analyze their own data from medical and pharmacy claims, but they are much less likely to use data or studies from drug manufacturers,” says Dubois. “They wonder about the believability of RWE studies. When they see a study produced by a drug company, it’s not unusual for them to say, ‘We’re not exactly sure you didn’t work with the data until you found the answer you wanted.’”

One antidote for the suspicion would be more cooperation between pharma and payers in the design, execution, and analysis of RWE research. Payers need the information to better understand the safety and efficacy profile of a medication, and drug companies’ credibility would obviously improve if they generated high-quality studies.

Cooperation could be a win–win situation for payers and drug companies, and NPC has picked up the torch to bring the two parties closer together on RWE studies. It has launched a new transparency initiative designed to root out bias in RWE studies. “We are knee-deep into developing approaches to RWE studies and data, so payers can trust it,” says Dubois.

NPC is working with AcademyHealth to understand how RWE is viewed by outside sources and on ways of gaining greater trust and acceptance. Dubois says one of the first steps has been to develop the elements or criteria for transparency and to actively engage payers in this effort. One of those elements would be making it a practice to publish complete protocols of RWE studies. Another might be opening RWE studies to an outside audit.

The council has brokered intergroup relationships before. Several years ago, it created the Comparative Effectiveness Research Collaborative with the Academy of Managed Care Pharmacy and the International Society for Pharmacoeconomics and Outcomes Research. The collaborative produced a scorecard and training course for payers to evaluate the quality of RWE study designs.

Still learning

As part of its current efforts, the NPC paid for a study by the University of Arizona College of Pharmacy of how payers use RWE. The study, titled “Real World Evidence: Useful in the Real World of US Payer Decision Making? How? When? And What Studies?”, was published in the journal Value in Health. Designed like a focus group, it included 20 medical and pharmacy directors affiliated with health plans, PBMs, HMOs, and other health care organizations.

The lead author, Daniel Malone, a professor at the University of Arizona College of Pharmacy, says health plan executives frequently mentioned transparency as one of their concerns with RWE.

Participants were asked about the advantage of RWE studies over randomized controlled trials in providing information about the performance of medications in diverse populations or specific subpopulations. About 60% responded that RWE studies deliver this information only sometimes; in other words, these health insurance executives are not convinced that RWE delivers on what is supposed to be its core advantage.

The participants cited safety signals, gathering evidence not covered by RCTs, determining the effectiveness of a drug, and evaluating comparative effectiveness of competing medications as the information they look for in RWE studies. Even though these clinical issues were important, the participants said that RWE is not widely used in P&T committee evaluations or decisions. Malone says that P&T committees are still adhering to traditional methods and standards in their deliberations.

Participants were asked several different questions about how often they consider results from RWE studies in administrative decisions about formulary placement, utilization management, and tiering. Across this set of questions at least 16 of the 19 respondents said almost never or only sometimes. Yet this is often among the main reasons drug companies commission RWE studies.

The participants in this study by Malone and his colleagues were also asked about their skills and comfort level in using observational studies. Only 24% agreed that they are confident in their ability to interpret observational study results.

Malone also tested how aware the participants were of RWE studies that had already been published. One of the studies, published in 2013 in the New England Journal of Medicine, examined bleeding events associated with dabigatran (Pradaxa) and was conducted by the FDA’s Mini-Sentinel program. The second, published in JAMA, compared long-acting beta agonist regimens in older adults with COPD. Malone says these studies were selected based upon their overall quality and relevance, and he was surprised to find that few of the participants had seen them given that they were published in prestigious journals and were considered timely topics.

One important takeaway from this study is that health insurance executives are in the early stages of learning about RWE studies, according to Malone. Payers are receptive to RWE studies, but they also say that they don’t need marketing studies. “The payers in our sample want to be involved in asking the researchers questions, so they want to encourage sponsors to come talk to them before they start a study to make sure the answers meet their needs.” This upfront approach seems like a simple step toward improving transparency.