Perhaps no other human enterprise is as dependent on the gathering and analysis of data as health care. Almost every encounter with the health care system—even a basic well visit—begins with a round of information-gathering: a blood pressure reading, a check of the pulse, weight, height, and most likely some blood work. And if you are diagnosed with a health condition, the freshets of information become torrents as the calendar fills with appointments and imaging tests, more blood work, and a slew of other tests.
That is just the clinical side of health care for a single patient. Now consider all the data generated by health care research. In late December, when I checked ClinicalTrials.gov, the database of clinical trials maintained by the NIH, it listed 232,733 clinical trials being conducted in 195 countries, 40,421 of them still recruiting subjects. Now factor in all the different endpoints those trials are measuring. The sheer volume of information is mind-boggling.
But in health care, we have another problem besides the amount of data and making sense of it. Our data is sticky. It is captured close to where it originates, tends to stay there, and doesn’t get shared—even when sharing would benefit patients, providers, and payers.
Real-world evidence could be what we need to unstick health care data.
There’s some confusion about real-world evidence because people use the term in different ways. It is best understood as health care information that comes from sources other than clinical trials, especially randomized controlled trials (RCTs). Those sources include, of course, electronic health records and claims data, but also disease and product registries and, increasingly, apps and social media.
Ideally, real-world evidence would be continually looped back into the drug development process. Embedded in the various sources of real-world evidence is a deep reservoir of information about side effects, adherence, and differences in efficacy among subpopulations. That information could do a lot to improve the expensive, time-consuming way we test and research drugs. Real-world evidence can yield early signs of a drug’s efficacy and safety. Structured collection of adverse events can flag problems and supplement data from RCTs. Regular infusions of real-world evidence into research and development could also change how RCTs themselves are conducted. Traditional endpoints, such as the six-minute walk test, may not reflect a treatment’s true benefit in the day-to-day life of patients. Data collected from a wearable activity tracker might.
Payers are also interested in seeing robust real-world evidence developed. Data from RCTs can’t answer all of their value-based questions about side effects, cost, and effectiveness. Indeed, about two-thirds of drugs fail to meet revenue expectations during their first year, partly because the data used for FDA approval often isn’t persuasive enough to put a new drug on a formulary. A research model that incorporates real-world evidence can ease payers’ concerns by providing early insights into a drug’s or device’s performance.
But concerns that real-world evidence lowers the evidentiary bar for safety and efficacy should be addressed. More clarity from regulators will help, and the 21st Century Cures Act should make that happen. The law says the FDA, in consultation with other stakeholders, must implement a framework for real-world evidence within two years and draft guidance about acceptable standards and methods within five years. But it is also up to pharma, to CROs—to everyone involved—to set high standards for the kind of data and studies that will be acceptable in real-world evidence research.
Some commentators have positioned real-world evidence as undercutting RCTs as the gold standard for proving efficacy and safety. But there’s no reason that we can’t make the data and techniques used in real-world evidence studies just as rigorous and establish a new gold standard.