Not long ago, I was involved in a pair of projects to determine real-world outcomes. It didn’t take long to figure out that EHR and claims data are insufficient for measuring the patient experience. For example, rheumatologists don’t always know when their patients with gout have a flare because most of those patients don’t visit the doctor. The patients are at home trying to alleviate their suffering as best they can with their usual therapies. Or consider the complication rates for joint replacements. They are important, but they don’t tell us anything about what patients cared about most: “When will I walk again?” Most sporadic symptomatic disease states, and intermittent problems with medication, fit this same paradigm.
Neil Minkoff, MD
Phase 4 studies are few and far between, and those few don’t ask the right questions. For some autoimmune diseases, for instance, hospitalization rates and symptoms are less relevant than knowing “is this treatment preventing relapse?” or “how many work days did you miss?”
We wanted to capture how the patient is feeling, even when no bill is generated. So, we had to create our own measures—and take them directly to patients.
Patient-satisfaction surveys have become ubiquitous. And with good reason: Survey data can tell us a lot about the generalizability of treatments and their effects. But surveys, too, can be chock-full of problems—not the least of which is insufficient response, which skews “representation” of the whole group. All too often, only the patients who are happy, angry, or who just like filling out surveys return them. Also, surveys don’t give us real-time information on patient disease states or symptoms.
So, what’s the secret sauce in patient engagement? At EmpiraMed, we have learned how to make surveys or e-diaries that are intuitive, fun (yes, fun), and rewarding.
A key to our approach is that our patient-engagement platform, which we have named PRO Portal (PRO for patient-reported outcome), was built by people whose primary experience is not in health care but in optimizing the user experience. A system must be completely intuitive to use, and it shouldn’t read like a list of NQF quality metrics.
Putting the patient at the center of data collection allows additional data sources to be integrated for a holistic view of a patient’s care relative to outcomes. Depending on the context, information from these sources may be integrated to compare PROs against clinical trial data or observational studies, analyze the effects of quality initiatives, or determine whether PROs align with the goals of outcomes-based contracts.
Patients provide answers to simple, plain-English questions on a HIPAA-compliant web portal through their computer, tablet, or phone. A comprehensive rewards program encourages participation. These methods influence the amount of data collected, not treatment behavior, so they eliminate the prospect of care that “teaches to the test.”
Turning the process into a game helps to boost patient engagement. For answering questions, patients can earn virtual hearts, coins, jewels, or trophies—adding up to gift cards or donations to charity. As they accumulate points, participants move closer to a coveted status (e.g., “MS Warrior” or “Neuro-QoL Champion”) that recognizes their contributions to research. Patients also face challenges, in which they can earn extra rewards for using the e-diary more frequently and for not missing surveys. The platform also serves as a forum where patients can share with and learn from others just like them. Upon completion, patients can see how their results compare, helping people to understand what works in the real world.
It may seem obvious, but people are more likely to remain engaged in an activity if they find it enjoyable—and outcomes back this up. Among the more than 1,000 patients who enrolled in our multiple sclerosis registry, 95% remained active after one year. In a study of patients with diabetes, participants logged an average of 3.5 e-diary entries per week. Routinely, approximately 75% of the monthly surveys we send out come back completely filled out. That’s three times the industry benchmark.
Through a validation process, we know that the information we are collecting is meaningful. The survey instruments themselves are usually, though not always, validated; in addition, responses can be run through a validated challenge tool to ensure their legitimacy. So, for instance, while we may ask MS patients about a flare, we measure the patient-reported severity of the flare against a validated instrument, a self-reported disability scale.
In a managed care environment, member satisfaction is an important metric, and patient-reported outcomes can be a key tool for determining satisfaction. Quite often, when costly treatments will never result in full cost offsets, what the payer is buying is: “Does the patient feel better?” Reliable answers, straight from the patients, are the best way to know.