Will New Benefit Design Harm Some Patients?

In the past, reducing demand for care by raising patients’ costs has resulted in the loss of some needed care. Can we avoid the trap?

MargaretAnn Cross

Contributing Editor

When customers began walking away from the prescription counter without their medications because they’d been surprised by a high copayment, Bridget Olson, PharmD, began to get worried. “I can’t say that they didn’t come back later, or after they got paid that week. But there were certainly a lot of people who voiced concern that they wouldn’t be able to keep paying the higher copayment,” she says. And Olson, an assistant research scientist at the University of Arizona’s Center for Health Outcomes and Pharmacoeconomic Research, started to wonder just what walking away empty-handed would mean for patients’ health in the long run.

Other pharmacists and physicians — and even consumers — are voicing similar concerns as insurers respond to employer demands to keep premium increases in check by raising copayments, deductibles, and coinsurance rates. Much of the pressure to implement such changes is coming from employers who believe that if employees have to pay more for care, they will use it more wisely, which may mean using it less frequently or simply using less of it.

Employers, of course, are acting in response to health care premium increases that, in an uncertain economic climate, are especially troublesome because they far outpace the growth of other costs.

It turns out that employers are correct — in part. Increasing cost sharing does cause people to use less health care. Milliman USA for years has built that assumption into the actuarial models used by health plans and employers to set their rates, says Greg Herrle, a consulting actuary in Milliman’s Milwaukee office. And study after study has proven it true.

Hard on the poor

But it is much less clear what effect decreased utilization has on health outcomes and on long-term costs. “It’s simple economics that you can expect any increase in cost at the point of service is going to result in a decrease in utilization,” says Jill Yegian, director of health insurance at the California HealthCare Foundation, which earlier this year surveyed insured Californians about how they’re responding to greater cost sharing. Overall, 17 percent said they’ve skipped a doctor’s visit because of increased costs, including 46 percent of people who are chronically ill and have an annual income of less than $25,000 (see chart, “Generic drugs, but skipped visits”).

Those numbers are early warning signs and something to be concerned about, Yegian says. “You still have to ask, did it matter for their health? That’s the million-dollar question. We can’t answer it, but it’s definitely a question that you want to have answered.”

To the general public, however, the answer is obvious. Eighty-seven percent of Americans believe higher out-of-pocket costs will cause at least some people to forgo essential care, according to a September poll by the Wall Street Journal Online and Harris Interactive. Fifty-four percent of those polled said that public health will be hurt by increased cost sharing. “There is a certain amount of common sense that says if people do not get the care that they need for chronic conditions, then they will get worse,” says Humphrey Taylor, chairman of the Harris Poll at Harris Interactive. The company also found that 22 percent of adults did not fill a prescription in 2002 because of high costs, including 44 percent of those with out-of-pocket costs of $500. High cost is what the patient says it is.

“If utilization goes down, some people aren’t going to get the care they need. There’s no question about that,” says Michael O. Fleming, MD, president of the American Academy of Family Physicians. A family physician in Shreveport, La., Fleming has seen firsthand how cost can influence a patient. He’s had people with chest pain insist on going to his office rather than to the emergency room because the office visit copayment was $10, compared with $50 for the ER.

“I fear that the same sort of thing is going to happen now. We just may see that taken up another step,” says Fleming. “It has ramifications for quality of care, but it also has ramifications for ending up costing more on the downside, because if that person doesn’t go to the ER and is having an event — something for which early intervention could make a difference — our society instead will pay for the cost of years of complications. That’s going to cost multiples of what it could have cost if taken care of in the first place.”

Yet go looking for proof that greater cost sharing will produce worse clinical outcomes and greater long-term costs, and the issue gets murky again.

The data are “just not there” right now, says Olson, who reviewed years of published studies to write “Approaches to Pharmacy Benefit Management and the Impact of Consumer Cost Sharing,” which appeared in the January issue of Clinical Therapeutics. “You can look at cost-sharing’s effects on medication utilization and see that utilization and expenditures on the part of the health plan go down,” she says. “But if someone’s high blood pressure medication goes up in copay and they stop taking it or they take it every other day to extend the life of the prescription, you are not going to see the effects on health outcomes for years to come. You can say that you think that man is going to end up with a heart attack and in the hospital and list all of the other costs he may incur. But to accurately state that, you have to conduct a long-term study.”

Undertaking such a long-term study would be difficult because enrollees change health plans so frequently, Olson says. Such a study would be too expensive as well, others assert. The longest study on the relationship between cost sharing and outcomes, the Rand Health Insurance Experiment, tracked people with different levels of out-of-pocket expenses for three- and five-year periods. But it took place about 25 years ago, and the cost sharing levels were different from today’s.

It’s possible, though, to work with what’s available and piece together a strong case for keeping outcomes at the forefront of debates about new benefit designs and higher cost sharing, says Alan Lyles, ScD, MPH, RPh, a professor of health systems management at the University of Baltimore. Some studies have been pivotal in demonstrating that cost sharing reduces utilization across the board, not just for care that may be unnecessary, he says.

For example, studies led by Stephen Soumerai in the late 1980s and early 1990s found that in states that restricted the number of medications Medicaid would cover for a single patient — requiring the beneficiary to pay out of pocket for any others — the amount spent on drugs did go down. In one case, users of multiple drugs dropped from filling 5.2 prescriptions per month to 2.8. Insulin use tracked in that study dropped by 28 percent, and other essential medications also showed large dips in usage.

In a later study, when the number of medications for Medicaid patients was limited during an 11-month period in New Hampshire, the rate at which elderly patients were moved into nursing homes increased. And in 1994, Soumerai found that for noninstitutionalized patients with schizophrenia, restrictions caused an immediate decline in the use of antipsychotic drugs, antidepressants, and lithium. Patient visits to their physicians rose 57 percent per month in some cases, and the use of emergency mental health services and partial hospitalizations rose.

The study, published in the New England Journal of Medicine, found that health care costs for those patients rose by $1,530 — 17 times what was saved by reduced drug costs.

While Soumerai focused on Medicaid patients, other studies on cost sharing also consistently find that higher costs affect people with low incomes and chronic conditions the most. That may be obvious, but the degree to which it affects them can be a wake-up call to health plans and employers, says Yegian. In the California HealthCare Foundation’s poll, 29 percent of people with chronic conditions and income below $25,000 did not fill a prescription because of costs, compared with 9 percent of all Californians.

Such findings are consistent with the Rand study — even today the “gold standard” for research on cost sharing and outcomes, Yegian says.

Rand’s long shadow

Launched in the 1970s, when the nation was considering implementing a national health insurance program, the Rand study looked at how a range of cost sharing levels would affect people’s health. The results showed that the more people had to pay, the less they used health care, and that “at least for the great bulk of people there were no adverse effects,” says Joseph P. Newhouse, PhD, a professor of health policy and management at Harvard University, the principal investigator of the study team, and author of Free For All? Lessons from the Rand Health Insurance Experiment.

But for low-income and chronically ill individuals, cost sharing did affect outcomes. “In particular, the hypertensive were less well controlled in the cost sharing plans than with free care,” Newhouse says. “And that had a substantial predicted effect on mortality downstream.” Researchers predicted that with less controlled hypertension, patients in the low-income, chronically ill group faced about a 10 percent increase in mortality, “a substantial effect,” Newhouse says.

The Rand study is still relevant and can be applied to new consumer-directed health plans being developed, says Newhouse, who today is on the board of Aetna Inc. “I have no reason to doubt that the Rand results still apply in terms of how patients respond to cost sharing.” It’s important to note, he says, that the study demonstrated that cost sharing “was a fairly blunt tool. It seemed to reduce both appropriate and inappropriate use, which is probably why there was no overall effect on outcomes. It not only got rid of some good things, it got rid of some bad things.”

How to minimize the temptation to forgo preventive care?

As insurers design benefit plans with higher copayments, deductibles, and coinsurance rates, many are exempting preventive care so that members aren’t tempted to ignore tests and office visits that could stave off future problems. Such selective, deliberate use of cost sharing will be key to avoiding the potential adverse effects on outcomes that may come when people use less health care because of increasing out-of-pocket expenses, some physicians and researchers say. Other ideas include:

Joseph P. Newhouse, PhD, John D. MacArthur professor of health policy and management, Harvard University

“Medical directors would ideally be working on trying to improve the quality of care that is being delivered in their networks. Cost sharing could be turned into a more selective tool — for example, higher cost sharing for lower-quality providers. We’re not at a stage where we could do that yet with any reliability, but that would be an ideal way to use cost sharing.”

Nancy Dickey, MD, president of the Health Science Center at Texas A&M University

“We know that we can impact utilization by higher cost sharing. The question is, can we channel it? Can we devise cost sharing that enhances the utilization that science would tell us should be enhanced, and that diminishes the utilization that science would tell us is perhaps marginal? One of the things that has not been tried in several decades in this country is to manipulate the payment system in such a fashion that patients are motivated to modulate the amount of care they get. You don’t want to discourage preventive care — you want to encourage it with incentives. You don’t want to discourage the care that science clearly tells you contributes to improved quality of life and improved duration of life. You want to identify things that are marginal contributors and hopefully encourage patients to ask more questions, such as, what if I don’t have that test done? What if I don’t fill that? Hopefully, we can create some controls on the cost of health care that are more functional than the ones we’ve used over the last couple of decades.”

Greg Herrle, consulting actuary, Milliman USA

“Income plays a big part in what one can afford to pay. As deductibles continue to increase, issues may arise, depending on the make-up of an employer’s workforce, as to the employees’ ability to afford the cost sharing required. Certain high-cost-sharing benefit designs may work better for a high-paid, white collar work force than for a low-income work force. So far, health plans and employers haven’t varied benefits much by income, but down the road, I would expect to see more differentiation in benefits by income level.”

Public pessimistic about cost sharing

As cost sharing goes up, public health will go down, consumers believe. More than 2,000 adults told Harris Interactive and the Wall Street Journal Online that substantially higher out-of-pocket costs for health care were likely and then were asked: “If that happens, how much do you think each of the following will also happen?”

SOURCE: THE WALL STREET JOURNAL ONLINE/HARRIS INTERACTIVE HEALTH CARE POLL, SEPT. 10, 2003

Generic drugs, but skipped visits

Insured Californians seek lower-cost medication alternatives, but some — especially the chronically ill — skip visits to the doctor or don’t have prescriptions filled because of increases in cost sharing.

SOURCE: CALIFORNIA HEALTHCARE FOUNDATION

MANAGED CARE December 2003. ©MediMedia USA