Six years ago, the health care economist Michael Chernew concluded from a review of the evidence that the driving force behind rising health care costs was new technology. “It’s not increased waste, it’s not fraud, it’s not increased lawsuits, it’s not the fact that people on average are older — all of that may contribute, but the predominant factor relates to the development and utilization of new medical techniques, of which there are an enormous number,” he wrote in a 1998 study.
The University of Michigan professor concluded that one of three possible methods would ultimately be required to tame this technological budget-buster: Managed care companies would ration the use of new technology; the government would step in and implement guidelines; or the very nature of technology development would be transformed into a positive economic force, driving costs down instead of up.
But it didn’t work out in any of those ways — at least not yet. Consumers and physicians have not liked barriers to accessing new technologies. Managed care organizations did not like the bad publicity — or the administrative costs — tied to rationing anything. Government agencies never volunteered to rein in the market; the Food and Drug Administration has been focusing on speeding up its approval process and the Bush administration has been placing its bets on competition among health plans to hold down costs.
And anyone who noted the recent report declaring that health care costs now consume $1.55 trillion a year — after swelling 9.3 percent in 2002 — can see hard evidence that technology remains a potent accelerant of rising prices.
“Everyone has backed away from the issue,” says Chernew today, “and health care costs have soared.”
And until costs come back to earth, the vexing issue of managing technology won’t go away.
Over the past few years, managed care organizations — especially the bigger ones — have been trying to do a better job of evaluating new health care technology, including medications, devices, procedures, and health services. Many have formed professional review committees, hired technology assessment experts to report on new devices and keep them better informed on what’s in the pipeline, and — in some cases — posted their decisions on the Internet in an attempt to keep members and providers in the loop.
All that gives MCOs a better understanding of new technology and of how to anticipate adding the best of what the research pipeline has to offer. To brake costs and get advance warning of what is in store, many plans are also taking a multipronged approach: identifying what works, devising strategies to rein in the galloping growth of technology costs, and taking stock of technologies in research and development so they can plan for when the next big advances are likely to hit.
In the brave new world of consumer-directed care, that trend is shoving increasing responsibility — and a growing share of the tab — into the hands of the only group left to make difficult rationing decisions: health plan members.
Following the leaders
This is one arena where bigger can quickly translate into better. Deep pockets can afford big advantages.
“I think there’s a sense that the bigger plans have good processes to evaluate technology and the resources to put those processes in place, while smaller plans may not,” says Peter Neumann, associate professor of policy and decision sciences at Harvard School of Public Health. “Small plans tend to follow the lead.”
And there are some good reasons for doing so.
“In larger managed care organizations,” says Susan Levine, PhD, vice president for technology assessment at Hayes Inc., the independent technology assessment company, “the technology assessment and reimbursement process is more clearly developed. They have a separate process — a technology assessment committee, often staffed by physicians and other people with clinical backgrounds. They take our report and really evaluate it. Is the technology safe? Is it effective? What does it treat and which patients would really benefit?”
That’s the way to more narrowly define the population that stands to benefit, she adds. “They would make the decision on whom to cover. Financial people might then decide if it fits into the benefit structure. Is it proven better than existing technology?” Or is it just equivalent to what’s already available? “But the cost comes later,” she says. “That’s where you see an increase in premiums.”
Not all health plans are built the same way when it comes to the transparency of their coverage decision-making processes.
Experts say several health plans excel at the way they communicate their technology assessments with members and the tech companies that cater to them. Oxford Health Plans and several of the Blue Cross plans post their coverage policies online for all to see. But one MCO in particular comes in for frequent praise.
“Only one stands out, and that’s Aetna,” says a reimbursement director at a major technology manufacturer who does not wish to be identified.
“It covers a lot of technology.”
“There’s a steady stream of medical technology that we are asked to evaluate,” says Robert McDonough, medical director of Aetna’s clinical policy unit. “Last year, we developed 200 new formal policies and we have more than 600 clinical policy bulletins, many of which discuss more than one technology.”
In some cases, Aetna may be contacted by a manufacturer asking to submit data for consideration. And many times the company acts when one of its providers makes a preauthorization request or a filed claim comes up for review.
To help stack the odds in their favor, some manufacturers are quick to make their case to a professional medical society related to the technology’s application. An endorsement from such a society — and a surge in requests from the specialists who want to use the new technology — is likely to attract payers’ attention.
“I think it certainly is a good strategy to get the medical profession and leading medical associations on board,” says McDonough. “It certainly helps us in our determination.”
But to really make an effective case, the coin of the realm here is ink — publication of a research paper in a respected, peer-reviewed publication.
“We rely on information in the public domain,” adds McDonough. “Not unpublished or proprietary studies; peer-reviewed literature.”
That’s the formal approach to gaining entry to the managed care world. But there’s a back door, too. And physicians learned how to pick the lock years ago.
Language of reimbursement
“In a lot of circumstances,” says Steven Garber, a senior economist at Rand Health who has studied technology adoption, “managed care organizations adopt new technologies without even knowing they’re doing it.”
It’s not that hard, he adds.
“If a doctor reads about a new procedure or goes to a seminar and hears about a new procedure, or sees a rep from a device company, the doctor may decide, ‘I’m going to do it.'” All he does is use the technology and then bill for it using an existing code.
“Coding is the language of reimbursement,” says Garber. “If you’ve used a new device and it fits comfortably into the current language, you may use the current language. When doctors use existing codes, nothing in particular triggers the realization by MCOs that something new was going on here.
“In some instances, the insurance companies learn that something new is going on because device companies want new codes,” he adds. That’s often the case if the device makers want to gain a higher reimbursement rate. But because new reimbursement codes and higher reimbursement rates are hard to get, manufacturers are often satisfied with seeing the technology adopted and paid for under an existing code — even though the MCOs may think they are paying for a previously adopted technology.
There’s also a catch that can create a big hurdle for plans evaluating technology. Something that works well in an academic setting during clinical trials may not work so well in a practice setting.
“There are lots of reasons to expect that something will work better in a top academic center where doctors are better trained and they perform large volumes of particular procedures. But it’s not the real world,” says Garber. “You get left with this dilemma: Insurers want to know how safe and effective a new technology will be in routine practice as they decide if it is something they want to cover, but until technology is used in routine practice, no one can know.”
To get a better reading on real world applications, a lot of plans call on outside groups for extra help.
A range of MCOs contracts with one of the technology assessment companies — ECRI, or Hayes, or the Blue Cross and Blue Shield Association — to obtain independent evaluations of new technologies or new applications of existing technologies.
“Our core business is health care technology assessment,” says Hayes’s Levine. “We evaluate medical technology on efficacy and safety and, in a sense, effectiveness. You want to be sure it works in a general health care setting, not just an academic setting or clinical trial setting.” Hayes also helps identify what segment of the population would benefit from the advance: “No treatment is for everyone.”
ECRI’s services also include forecasting about technologies in development that have not hit the market but are likely to do so in coming months and years.
“Payers are focusing more than ever on strategic planning for technology in the pipeline,” says Vivian Coates, ECRI vice president for technology assessment and information services. “They want technology assessment and forecasting. So ECRI married these services. They go hand in hand.”
In 2003, ECRI dropped its annual Health Technology Forecast publication and switched to an interactive online database that is updated weekly. “It tracks hundreds of new technologies as they go from research to market,” says Coates.
Case in point: ECRI recently spotlighted noninvasive computed tomography angiography, performed with a dedicated cardiac CT scanner that is an odds-on favorite to catch on with high-volume cardiology departments.
Health plans are migrating to a more sophisticated, evidence-based approach to evaluating technology, says Coates. “That’s why they want technology assessment and forecasting. They need to know what’s coming down the pike as well as what works today.” ECRI’s Health Technology Forecast rates a technology’s potential impact on a scale of 1 to 5 in five areas: utilization, cost, diffusion, health care delivery, and patient care.
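The forecast’s rating scheme could be represented as a simple record. A minimal sketch, assuming a plain dictionary format: the five area names come from the article, but the scores and the `validate_rating` helper are invented for illustration and do not reflect ECRI’s actual data format.

```python
# Sketch of an ECRI-style forecast rating: potential impact scored 1-5
# in the five areas named in the article. The helper and example scores
# are hypothetical; only the area names come from the source.

AREAS = ["utilization", "cost", "diffusion",
         "health care delivery", "patient care"]

def validate_rating(rating: dict) -> dict:
    """Check that a rating covers all five areas with integer scores in 1..5."""
    for area in AREAS:
        score = rating.get(area)
        if not isinstance(score, int) or not 1 <= score <= 5:
            raise ValueError(f"invalid score for {area!r}: {score!r}")
    return rating

# Invented example: a high-cost, high-utilization imaging technology.
example = validate_rating({
    "utilization": 4, "cost": 5, "diffusion": 3,
    "health care delivery": 2, "patient care": 4,
})
```

A structure like this makes it easy for a plan to filter the pipeline by, say, everything rated 4 or higher on cost impact.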
A host of pharmaceutical giants can dig into some deep pockets to get state-of-the-art research published in peer-reviewed journals. But of the companies that make medical devices, only a few loom large. Both Medtronic and Boston Scientific, for example, are multibillion-dollar, major league concerns, capable of funding clinical trials and pushing their study results into the publications that command the respect of all the professional societies.
But after that, says Levine, health technology companies quickly start to shrink, scaling down to small outfits operating on tight budgets and big dreams. And that’s when evaluators like Hayes find themselves sailing into choppy waters.
At Hayes, evaluators grade technologies A through D. Score high, and the technology gets a green light and probably will gain a reimbursement code in the managed care world. Get a low grade, though, and you’ll be put on the shelf until you can answer more questions. This is how Levine defines each grade:
A “Absolutely rock solid.” Safety is clearly backed up by long-term studies. Efficacy and patient criteria — who gets covered — have also been established.
B “Very promising.” To get a B, companies need to provide results from randomized controlled trials with good safety and efficacy records. “It works for sure on some, but not so sure on others.”
C “Promising but not there yet. Sometimes it will hang there and be proven effective with more studies, and other times it goes down and goes away.” Definitely in need of more study and an unlikely candidate for coverage.
D The evidence shows that it doesn’t work, or isn’t safe, or there’s “just not enough published evidence to make any other rating,” says Levine.
“Manufacturers see a D rating and get upset. ‘But it’s FDA approved,’ they’ll object.” And that, says Levine, is an everyday event.
In fact, says Levine, only about 10 percent of evaluations end up stamped A. Maybe 20 percent to 25 percent get a B. The rest, a big majority, get a C or a D, which typically means they don’t get covered by a health plan or are only allowed for very limited purposes.
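The grade-to-coverage relationship Levine describes could be sketched as follows. This is purely an illustration: the grade meanings are paraphrased from the article, while the coverage outcomes are a simplified assumption, not Hayes’s or any plan’s actual policy.

```python
# Illustrative sketch of the Hayes-style A-D grading described above.
# Grade meanings are paraphrased from the article; the coverage outcomes
# are a simplified assumption for illustration only.

GRADE_MEANING = {
    "A": "safety and efficacy established by long-term studies",
    "B": "promising randomized controlled trial results",
    "C": "promising but in need of more study",
    "D": "shown ineffective or unsafe, or too little published evidence",
}

def coverage_decision(grade: str) -> str:
    """Map a technology grade to a hypothetical coverage outcome."""
    if grade == "A":
        return "cover"
    if grade == "B":
        return "cover for the patient groups with proven benefit"
    if grade in ("C", "D"):
        return "do not cover, or cover only for very limited purposes"
    raise ValueError(f"unknown grade: {grade!r}")

for g in "ABCD":
    print(g, "->", coverage_decision(g))
```

By Levine’s rough percentages, only the top tiers — about a third of evaluations — would clear the coverage bar under a rule like this.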
At Hayes, a low grade gets a fast response.
“Some of them are very polite and inquire nicely,” says Levine. “Some people just write nasty letters, and we are sometimes threatened with the legal department.”
Coates says ECRI does not subscribe to letter grading because “it can mislead and present a skewed picture about a technology. Just because you have a randomized, controlled trial (RCT) does not mean you have good evidence that warrants a higher grade.
“When assessing evidence, ECRI doesn’t discriminate against technology according to study design,” she says. “We evaluate all the evidence, no matter what the study design, and figure out how reliable the available evidence is. You can have an RCT that yields useless evidence and a case series that yields better quality, reliable evidence.”
And lots of technologies will never have an RCT to provide evidence. It may be unethical or impossible to conduct an RCT, as is the case with some cancer technologies and surgical procedures, she adds.
Everyone involved knows the stakes are high, but forecasting hard costs is a challenge.
“It’s hard to evaluate new technology because it’s new,” says Chernew.
“Evidence of cost-effectiveness is typically not available and even when it is, plans have trouble using that evidence,” says Neumann. “There’s deep reluctance to ration openly. There’s also a problem determining when a new technology shifts from investigatory to routine. Those turn on lots of difficult modeling projections.”
When ECRI conducts a cost-effectiveness analysis, the research analysts usually consider each perspective — payers, patients, society — and explain the results in the context of each perspective. “We believe it’s important for each constituency to understand the other perspectives as well as the impact of the technology in their own area,” says Coates.
Says Levine: “Each managed care organization might have a different take on what is cost-effective. Doing obesity surgery might be very expensive up front, and yet what about down the road? If they don’t have heart complications, they might save all that cost. A lot of times they cover things because they’re safe and effective and hopefully will be paid back.
“We’re not health care economists,” adds Levine. “They don’t want us to evaluate that.”
Most of the MCOs’ technology review committees don’t try to evaluate costs either, says Garber.
“If you ask about money, they would say, ‘We don’t worry about cost. If a technology we approve for reimbursement turns out to be very costly, that’s a business issue for contracts people to deal with,’” says Garber. The number crunchers may decide that it will cost an extra two cents per enrollee per month, and they’d go with it.
An outright refusal to cover a medically appropriate technology usually would be left up to a large employer that might be particularly sensitive to the cost of a new technology, says Garber. But the plans’ main concern is often cost predictability rather than cost level. If the popular press, for example, publicizes a new technology and triggers a rush for it, plans with a stack of long-term contracts could find themselves on the hook.
“Suppose out of the blue a really good, expensive new technology comes along and within a year or two there are significant costs associated with it,” says Garber. “Until plans can renegotiate premiums, those extra costs can come out of the pocket of the MCOs.”
But with costs soaring, plans are spending more time looking for ways to slow technology inflation.
It’s going to be an uphill fight. And here’s one example why: The Blue Cross and Blue Shield Association estimates that between 2000 and 2005, the cost of diagnostic imaging will jump from $75 billion to $100 billion a year.
“Diagnostic imaging technology is one of the most important advancements to health care in the past quarter century,” said Allan Korn, chief medical officer at the association, when he recently released a new study on the issue. “But it is also the most expensive technology. One of the critical questions before all of us in health care is: How to ensure access to medical technology and keep it affordable?”
Better technology is coming along, says Blue Cross, and more and more people are demanding it — even if it’s not clinically indicated. But in its study of the issue, the association found that new technology was not replacing the old. If new technologies are being added, and use of older X-rays and ultrasound devices continues to expand, says the association, it’s natural to assume that there is a significant amount of duplication.
Sometimes, the study says, people continue to use technology even when it is widely known to be of little use. Case in point: imaging for diagnosing prostate cancer.
So some of the Blues started to take action.
In Rochester, N.Y., the local Blues plan put together a committee drawn from hospitals, physicians, and patient representatives to review new and existing technology to make recommendations on what to cover, and when. The result: Local use of magnetic resonance imaging is 19 percent under the national rate and 28 percent lower than the state average.
To try to reduce the number of duplicate MRIs in Kansas, the local plan is working with the state medical society to develop protocols on how and when to call for an MRI. But in Virginia, Anthem Blue Cross Blue Shield added a $100 copayment for all expensive imaging procedures for members in a high copayment plan.
In a sense, MCOs are taking the lessons they learned from their drug formularies and applying them to technology. If rationing is out of fashion, then exposing members to some stiff out-of-pocket payments should make them think twice — and ask their doctors — about the technology they need.
“That has emerged in some sense as an acceptable alternative to denying coverage,” says Neumann. “So you say you’re not denying coverage, but if you want that new device, you have a high copayment. It’s a way to make the consumer conscious of costs, and share in the decision and expose him to financial consequences.” Adding high copayments, says Chernew, reflects the underlying shift away from the command-and-control style associated with HMOs in the early to middle ’90s to a more market-oriented approach. With health plans unwilling to ration technology, they have a new approach: Let the consumer do the rationing.
“People don’t like the word ration, but you ration everything,” says Chernew. That includes cars, food, and, ultimately, health care.
Garber’s group concluded that increased government involvement won’t improve technology assessment or adoption. But there is a lot that MCOs and medical tech companies could do together.
For starters, Rand recommended that MCOs and manufacturers work together to gather the information that was needed on each product at an earlier stage of introduction.
And manufacturers would be well advised to analyze the information they have and prepare it for presentation to the MCO world before going after regulatory approval.
In fact, getting out information on devices still in testing would help MCOs account for new technology while they were in contract negotiations, rather than deal with it after rates are set. And everyone should do more to eliminate obsolete technology.
For Chernew, one possible answer lies in developing better consumer-directed plans. Sticking consumers with a high copayment regardless of whether they need a service is a clumsy way to influence utilization of expensive technologies.
Instead, he’s been studying how plans could start charging copayments for technology based on the member’s condition.
“Members would pay based on the clinical benefit,” says Chernew. If technology would likely deliver a significant clinical benefit to a member, make it available at a low copayment. If the patient stands to gain little significant benefit, make it a higher rate.
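Chernew’s benefit-based idea amounts to a simple tiering rule: the copayment varies inversely with the expected clinical benefit for that member’s condition. A minimal sketch, with tier names and dollar amounts invented for illustration (they are not figures from his research):

```python
# Hypothetical sketch of benefit-based copayments: the member's copay for
# a technology depends on the expected clinical benefit given the member's
# condition. Tier names and dollar amounts are invented for illustration.

COPAY_BY_BENEFIT = {
    "high": 20,      # strong expected clinical benefit -> low copay
    "moderate": 75,
    "low": 250,      # little expected benefit -> high copay
}

def copay(expected_benefit: str) -> int:
    """Return the copayment for a given expected-benefit tier."""
    try:
        return COPAY_BY_BENEFIT[expected_benefit]
    except KeyError:
        raise ValueError(f"unknown benefit tier: {expected_benefit!r}")

print(copay("high"))  # prints 20
```

The design choice is the point: the price signal steers members away from low-value uses without denying coverage outright.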
Disease management programs and better information technology have both been held out as means to control costs, says Chernew, but he’s skeptical that they are the answer to rising costs. That skepticism helps explain why Chernew still hasn’t taken mandated rationing — either by plans or the government — off his list of possible ultimate solutions.
Adds Chernew: “Someone has got to figure out how the brakes are going to be put on.”
Who that someone is, though, has yet to be decided.
Paul Lendner is a practicing expert in health, medicine, and fitness. He has been writing for Managed Care Mag for more than five years. His articles, which demonstrate a unique level of expertise, provide our readers not only with added value but also with help for their problems.