How Insurers Can Bear The Burden of Proof for New Treatments
MANAGED CARE December 2009. ©MediMedia USA
Too often, society prods regulators into adopting new drugs and technologies without adequate scientific analysis
Pierantonio Russo, MD
Independence Blue Cross
Alan Adler, MD, MS
Independence Blue Cross
Scope of the argument
The pressures for health systems and providers to become early adopters of new treatments and the expectations of patients to have immediate access to the newest therapies continue to raise the question of when innovations become the standard of care. Delay in coverage of a new treatment could harm patients who would not have access to interventions that are potentially superior to those currently available. On the other hand, premature coverage could lead to the diffusion of possibly harmful treatments, could prevent further research, and could result in waste of resources. Two serious limitations of current technology assessments are their primary reliance on published literature and the use of a predetermined hierarchy of scientific evidence. In addition, they frequently omit socioeconomic considerations and comparative effectiveness studies.
We support the view that the evidence used to determine the coverage of new treatments should be extended to practical clinical trials, registries, cohort studies, and expert opinions, and should include socioeconomic and comparative effectiveness analyses.
In addition, coverage with evidence development (CED), already introduced by the Centers for Medicare and Medicaid Services (CMS), could be considered in specific circumstances. The CED would be conditioned on the collection of new information, and could be withdrawn later in the absence of demonstrated effectiveness. In implementing CED, an expert panel could guide the choice of the best method for collecting the additional evidence for a specific new treatment. However, in considering CED, the cost of collecting new data should be weighed against the benefit of extending coverage. Finally, commercial health plans and CMS should become directly involved in funding clinical research and comparative studies.
Lure of the new
In this era of easy access to information, social forces frequently push regulators to adopt new drugs, technologies, and procedures without adequate scientific and value analysis. Pressure often comes from patients, who are frequently influenced by direct-to-consumer advertising and who perceive the new treatment as their best option or only hope; from providers seeking “the best for their patients”; and from the pharmaceutical industry and medical device manufacturers, which are eager to achieve market advantage.
The demands for health systems and providers to adopt new therapies, combined with the expectations from patients to have access to the latest treatments, continue to raise two key questions: When should new therapies, introduced in clinical practice without formal research, become standard of care, and when should they be considered medically necessary?
What constitutes standard of care?
Most therapies and procedures accepted as standard of care share all of the following characteristics (Reiser 1994):
- Their clinical indications are clearly established and largely accepted.
- Their outcomes are known and reproducible.
- The skills and expertise that practitioners must have for their clinical implementation are well described and are often included in formal hospital privileges. The logistics needed for their applications are frequently included in JCAHO, CMS, and professional societies’ recommendations and guidelines. A pertinent example is organ transplantation, for which the training of providers, the facility’s capabilities, the minimum number of procedures required to maintain proficiency, and the range of expected outcomes are clearly delineated and subjected to regular reviews.
- Their clinical indications, expected outcomes, and, for procedures, the technical skills necessary for their implementation are commonly taught to trainees and practicing health care providers through special certification programs.
Consequently, an innovation becomes standard of care when it has met the burden of proof for each of those four characteristics. A new therapy that does not have all of the characteristics should be considered experimental. However, often a continuum exists between standard and experimental treatments. It consists of crossover therapies potentially progressing from experimental innovations to standard of care. These are treatments already adopted in the community without having been completely assessed. There are also therapies and technologies that are the only hope for seriously ill or terminal patients.
The potential of many new therapies, particularly crossover therapies, to achieve standard of care may be testable only when their utilization is evaluated outside the controlled environment of experimental protocols, in the real world of common clinical practice. Such utilization prior to becoming standard of care is essential to allow the incremental improvements in safety, clinical indications, and implementation that often derive from applying a new therapy in a variety of clinical settings.
The definition of medical necessity used by Independence Blue Cross under its commercial plans (Love 2008) is “care services that a physician, exercising prudent clinical judgment, would provide to a patient for the purpose of preventing, evaluating, diagnosing, or treating an illness, injury, disease, or its symptoms, and that are: (a) in accordance with generally accepted standards of medical practice; (b) clinically appropriate, in terms of type, frequency, extent, site, and duration, and considered effective for the patient’s illness, injury, or disease; and (c) not primarily for the convenience of the patient, physician, or other health care provider, and not more costly than an alternative service or sequence of services at least as likely to produce equivalent therapeutic or diagnostic results as to the diagnosis or treatment of that patient’s illness, injury or disease. For these purposes, ‘generally accepted standards of medical practice’ means standards that are based on credible scientific evidence published in peer-reviewed medical literature generally recognized by the relevant medical community, physician specialty society recommendations, and the views of physicians practicing in relevant clinical areas, and any other relevant factors.”
Increasingly, commercial health plans and CMS have been using health technology assessment as the basis for coverage of new treatments. The National Institutes of Health (NIH), the Agency for Healthcare Research and Quality (AHRQ), the Blue Cross & Blue Shield Association (BCBSA), and the U.S. Preventive Services Task Force (USPSTF) all provide evaluations of new therapies.
Methods used to evaluate innovations vary among organizations. In general, technology assessments rely primarily on scientific criteria alone and on a hierarchy that places published randomized controlled trials (RCTs), particularly phase 3 trials, at the top of the required evidence. However, the median time from patient enrollment in a study to publication of its findings can be as long as 5.5 years (Ioannidis 1998). Therefore, by the time the results of published trials are reviewed through technology assessments, newer information is frequently excluded from the evaluation, even if the therapy has already evolved and more recent data are available.
Socioeconomic circumstances that might influence the implementation of a new therapy are not always included in technology assessments (Atkins 2005, Chalkidou 2008). For example, newborn hearing screening is now mandated in 38 states, even though the USPSTF concluded in 2001 that the evidence was “insufficient to recommend for or against universal screening.” In most states, policymakers decided that universal screening would create an opportunity to identify all hearing-impaired children, regardless of family income, geographic location, and level of medical care available to them (USPSTF 2001, American Speech-Language-Hearing Association 2001, Thompson 2001).
Other important limitations of RCTs are now recognized. For one, the reporting of trial outcomes has been found in several cases to be incomplete, biased, and sometimes inconsistent with the original study protocol. Moreover, the statistical analysis commonly applied to RCTs is frequently misleading, because the data are presented as a statistical average and analyzed through a binary, rather than a multivariate, analysis (Ioannidis 1998, Chan 2004). Consequently, patients truly harmed by a new treatment may be missed, and high-risk patients, who potentially benefit the most from the experimental therapy, may not be identified. The applicability of clinical trial statistics to common clinical practice is further limited by the very stringent p value, usually less than .05, required as proof that the difference between the treatments under investigation is not due to chance.
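The masking effect of a pooled average can be seen in a simple illustration. All numbers below are invented for the sketch; they show how a treatment that clearly helps one subgroup and clearly harms another can look like it has no effect at all when outcomes are averaged across all patients.

```python
# Illustrative sketch (hypothetical numbers): a pooled average effect near
# zero can hide a real benefit in high-risk patients and real harm in
# low-risk patients when the two subgroups are analyzed together.

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical change in outcome score per patient (positive = benefit)
high_risk = [4.0, 5.0, 3.5, 4.5]      # these patients benefit substantially
low_risk = [-3.5, -4.5, -4.0, -5.0]   # these patients are harmed

pooled = high_risk + low_risk
print(f"Pooled average effect: {mean(pooled):+.2f}")   # near zero: "no effect"
print(f"High-risk subgroup:    {mean(high_risk):+.2f}")  # clear benefit
print(f"Low-risk subgroup:     {mean(low_risk):+.2f}")   # clear harm
```

A multivariate analysis that stratifies by risk would surface both signals; the single-average analysis reports neither.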
Claxton et al. (2005) provide two examples. In the first, they describe a hypothetical situation in which a new pain medication is available with a known low risk of side effects, low cost, and the possibility of offering relief for patients with severe symptoms. In the second, they present a hypothetical situation in which patients with a terminal illness could access a new medication for which the evidence suggests an 80 percent chance that the observed efficacy is real and a 20 percent chance that it reflects chance alone. In both cases, the authors question whether it would make sense to withhold access simply because the stringent .05 p value had not been met.
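The terminal-illness example can be restated as a small expected-value calculation. The 80/20 split comes from the example above; the benefit and harm magnitudes are hypothetical numbers chosen only to make the arithmetic concrete.

```python
# Sketch of the Claxton-style argument (hypothetical magnitudes): decide by
# expected net benefit rather than by the p < .05 threshold alone.

p_real = 0.80           # probability the observed efficacy is real (per the example)
benefit_if_real = 10.0  # hypothetical gain if the drug truly works
harm_if_chance = -1.0   # hypothetical net loss if the effect was chance only

expected_value_treat = p_real * benefit_if_real + (1 - p_real) * harm_if_chance
expected_value_withhold = 0.0  # no adequate therapeutic alternative exists

print(f"Expected value of covering:    {expected_value_treat:+.1f}")
print(f"Expected value of withholding: {expected_value_withhold:+.1f}")
```

Under these assumptions the expected value of covering is strongly positive even though the evidence would fail a conventional significance test, which is precisely the tension the authors highlight.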
Finally, RCTs are designed to test the efficacy of a new treatment, which is how it performs in controlled experimental settings. However, RCTs cannot address effectiveness, which is how a new treatment performs in clinical practice, outside the experimental protocol.
Because of the limitations of RCTs, a stronger role is now advocated for practical trials, including observational studies and formal registries (Dreyer 2009).
According to the Agency for Healthcare Research and Quality (AHRQ), a patient registry for evaluating outcomes is “an organized system that uses observational study methods to collect uniform data (clinical and other) to evaluate specified outcomes for a population defined by a particular disease, condition, or exposure, and that serves a predetermined scientific, clinical, or policy purpose(s)” (Gliklich 2007).
As an alternative to RCTs, practical trials can build reliable evidence because they provide timely and realistic information on how well an innovation may work in clinical practice. This is important because technology assessments relying only on scientific evidence, particularly on the availability or lack of RCT data, frequently do not evaluate the clinical effectiveness of the innovation outside experimental settings or its cost effectiveness. Cost effectiveness takes into consideration societal and financial factors. It is an essential tool of comparative analysis, but it is usually not included in technology assessments.
Given the limitations and variations of current technology assessments, a more comprehensive approach must be used to address whether the lack of chosen levels of scientific evidence will harm patients by denying them access to innovations potentially superior to established treatments.
On the other hand, the same comprehensive evaluation must determine whether, given current circumstances, premature coverage of a new therapy would encourage the diffusion of potentially harmful and wasteful practices and prevent further research.
Recognizing the need for a new comprehensive approach to the evaluation of new treatments, the USPSTF recently noted the need to consider evidence “as a whole, including trade-offs among benefits, harms, and costs and the net benefit relative to other needs for optimal resource allocation” (Harris 2001). One such approach being increasingly explored in health policy debates utilizes techniques of decisional analysis.
In decisional analysis, the hierarchy and type of evidence is not predetermined in the evaluation of a new treatment. Instead, standards of evidence depend on all circumstances surrounding a particular innovation. Decisional analysis takes into account the potential benefits of the new treatment but also the consequences of adopting it prematurely if its effectiveness is later not confirmed (Steinberg 2005, Griffin 2006, Garber 2001).
Furthermore, in decisional analysis, additional scientific and socioeconomic evidence is recommended only after completion of an investigation of its potential value, given all the specific circumstances. Such investigation includes the potential contribution of the new information to the decision-making process, the feasibility of acquiring new evidence, and the cost of new research relative to its worth in the decision process. Depending on individual situations, gathering of additional evidence may be required prior to the adoption of the innovation or after its conditional adoption.
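The trade-off described above, weighing the cost of new research against its worth in the decision process, is often formalized as the expected value of perfect information (EVPI). The following sketch uses entirely hypothetical net-benefit figures to show the logic: a study is worth funding only when what the information is expected to be worth exceeds what the study costs.

```python
# Minimal value-of-information sketch (all figures are invented for
# illustration): compare the best decision under current uncertainty with
# the decision that perfect information would allow.

p_new_better = 0.6                       # current belief that the new therapy is superior
nb_new = {"better": 12.0, "worse": 4.0}  # net benefit of the new therapy by scenario
nb_std = 8.0                             # net benefit of the standard therapy

# Best achievable expected net benefit under current (imperfect) information
ev_new = p_new_better * nb_new["better"] + (1 - p_new_better) * nb_new["worse"]
best_now = max(ev_new, nb_std)

# With perfect information, the better option is chosen in each scenario
ev_perfect = (p_new_better * max(nb_new["better"], nb_std)
              + (1 - p_new_better) * max(nb_new["worse"], nb_std))

# EVPI per patient, scaled to a hypothetical affected population
evpi_per_patient = ev_perfect - best_now
population = 10_000

print(f"EVPI per patient: {evpi_per_patient:.2f}")
print(f"Population EVPI:  {evpi_per_patient * population:,.0f}")
# A study costing more than the population EVPI is not worth running.
```

The same framework extends to the expected value of sample information for a specific study design, which is how a panel could compare a registry against a new RCT for a given innovation.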
The determination of whether an experimental therapy has achieved all the characteristics of standard practice affects many stakeholders: patients, physicians, professional societies, hospitals, businesses, and payers. Furthermore, CMS and private insurers must decide whether to pay for it as reasonable and medically necessary (Steinberg 2005, Griffin 2006, Garber 2001, NICE 2004, CMS 2006). When an innovation has been vetted through a comprehensive evaluation process, providers, the public, and insurers may rely on available scientific evidence and cost effectiveness analyses to decide on adoption and reimbursement.
However, frequently a conflict arises between the public and the physicians on one side and the payers on the other regarding the appropriateness of covering crossover therapies. One side of the argument is that the adoption of an experimental procedure before the conclusion of a thorough comprehensive evaluation may facilitate the introduction of ineffective clinical practices, substandard clinical results, and poor use of health care dollars.
In addition, in the case of drugs and devices, early adoption and coverage effectively involve indirect contribution by CMS and private insurers to research and development costs.
On the other hand, the delay in adopting and covering a new treatment may potentially harm high-risk and terminally ill patients who would not have access to interventions that could be superior to those currently available.
New solutions are required to address the question of when and how to cover crossover therapies, while at the same time acquiring additional evidence to establish their effectiveness, safety, and therapeutic role.
In this context, insurers may want to consider facilitating early research on new therapies for at least two reasons.
First, high quality information on innovations should be acquired before physicians and patients develop a bias in favor of a new treatment based solely on perceptions derived from its uncontrolled diffusion in clinical practice. Once favorable bias toward a new therapy has already developed, physicians and the public tend to resist the implementation of randomized and comparative effectiveness studies (Steinberg 2005, Garber 2001). This leads to failure in obtaining the evidence needed to establish the medical necessity and cost effectiveness of the new treatment.
Second, by facilitating early research, insurers could collaborate with hospitals and physicians in a process of early conditional and provisional adoption while further evidence is being collected. As an added benefit, such partnerships would reduce the risk that social and legal pressures would dictate the adoption of new treatments not vetted through comprehensive evaluation.
We support the view that when an innovation has already been introduced in clinical practice but is not yet considered standard of care, the evidence used to determine coverage should not be limited to RCTs, because of the limitations discussed above. Instead, following decisional analysis methodology, the assessment should include registries, observational studies, and expert panels, and should take into account socioeconomic factors. Specifically, following the lead of Great Britain’s National Institute for Clinical Excellence (NICE), the cost of collecting new data must be weighed against the benefit of extending coverage (NICE 2004).
When questions remain after evaluation of all available evidence, consideration could be given to coverage with evidence development (CED) (CMS 2006), provided that certain conditions are met. In particular, the crossover therapy should be used for life-threatening or severely disabling diseases for which adequate therapeutic alternatives are not available. Preliminary applications must have demonstrated a reasonable degree of safety. And there must be a strong rationale for introducing the innovation, supported by initial reports and expert opinions. Furthermore, coverage should be conditioned on the concurrent acquisition of additional data and the submission of scientific and cost effectiveness analyses (ACP 2008).
The additional evidence could be collected through RCT (when feasible), registries, and prospective cohort studies. An external panel of national and regional experts could guide the choice of the most suitable studies for a particular innovation. Finally, coverage should be provisional and subject to withdrawal when interim and/or final analyses fail to show that the new therapy is safe and effective.
However, even with this approach, questions remain regarding which costs associated with the conditional and provisional adoption of new drugs, devices, and procedures should be eligible for coverage.
Commonly, the trial sponsor pays for non-FDA-approved drugs used in clinical trials, and private insurers pay for the routine costs of care during the trial. Some payers cover routine costs of care only for phase 3 trials. However, we believe that when a trial has therapeutic intent, and the new drug is used to treat highly disabling diseases and terminally ill patients, insurers should consider covering routine costs of care regardless of the trial phase and label, particularly in the absence of good therapeutic alternatives. This approach would represent the kind of partnership among insurers, providers, researchers, and industry that would accelerate the acquisition of evidence while still allowing access to innovation in cases of life-threatening conditions when no adequate therapy is available.
Furthermore, in addition to routine care, insurers could decide to pay for the costs of FDA approved drugs used in clinical trials designed to investigate their off-label application and their effects in combination regimens. This would facilitate the accumulation of evidence in areas like pediatrics and oncology where, routinely, 50 percent to 70 percent of FDA-approved drugs are used off label.
Coverage of devices, biotechnology products, and procedures is more controversial. According to the Health Industry Manufacturers Association (HIMA 1995), manufacturers often cannot afford the costs of sponsoring the device or product, and government and private grants are not always available to sponsor procedures.
The CMS has led the way in these areas by offering CED for surgical procedures, devices, and off-label drugs (national emphysema trial/lung reduction; angioplasty/stenting of carotid artery; FDG-PET for dementia and oncology; use of ICD; and off-label use of biologics approved for colorectal cancer) (CMS 2006). In all these cases, by covering the new treatment, CMS acted effectively as the trial sponsor during the acquisition of new evidence.
Ideally, the federal government should increase funding of clinical studies investigating devices, biotechnology products, and clinical procedures through grants administered by the NIH, the NHLBI, the AHRQ, and the USPSTF.
Although private insurers could follow the lead of the CMS and support CED for certain new treatments, direct payment for devices, products, and procedures is problematic. Funding such costs through an expansion of the coverage provided under standard policies would inevitably contribute to increased premiums.
However, in the absence of alternative sponsors, a possible option would be for private payers to fund payments for new therapies under CED through trusts specifically created for this purpose. To be financially consequential and achieve extensive acceptance, these trusts should be funded from the health care industry at large, the government, and private donors. Additional sources of funding for the research trusts could be generated through business agreements between payers and manufacturers. Under such agreements, if insurers had previously paid for the direct cost of the technology still in evolution, effectively contributing to its R&D, manufacturers would subsequently pay, directly to the trusts, royalties generated by the new technology.
Closing the gap
New methods should be considered to close the gap between what we know and what we do, and at the same time accelerate the adoption of potentially effective innovation, without triggering a rise in insurance premiums. Decisional analysis techniques offer a more comprehensive assessment of new therapies through the utilization of real-world data and socioeconomic analyses. CED provides a rational mechanism for the provisional and conditional introduction of crossover therapies, although direct payment for new technologies and procedures under CED remains controversial. Finally, commercial plans, the government, and industry could form business partnerships that would lead to funding clinical research while expediting the adoption of new treatments in ways that do not increase medical costs and premiums.
- American College of Physicians. Information on cost-effectiveness: An essential product of a national comparative effectiveness program. Ann Intern Med. 2008; 148:956–961.
- American Speech-Language-Hearing Association. Report on newborn screening draws criticism. Press release. www.asha.org/about/publications/leaderonline/archives/2001/011120_2. Nov. 20, 2001.
- Atkins D, Siegel J, Slutsky J. Making policy when the evidence is in dispute. Health Aff. 2005;24(1):102–113.
- Centers for Medicare & Medicaid Services. National coverage determinations with data collection as a condition of coverage: Coverage with evidence development. http://www.cms.hhs.gov/mcd/ncpc_view_document.asp?id=8
- Chalkidou K, Lord J, Fischer A, Littlejohns P. Evidence-based decision making: When should we wait for more information? Health Aff. 2008;27(6):1642–1653.
- Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: Comparison of protocols to published articles. JAMA. 2004;291(20):2457–2465.
- Claxton K, Cohen JT, Neumann PJ: When is evidence sufficient? Health Aff. 2005;24(1): 93–101.
- Dreyer NA, Garner S. Registries for robust evidence. JAMA. 2009;302(7):790–791.
- Garber AM. Evidence-based coverage policy. Health Aff. 2001; 20(5): 62–82.
- Gliklich RE, Dreyer NA, eds. Registries for evaluating patient outcomes: A user’s guide [AHRQ publication No. 07-EHC001-1, April 2007]. Agency for Healthcare Research and Quality. http://effectivehealthcare.ahrq.gov
- Griffin S, Claxton K, Palmer S, Sculpher M. Dangerous omissions: The opportunity costs of decision uncertainty. Poster session presented at the 28th Annual Meeting of the Society for Medical Decision Making. Oct. 15–18, 2006.
- Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow PD, Teutsch SM, Atkins D. Current methods of the U.S. Preventive Services Task Force: A review of the process. Am J of Prev Med. 2001;20(3):21–35.
- Health Industry Manufacturers Association. Forces reshaping the performance and contribution of the U.S. medical device industry. New York: The Wilkerson Group, 1995.
- Ioannidis J. Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA. 1998;279(4):281–286.
- National Institute for Clinical Excellence. Guide to the methods of technology appraisal. 2004.
- Reiser SJ. Criteria for standard versus experimental therapy. Health Aff. 1994;13(3):127–136.
- Rick Love, MD vs. Blue Cross & Blue Shield Association. http://www.hmosettlements.com/settlements/bluecross/BCBS_OrderApprovingSettlement4-22-08.pdf
- Steinberg EP, Luce BR. Evidence based? Caveat emptor! Health Aff. 2005; 24(1): 80–92.
- Thompson DC, McPhillips H, Davis RL, Lieu TA, Homer CJ, Helfand M. Universal newborn screening: A summary of the evidence. JAMA. 2001; 286: 2000–2010.
- United States Preventive Services Task Force (USPSTF). Newborn hearing screening: Recommendations and rationale. Am Fam Phys. 2001;64(12):1995–1999.
These statements are the conclusions and opinions of the authors and do not necessarily represent the position of Independence Blue Cross or of its subsidiaries and affiliates.