The genomic revolution is under way. Just don’t expect it to be an early money-saver.
This article and the one that follows were derived from presentations at the Lennox K. Black International Prize for Excellence in Biomedical Research – 7th Symposium, Individualized Medicine, conducted in November 2012 at Thomas Jefferson University in Philadelphia. The symposium and these articles derived from it were made possible with support from Roche Diagnostics.
The late science-fiction writer Ray Bradbury once said that sometimes “you’ve got to jump off a cliff first and build your wings on the way down.” We have jumped in terms of using genetic and genomic knowledge to guide individualized health care, and now we are in the process of building our wings. Hopefully, we can get them built before we hit the ground!
I have worked in integrated health care systems for more than 20 years, and was a solo-practice pediatrician before that, so I have had an “up close and personal” view of the U.S. health care system. That is why I liken the effort to realize the goal of “precision medicine” — that is, medicine individualized to the patient’s circumstances and guided by our fast-growing knowledge of the human genome — to “an irresistible force meeting an immovable object.”
Precision medicine is a concept that emerged through the work of Clayton Christensen of the Harvard Business School in his book The Innovator’s Prescription: A Disruptive Solution for Health Care. By that phrase, he means “…the provision of care for diseases that can be precisely diagnosed and whose causes are understood, and that consequently can be treated with rules-based therapies that are predictably effective.” That is where I expect genomics to play a key role.
How precise is our ‘precision’?
To be honest, current next-generation genetic sequencing is not sufficiently accurate for clinical use without confirmation by a traditional sequencing methodology. We hear that whole-genome sequencing is 99.99 percent accurate. Given the 6 billion base pairs in our genome, that means we have only 600,000 errors to deal with! Unfortunately, those errors are not randomly scattered across the genome. Rather, they cluster in places that we are interested in, for instance in cytochrome P450 genes that produce enzymes important in drug metabolism. Because of extensive gene homology and gene duplication, it is difficult to genotype these regions, and it is almost impossible to genotype the HLA region with next-generation sequencing. So there are areas that are clinically relevant, and next-generation sequencing does a poor job of getting at them. These are problems that will be resolved technically, but we cannot assume that next-generation sequencing is going to work right away.
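That 600,000 figure is simple arithmetic. Here is a minimal sketch, assuming a diploid genome of roughly 6 billion bases and a uniform 99.99 percent per-base accuracy (which, as noted above, real sequencing errors do not obey):

```python
# Rough expected-error count for whole-genome sequencing at a
# nominal 99.99% per-base accuracy. Illustrative only: in practice
# errors cluster in hard-to-sequence regions such as CYP and HLA.
GENOME_BASES = 6_000_000_000   # diploid human genome, ~6 billion bases
ACCURACY = 0.9999              # nominal 99.99% per-base accuracy

expected_errors = GENOME_BASES * (1 - ACCURACY)
print(f"Expected erroneous base calls: {expected_errors:,.0f}")
# Expected erroneous base calls: 600,000
```

Because the errors are concentrated in clinically interesting regions, this uniform-rate estimate understates the practical problem.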
(Figure omitted. Source: Genomic Medicine Institute, Geisinger Health System)
Right now, we are really at the beginning of whole-genome sequencing, so the challenge is to determine the clinical quality threshold and the methods for quality assurance in testing. The Centers for Disease Control and Prevention (CDC) recently issued the first guidelines for next-generation sequencing laboratories — so we now have at least one agency that has put a line in the sand, saying: “Here are some things that need to be considered in terms of the analytic quality of whole-genome sequencing.”
Most genes are inaccurately characterized in terms of their associated phenotypes, prevalence, and penetrance. And even in the genes that we know, very few of the variants are annotated. We have only a rudimentary understanding of noncoding DNA that shows high evolutionary conservation (so-called “junk” DNA), and we have almost no information about gene–environment interactions. I think one of the reasons for this is that we have invested much more in genotyping than in phenotyping. We have difficulty generalizing across populations for a variety of reasons. As we think about the “dark matter” problem of genetics (“Where is all the missing heritability?”), the role of rare, perhaps individual familial variants is going to become increasingly important.
With regard to treatment precision, there are few examples of genomically guided therapy. Nevertheless, this is a very promising area. In particular, the molecular characterization of tumors is going to transform our approach to the treatment of cancer in the next five years. Rather than relying on the histologic classification that revolutionized oncology more than 100 years ago, we will characterize tumors molecularly and identify their molecular drivers.
We face major challenges in terms of how we design genome studies, particularly with regard to rare variants. We also lack robust intermediate outcomes or predictive biomarkers. For example, if a person looks as if he or she is at increased risk for developing diabetes, but the diabetes does not develop for 30 years, how can we determine that any interventions are going to be effective in preventing the disease unless we have intermediate markers?
You might ask, “Why are we bothering to try to move this into the clinic?” Some justifications have been offered. We hear that it is cheaper to sequence the entire genome than to do one or two genes. We also hear that having the information will save money through better prevention, through the avoidance of adverse events, and through more effective treatment.
The fact is, commercial firms are offering whole genomes in the $5,000 to $10,000 range — but that does not include the cost of confirming variants of interest. It certainly does not include the cost of storing the information, nor does it consider interpretation. Bruce Korf at the University of Alabama–Birmingham talks about the $1,000 genome with the $100,000 interpretation, and that is the problem. It takes a lot of time and energy to do the interpretation. What are the mechanisms by which we are going to pay for that? So let us not delude ourselves that a decrease in the cost of the technology is going to be reflected in a decrease in the overall cost of care.
A recent study looked at incurred medical costs after the age of 20 years in Dutch males. A normal-weight, nonsmoking Dutch male incurred a cost of 281,000 euros after the age of 20. Surprisingly, an overweight, nonsmoking Dutch male incurred a cost of 250,000 euros, and an overweight, smoking Dutch male incurred a cost of 220,000 euros. Why is this the case? The simple answer is they died! It is a dirty little secret of health economics that dead people do not incur health care costs. Of course, when a productive member of society dies prematurely, that is a negative for society. However, if the health care sector keeps a person alive, we do not receive any credit for the fact that we maintained that person as a viable, productive member of society.
We need to be honest about the economic impact of prevention — we cannot blithely say that it saves money. However, if we can identify an individual, based on his or her genome, as having a predisposition to an adverse drug reaction — such as Stevens–Johnson syndrome with carbamazepine — we can save significant amounts of money because the cost of the genotyping for these individuals is very low. Some examples that are emerging in practice include the HLA-B*5701 genotype and abacavir; HLA-B*1502 and carbamazepine; and SLCO1B1 and simvastatin. In each of these cases, there is zero impact on efficacy, and effective alternate treatments are available.
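To see why this arithmetic can favor testing, consider a hedged expected-cost sketch. Every number in it (the test price, the reaction rate, and the cost of managing a reaction) is invented for illustration and does not come from the drug–gene pairs above:

```python
# Hypothetical expected-cost comparison: genotype before prescribing
# vs. prescribe without testing. Every number here is invented for
# illustration; none comes from a real drug-gene pair.
TEST_COST = 100          # per-patient genotyping cost (hypothetical)
REACTION_RATE = 0.005    # severe-reaction rate without testing (hypothetical)
REACTION_COST = 50_000   # cost of managing one severe reaction (hypothetical)

def expected_cost_per_patient(test_first: bool) -> float:
    """Expected incremental cost per patient under each strategy."""
    if test_first:
        # Carriers are switched to an equally effective alternative,
        # so severe reactions (and their costs) are avoided.
        return float(TEST_COST)
    return REACTION_RATE * REACTION_COST

print(f"No testing: ${expected_cost_per_patient(False):,.2f} per patient")
print(f"Genotyping: ${expected_cost_per_patient(True):,.2f} per patient")
```

With these invented inputs, testing everyone costs less per patient than absorbing the occasional severe reaction; whether that holds in practice depends on the real test price, reaction rate, and reaction cost.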
What about efficacy? An example is the TPMT gene and 6-mercaptopurine (6-MP). 6-MP is an immunosuppressive agent that is used for several different indications, including leukemia and inflammatory and autoimmune diseases. The drug is associated with a risk of bone-marrow suppression, which is not surprising since it is targeted against leukemia. The risk of a severe life-threatening reaction related to bone-marrow suppression is between 1-in-140 and 1-in-300. 6-MP treatment outcomes have been analyzed based on TPMT genotype, and we know that the polymorphisms in this gene affect 6-MP metabolism. If a person has a non-wild-type variant that decreases his or her TPMT enzyme activity, then that person has an increased risk for complications. There is adequate evidence to demonstrate an association between these genotypes and an increased risk of bone-marrow suppression.
It has been recommended that all individuals who are going to be treated with 6-MP should be tested for the TPMT genotype. If polymorphisms are present, the drug should be avoided or the dose adjusted. The FDA has changed the labeling for 6-MP, and I know of at least one lawsuit where a physician did not test an individual who subsequently died from bone-marrow suppression.
In a study published in JAMA, Stanulla and colleagues genotyped a group of children with acute lymphoblastic leukemia (ALL). The investigators did not use the genotype to inform treatment, but they had the information available, and after treatment they looked at the outcomes. All of the children were treated with standard doses of 6-MP and other ALL drugs, and the investigators measured minimal residual disease before and after treatment. Children with at least one mutant TPMT allele had a significantly lower rate of minimal residual disease than those who were wild type (9 percent vs. 23 percent, respectively). This demonstrates that 6-MP is more efficacious in patients with at least one mutant TPMT allele, because their leukemia cells metabolize 6-MP less effectively.
Importantly, the children with minimal residual disease were 5 to 10 times more likely to experience a relapse than were those without residual disease. This finding raises some questions. For example, should we sacrifice increased efficacy to prevent adverse events?
The literature has looked at errors of omission versus errors of commission. We are three times more likely to kill a patient by not doing something than by doing something, and yet we are much more sensitive to errors of commission than to errors of omission. This directly relates to the efficacy question. Would it be better to use the same dose of 6-MP and monitor blood counts more closely in individuals who are not wild type? Should we increase the dose of 6-MP in wild-type individuals to increase efficacy? If my child had ALL, I would want the rate of minimal residual disease at 9 percent, not at 23 percent.
So what is the economic impact of more effective therapies? We probably could save money if we intervened earlier and prevented adverse outcomes. We certainly want more effective interventions, and we want to avoid using therapies in individuals for whom the treatments would be predictably ineffective, based on genotype. That is one way to reduce waste.
However, there is also the potential to increase costs. We are going to develop treatments for the untreatable. We now have therapies for single-gene genetic diseases, such as lysosomal storage disorders, where we can replace the enzyme. The cost of those enzyme-replacement therapies is between $40,000 and $100,000 per year. One of the biggest “wins” in the Affordable Care Act for that small population of patients was the elimination of the $1 million or $2 million lifetime cap, because these patients would reach that cap within a few years, and then they would not be able to get insurance.
If a person lives longer, he or she inevitably incurs more health care costs. I am reluctant to say that we are somehow going to decrease those costs in America, but I think it is worth looking at value — the relationship between patient-centered outcomes and the cost of care. Patient-centered outcomes include medical outcomes, treatments, prevention, safety outcomes, service outcomes, the number of visits to a physician, and the extent to which these visits disrupt the patient’s routine of daily life. This is where people talk about personal utility: “How will that information change what I do?”
In the REVEAL study, Robert Green at Harvard looked at people with at least one allele that was predictive of a higher risk of developing Alzheimer’s disease. Even though the subjects were told repeatedly that there was nothing they could do to prevent Alzheimer’s, they ate better, took vitamins, exercised more and did the kinds of things that are generally considered to be “healthier behaviors.”
In general, we do a poor job of measuring the cost of our services. Medical and service outcomes can be affected in three ways: We can improve them; we can leave them unchanged; or we can worsen them. The same is true for the cost of care: We can decrease costs; we can leave them unchanged; or we can increase them. Ideally, we want to have decreased costs with better outcomes.
Sometimes, of course, we decide that it is worthwhile to improve outcomes at an increased cost. With Gleevec or Herceptin, for example, we have made the decision, as a society — at least based on the surrogate of insurance coverage — that even though the cost of care is increased significantly by using these medications, the improved outcomes warrant the investment.
Unfortunately, much of U.S. medicine increases the costs of care while worsening the outcomes. The best example of this — one for which we can share some of the blame with lawyers — was bone marrow transplantation for advanced breast cancer. In one case, a large HMO denied a woman a bone-marrow transplant that had been recommended by a clinician. The insurer said, “This is an experimental investigational procedure that is not covered under your benefit plan. We are not going to pay for it.” They were taken to court. The woman died, and the family continued the suit. The insurer lost in court and paid more than $80 million in a judgment for not covering the therapy.
Overnight, every health plan in the country covered bone-marrow transplantation for advanced breast cancer. Then studies came out that said, “You know what? Not only does it not work, but we are killing women with this!” So the insurer had it right. They said, “This is an experimental intervention; we should not be paying for these therapies.” It has been estimated that between $500 million and $1 billion was spent in the U.S. on this futile and harmful therapy.
In the realm of genetics, we have to accept the fact that, at this point, we are a faith-based organization. I stand up in front of people and say, “You have to believe that what we do is good!” But we do not have any data when it comes to value. Genetic disorders are very rare, individually, but are more common in the aggregate. I would argue that precision medicine is moving more and more toward rare individual disorders. At the same time, aggregating the data in a meaningful way to understand value is difficult. We need to address this question because it affects issues related to reimbursement, institutional support, and professional standing.
A recent cost-effectiveness study looked at KRAS testing in tumors before the use of cetuximab in colorectal cancer. The good news is that testing KRAS prior to the use of cetuximab is cost-effective. The bad news is that cetuximab is probably not worth what we are paying for it, because the actual improvement in survival with this medication is modest at best.
In this study, the authors estimated that if KRAS testing and cetuximab therapy are used only in individuals who are predicted to be responders, the cost is approximately $650,000 per quality-adjusted life-year (QALY). (In this country, we tend to accept $100,000 per QALY as cost-effective.) The bad news is that if clinicians do not perform KRAS testing, which is what is happening in most of the U.S., the cost is $2.8 million per QALY. So, if we do not apply KRAS testing, we are definitely increasing the cost of care. As a society, we are going to have to decide what to do in a case like this.
I would argue that the ship has sailed. We have made the decision, as a society, that we are going to use cetuximab in these individuals, but we should make sure that every one of them undergoes KRAS and BRAF testing so that we do not go broke quite so quickly!
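The comparison above follows from the standard incremental cost-effectiveness ratio (ICER): incremental cost divided by incremental QALYs gained. Here is a minimal sketch with invented inputs, chosen only so the outputs echo the magnitudes quoted above; they are not the study's actual figures:

```python
# Incremental cost-effectiveness ratio (ICER) sketch.
# All inputs are invented for illustration; they are not the
# figures from the KRAS cost-effectiveness study discussed above.

def icer(extra_cost: float, extra_qalys: float) -> float:
    """Incremental cost divided by incremental QALYs gained."""
    return extra_cost / extra_qalys

# Treat everyone: high drug spend, small average benefit per patient.
treat_all = icer(extra_cost=280_000, extra_qalys=0.10)

# Test first, treat only predicted responders: far less drug spend
# (plus a modest test cost), with the benefit concentrated in responders.
test_first = icer(extra_cost=65_000 + 500, extra_qalys=0.10)

print(f"Treat all:  ${treat_all:,.0f} per QALY")
print(f"Test first: ${test_first:,.0f} per QALY")
```

The point of the sketch is structural: restricting a costly drug to predicted responders shrinks the numerator far more than the denominator, which is why the ratio falls.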
Health care systems are moving to value-based assessments, even though they are difficult to perform. In annual expenditure per capita, the U.S. health care system is number one in the world — $7,500 per person. We are nearly 50 percent higher than the next highest country, Norway, and we are almost three times as high as Japan. So what are we getting for our money? In three measures — life expectancy, infant mortality, and potential years of life lost — we are 17th in the world. Japan is first, second, and first, respectively. The Japanese are clearly getting greater value for their expenditures in health care. We are getting very poor value, but we do not seem to care about this in our national discussions. We just care about being “the best.” It took me a while to find the information, but we are number one in the world for five-year survival in breast cancer treatment.
I would like to relate all of this to integrated health care delivery systems. In an integrated system, providers, inpatient and outpatient facilities and health plans (insurers) are all under one corporate umbrella. Many of these are built around a single electronic health record, which captures information in a data warehouse. In most cases, providers are salaried as opposed to being paid strictly on a performance basis. One extreme of this model would be the Kaiser system, where everything is internalized.
Most of us live in an environment where some of our patients are in fully integrated plans, and where some are covered by other health plans. Some of our providers are employed; some of them are not. Integrated health care systems are an important place to begin to test some of the principles of precision medicine, because we have established long-term relationships with a stable population of patients. We can use emerging best-practice models that include a medical home and accountable care organization principles.
We try to align providers, hospitals, and payers to make rational decisions about the best way to allocate resources to enhance patient care in financially responsible ways. We get everybody around the table, and we say, “Let’s look at KRAS and cetuximab. How are we going to deal with this? Are we going to cover cetuximab?” If we are, we have to be certain that every patient undergoes KRAS testing first. That is the most cost-efficient way to do it.
We can then establish systems that allow us to ensure that this is happening. We can store information securely, move relevant information into the inpatient/outpatient settings, and provide the information to clinicians so that when they have to make a decision, the data they need are presented to them.
We use standards-based electronic health record (EHR) systems with clinical decision support. High-quality, efficient care is generally based on measurements and definitions. We employ measured outcomes to inform the development and implementation of evidence-based care pathways via EHRs so that we can ensure that care is delivered reliably and consistently to all patients across the system.
We track outcomes instantaneously, monitor the integrity of care pathways, and modify and improve those pathways in a timely manner in response to a rapidly changing evidence base. This is what I mean by creating a learning health care organization — one that is culturally adaptable to the acquisition, analysis, and incorporation of integrated knowledge to improve care.
As I noted earlier, we have jumped off a cliff, and now we need to build our wings on the way down. We have acquired basic genetic/genomic knowledge, and now we have to intelligently filter that knowledge and decide what to do with it. Once we make what we think are the right decisions, we have to run them through a “black box” that includes informatics, just-in-time education, care processes, and clinical decision support for quality improvement so that we can deliver the appropriate care every time, and we have to make sure that we measure our decisions against defined quality outcomes.
But there are many challenges. For example, how do we store the information, and how do we communicate it, especially to different care systems? If I have a patient from Geisinger who comes to Jefferson, how can I make sure that other clinicians get all of the information that I have, especially if I run the patient’s genome analysis?
We have to define the clinical contexts — that is, where the specific information is used — and we have to identify those contexts automatically. We have to update the information as new knowledge emerges, which, in genetics, is about every 15 minutes! We have to support models of care delivery and measure our costs and outcomes to determine the value. And finally, we have to decide to stop doing things if we are not getting good value, because everything we invest in that does not give good value represents a lost opportunity to do something worthwhile.
How can we individualize all of this at the point of care? I do not have a simple answer to that question, but I think it is going to involve mass customization. These days we can buy any color cell phone that we want, and we can design our own T-shirts.
We have phenomenal personal computers, and they have everything we want in them, but they are built using mass-production technology. Medicine, unfortunately, still practices using a 19th-century craft model. I would argue that there is high quality in medicine — but it is customized; there is a very high cost per unit; and we cannot distribute it. Mass customization basically employs high-quality, reliable processes to build things at a low cost and to distribute them widely. This is where we need to go next.
It is not going to be easy, and the right direction is unclear. Our evidence for therapies is almost all population-based. In fact, the FDA insists that we do population-based studies to determine efficacy, although now the agency is insisting that we also need to do genotyping.
In most cases, EHR systems do not support the aggregation of relevant patient data at the point of care.
The number of data elements involved surpasses human cognitive capacity, and we have a limited ability to collect outcomes from the real world to determine the effectiveness of personalized interventions.
Whether we like it or not, genomic medicine is here to stay. If it is implemented the same way all other new technologies have been, we will probably increase costs significantly while failing to improve outcomes.
Sophisticated information systems will be required to analyze, interpret, present, and store the new genomic data, and clinical success will depend on the use of mass customization, implementation science, quality improvement — and moving the United States to full integration across all health care delivery systems.
Marc S. Williams, MD, FAAP, FACMG, is a pediatrician trained in clinical genetics, and director of the Genomic Medicine Institute of the Geisinger Health System in Danville, Pa. He is the former director of the Intermountain Health Care Clinical Genetics Institute in Salt Lake City. This article is based on a presentation made at the 7th Symposium on Individualized Medicine at Philadelphia’s Thomas Jefferson University in November 2012.
The completion of the Human Genome Project, coupled with dramatic advances in technology, provided an armamentarium of high-resolution tools to interrogate human diseases, ushering in the era of genomic medicine. We can now classify diseases at the patient level, a process that has been called “personalized medicine,” “precision medicine,” “genomically informed medicine,” and “individualized medicine.”
Perhaps the best-known aspects of “individualized medicine” are therapies directed against molecular targets, and the process of verifying the presence/availability of those targets with a specific test or companion diagnostic. Indeed, the sphere of oncology clinical trials changed with the first therapy directed against a specific molecular target, HER2 in breast cancer, because eligibility was determined using a companion diagnostic test to determine whether there was high-level expression of this protein or amplification of the corresponding gene.
Axel Ullrich, PhD
The biannual Lennox K. Black Prize is awarded to an international scientist, and for this symposium the area selected was “individualized medicine.” The recipient, Axel Ullrich, PhD, discovered the HER2 molecular target in breast cancer and was the architect of the development of the targeted therapy trastuzumab. He has since made many seminal contributions to the field of molecular oncology.
Despite the clear rationale for full adoption of the “individualized medicine” approach, the organizational and financial worlds of medicine are in transition, raising critical questions regarding “how, how much, and when.” Moreover, “personalized medicine” has been defined from a different perspective to emphasize an approach that concentrates on the individual patient — “patient-centered” medicine.
It is clear that medicine is undergoing a metamorphosis from a Procrustean decision-making process that fits individual diseases into pre-defined groups to a data-driven practice based on detailed insights into both the disease and the patient. The adoption of “individualized medicine” in the current climate of health care reform may well be a perfect opportunity.
— Stephen C. Peiper, MD, chairman of the pathology, anatomy and cell biology department at Jefferson Medical College of Thomas Jefferson University
Pediatrician/geneticist Marc S. Williams, MD, of Geisinger Health System says genomics-informed medicine will present health plans with a number of tough decisions, and it won’t always be easy to determine what is cost-effective. But he describes a sophisticated interaction that could provide plenty of bang for the buck.
My informaticist at Intermountain Healthcare, Grant Wood, built me a sample patient portal and kindly volunteered to be the “guinea pig” himself. He entered his family history and the medical conditions that he had. Although there are different options, he chose to send it to me. He clicked a button, and now we are in his electronic health record (EHR). I open my clinical desktop, and I have a message from Grant. It says that he has logged into my outpatient portal and has completed a family history record that shows an increased risk for a familial disease. The family history risk assessment is available for my review; I can view it as a pedigree or as a table, and I can upload the table to a Web form.
With one click, I upload the table to Grant’s EHR, which now says that I have reviewed his family history information. The information is there for anyone else who is viewing the EHR.
The EHR also states that I can perform a risk assessment. If I click that button, it says: “increased risk for coronary artery disease.” Why? The answer is that Grant has two first-degree relatives, males under age 55, with premature coronary artery disease.
I can add that to the problem list with another button click, which may say “family history, premature coronary artery disease,” or I could assess Grant’s cardiovascular risk using clinical data. When I click that button, a message reads: “moderate cardiovascular risk, 2+ risk factors.” On another part of the screen, a message states that Grant had a measured blood pressure in the hypertensive range on Jan. 18, 2010, and that his lipid profile showed low HDL cholesterol in November 2011. The message also tells me that the risk was calculated using ATP III risk factors. If I do not remember those risk factors, I can click on a button, and it will take me to another page, which says, “Here are the ATP III risk factors. Here is the evidence.” I can drill down as far as I want to go in nested information, and I am still in Grant’s EHR.
The screen also shows me an information button, which says that I should consider an inflammatory marker test to reclassify Grant’s cardiovascular risk. When I click on that button, the screen states that if I perform this test, and the marker is not elevated, then the patient’s LDL treatment goal is less than 130 mg/dl. But if the marker is elevated, then the patient has a cardiovascular risk equivalent, and I need to treat to a much tighter LDL level of less than 100 mg/dl.
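The branching logic just described can be written as a simple rule. This is a simplified sketch of that decision support, not the actual EHR implementation; the thresholds follow the ATP III-style goals quoted above:

```python
# Simplified sketch of the LDL-goal decision rule described above.
# Not the actual EHR implementation; the thresholds follow the
# ATP III-style goals quoted in the text.

def ldl_goal_mg_dl(moderate_risk: bool, inflammatory_marker_elevated: bool) -> int:
    """Return the LDL treatment goal in mg/dL."""
    if moderate_risk and inflammatory_marker_elevated:
        # An elevated marker reclassifies the patient to a
        # cardiovascular risk equivalent: treat to < 100 mg/dL.
        return 100
    # Marker not elevated: standard moderate-risk goal of < 130 mg/dL.
    return 130

print(ldl_goal_mg_dl(moderate_risk=True, inflammatory_marker_elevated=False))  # 130
print(ldl_goal_mg_dl(moderate_risk=True, inflammatory_marker_elevated=True))   # 100
```

In a real EHR this rule would live in a decision-support engine with an evidence link attached, which is exactly the drill-down behavior described in the walkthrough.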
The screen shows me what the inflammatory markers are (hs-CRP or Lp-PLA2). The screen can also take me to references that will explain why those are the markers that I should use. It also gives me a link to a reference that shows me why an LDL goal is appropriate.
I could schedule Grant for an appointment, but the screen tells me that I saw him recently, and I don’t really need to bring him back in. I click another button, order the test, and notify the patient.
Back at the patient portal, Grant now has a message from me, and he opens it. The message reads: “I have reviewed your family health history. I note the increased risk for cardiovascular disease in your family. This information, combined with your personal risk factor, places you at moderate risk for cardiovascular disease. There is a blood test that can help determine treatment goals, and this test can reduce your chances of having problems. I have placed an order for that test in the system. Contact the scheduling clerk to make arrangements to have the test performed. I will contact you with results. Call or e-mail me if you have any questions.”
So I have just performed a sophisticated health transaction using efficient evidence-based processes that are highly reliable. Physicians can check at any point to make sure they agree with the decisions that the system is offering. It is an “open” decision-support process. In other words, clinicians can choose to follow or not follow the instructions, and different decision options are available.
With a quick series of clicks, I have completed a transaction that will arguably improve the evidence-based care of this patient. The next question is, “How am I going to get paid for doing that?” We haven’t solved that problem yet! — Marc S. Williams, MD, FAAP, FACMG