An instance where the ubiquitous “more research is needed” very much sums up the debate
What’s an otolaryngologist to do? For years, no academic study of any real import examined the challenges facing adult patients undergoing tonsillectomy. Now, suddenly, two studies appear and reach different conclusions, though both close with the obligatory suggestion that “more research is needed.”
One study is “Prevalence of Complications from Adult Tonsillectomy and Impact on Health Care Expenditures” in the journal Otolaryngology — Head and Neck Surgery on April 1, henceforth referred to here as “bad tonsils.” It concludes, “Complications of adult outpatient tonsillectomies are common and may be associated with significant morbidity, health care utilization, and expenditures.”
The other is “Safety of Adult Tonsillectomy: A Population-Level Analysis of 5,968 Patients” in JAMA Otolaryngology — Head & Neck Surgery in March, henceforth referred to here as “good tonsils.” It concludes, “In the United States, adult tonsillectomy is a safe procedure with low rates of mortality and morbidity. The most common post-tonsillectomy complications were infectious in etiology, and complications were independently associated with the need for reoperation.”
The research teams do a good job of defending their work — while also appreciating that the other side has a point.
“So which one is right?” asks Dennis Scanlon, PhD, one of the authors of bad tonsils. “The truth is probably somewhere in between. Though I believe that our national sample, the complications examined, and the methods used are what make our study strong — it still has limitations. But of course I may be biased, so others will have to decide — both articles underwent peer review.”
Benjamin L. Judson, MD, is the corresponding author for good tonsils. He says, “Looking closely, although some findings between the two studies are different, they are also concordant in several important areas. For example, the reoperation (or procedural intervention) rates after adult tonsillectomy were 3.09% and 3.2%. The difference in reported complication rates between the two studies likely reflects what is captured as a complication in the databases used for each.
“In our study only major complications are captured while the other study does a great job of capturing more minor complications, for example pain requiring additional medication treated as an outpatient.”
Scanlon also believes that this is about more than dueling studies. It is about the perplexing situation that clinician executives can find themselves in these days. Is there an echo chamber effect when it comes to clinical studies?
“What should clinicians believe when they can pick an article to support whatever current views they have?” Scanlon asks. “Does shared decision making or informed consent necessitate talking about both — as though this actually happens?”
Scanlon believes that the coincidence of the two studies points to “broader questions of clinical scientific evidence and what we know and don’t know about risks and benefits despite having, as a society, pumped so many federal research dollars into academic medical centers and clinical research and most lately comparative effectiveness research and patient-centered outcomes research.”
Judson is just excited “that both these articles have been published. Prior to these two, there were no population-based studies examining the safety of this very common surgery.”
George J. Isham, MD, is the medical director and chief health officer at HealthPartners and also a member of Managed Care’s Editorial Advisory Board. He sat on the Committee on Comparative Effectiveness Research Prioritization which, under the auspices of the Institute of Medicine, released a report in 2009 about how best to allocate $1.1 billion for CER.
The money was granted under the American Recovery and Reinvestment Act, aka the stimulus program.
The committee’s Initial National Priorities for Comparative Effectiveness Research (http://bit.ly/1fmdxqY) is a blueprint for determining not simply what treatments work, but what treatments work best.
The report states that “innumerable practical decisions facing patients and doctors every day do not rest on a solid foundation of knowledge about what constitutes the best choice of care. One consequence of this uncertainty is that highly similar patients experience widely varying treatment in different settings, and these patients cannot all be receiving the best care.”
Isham says that disagreement between the two tonsillectomy studies is “not at all uncommon or unexpected, given two research teams have independently and apparently without the knowledge of each other decided to study the same question.”
He says the different conclusions arise from variation in definitions of what constitutes a complication, which then causes different interpretations of low or high rates of complications.
“This could be resolved over time through a conference on this topic to understand the differences between the approaches and come to perhaps a common approach and definition of low and high relative to complication rates and then ultimately with subsequent studies, either confirm or further muddy the waters on complication rates for this procedure against the now commonly perceived standards,” says Isham.
“In the interim, two studies with apparently different conclusions does indeed call for further research to clarify the issues.”
It’s not the first time this has happened.