A program to reduce medical errors promised providers that confidentiality would be protected. Then came a problem demanding immediate attention.
Sometimes confidentiality conflicts with patient safety. I would like to report a case in point from a research project in which I was involved. And, I'm glad to say, it's a story with a happy ending.
In the department of family medicine at the University of Colorado, we’ve got a vigorous program in which we systematically collect reports of errors that occur in a network of ambulatory clinics. To my knowledge, this is the largest effort of its type currently active in the U.S.
The fun part is classifying and analyzing the hundreds of events we have accumulated according to a detailed taxonomy, which is beginning to reveal patterns of error that we are increasingly exploiting in our efforts to improve care. The core of the process is the system we have developed for coding and classifying outcomes and processes in health care. But that's not germane to the ethical problem.
Our program permits anonymous error reporting, and we encourage staff to submit as many details as possible (though not patient identities) to allow better analysis. As a consequence, most of our reports contain enough facts to reveal their origins, if someone took the trouble to trace them. However, from the start of the program, we assured participants that the intent of the data collection was not to target individual providers, clinics, or staff members.
All we wanted were aggregate statistics about the kinds of errors being noticed across several practices, so we could use these to design interventions that might benefit everyone. In addition, we were careful to remove any information from reports that could allow the reconstruction of patient identities, and we deleted the raw data from our database after it was abstracted.
Initially, there was a predictable, though low-intensity, concern voiced among study participants about the possible repercussions of reporting errors. It focused on two questions that have proved troublesome in practically every environment where error reporting occurs: "What protects people who commit errors from legal or administrative discipline that nonreporters would normally escape?" And, "What protects people who report errors from retribution at the hands of superiors or peers whom they discomfit?" If these questions can't be answered satisfactorily, an error-reporting program cripples itself with a huge negative bias.
Luckily (actually, not by luck, but through decades of patient leadership by a corps of outstanding medical and legal thinkers), Colorado is blessed with one of the country’s more civilized and rational medical malpractice environments, to the extent that those terms can be used. As a result, it was easier for us to make a persuasive case that our goal in collecting information on the epidemiology of medical errors was altruistic, and not to serve the crass commercial and ideological purposes of the despicable tort industry.
We were able to reassure our participants with a strong track record of managing adverse events in constructive ways. Although we could not offer anyone “immunity” from all consequences of causing a patient harm (if discovered in the normal course of events), at least we could promise that we would not expose anyone to additional liability risk, or disciplinary action, because of something discovered through our voluntary reporting system. This covenant formed a foundation for trust and optimism among our medical staff about the possibility of learning from errors.
Our project received enthusiastic participation from the providers and staff in our care network. Our Web-enabled reporting form takes in numerous reports of errors in the clinics, often with analyses of how they happened, and constructive suggestions about how to prevent them from recurring. Our steering committee meets regularly, and passes on tips and quality improvement observations arising from the reports. Then, one day Murphy threw us a curve.
As it happened, we began to notice a pattern of adverse events in two clinics that indicated a systematic, daily threat to patient safety. These stood out because they weren't just "similar" problems (of which we already had a good supply), but recurrent instances of the same error, at rates high enough to be a "Sword of Damocles" hanging over patients. Although we weren't aware of any harm having occurred (yet), we had a strong premonition of impending doom.
Without divulging inappropriate detail, I can describe these issues as “system failures” in which potentially important test results were being frequently misdirected. As most clinicians are aware (but not all, sad to say), this is one of the ubiquitous hazards in medical practice that demand creative management and a fair amount of vigilance.
This discovery presented us with a dilemma like the one clinical researchers face when subjects in one arm of a blinded study fare so dramatically well, or so poorly, compared with the other group that the researchers must "break the seal" on the blinding, out of an overwhelming duty to patient welfare.
In our case, it would be necessary to report an unsafe condition to the clinic managers, to make appropriate interventions. Because of the small size of the staffs at the affected sites, revealing the content of the reports would virtually guarantee that the identities of the reporters would become obvious to their managers. Of course, this was one of the things we had promised would not occur, when we originally explained the error-reporting project to our network.
In discussing how best to handle this problem, we wondered how the individuals involved would feel about having the problems in their clinics reported to their administrators. And we considered how the rest of the network would react when news of the intervention inevitably traveled along the grapevine. We noted that some people might become wary of future reporting, or indignant about a violation of trust. Despite this possibility, we didn't hesitate to do the right thing and take our chances. Our project director met with the leaders of the clinics involved, and sensitive, effective remedies were put into place to the satisfaction of all concerned.
Today, the word “error” has become sanitized under the rubric of “patient safety” (a public relations spin I applaud for the momentum it has created and the problems it avoids). Still, under whatever label, research is accelerating that may improve my chances of surviving my next encounter with the health system. Error research still faces many barriers, chiefly the insane incentives of our sociopathic tort system. But the better aspects of human nature can sometimes prevail over the worst.
What makes this possible is a proper ordering of priorities in health care organizations that have a culture of constructive quality improvement. I'm cheered to report that, from what I hear at patient safety meetings, from the Leapfrog Group, and from similar organizations, the attitude fostered and expressed in this small case at the University of Colorado is not unusual.