When Mistakes Happen…

Medical error has been described as a public health emergency. Certainly, the numbers support this.

Opioid abuse – today’s exemplar of a public health emergency – claimed the lives of an estimated 42,000 Americans in 2016, or about 115 people every day.1 Using a conservative estimate, the toll of medical mistakes is about 250,000 lives every year, or about 685 people every day.2

“Whatever the precise epidemiology is, the bottom line is that medical error happens far too frequently and is an urgent problem that needs our attention,” said Thomas H. Gallagher, MD, a professor of medicine and bioethics and humanities at the University of Washington School of Medicine and director of the University of Washington Medicine Center for Scholarship in Patient Care Quality and Safety. “There has been progress in the past two decades, but not nearly as much as folks had hoped for.”

That progress was kickstarted almost 20 years ago, with the publication of the Institute of Medicine’s (IOM’s) report “To Err is Human,” a landmark paper that launched the modern patient-safety movement.3

ASH Clinical News spoke with Dr. Gallagher and other clinician-researchers about the prevalence of medical error, its consequences for patients and physicians, and the quest to “primum non nocere.”

A Problem of Unknown Proportions

The discussion about medical mistakes has been growing louder since the 1999 IOM report: A PubMed search for the term “medical error” yields just eight hits from 1999, compared with 35 in the year 2000 and 112 in 2017.

While “To Err is Human” was eye-opening, the data it contained on the incidence of medical error were limited. The authors estimated that between 44,000 and 98,000 deaths attributable to unintentional harm occur annually, but this figure was based on two limited studies conducted during the 1980s and 1990s and is considered to grossly underestimate the true prevalence.

A 2013 review of studies published more recently – between 2008 and 2011 – found that more than 400,000 patients die prematurely each year due to preventable harm. Many more are being seriously harmed, but not killed, by medical error.4

In his BMJ report, Martin Makary, MD, MPH, a surgical oncologist at Johns Hopkins Medicine and the creator of The Surgery Checklist (an operating room checklist designed to eliminate simple mistakes and improve patient outcomes), suggested that a more conservative figure of 251,454 deaths per year is more accurate.

That places medical error as the third most common cause of death in the U.S., according to the Centers for Disease Control and Prevention’s list of leading causes of death.2 It comes behind heart disease and cancer, but well ahead of chronic obstructive pulmonary disease.

Pinning down the actual figures for medical error–related mortality is a nearly futile effort, largely because death certificates list only causes with corresponding International Classification of Diseases (ICD) codes, and there is no ICD code for “death by mistake.” Even the ICD-10 coding system does a poor job of capturing most types of medical error, according to Dr. Makary.

It’s not just the U.S. health-care system, either. According to the World Health Organization (WHO), 117 countries code their mortality statistics using the ICD system as the primary indicator of cause of death.

Dr. Makary has suggested that, to achieve the goal of making health care safer, the community needs to adopt a scientific approach – the same way it already shares information about diseases and treatments. “Sound scientific methods, beginning with an assessment of the problem, are critical to approaching any health threat to patients,” he wrote. “The problem of medical error should not be exempt from this approach.”

To Forgive, Divine

Medical mistakes can have devastating impacts on patients’ health, but their effects on providers can be traumatic as well. In a field where mistakes seem unacceptable, how can providers forgive themselves?

Tim Gilligan, MD, vice chairperson for education at Cleveland Clinic’s Taussig Cancer Institute, explored the emotional scars of medical errors in an essay for the health-care communication blog Cura.5 He described a patient encounter early in his career, when, after moving from a system that used electronic order entry to one that used handwritten orders, he accidentally transposed two numbers – leading to an overdose and an underdose of the two prescribed agents.

The mistake highlighted a systems error: It slipped through the cracks and went unnoticed by the nursing and pharmacy staff and became apparent only after the patient developed unexpected side effects.

“I remember wondering at that moment whether the best course of action might be jumping off the roof of the cancer institute,” he admitted. His better instincts prevailed, though, and he confirmed the erroneous doses with the pharmacy, then told the patient and his wife what had happened.

“It’s a horrible feeling to know you’ve injured someone else in a way that isn’t reparable,” he wrote. “My patient survived, and his cancer was cured, but he will bear the scars of substantially diminished hearing and renal function for the rest of his life.”

It was a feeling encapsulated by his section head, who, when told about the mistake, responded with: “I wouldn’t want to be you right now.” “It probably sounds harsh reading it on the page, but he said it in a way that felt deeply empathic, as in ‘I can only imagine what’s going through your head,’” Dr. Gilligan wrote.

More than a decade later – and having continued to see the patient ever since – he pondered the meaning of forgiveness in medicine. “How does one find peace living with consequences of those mistakes? What helped me most in the aftermath of this disaster was my patient’s forgiveness,” he recounted. “To my surprise, a bond formed between us during those open, frank, and painful conversations [about the mistake and the care he would receive going forward].”

“I can take some comfort that multiple safety measures were put in place as a result of what I did. And I can continue to try to atone by striving to provide impeccable care to the patients I see,” Dr. Gilligan concluded. “But I’m still human and I’m still imperfect. And in medicine, that can feel unacceptable.”

Dr. Gilligan is not alone. In a survey of more than 3,000 physicians, doctors reported that medical errors led to substantial emotional and job-related distress, including increased anxiety about future errors (61%), loss of confidence (44%), sleeping difficulties (42%), reduced job satisfaction (42%), and harm to their reputation (13%).6 Respondents’ job-related stress levels increased regardless of whether they were involved in serious errors or “near misses.”

After serious errors, physicians were more likely to be distressed if they reported being dissatisfied with error disclosure to patients or perceived a greater risk of being sued. Only 10 percent agreed that health-care organizations adequately supported them in coping with error-related stress.

Breaking Down “Deny and Defend”

Traditionally, medical culture has seen physicians uncomfortable with admitting mistakes, especially to patients. “Deny and defend” may have been the prevailing practice in the past, but things have changed. “Today it’s all about transparency,” said Dr. Gallagher.

The Collaborative for Accountability and Improvement, a coalition of physicians and quality-and-safety experts, has championed communication and resolution programs (CRPs). These programs offer health-care institutions a more compassionate, patient-centered approach for responding to patients who have been harmed while receiving care. Piloted in the Department of Veterans Affairs Health System and select academic medical centers, CRPs are now used by approximately 200 diverse U.S. organizations and are increasingly being adopted as integral elements of patient safety.7

“We’ve come to realize that it is human impulse to not be transparent, and that’s why programs like CRPs are so important,” said Dr. Gallagher, who also is executive director of the Collaborative for Accountability and Improvement. “CRPs rewire that reflex, they help make the expectation to be open and honest the norm, and they provide the supports in the organization that really make that normalization a reality.”

When he started studying medical error and disclosures two decades ago, “the prevailing notion was that doctors weren’t being more open with patients because they worried about being sued, but this myth has been debunked,” Dr. Gallagher continued. In keeping with the idea that hurt feelings are the impetus for more lawsuits than actual medical malpractice, several studies have indicated that disclosing errors is not associated with increased liability or costs.

In one study, Allen Kachalia, MD, JD, chief quality officer at Brigham and Women’s Hospital, and colleagues examined trends in liability outcomes at four academic centers and community hospitals that implemented a CRP, then compared those data with trends at peer hospitals without CRP implementation. The findings were supportive of open communication and resolution, the authors reported.9 “None of the hospitals experienced worsening liability trends after CRP implementation, which suggests that transparency, apology, and proactive compensation can be pursued without adverse financial consequences.”

According to Dr. Gallagher, adoption of CRPs has picked up in the past 18 to 24 months, subsequent to the Agency for Healthcare Research and Quality releasing its Communication and Optimal Resolution (CANDOR) Process toolkit in 2016. The toolkit contains eight modules that help institutions progress from identifying an unexpected patient harm event to resolving that event.10 Of note, module 6 is titled “Care for the Caregiver” and focuses on providing emotional support to caregivers following a CANDOR event (an unexpected event that causes patient harm). See the SIDEBAR for more information about the CANDOR process.

“In our field, there is a sense of growing momentum, as more and more big health systems adopt CRPs,” said Dr. Gallagher. “We’re still seeing challenges around inconsistent implementation, but as physicians become more aware of CRPs and how to access them, we hope that, when something unexpected happens, they are prepared.” He anticipates that a majority of institutions will have a CRP in place within the next few years.

The Errors of Others

Standardized CRPs among health-care facilities may help solve the problem of “pre-referral errors” or “interfacility medical errors,” when mistakes are picked up by downstream providers from different organizations.

When errors are identified by providers who work in the same facility, there are clear ways of revealing and managing these issues: incident reporting, patient safety committees, root cause analyses, etc. Those mechanisms simply don’t exist between different facilities; there are no clear professional norms regarding disclosure when physicians discover errors that were made at other institutions.

“This is really not a problem that’s ever been explicitly defined or described in the literature, and there are no guidelines for what to do in these scenarios,” said Lesly Dossett, MD, MPH, assistant professor of surgery in the division of surgical oncology at the University of Michigan.

Dr. Dossett has been exploring this gray area in her research. Her interest in this type of medical error was born during her clinical fellowship at a major cancer center, when she noticed how often cases of mismanagement and misdiagnosis would come up during tumor boards.

“Everyone would sit around discussing cases and say things like, ‘Wow, who would ever do that?’” she related to ASH Clinical News. “And I noticed that nobody disclosed the conversations to the patient or provided feedback to the referring physician. That bothered me.”

When she delved deeper, she found that specialists, in particular, seem to struggle with these conversations because they are unsure about what to do and they are uncomfortable providing negative feedback.

Dr. Dossett said she was most intrigued by the fact that surgeons were reluctant to “project superiority,” according to interviews of cancer specialists from two National Cancer Institute–designated Cancer Centers.11 “We’re specialists, which means we have years of extra training to be able to deal with those rare problems, and that’s why the patient was referred to us in the first place,” she said. “Yet, we don’t want to alienate our referral base, even when dealing with rare problems that generalists would not be expected to have much experience with.”

Most specialists who participated in the interviews also expressed the belief that disclosure of medical errors “provided no benefit to patients and might unnecessarily add to their anxiety about their diagnoses or prognoses.” They often did not reveal mistakes to patients or only partially revealed mistakes.

“Some participants expressed the belief that disclosure should come from the responsible physician,” the authors reported. But, paradoxically, most respondents also noted that, “in many cases, they did not believe the responsible physician was aware of the error.”

Interfacility errors aren’t just misses by referring physicians; they can take many forms, Dr. Dossett continued. “It can happen when, say, I operate on somebody and forget to restart the anticoagulant before I discharge him or her. The patient has a stroke and ends up in an outside emergency department,” she said. “It’s the same type of problem and somebody needs to provide feedback to me that we made an error.”

When Dr. Dossett and colleagues queried 30 specialists heavily reliant on external referrals about this issue, they learned that few respondents practiced regular, explicit feedback, despite recognizing its importance.12 Some cited time limitations and other structural barriers as obstacles, but the main barriers to providing feedback involved psychological discomfort, like fear of conflict, negativity, and jeopardizing future referrals. Medico-legal uncertainty and risk were also cited as hindering factors.

“After conducting a comprehensive review of all the federal and state laws in Michigan to see if there would be any case law that would support a requirement for feedback, [we] found no case law that would justify a requirement for disclosure,” she added.

Joseph Jacobson, MD, MSc, chief quality officer at the Dana-Farber Cancer Institute, said he only occasionally sees examples of pre-referral error of the type Dr. Dossett is studying but agreed that there is no structure or requirement for reporting such an event. “As far as I know, it doesn’t trigger a report to our board of registration in medicine or our department of public health,” he said.

Dr. Dossett’s team also reviewed 130 ethics codes from large health-care societies like the American Medical Association and found that medical error was not included in their guidelines. “You are obligated to report a physician if you believe he or she is acting fraudulently – overbilling and things like that – or is impaired,” Dr. Dossett said, “but, if he or she made an error, there is no obligation to report.”12

Her team is now working on developing guidelines to manage interfacility errors. “It’s a complex problem and I don’t have the answer yet, but I can tell you that every time I tell people this is what I’m doing, they launch into a story about a patient they saw the other week.”

The Role of Health IT

Beyond the mistakes of other providers, the potential for system failures – especially during the referral process – concerns Dr. Jacobson. Collecting all the primary data necessary for making an informed treatment recommendation can be challenging, and there is often a lingering worry that an important piece of clinical information isn’t available at the consultation.

“When I speak to patients, one of their greatest fears is that many different specialists are involved in their care and they aren’t sure, for example, that the surgeon who told them they need an operation reached the recommendation based on review of all the data and input from the whole team,” Dr. Jacobson recounted. While he doesn’t doubt that system failures occur, he said he lacks the information to speak about that risk with patients because those data are not collected.

“We have not come up with reliable systems for communicating key data across institutions, and I think it’s possible that this can put patients at risk, either because of inadequate data available to make the treatment recommendation or inordinate delays that could adversely affect an outcome,” he said.

Like checklists, electronic health records (EHRs) were once expected to revolutionize patient safety by eliminating medication errors and other communication failings. However, the benefits seen in earlier studies are not being borne out in real-world clinical settings.

EHR usability is a major concern for patient safety, as shown in a recent study conducted in a pediatric patient population.13 Among 9,000 patient safety reports likely related to EHR use, made at three health-care institutions between 2012 and 2017, 36 percent described a usability challenge that contributed to a patient safety event, such as a system defaulting to an incorrect date and time for a medication order, which led to a missed dose. Researchers estimated that nearly 19 percent of these events might have resulted in patient harm.

“EHRs in their current iteration, even within an organization, are inadequate communication tools,” said Dr. Jacobson.

“EHRs can go either way,” said Lisa Hicks, MD, MSc, a staff hematologist at St. Michael’s Hospital and assistant professor at the University of Toronto and chair of ASH’s Committee on Quality. “Certainly, for standardization of medical order entry and drug order entry, we think they improve patient safety.”

However, she added, “when EHRs are too onerous and create an increasing burden for physicians, there is a potential for them to increase burnout and increase the medical error rate.”

Researchers from Pascal Metrics, a patient safety organization, found that EHRs could be used to predict patient harm before it happens.14 They tested a method of extracting safety indicators from EHRs to identify harm and its precursors in two large community hospitals, finding that the EHR-based analytics tool detected and predicted harm in real time.

Dr. Jacobson’s dream is a bit simpler. “At the top of my wish list is a simple, asynchronous, closed-loop communication system between key providers who have to weigh in on a therapeutic decision,” he commented. “In this type of system, I would know that when I sent Mrs. Jones to Dr. X my question would be answered, and that Dr. X had access to all the data that he needed at the time of the consultation to render an opinion. We could all agree on the pathology and the imaging and sign off on the final treatment decision.” Ideally, he added, there would also be a means of sharing some details of the process with the patient.

“We have not come up with reliable systems for communicating key data across institutions, and I think it’s possible that this can put patients at risk.”

—Joseph Jacobson, MD, MSc

Human Error or System Failure?

The topic of medical error got a big boost in the popular press with the publication of The Checklist Manifesto by Atul Gawande, MD, MPH, in 2009. Dr. Gawande – a surgeon, writer, researcher, and now CEO of the recently formed health-care venture from Amazon, Berkshire Hathaway, and JP Morgan – differentiates two types of errors: errors of ignorance, or mistakes made because science has given us only a partial understanding; and errors of ineptitude, where the knowledge exists but was not applied appropriately. As science advances, he suggested, the balance has shifted from ignorance to ineptitude, leaving clinicians with simply too much to remember.

It is the errors of ineptitude that Dr. Gawande targeted with checklists – tools long used in the aviation industry (and advocated for in the health-care industry) to avoid overreliance on memory, improve team communication, and formalize workflow. The approach reframes the discussion about medical mistakes, from one about “bad actors” to one about how systems can be made safer.

In an expansion of the concept, in 2009, Dr. Gawande and colleagues tested implementation of the WHO’s 19-item Surgical Safety Checklist in eight hospitals in eight countries representing a variety of economic circumstances.15 Adherence to these checklists decreased the rates of adverse events, surgical mortality, surgical-site infection, and unplanned reoperation, compared with the period before the checklists were implemented.

Despite its ability to improve outcomes, the WHO checklist is only a partial solution, albeit one that is widely used in operating rooms across the world. Making a dent in preventable medical errors will require successful implementation of safety checklists at a large scale and buy-in from all stakeholders, including hospital CEOs.16

Practice guidelines and protocols are another way to reduce practice variation and prevent errors. In addition to aiding memory, they also serve to set performance standards and expectations for health-care professionals.

ASH’s Committee on Quality is focused on developing evidence-based practice guidelines for several areas of hematology practice. ASH just released the first six of 10 clinical practice guidelines on venous thromboembolism in late November, and future efforts will cover immune thrombocytopenia, sickle cell disease, leukemia in older patients, and von Willebrand disease. Reducing excessive resource use, experts contend, reduces the opportunity for preventable medical mistakes, like medication errors or unnecessary transfusions.

“On the Committee on Quality, we are trying to develop evidence-based guidelines, which is one way to try to disseminate best practices, decrease practice variation, and ultimately improve safety by ensuring that there’s high-quality guidance available for physicians,” Dr. Hicks told ASH Clinical News.

Dr. Hicks also chairs the Choosing Wisely Task Force for ASH. Choosing Wisely is a medical stewardship initiative led by the American Board of Internal Medicine Foundation in collaboration with U.S. professional medical societies. In 2013 and 2014, Dr. Hicks and her team identified 10 hematologic tests and treatments that hematologists and their patients should question.17,18

“When we start to view things through the lens of overuse, we’ll find examples in every field, just like there are examples of medical error in every field,” she said.

A New Era of Patient Safety

“To err is human, but errors can be prevented,” wrote the authors of the IOM report in 1999. In the almost 20 years since publication of this transformative report, progress – albeit variable and inconsistent – has been made to do just that, the authors of an editorial summarizing the past two decades of the patient safety movement wrote.19 New approaches are needed to address both prior and emerging areas of risk. The next challenge will likely be to find tools that allow organizations to “measure any reduction of harm both inside and outside the hospital, continuously and routinely.”

If the time since the IOM report might be considered the Bronze Age of patient safety, when “primitive” tools were developed to begin measuring and managing patient error, the next decade promises to be the Golden Age, where “vast improvement in patient safety” is realized, they concluded. —By Debra L. Beck


References

  1. HHS.gov. Opioid crisis statistics. Accessed October 29, 2018.
  2. Makary MA, Daniel M. Medical error—the third leading cause of death in the US. BMJ. 2016;353:i2139.
  3. Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington (DC): National Academies Press (US); 2000.
  4. James JT. A new, evidence-based estimate of patient harms associated with hospital care. J Patient Saf. 2013;9:122-8.
  5. Gilligan T. Trespass and forgiveness. Cura. Accessed November 9, 2018.
  6. Waterman AD, Garbutt J, Hazel E, et al. The emotional impact of medical errors on practicing physicians in the United States and Canada. Jt Comm J Qual Patient Saf. 2007;33:467-76.
  7. Collaborative for Accountability and Improvement. Communication & resolution programs. Accessed October 29, 2018.
  8. Gallagher TH, Mello MM, Sage WM, et al. Can communication-and-resolution programs achieve their potential? Five key questions. Health Aff (Millwood). 2018;37:1845-52.
  9. Kachalia A, Sands K, Van Niel M, et al. Effects of a communication-and-resolution program on hospitals’ malpractice claims and costs. Health Aff (Millwood). 2018;37:1836-44.
  10. Agency for Healthcare Research and Quality. Communication and Optimal Resolution (CANDOR) Toolkit. Accessed October 29, 2018.
  11. Dossett LA, Kauffmann RM, Lee JS, et al. Specialist physicians’ attitudes and practice patterns regarding disclosure of pre-referral medical errors. Ann Surg. 2018;267:1077-83.
  12. Dossett LA, Kauffmann RM, Miller J, et al. The challenges of providing feedback to referring physicians after discovering their medical errors. J Surg Res. 2018;232:209-16.
  13. Ratwani RM, Savage E, Will A, et al. Identifying electronic health record usability and safety challenges in pediatric settings. Health Aff (Millwood). 2018;37:1752-9.
  14. Classen D, Li M, Miller S, Ladner D. An electronic health record-based real-time analytics program for patient safety surveillance and improvement. Health Aff (Millwood). 2018;37:1805-12.
  15. Haynes AB, Weiser TG, Berry WR, et al. A surgical safety checklist to reduce morbidity and mortality in a global population. N Engl J Med. 2009;360:491-9.
  16. Berry WR, Edmondson L, Gibbons LR, et al. Scaling safety: the South Carolina surgical safety checklist experience. Health Aff (Millwood). 2018;37:1779-86.
  17. Hicks LK, Bering H, Carson KR, et al. The ASH Choosing Wisely® campaign: five hematologic tests and treatments to question. Blood. 2013;122:3879-83.
  18. Hicks LK, Bering H, Carson KR, et al. Five hematologic tests and treatments to question. Blood. 2014;124:3524-8.
  19. Bates DW, Singh H. Two decades since To Err Is Human: an assessment of progress and emerging priorities in patient safety. Health Aff (Millwood). 2018;37:1736-43.

The Agency for Healthcare Research and Quality’s (AHRQ’s) Communication and Optimal Resolution (CANDOR) process outlines how health-care institutions and practitioners can respond in a timely, thorough, and just way when unexpected events cause patient harm.

The toolkit was developed with expert input and lessons learned from the AHRQ’s $23-million Patient Safety and Medical Liability grant initiative launched in 2009. Below is a representation of the flow of the CANDOR process, which begins with identification of an event that involves harm. This activates a set of coordinated postevent processes. The steps of “response and disclosure” and “investigation and analysis” can recur, the developers note, and happen concurrently until they are completed.

Visit ahrq.gov/CANDOR for more information and detailed modules and toolkits.

Source: AHRQ. Communication and Optimal Resolution (CANDOR) Toolkit. Accessed October 29, 2018.

  1. Don’t transfuse more than the minimum number of red blood cell (RBC) units necessary to relieve symptoms of anemia or to return a patient to a safe hemoglobin range (7 to 8 g/dL in stable, non-cardiac in-patients).
  2. Don’t test for thrombophilia in adult patients with venous thromboembolism (VTE) occurring in the setting of major transient risk factors (surgery, trauma or prolonged immobility).
  3. Don’t use inferior vena cava (IVC) filters routinely in patients with acute VTE.
  4. Don’t administer plasma or prothrombin complex concentrates for non-emergent reversal of vitamin K antagonists (i.e. outside of the setting of major bleeding, intracranial hemorrhage or anticipated emergent surgery).
  5. Limit surveillance computed tomography (CT) scans in asymptomatic patients following curative-intent treatment for aggressive lymphoma.
  6. Don’t treat with an anticoagulant for more than three months in a patient with a first venous thromboembolism (VTE) occurring in the setting of a major transient risk factor.
  7. Don’t routinely transfuse patients with sickle cell disease (SCD) for chronic anemia or uncomplicated pain crisis without an appropriate clinical indication.
  8. Don’t perform baseline or routine surveillance computed tomography (CT) scans in patients with asymptomatic, early stage chronic lymphocytic leukemia (CLL).
  9. Don’t test or treat for suspected heparin induced thrombocytopenia (HIT) in patients with a low pre-test probability of HIT.
  10. Don’t treat patients with immune thrombocytopenic purpura (ITP) in the absence of bleeding or a very low platelet count.