Root cause analysis of cases involving diagnosis
Mark L. Graber, Gerard M. Castro
Abstract
Diagnostic errors comprise the leading threat to patient safety in healthcare today. Learning how to extract the lessons from cases where diagnosis succeeds or fails is a promising approach to improving diagnostic safety going forward. We present up-to-date and authoritative guidance on how existing approaches to conducting root cause analyses (RCAs) can be modified to study cases involving diagnosis. There are several differences: in cases involving diagnosis, the investigation should begin immediately after the incident, and clinicians involved in the case should be members of the RCA team. The review must consider how the clinical reasoning process went astray (or succeeded), and use a human-factors perspective to examine the system-related contextual factors in the diagnostic process. We present detailed instructions for conducting RCAs of cases involving diagnosis, with advice on how to identify root causes and contributing factors and select appropriate interventions.
“Safety is an emergent property of systems; it does not reside in a person, device or department of an organization or system. Safety cannot be purchased or manufactured; it is not a feature that is separate from the other components of the system…The state of safety in any system is always dynamic; continuous systemic change insures that hazard and its management are constantly changing.” “People continuously create safety.”
Richard I Cook. How Complex Systems Fail [1]
Introduction
Diagnostic error is the most common, the most costly and the most catastrophic of all medical errors [2, 3] and represents the largest threat to patient safety in healthcare today. Although the factors that contribute to diagnostic errors are now well established, healthcare organizations have been slow to address the problem in a meaningful way. Improving the quality and safety of healthcare will require improving diagnostic outcomes by learning from cases of diagnostic error (Safety 1) and from cases where the diagnostic process works well (Safety 2).
Root cause analysis (RCA) is the most widely used approach to studying safety events, and comprehensive resources are already available on how to conduct ‘system-focused’ investigations from the Institute for Healthcare Improvement [4], the American Society for Healthcare Risk Management [5], The Joint Commission [6], and the Department of Veterans Affairs [7]. Any or all of these guidelines can be used to study cases involving diagnosis, with appropriate modifications to also consider the cognitive aspects of medical decision-making [8, 9]. The key differences are presented in Table 1.
Table 1: Comparing system-focused RCA to diagnosis-focused RCA.

| | System-focused RCA | Systems PLUS diagnosis-focused RCA |
|---|---|---|
| Safety issue in the cases examined | The focus is squarely on system issues. Cases involving individual performance, including clinician judgment, are sent for peer review. | The focus is on diagnosis, considering system-related and cognitive issues and the human factors that tie them together. Applies to many or most cases previously sent for peer review. |
| Where was the incident? | Typically inpatient care. | Inpatient and ambulatory care PLUS cases involving care transitions. |
| RCA team members | Core members: patient safety staff, clinician experts. Seldom included: involved clinicians and affected patients/family members. | Same core members PLUS the involved clinicians and staff with expertise in clinical reasoning and cognition PLUS patients/family members, if appropriate. |
| Steps of the RCA | Gather all the facts. Where did things go wrong? Why? How can this kind of problem be prevented going forward? Share lessons learned. | Same as the system approach, but start immediately PLUS include analysis and interventions focused on cognitive and contextual factors related to diagnosis. |
| Recommended actions | Focus on finding strong interventions. Avoid emphasis on education, training, reinforcing policy, and other weak actions. | Strong interventions PLUS education, which may be more effective as an intervention in diagnosis-focused than in system-focused RCAs. |
Recommended practices in RCAs of cases involving diagnosis
As in system-focused RCAs, an immediate response is needed after an adverse diagnosis-related safety event to ensure the clinical situation has been stabilized, the patient and family have been informed, hospital leadership is aware, and any artifacts relevant to the event are preserved. The subsequent steps of the RCA process are diagrammed in Figure 1.

Figure 1: Steps of an RCA for cases involving diagnosis.
The usual advice when conducting an RCA is to begin a detailed review as soon as possible, usually within 72 h. In cases involving diagnostic errors, the review should begin even sooner, immediately if possible, in order to capture the context in which the events unfolded. Delays make it increasingly difficult for the involved staff, patients, and family members to remember the situation that existed at the time of the event, and these details are critical. Were the clinicians distracted? Were they tired or ill? How many other patients were they caring for that day? How many other admissions were waiting? Were there delays speaking with family members or consultants? What conversations took place that were not captured in the medical record?
Expect to find both system-related and cognitive factors at work in cases involving diagnosis [10]. In a series of 100 diagnostic error cases in internal medicine practice, 74 % involved cognitive issues [11]. Similarly, cognitive issues were identified in 92 % of 209 cases in the ED [81] and in all but one of another 21 ED cases [119]. Understanding the cognitive perspective is therefore critical to unraveling most cases of diagnostic error. Notably, considering the cognitive aspects of care would be equally beneficial in RCAs of cases not involving diagnosis; the core issues are the same.
Step 1: Deciding whether this is a diagnostic error
Some diagnostic errors are easy to recognize. These are cases where the correct diagnosis became clear at some point, and looking back there is agreement that the correct diagnosis was missed or could have been made much earlier.
In other cases, it is more difficult to say that a diagnostic error occurred. For instance, there are very few clear guidelines about the timeliness of diagnosis. How long SHOULD it take to diagnose a particular infection, cancer, or cardiovascular condition? In many cases, the initial presentation is non-specific, and the condition and diagnostic process evolves over time.
There are now four definitions of diagnostic error in active use (Table 2). The Graber definition is foundational but can only be applied in retrospect. The Schiff definition focuses on identifying the steps in the diagnostic process where errors occurred. Singh defines diagnostic error as a missed opportunity to have made the correct diagnosis, a definition now widely used in prospective research studies because it focuses on the diagnostic process, where the ultimate diagnosis is not yet known. The NAM definition focuses specifically on timeliness and accuracy, and adds the all-important patient viewpoint: the diagnostic process is not complete until the diagnosis has been successfully communicated to the patient. If in doubt, a useful tool to help determine whether a case reflects a diagnostic error is the 12-question Revised Safer Dx Checklist developed by Hardeep Singh and colleagues (Table 3) [12].
Table 2: Four definitions of diagnostic error.

| Date | 2005 | 2009 | 2014 | 2015 |
|---|---|---|---|---|
| Author | Mark Graber [10] | Gordon Schiff [13] | Hardeep Singh [14] | NAM [15] |
| Definition | Diagnostic error is unintentionally delayed, wrong, or missed as judged from the eventual appreciation of more definitive information. | Diagnostic error is any error of omission or commission in the course of the diagnostic process. | Diagnostic error reflects a ‘missed opportunity’ to have made the correct diagnosis, based on retrospective review. | The failure to establish an accurate and timely explanation of the patient’s health problem, or (the failure to) communicate that explanation to the patient. |
Table 3: The Revised Safer Dx Checklist.

Extensive debates on whether a particular case does or does not reflect diagnostic error are inappropriate; healthcare organizations should focus instead on understanding and improving the diagnostic process.
Step 2: Case triage: peer review or RCA?
In cases of medical error, organizations need to decide whether to evaluate the case from the perspective of improving quality and safety, or from a disciplinary perspective that is concerned with clinical competence. Use of a standardized approach such as the United Kingdom’s National Patient Safety Agency (NPSA) Incident Decision Tree, based on Reason’s “Culpability Tree” or Marx’s “Just Culture” model, can help organizations determine which incidents can be routed to safety analysis and which will require peer review/disciplinary action [16]. Marx’s “Just Culture” model can be summarized most simply as follows:
Console human error
Coach at-risk behavior
Punish reckless behavior
Triage decisions should be based on evaluating the individual decisions made in the case and the actions taken, not on the outcome of the particular case.
The bottom line is that, in the absence of incapacity, deliberate harm, or a pattern of reckless behavior, most cases of diagnostic error should be prioritized for RCA, not for peer or competency review.
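The triage logic described above can be sketched as a simple decision routine. This is a hypothetical illustration of the console/coach/punish routing, not an official NPSA Incident Decision Tree or "Just Culture" algorithm; the category labels are assumptions made for the example.

```python
# Hypothetical sketch of Just Culture triage routing for a safety event.
# Behavior categories and routing text are illustrative only.

def triage_incident(behavior: str) -> str:
    """Route a diagnostic safety event based on the behavior involved,
    not on the outcome of the particular case."""
    routes = {
        "human_error": "Console the clinician; route the case to RCA.",
        "at_risk": "Coach the clinician; route the case to RCA.",
        "reckless": "Refer for peer review / disciplinary action.",
    }
    # Anything that does not fit a category is escalated for review.
    return routes.get(behavior, "Escalate for leadership review.")

print(triage_incident("human_error"))
print(triage_incident("reckless"))
```

Note that the routing key is the behavior, not the harm that resulted, which mirrors the principle that triage decisions should not be outcome-driven.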
Step 3: Convening the RCA team
The team responsible for conducting the RCA must have sufficient knowledge and experience to understand what transpired in a given case and be able to propose relevant solutions. As in system-related RCA practices, the team should comprise roughly 4–6 core members and invite ad hoc staff members or consultants as needed to understand key issues. A physician who understands the diagnostic process and is knowledgeable about the role of cognition in clinical reasoning should be part of the team, along with a nurse familiar with the care setting. A medical librarian should be included in cases involving knowledge-related issues [17].
The involved clinicians. The clinical staff who were involved in the incident should routinely be included as members of the RCA review team. These individuals have first-hand knowledge of what happened and are most invested in ensuring a comprehensive and detailed safety review that reaches appropriate conclusions and proposes actions to reduce future risk [18].
Physicians may be reluctant to discuss their role in diagnostic error cases and will need reassurance that these discussions are concerned with learning and practice improvement, not with criticizing and assigning blame. A guide for involving physicians and nurses in RCAs is included as Supplemental Material, Appendix A.
Patients and families. In system-focused RCAs, the patients or families who have been harmed by the safety breakdown may be interviewed, but they are generally excluded from the RCA team. In diagnosis-related cases, involving patients or families in some fashion is a must. The patient or family can provide critical details about the events of the case and what was said, supplying key contextual information that is typically missing from electronic medical record notes [19, 20]. There is also substantial value in restoring a relationship of trust, as the patient and family see that the organization is doing its best to understand what happened and how care can be improved going forward. Interviews with families after safety-related deaths have found that many want to participate in safety reviews and believe that their perspective is highly relevant to the analysis [21, 22]. A guide for involving patient/family members in the RCA process is available as Supplemental Material, Appendix B.
…assume that patients and families will be partners in investigation and where possible engage them fully from the beginning…
At a minimum, the patient or family should be interviewed. In some cases, depending on the patient or family and their interest, it may also be appropriate to include them on the RCA team. Including patients or families poses challenges [23], and in some states in the US, legal issues may preclude their involvement. The foremost challenge is handling the emotions and expectations that inevitably arise when the injured parties first meet with representatives of the organization. If the patient or family is willing to participate in the RCA, they will benefit from an orientation to the RCA process and what their role in this will involve.
An alternative to their direct participation is to have a patient advocate sit in for (or with) the patient or family during the RCA meetings. Experienced advocates will be familiar with medical language, standard medical care processes, and safety analyses, and can report back to (or explain things to) the patient or family as needed. Someone from the organization’s patient-family advisory council (PFAC) may be available to serve in this role or to help identify a suitable advocate.
Finally, it is very appropriate to check back with the patient or family as the analysis and action plans unfold and to share the RCA findings and recommendations; do they believe that the proposed interventions will solve the problem?
Step 4: Identify root causes and contributing factors
The steps for conducting RCAs of diagnosis-related safety events are essentially the same as those in a system-focused RCA, although the particular details of some steps will differ.
The initial interviews and fact-finding. One key difference is the timing of the initial safety event review: The review should begin immediately to better capture the cognitive and human factors elements that might have contributed to the case. The involved staff and the patient and family should be interviewed as soon as possible because accurate recall of what happened and what else was happening at the time will fade within days.
A neutral party, ideally a peer, or someone with experience in these situations and knowledgeable about cognitive error, should conduct the interviews in a private setting, free from interruptions. The interviews should start with open-ended questions and little interruption, and the interviewees should be encouraged to freely associate about the event and try to recall their thoughts and feelings. Structured questions may be helpful, and questions should also probe the context of care [24]. The ASHRM Root Cause Analysis Playbook contains a detailed introduction to RCA interviews [5], and a starter set of questions is listed in the box below.
How were you involved in this case? What was your role?
What happened? (With as much detail as possible)
How did the case unfold? What do you recall about the sequence of events?
What else was happening? What was it like at the time? Who else was involved?
Were the patient’s medical records available for review? Were they complete?
Was communication between the patient and the staff clear and effective?
What facts were available? What things were not known at the time?
Was there anything about the patient or the situation that was unusual or that evoked any particular emotions or feelings?
What were you thinking? What were you feeling?
What were you first considering? Why? Did anything else come to mind?
Did the diagnosis seem obvious? How certain were you about your impressions?
Using formal cognitive interviewing techniques (Supplemental Material, Appendix C) with the involved clinicians will help them recall more facts and insights about the case, and more details about specific issues. Organizations with sufficient resources should invest in training their safety staff in cognitive interviewing.
Where did things go wrong in the diagnostic process?
A good starting point for extracting lessons from diagnosis-related safety events is considering which steps of the diagnostic process worked well and which did not (Figure 2).

Figure 2: Steps of the diagnostic process.
The best resource for this task is the ‘DEER’ taxonomy from Schiff and colleagues’ Diagnostic Error Evaluation and Research project, which breaks out the steps of the diagnostic process (Table 4) [13]. Isolating where in the diagnostic process problems developed will be valuable to the safety review, allowing the RCA team to compare the actual event to best practices. Involving subspecialists can be helpful in this regard, and they may also be aware of other cases that involved similar breakdowns in a particular step. Aggregating data from multiple RCAs can help identify specific issues that are ripe for performance improvement initiatives.
Table 4: The ‘DEER’ taxonomy.

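Aggregation across RCAs can be as simple as tallying which diagnostic process steps broke down in each reviewed case. The sketch below is a minimal illustration; the step labels are invented stand-ins for DEER-style categories, and the case data is fabricated for demonstration only.

```python
from collections import Counter

# Hypothetical RCA findings: each inner list holds the process steps
# (DEER-style labels, invented here) that broke down in one case.
rca_findings = [
    ["history", "test_followup"],
    ["test_followup"],
    ["hypothesis_generation", "test_followup"],
    ["history"],
]

# Tally breakdowns across all reviewed cases.
step_counts = Counter(step for case in rca_findings for step in case)

# Report the most frequent failure points, which are candidates for
# targeted performance improvement initiatives.
for step, n in step_counts.most_common():
    print(f"{step}: {n} of {len(rca_findings)} cases")
```

Even a small tally like this can surface a recurring weak point (here, follow-up of test results) that no single RCA would flag as a pattern.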
Identifying root causes: The Fishbone Diagram. Once the RCA team has an initial understanding of what happened, they need to consider what factors contributed to the event. There are many different approaches that can be used to identify causal and contributing factors [25, 26, 27, 28], and a collection of different taxonomies is presented in Supplemental Material, Appendix D. Teams can use whatever approach they are most familiar with or that seems most appropriate for the case.
We recommend using an analysis framework that recognizes four dimensions: the context of care, the clinician’s reasoning process, the patient and family perspective, and the particulars of the case itself. Fishbone (Ishikawa) diagrams provide a convenient way to visualize the four domains at a glance, and this approach can be used for cases of both diagnostic success and error [29]. A generic version is illustrated in Figure 3, and the key domains and subdomains can and should be modified depending on the details of the case.

Figure 3: A generic Fishbone diagram for considering root causes of diagnostic error.
The case. Each case is unique. The particulars of how the patient’s condition presents and evolves, how they describe it, and when they seek care can all determine whether the diagnosis will be easy and correct, or problematic. The same disease can present in uncommon ways in different patients. In some cases the speed of diagnosis is not important; in others, including stroke, aortic dissection, and sepsis, delays can be disastrous. Cases that present in the classic, textbook manner will usually be recognized and diagnosed quickly and accurately. Conversely, diagnosis in cases that present in atypical fashion, or that involve unusual or rare conditions, will often be delayed.
The patient. Characteristics of the patient or the family can contribute to diagnostic error. Some of these are patient characteristics that evoke affective bias, where the clinician is put off by the patient’s age, sex, personality, or ethnic background, or perhaps by a coexisting mental health condition or a rude comment. Patients who are angry, drunk, or confrontational often evoke reactions and emotions in the clinician that detract from clear clinical reasoning.
A common concern in cases of diagnostic error is whether the patient clearly communicated their symptoms. Communication failures can be encountered with infants, patients who don’t speak English, and patients who are intoxicated, intubated, or unconscious, among others. Communication problems can even occur in awake and alert patients if they misunderstand a question, or do not accurately explain their symptoms or course of illness. Patients missing scheduled appointments or tests can sometimes be a factor contributing to delayed diagnoses.
The context of care. The context of care includes a very wide set of factors that can support or sometimes degrade diagnosis. System-related issues are identified in most cases of diagnostic error [10, 30]. Safety officers typically have extensive experience exploring system-related aspects of safety events, and these same dimensions apply to diagnostic errors. An overview of system factors relevant to diagnosis is illustrated in Figure 4 [25] and a detailed approach is presented in Supplemental Material, Appendix E.

Figure 4: A general framework for considering root causes of safety events.
Some of the most critical contextual elements include considering:
Access to care
Communication
Care coordination
Access to expertise and second opinions
Access to appropriate imaging and tests
Health informatics systems and resources
Culture, especially teamwork
Human factors issues
Diagnostic setting and circumstances
It is especially important to consider whether human factors, sometimes referred to as ‘error-promoting conditions,’ may have derailed the diagnostic process. Fatigue, stress, illness, production pressure, cognitive overload, burnout, distractions, and a host of other human factors are often identified as contributing factors in cases of diagnostic error. Identifying the role these factors may have played, and understanding how they arose, may be the most important findings in a given case and may provide key insights for optimizing the diagnostic process in the future.
Clinical reasoning. Clinical reasoning is the clinician’s ability to synthesize all the available information in the case to arrive at the most likely diagnostic possibilities, based on their knowledge and experience. Faulty clinical reasoning is a factor in most cases of diagnostic error. This domain involves an exploration of the cognitive aspects of diagnosis, the part of the RCA process that differs most from reviews of system-related cases. The key elements of the clinical reasoning process are illustrated in Figure 5.

Figure 5: The elements of clinical reasoning.
Knowledge: The ability to make a timely, accurate diagnosis depends on the ability to recognize or identify the condition based on knowledge acquired during training or experience. There are over 10,000 known conditions, but medical training and most textbooks typically cover only the common ones, fewer than 1,000 conditions. Although every clinician has probably seen the most common conditions, not every clinician will have learned about or seen unusual presentations of these conditions, or the many rare diseases that inevitably present at some point in time. In one study, only 3 % of diagnostic errors were due to faulty knowledge [10].
Case information: Diagnosis requires obtaining a complete and detailed medical history, conducting an appropriate physical examination, understanding the diagnostic test results, and reviewing available consult reports. In that same study, 14 % of diagnostic errors involved situations where key data was either not available, not sought, or was available but misinterpreted [31]. Cases with handoffs predispose to problems in this domain because information is often lost or distorted passing from one person or care site to the next.
Synthesis: Synthesis represents the cognitive tasks involved in considering the diagnostic possibilities. Errors in this step may reflect breakdowns either in critical thinking (System 2) or in the subconscious, intuitive aspects of diagnosis (System 1) [24, 32, 33]. Faulty synthesis is by far the leading cause of diagnostic error, encountered in 83 % of cases [34].
System 1 and System 2: Fast and slow thinking. The dual processing paradigm, thinking fast vs. thinking slow [35], is the best framework for understanding the nature of diagnosis and the cognitive aspects of diagnostic error [36] (Figure 6). In this framework, diagnosis starts with whether the clinician recognizes (Yes/No) a symptom or sign or a collection of findings.
System 1, the intuitive system: If the symptoms and findings are recognized, as they most often are, the diagnosis emerges within milliseconds using a subconscious automatic process referred to as System 1. This is an intuitive process that is often successful in reaching the correct answer but can go astray due to cognitive bias or other factors, leading to diagnostic error.
System 2, the rational system: If the findings are not recognized, the clinician needs to stop and think. This is System 2, the purposeful, deliberate, and hopefully rational process of reviewing what is known and consciously considering what the answer might be. System 2 is much slower than System 1. System 2 is considered a more reliable approach to finding the correct answer, but it is also occasionally wrong. In practice, diagnosis typically involves some mix of the intuitive and rational systems.

Figure 6: System 1 vs. System 2 cognition. Reproduced with permission from Vanderbyl [31], which was derived with permission from a Neurofied figure (https://neurofied.com) by Philip Jordanov.
The dual process paradigm describes not only diagnosis but how we process most information in our daily lives, where most things and situations are recognized and we know automatically how to respond or what to do. However, sometimes we encounter something novel that requires conscious thought. Think about how you learned to ride a bike or play a musical instrument: these were stressful and difficult tasks early on, but with time became effortless and automatic. Similarly, first-year students process most clinical problems using System 2, and as they acquire knowledge and familiarity, they gradually transition to the intuition and reflexivity of System 1.
Many breakdowns in clinical reasoning reflect inappropriate shortcuts or assumptions, and many involve the subconscious tendencies we all have that detract from optimal cognition. The cognitive biases encountered in diagnosis and diagnostic error are the same ones found in everyday life and are simply part of our human nature. Over 175 biases are catalogued in Wikipedia [37]. A few of the biases that are most commonly encountered in cases involving diagnosis include these (see Supplemental Material, Appendix F for others):
Premature closure (also known as ‘search satisficing’).
This bias is our human tendency to be satisfied too quickly with the first diagnosis that comes to mind that explains most of the key findings in the case. Almost everyone with a dog is familiar with this tendency: we fall in love with the first puppy we meet and don’t go searching other litters or shelters. Herbert Simon received the Nobel Prize for describing this concept in the field of economics, which he called ‘satisficing’ [38]. In diagnosis, satisficing is the opposite of optimizing: constructing a differential diagnosis of all the likely possibilities.
Context errors. If you see a patient with a chief complaint of vomiting, you automatically start thinking of gastrointestinal (GI) causes. The diagnosis may well be a GI problem, but if you focus strictly on the GI context, you may not consider other ‘don’t miss’ causes, like poisoning, sepsis, or intracranial hypertension, among others. If you are looking in the wrong context, you will never make the correct diagnosis.
Anchoring is our tendency to be satisfied with a new or pre-established diagnosis without rethinking the case. In support of our initial belief, we tend to favor evidence consistent with it and discount evidence against it.
Affective bias reflects our subconscious tendency to favor certain people and disfavor others, whether because of their age, sex, socio-economic status, ethnicity, appearance, or behavior. Their medical conditions may be yet another factor that influences cognition. For example, there is good evidence that patients with mental health disorders or a history of drug abuse are treated differently in diagnostic settings [39, 40].
Will cognitive bias derail diagnosis? The situation, patient, clinician, and particulars of the case are all relevant factors in determining whether or not cognitive bias will be a factor in a given case [41]. In RCAs of diagnostic error, it may be important to review each of these dimensions (Figure 7, modified with permission from Dror [41]).

Figure 7: Factors influencing whether a cognitive bias will be encountered.
There is considerable variability in clinical decision making, and the likelihood that a given clinician will be adversely influenced by bias [42]. Each clinician has had different exposures and experiences during their education and training, and as a result, diagnosis in practice is idiosyncratic. Two clinicians presented with the identical patient story and set of medical facts may come up with very different impressions of what the diagnosis might be.
Gender differences provide an interesting example of this variability [42]. Female physicians, for example, tend to be more effective than male physicians in encouraging patient engagement and questioning, and are more likely to explore psychosocial issues. They are less comfortable with uncertainty, order more tests and consults, and are more compliant with clinical guidelines. They also exhibit less implicit racial bias than their male counterparts.
A significant body of evidence has now made it clear that cognitive biases manifest themselves automatically and unconsciously over a wide range of human decision making. Besides their psychology and sociology origins, they are now acknowledged in business, marketing, the judicial system and many other domains. Events on the world stage are influenced by them. It is important for everyone to recognise just how pervasive biases are and the need to mitigate them.
Pat Croskerry: Our better angels and black boxes [43].
Step 5: Crafting interventions
Given that any type of diagnostic error is likely to recur, and perhaps repeatedly in both the same organization and more broadly, the goal of each RCA is to consider interventions that will minimize this possibility. The RCA review is incomplete if it does not include at least one high-priority recommendation.
…(An RCA investigation is) meaningless to patients if it did not lead to action and change… A properly crafted process or outcome measure should be specific, quantifiable, and provide a timeline on when it is going to be assessed.
Charles et al. How to perform a root cause analysis for workup and future prevention of medical errors: a review. [44]
Specific errors point to specific solutions, but the most important interventions for improving diagnosis will center around leadership and culture. If leadership strongly endorses the goal of achieving diagnostic excellence, if champions for diagnostic safety are visible amongst their colleagues, if teamwork is the norm, and if errors and solutions are openly discussed, the prognosis for improving diagnostic outcomes is bright. Until these foundational elements are established, they should be included as recommendations in every RCA.
System-related factors are identified in most cases of diagnostic error. However, because organizations have extensive experience reviewing the problems and interventions in this domain, we will not consider them in detail except to point out the important progress made in ‘catching’ diagnosis-related safety breakdowns before they lead to patient harm. Kaiser Permanente Southern California has pioneered the use of ‘safety net’ systems to catch potential delays in diagnosis that could lead to harm. Examples include electronic monitoring to ensure that patients with a positive test for fecal occult blood receive endoscopic evaluation, and that patients with escalating PSA values are evaluated in urology. The safety net concept is well established in the United Kingdom as a primary care intervention and is effective in reducing delays in cancer diagnosis.
Another area of active research concerns interventions to tackle lapses in follow-up care. These represent ‘low-hanging fruit’ in efforts to improve the reliability of diagnosis, including failures to follow-up on incidental findings, abnormal screening tests, alertable test results, tests pending at discharge, and patients with concerning but non-specific symptoms.
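The safety-net idea described above is, at its core, a scheduled query for abnormal results whose follow-up window has elapsed. A minimal sketch, under assumed record types, field names, and follow-up windows (a real safety-net system would query the EHR rather than in-memory lists):

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical follow-up windows for illustration only; real thresholds
# would come from clinical guidelines and local policy.
FOLLOW_UP_WINDOW = {
    "FOBT_positive": timedelta(days=60),   # positive fecal occult blood test
    "PSA_rising": timedelta(days=90),      # escalating PSA values
}

@dataclass
class Result:
    patient_id: str
    test: str
    resulted_on: date
    followed_up: bool = False

def overdue(results: list[Result], today: date) -> list[Result]:
    """Flag abnormal results whose follow-up window has elapsed."""
    return [r for r in results
            if not r.followed_up
            and today - r.resulted_on > FOLLOW_UP_WINDOW[r.test]]

worklist = overdue(
    [Result("A", "FOBT_positive", date(2024, 1, 2)),
     Result("B", "PSA_rising", date(2024, 3, 1), followed_up=True)],
    today=date(2024, 4, 1))
# Patient A (90 days past a positive FOBT, no endoscopy) is flagged
# for the safety-net team; patient B already received follow-up.
```

The design point is that the query runs on a schedule against all patients, so a missed follow-up surfaces even when no individual clinician is actively thinking about the case.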
Cognitive errors. Addressing cognitive errors is likely to be a new challenge for healthcare organizations. “Hardwired” solutions, such as forcing functions, top the list of possible interventions, whereas education and training are viewed as weaker choices [45], [46], [47]. It is worth noting, however, that these ‘strength’ hierarchies reflect received wisdom more than evidence-based conclusions. Education and training may actually have greater impact on cognitive reasoning skills and may be perfectly reasonable solutions in certain cases, for the following reasons:
There is no course on diagnosis in medical education today. Doctors and the many other clinicians involved in the diagnostic process have never received formal training on clinical reasoning or on critical thinking in general. Most have only passing familiarity with decision support resources.
Clinicians are not generally aware of the many ways that human factor elements can impact diagnosis, for better or for worse.
Clinicians generally have never learned about heuristics and biases, or that human decision-making is beset by universal subconscious tendencies that can detract from best judgment. Indeed, many or most view themselves as unbiased and immune to affective influence.
Finally, clinicians tend to think of themselves as excellent decision-makers. Overconfidence is the rule, and though the concept of calibration may be appreciated at a subconscious level, it is not something clinicians think about as a critical determinant of their skill in diagnosis.
All of these issues may potentially be addressed productively through education. Simulation training is more likely to engender retained knowledge and skills than book learning, and many of the interventions proposed to address cognitive issues are ripe for simulation-based training. Several authorities make the point that education that is content and even case-specific is likely to be a more effective intervention than general education on clinical reasoning and bias. Examples of focused education include practice on differentiating diseases with similar presentations, and practice expanding a differential diagnosis list.
For cases where clinical reasoning is a key issue, Croskerry divides cognitive interventions into those focusing specifically on steps the individual clinician can take to avoid error, and those that use system-based approaches. Table 5 presents options for improving diagnosis. The relative impact of these suggestions has not yet been evaluated.
Table 5: Interventions to improve diagnosis.
Interventions focused on the individual. These are the ‘cognitive pills for cognitive ills’ that primarily involve metacognitive skills: the ability to consider the actions and decisions one is making and reflect on whether these can be improved. “Stop and Think” captures the essence of these interventions. If clinicians could routinely adopt this approach, it would give them the opportunity to employ effective forcing functions that improve diagnostic decision-making [48, 49]. Asking “What else could this be?” and “What emotions could be affecting my judgment in this case?” is worthwhile in every case involving diagnosis.
After metacognition, the most consistent advice to address cognitive error is to promote awareness of the cognitive and affective biases leading to diagnostic error. A cardinal principle of cognitive psychology is that many cognitive biases are indeed hardwired and cannot be unlearned [35], while others are acquired [50]. The key in either case is to know that biases can be mitigated by learning to recognize them in one’s own thinking, or in the diagnostic decisions made by others. This, in turn, provides the opportunity to reconsider decisions before there is harm [51, 52]. Clinicians involved in diagnosis should know about the dual-processing framework and how biases can detract from optimal decision-making.
In cases where affective bias played a role, implicit bias training may be helpful for clinician education [53], along with organizational efforts to promote equity in access to care and services [39].
There is early evidence that cognitive interventions may be effective in addressing diagnostic error [54], [55], [56], but organizations should keep abreast of new research likely to emerge in this area in the future. It seems likely that interventions that are more specific and case-focused will have more impact than those that are more general, like “Stop and Think”.
Interventions focused on the system. Diagnosis is especially dependent on the context of care, and interventions to address diagnostic error should target these contextual connections. Diagnostic decision-making is inherently error-prone, and interventions that either help the clinician make these decisions or involve others in the process will likely be beneficial. Below are several recommendations:
Improve teamwork. This was the top recommendation in the National Academy of Medicine report on Improving Diagnosis in Health Care [15]. If group-think problems can be avoided, team members can provide fresh perspectives on a case and help catch cognitive errors. Involving the patient, family, and nurse colleagues are positive steps that can improve diagnosis organization-wide.
Get second opinions and consults. There is strong evidence that second opinions improve diagnosis in pathology and radiology. Second opinions result in important changes in the diagnosis in 2–5 % of cases [57], [58], [59]. It is highly likely that even greater benefit will be seen in frontline diagnostic settings. A study of second opinions in the ED found that consulting a colleague about active cases reduced diagnostic errors by one-third [60]. Improving access to expert consultants is another avenue likely to improve diagnosis.
Group-based (collective) diagnosis is an emerging area with substantial potential to reduce the likelihood of error [61, 62]. Crowd-based decisions may be much more accurate than the ‘stop and think’ approach of reconsidering a case on your own.
Provide decision support. Checklists, mnemonics, and various other decision aids are available to help with diagnosis but are underutilized [63]. These can be helpful tools for metacognition by helping the clinician think of conditions and organ systems they had not considered. More sophisticated, web-based tools to aid in differential diagnosis have been available for some time, and have demonstrated value [64]. It is likely that emerging AI-based systems will be even better, and integrations in which suggestions are ‘pushed’ to clinicians instead of them having to search for potential choices will be especially helpful.
Make it easier: Reduce any error-promoting factors; improve access to knowledge sources. Many clinicians believe they would do a better job with diagnosis if they just had adequate time to think. Production pressures and distractions should be minimized, and it is worthwhile offloading time-consuming tasks that distract clinicians from patient care (e.g. negotiating with insurance companies on the patient’s behalf, many prescription renewals).
Provide feedback [65] to improve calibration. The best diagnosticians are those with profound expertise and experience in a given area, and those with modest expertise who are well calibrated, meaning they have a good sense of what they know, what they don’t know, and when they need to slow down and seek help. Providing feedback to clinicians is an effective way to improve calibration, and organizations should consider ways for clinicians to learn whether their initial diagnostic impressions are correct or incorrect [66, 67].
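Calibration feedback of the kind described above can be made concrete with a simple summary comparing stated confidence against eventual diagnostic accuracy. A sketch with illustrative data (the cases and the Brier score summary are assumptions, not a validated metric set):

```python
# Each tuple: (clinician's stated confidence in the initial diagnosis,
#              whether the diagnosis was ultimately confirmed)
cases = [
    (0.9, True), (0.9, True), (0.9, False),
    (0.6, True), (0.6, False), (0.6, False),
]

mean_confidence = sum(c for c, _ in cases) / len(cases)
accuracy = sum(ok for _, ok in cases) / len(cases)
# Brier score: mean squared gap between confidence and outcome
# (0 = perfectly calibrated and accurate; lower is better).
brier = sum((c - ok) ** 2 for c, ok in cases) / len(cases)

# Here mean confidence (0.75) exceeds accuracy (0.50), a signature of
# overconfidence; periodically feeding such summaries back to clinicians
# is one way an organization could support recalibration.
```

The useful signal is the gap between the two aggregates rather than any single case: a well-calibrated clinician's mean confidence tracks their confirmed-diagnosis rate.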
Discussion
Root cause analysis has become the most accepted approach to address adverse safety events in health care organizations. With suitable adaptations to consider human factors and the cognitive issues surrounding clinical reasoning, RCAs can also be used productively to review cases of diagnostic error (see Supplemental Material, Appendix G for examples of completed RCAs).
Trowbridge et al. pioneered the use of RCAs for cases of diagnostic error, using fishbone diagrams to help identify contributing factors [29].
Gurley et al. used a formal RCA to understand the system-related and cognitive issues involved in a case of epidural abscess that was missed in the emergency department [68].
Dadlez et al. used mini-RCAs to study three problems in the diagnostic process: missed actions on abnormal lab tests, missed hypertension, and missed adolescent depression. They conducted 184 mini-RCAs on cases from 28 different practices and identified several common breakdown points, and appropriate generalizable interventions [69].
Su et al. reviewed 61 cases from EDs using a fishbone diagram to consider root causes, 89 % of which included cognitive issues [70].
Giardina et al. reviewed 111 RCAs of cases encountered in Veterans Affairs settings related to team-based diagnosis-related decision-making [71]. Similarly, Zenati et al. conducted an RCA to consider team-related cognitive issues involved in a near-miss medication error [72], illustrating that including cognitive analyses in RCA investigations can be effectively applied to other patient safety events outside of diagnosis.
The ultimate goal of RCAs is to improve the safety and quality of health care using the lessons extracted from adverse safety events that demonstrate inherent flaws in the process of care. An alternative and complementary approach is to apply the Safety 2 perspective, which is to extract lessons from what went right in a given case or across cases dealing with similar problems. There are several key advantages to the Safety 2 approach:
There are many more cases to learn from. Diagnosis succeeds far more often than it fails.
Safety 2 discussions are easier than discussions that focus on error. Clinicians are more likely to report cases and participate in Safety 2 analyses and discussions.
Safety 2 analyses are more suitable for prospective analyses, avoiding the problems of hindsight bias. Aggregating cases offers the potential to identify practice variation.
Safety 2 analyses can reveal novel approaches to problem-solving. Individual clinicians or small practice groups may have created unique solutions to problems that might not have been discovered otherwise.
Safety 2 discussions can reveal the elements of resilience that so often are critical in surmounting the inherent barriers in health care delivery.
Safety 2 work enhances the culture of safety, and the willingness of clinicians to work on safety concerns.
Safety leaders have advocated for using Safety 2 reviews to complement traditional RCAs that use the Safety 1 retrospective approach. We encourage organizations to also consider combining both approaches to study a particular problem. In a case of missed diagnosis, for example, pair the Safety 1 RCA analysis with a Safety 2 review of a case where the diagnosis was established quickly and accurately. What contextual factors might explain the different outcomes?
Over the past 20 years, there has been substantial progress in understanding diagnostic error. We now appreciate the size of the problem – diagnostic errors are common in every setting and may cause substantial harm. We have also learned a great deal about where and why these errors occur, including an expanding understanding of how cognition plays a critical role in both diagnostic success and diagnostic error. A host of interventions have been proposed to address the various factors that contribute to harm from these errors. The time has come to begin seriously studying which of these interventions work and which ones offer the most benefit. Root cause analysis provides a critically important tool for health care organizations to identify and learn from their own cases, which hopefully will provide the motivation necessary to begin addressing the problem.
Sujai Manohar. The Diagnosis Funnel. With permission.

Funding source: Gordon and Betty Moore Foundation
Acknowledgments
We are grateful for editorial reviews by Christine Holzmueller, MS and illustrations by Kristen Garcia, Garcia Alfaro LLC.
-
Research ethics: Not applicable.
-
Informed consent: Not applicable.
-
Author contributions: The authors accept responsibility for the entire content of this manuscript and approved its submission.
-
Competing interests: The authors state no conflicts of interest.
-
Research funding: Funding was provided by a grant from the Gordon and Betty Moore Foundation to The Leapfrog Group and The Society to Improve Diagnosis in Medicine.
-
Data availability: Not applicable.
-
Role of the sponsor: The sponsors played no role in the design, development, or review of the material presented.
References
1. Cook, RI. How complex systems fail. 1998. Available at: https://how.complexsystems.fail/.
2. Newman-Toker, D, Schaffer, A, Yu-Moe, W, Nassery, N, Saber Tehrani, A, Clements, G, et al. Serious misdiagnosis-related harms in malpractice claims: the “Big Three” – vascular events, infections, and cancers. Diagnosis 2019;6:227–40. https://doi.org/10.1515/dx-2019-0019.
3. ECRI Institute. Top 10 patient safety concerns for healthcare organizations. 2019. Available at: www.ecri.org/patientsafetytop10.
4. National Patient Safety Foundation. RCA2: improving root cause analyses and actions to prevent harm. 2016. Available at: https://www.ihi.org/resources/tools/rca2-improving-root-cause-analyses-and-actions-prevent-harm#downloads.
5. American Hospital Association. ASHRM root cause analysis playbook. 2016.
6. The Joint Commission. Root cause analysis in health care: a Joint Commission guide to analysis and corrective action of sentinel and adverse events, 7th ed. Oakbrook Terrace, IL: Joint Commission Resources; 2020.
7. VHA National Center for Patient Safety (NCPS). Guide to performing a root cause analysis (revision 10-20-2020). 2021. Available at: patientsafety.va.gov/docs/RCA_Guidebook_10212020.pdf.
8. Voelker, R. Treat systems, not errors, experts say. JAMA 1996;276:1537–8. https://doi.org/10.1001/jama.276.19.1537.
9. Reason, J. Human error: models and management. BMJ 2000;320:768–70. https://doi.org/10.1136/bmj.320.7237.768.
10. Graber, ML, Franklin, N, Gordon, R. Diagnostic error in internal medicine. Arch Intern Med 2005;165:1493–9. https://doi.org/10.1001/archinte.165.13.1493.
11. Graber, M. Diagnostic errors in medicine: a case of neglect. Joint Comm J Qual Patient Saf 2005;31:106–13. https://doi.org/10.1016/s1553-7250(05)31015-4.
12. Singh, H, Khanna, A, Spitzmueller, C, Meyer, A. Recommendations for using the Revised Safer Dx Instrument to help measure and improve diagnostic safety. Diagnosis 2019;6:315–23. https://doi.org/10.1515/dx-2019-0012.
13. Schiff, GD, Hasan, O, Kim, S, Abrams, R, Cosby, K, Lambert, B, et al. Diagnostic error in medicine – analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. https://doi.org/10.1001/archinternmed.2009.333.
14. Singh, H. Helping health care organizations to define diagnostic errors as missed opportunities in diagnosis. Joint Comm J Qual Patient Saf 2014;40:99–101. https://doi.org/10.1016/S1553-7250(14)40012-6.
15. Balogh, E, Miller, B, Ball, J. Improving diagnosis in health care. Washington, DC: National Academy of Medicine; 2015. https://doi.org/10.17226/21794.
16. Marx, D. Patient safety and the “Just Culture”: a primer for health care executives. Columbia University and the Medical Event Reporting System for Transfusion Medicine; 2001.
17. Sollenberger, J, Holloway, R. The evolving role and value of libraries and librarians in health care. JAMA 2013;310:1231–2. https://doi.org/10.1001/jama.2013.277050.
18. Heher, YK. A brief guide to root cause analysis. Cancer Cytopathol 2017;125:79–82. https://doi.org/10.1002/cncy.21819.
19. Vincent, C, Carthey, J, Macrae, C, Amalberti, R. Safety analysis over time: seven major changes to adverse event investigations. Implement Sci 2017;12:151. https://doi.org/10.1186/s13012-017-0695-4.
20. Vincent, CA, Coulter, A. Patient safety: what about the patient? Qual Saf Health Care 2002;11:76–80. https://doi.org/10.1136/qhc.11.1.76.
21. Wiig, S, Hibbert, P, Braithwaite, J. The patient died: what about involvement in the investigation process? Int J Qual Health Care 2020;32:342–6. https://doi.org/10.1093/intqhc/mzaa034.
22. National Health Service England. Engaging and involving patients, families and staff following a patient safety incident; 2022. Available at: https://www.england.nhs.uk/patient-safety/incident-response-framework/engaging-and-involving-patients-families-and-staff-following-a-patient-safety-incident/.
23. Kok, J, Leistikow, I, Bal, R. Patient and family engagement in incident investigations: exploring hospital manager and incident investigators’ experiences and challenges. J Health Serv Res Pol 2018;23:252–61. https://doi.org/10.1177/1355819618788586.
24. Zwaan, L, Thijs, A, Wagner, C, van der Wal, G, Timmermans, D. Relating faults in diagnostic reasoning with diagnostic errors and patient harm. Acad Med 2012;87:149–56. https://doi.org/10.1097/acm.0b013e31823f71e6.
25. Lawton, R, McEachan, R, Giles, S, Sirriyeh, R, Watt, I, Wright, J. Development of an evidence-based framework of factors contributing to patient safety incidents in hospital settings: a systematic review. BMJ Qual Saf 2012;21:369–80. https://doi.org/10.1136/bmjqs-2011-000443.
26. Latino, R. Are all root cause analyses approaches created equal? Rochester, NY: University of Rochester; 2011.
27. Hooftman, J, Dijkstra, A, Suurmeijer, I, van der Bij, A, Paap, E, Zwaan, L. Common contributing factors of diagnostic error: a retrospective analysis of 109 serious adverse event reports from Dutch hospitals. BMJ Qual Saf 2023. https://doi.org/10.1136/bmjqs-2022-015876.
28. Baartmans, M, Hooftman, J, Zwaan, L, van Schoten, S, Erwich, J, Wagner, C. What can we learn from in-depth analysis of human errors resulting in diagnostic errors in the emergency department: an analysis of serious adverse event reports. J Patient Saf 2022;18:e1135–41. https://doi.org/10.1097/pts.0000000000001007.
29. Trowbridge, R, Salvador, D, Roy, M, Botler, J. A restructured root cause analysis process for diagnostic error. In: Abstract – 4th International Diagnostic Error in Medicine Conference. Chicago, IL: Society to Improve Diagnosis in Medicine; 2011.
30. Thammasitboon, S, Thammasitboon, S, Singhal, G. System-related factors contributing to diagnostic errors. Curr Probl Pediatr Adolesc Health Care 2013;43:242–7. https://doi.org/10.1016/j.cppeds.2013.07.004.
31. Vanderbyl, D. Saving tax dollars; saving lives; using nudge theory to eliminate outdated emergency locator transmitters (ELTs). Master of Arts Thesis, University of Manitoba, CA; 2019. https://doi.org/10.13140/RG.2.2.27543.32163.
32. Croskerry, P. Diagnostic failure: a cognitive and affective approach. In: Advances in patient safety: from research to implementation, vol 2. AHRQ Publication No. 050021. Rockville, MD: Agency for Healthcare Research and Quality; 2005:241–54 pp.
33. Croskerry, P. The cognitive autopsy: a root cause analysis of medical decision making, 1st ed. New York, NY: Oxford University Press; 2020. https://doi.org/10.1093/med/9780190088743.001.0001.
34. Apkon, M, Mattera, JA, Lin, Z, Herrin, J, Bradley, EH, Carbone, M, et al. A randomized outpatient trial of a decision-support information technology tool. Arch Intern Med 2005;165:2388–94. https://doi.org/10.1001/archinte.165.20.2388.
35. Kahneman, D. Thinking, fast and slow. New York: Farrar, Straus and Giroux; 2011.
36. Croskerry, P. A universal model of diagnostic reasoning. Acad Med 2009;84:1022–8. https://doi.org/10.1097/acm.0b013e3181ace703.
37. Benson, B. Cognitive bias cheat sheet: an organized list of cognitive biases because thinking is hard. Better Humans; 2016.
38. Simon, HA. Wikipedia; 2023. Available at: https://en.wikipedia.org/wiki/Herbert_A._Simon.
39. McDonald, K. Achieving equity in diagnostic excellence. JAMA 2022;327:1955–6. https://doi.org/10.1001/jama.2022.7252.
40. Piccardi, C, Detollenaere, J, Vanden Bussche, P, Willems, S. Social disparities in patient safety in primary care: a systematic review. Int J Equity Health 2018;17:114. https://doi.org/10.1186/s12939-018-0828-7.
41. Dror, I. Human expert performance in forensic decision-making: seven different sources of bias. Aust J Forensic Sci 2017;49:541–7. https://doi.org/10.1080/00450618.2017.1281348.
42. Croskerry, P. Individual variability in clinical decision making and diagnosis. In: Croskerry, P, Cosby, K, Graber, M, Singh, H, editors. Diagnosis – interpreting the shadows, 1st ed. Boca Raton: CRC Press; 2017. https://doi.org/10.1201/9781315116334.
43. Croskerry, P. Our better angels and black boxes. Emerg Med J 2016;33:242–4. https://doi.org/10.1136/emermed-2016-205696.
44. Charles, R, Hood, B, Derosier, JM, Gosbee, JW, Li, Y, Caird, MS, et al. How to perform a root cause analysis for workup and future prevention of medical errors: a review. Patient Saf Surg 2016;10:20. https://doi.org/10.1186/s13037-016-0107-8.
45. Muller, B, Luttel, D, Schut, D, Blazejewski, T, Pommee, M, Muller, H, et al. Strength of safety measures introduced by medical practices to prevent a recurrence of patient safety incidents: an observational study. J Patient Saf 2022;18:444–8. https://doi.org/10.1097/pts.0000000000000953.
46. Hettinger, A, Fairbanks, R, Hegde, S, Rackoff, A, Wreathall, J, Lewis, V, et al. An evidence-based toolkit for the development of effective and sustainable root cause analysis system safety solutions. J Healthc Risk Manag 2013;33:11–20. https://doi.org/10.1002/jhrm.21122.
47. Hibbert, P, Thomas, M, Deakin, A, Runciman, W, Braithwaite, J, Lomax, S, et al. Are root cause analyses recommendations effective and sustainable? An observational study. Int J Qual Health Care 2018;30:124–31. https://doi.org/10.1093/intqhc/mzx181.
48. Herzog, S, Hertwig, R. Think twice and then: combining or choosing in dialectical bootstrapping? J Exp Psychol Learn Mem Cognit 2014;40:218–22. https://doi.org/10.1037/a0034054.
49. Fujisaki, I, Yang, K, Ueda, K. On an effective and efficient method for exploiting the wisdom of the inner crowd. Sci Rep 2023;13:3608. https://doi.org/10.1038/s41598-023-30599-8.
50. Stanovich, K. Rationality and the reflective mind. New York, NY: Oxford University Press; 2011. https://doi.org/10.1093/acprof:oso/9780195341140.001.0001.
51. Dror, I. Cognitive and human factors in expert decision making: six fallacies and the eight sources of bias. Anal Chem 2020;92:7998–8004. https://doi.org/10.1021/acs.analchem.0c00704.
52. Dror, I. A novel approach to minimize error in the medical domain: cognitive neuroscientific insights into training. Med Teach 2011;33:34–8. https://doi.org/10.3109/0142159x.2011.535047.
53. Schnierle, J, Christian-Braithwaite, N, Louisias, M. Implicit bias: what every pediatrician should know about the effect of bias on health and future directions. Curr Probl Pediatr Adolesc Health Care 2019;49:34–44. https://doi.org/10.1016/j.cppeds.2019.01.003.
54. Lambe, K, O’Reilly, G, Kelly, B, Curristan, S. Dual-process cognitive interventions to enhance diagnostic reasoning: a systematic review. BMJ Qual Saf 2016;25:808–20. https://doi.org/10.1136/bmjqs-2015-004417.
55. Ludolph, R, Schulz, PJ. Debiasing health-related judgments and decision making: a systematic review. Med Decis Making 2018;38:3–13. https://doi.org/10.1177/0272989x17716672.
56. Staal, J, Hooftman, J, Gunput, S, Mamede, S, Frens, M, Van den Broek, W, et al. Effect on diagnostic accuracy of cognitive reasoning tools for the workplace setting: systematic review and meta-analysis. BMJ Qual Saf 2022;31:899–910. https://doi.org/10.1136/bmjqs-2022-014865.
57. Nakhleh, R, Nose, V, Colasacco, C, Fatheree, L, Lillemoe, T, McCrory, F, et al. Interpretive diagnostic error reduction in surgical pathology and cytology: guideline from the College of American Pathologists Pathology and Laboratory Quality Center and the Association of Directors of Anatomic and Surgical Pathology. Arch Pathol Lab Med 2016;140:29–40. https://doi.org/10.5858/arpa.2014-0511-sa.
58. Swapp, RE, Aubry, MC, Salomao, DR, Cheville, JC. Outside case review of surgical pathology for referred patients: the impact on patient care. Arch Pathol Lab Med 2013;137:233–40. https://doi.org/10.5858/arpa.2012-0088-oa.
59. Eakins, C, Ellis, W, Pruthi, S, Johnson, D, Hernanz-Schulman, M, Yu, C, et al. Second opinion interpretations by specialty radiologists at a pediatric hospital: rate of disagreement and clinical implications. Am J Roentgenol 2012;199:916–20. https://doi.org/10.2214/AJR.11.7662.
60. Freund, Y, Goulet, H, Leblanc, J, Bokobza, J, Ray, P, Maignan, M, et al. Effect of systematic physician cross-checking on reducing adverse events in the emergency department: the CHARMED cluster randomized trial. JAMA Intern Med 2018;178:812–9. https://doi.org/10.1001/jamainternmed.2018.0607.
61. Kurvers, R, Herzog, S, Hertwig, R, Krause, J, Carney, P, Bogart, A, et al. Boosting medical diagnostics by pooling independent judgments. Proc Natl Acad Sci USA 2016;113:8777–82. https://doi.org/10.1073/pnas.1601827113.
62. Barnett, M, Boddupalli, D, Nundy, S, Bates, D. Comparative accuracy of diagnosis by collective intelligence of multiple physicians vs individual physicians. JAMA Netw Open 2019;2:e190096. https://doi.org/10.1001/jamanetworkopen.2019.0096.
63. Ely, JW, Graber, ML, Croskerry, P. Checklists to reduce diagnostic errors. Acad Med 2011;86:307–13. https://doi.org/10.1097/acm.0b013e31820824cd.
64. Graber, M. Reaching 95%: decision support tools are the surest way to improve diagnosis now. BMJ Qual Saf 2022;31:415–8. https://doi.org/10.1136/bmjqs-2021-014033.
65. Manohar, S. The diagnosis funnel. Ann Intern Med 2023. https://doi.org/10.7326/G22-0060.
66. Zwaan, L, Hautz, WE. Bridging the gap between uncertainty, confidence and diagnostic accuracy: calibration is key. BMJ Qual Saf 2019;28:352–5. https://doi.org/10.1136/bmjqs-2018-009078.
67. Meyer, A, Upadhyay, D, Collins, C, Fitzpatrick, M, Kobylinski, M, Bansal, A, et al. A program to provide clinicians with feedback on their diagnostic performance in a learning health system. Joint Comm J Qual Patient Saf 2021;47:120–6. https://doi.org/10.1016/j.jcjq.2020.08.014.
68. Gurley, K, Edlow, J, Burstein, J, Grossman, S. Errors in decisionmaking in emergency medicine: the case of the landscaper’s back and root cause analysis. Ann Emerg Med 2021;77:203–9. https://doi.org/10.1016/j.annemergmed.2020.05.031.
69. Dadlez, NM, Adelman, J, Bundy, DG, Singh, H, Applebaum, JR, Rinke, ML. Contributing factors for pediatric ambulatory diagnostic process errors: project RedDE. Pediatr Qual Saf 2020;5:e299. https://doi.org/10.1097/pq9.0000000000000299.
70. Su, C-F, Chu, C-M, Yuan, Y-J, Peng, C-C, Feng, C-C, Chao, S-L, et al. Use of a modified fishbone diagram to analyze diagnostic errors in emergency physicians. J Acute Med 2017;7:149–57. https://doi.org/10.6705/j.jacme.2017.0704.003.
71. Giardina, T, King, B, Ignaczak, A, Paull, D, Hoeksema, L, Mills, P, et al. Root cause analysis reports help identify common factors in delayed diagnosis and treatment of outpatients. Health Aff 2013;8:1368–75. https://doi.org/10.1377/hlthaff.2013.0130.
72. Zenati, M, Leissner, K, Zorca, S, Kennedy-Metz, L, Yule, S, Dias, R. First reported use of team cognitive workload for root cause analysis in cardiac surgery. Semin Thorac Cardiovasc Surg 2019;31:394–6. https://doi.org/10.1053/j.semtcvs.2018.12.003.
Supplementary Material
This article contains supplementary material (https://doi.org/10.1515/dx-2024-0102).
© 2024 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.