Article Open Access

Diagnostic excellence: turning to diagnostic performance improvement

  • Andrew Auerbach, Katie Raffel, Irit R. Rasooly and Jeffrey Schnipper
Published/Copyright: September 16, 2025
From the journal Diagnosis

Abstract

The field of diagnostic excellence has advanced considerably in the past decade, reframing diagnosis as a patient safety priority and highlighting the prevalence and harms of diagnostic error. Foundational evidence now supports the development of Diagnostic Excellence Programs: organizational initiatives designed to reduce diagnostic errors and improve system-level and individual performance. While early studies established the epidemiology of diagnostic error across inpatient, emergency, and ambulatory care, newer approaches emphasize continuous, systematic surveillance to inform targeted improvements. Emerging frameworks, such as the DEER Taxonomy and root cause or success cause analyses, help classify drivers of both failures and successes in diagnostic processes. Effective programs must address system factors, including electronic health record design, workload, team structures, and communication, while also enhancing individual clinician performance through feedback, diagnostic reflection, cross-checks, and coaching. Patient engagement represents a critical but underdeveloped dimension; strategies such as structured communication frameworks, patient-family advisory councils, and electronic tools co-designed with patients aim to foster shared diagnostic decision-making and improve transparency. Artificial intelligence (AI) holds promise to accelerate measurement, streamline clinical workflows, reduce cognitive load, and support communication, though careful implementation and oversight are required to ensure safety. Ultimately, Diagnostic Excellence Programs will succeed by embedding diagnostic safety into institutional standards of care, providing clinicians with ongoing, psychologically safe opportunities for recalibration, and leveraging AI to scale surveillance and improvement activities.

Introduction

Over the last decade, the field of diagnostic excellence has developed rapidly, with a growing body of evidence describing the prevalence of diagnostic errors and strategies to evaluate underlying diagnostic processes. Diagnosis is now firmly positioned as a part of patient safety, as opposed to being solely part of educational or training programs.

Some aspects of the field of diagnostic excellence have developed more slowly. Evidence on how to integrate diagnostic process improvements into clinical operations, and on interventions to help physicians improve performance, is nascent. While principles of shared decision making have been established in other contexts, approaches to meaningfully engaging patients in improving diagnostic safety are still in their infancy. The role of new technologies, particularly artificial intelligence (AI), in supporting diagnosis is largely undefined.

Though it predates the “diagnostic excellence” terminology, the National Academy of Medicine (NAM) 2015 report [1] defined the broad outlines of “Diagnostic Excellence Programs” as those that seek to reduce harms related to missed or delayed diagnoses [2], [3], [4] and employ an organizational approach to improving system-level and individual diagnostic performance. Building on precedents such as the medication safety and (more broadly) the patient safety movement, Diagnostic Excellence Programs can meet the field’s translational challenges by embedding diagnostic excellence into organizational standards of care, supporting interventions to improve diagnostic processes, and providing a framework for testing and validating new methods for evaluating diagnostic processes.

In this article, we will review how evolution in the field of diagnostic excellence has provided the foundational evidence to define how Diagnostic Excellence Programs (Table 1), built on components such as systems to accurately measure diagnostic error and strong executive sponsorship [5], [6], can catalyze efforts to enhance physician performance and patient communication to improve diagnostic outcomes in a healthcare system.

Table 1:

Summary of recommendations for diagnostic excellence programs.

Domain Key takeaways
System-based changes
  1. Implement continuous diagnostic error measurement and integrate findings into existing safety programs.

  2. Improve team structures, EHR usability, and communication to reduce cognitive load.

  3. Target improvement in specific conditions (e.g., sepsis, stroke) using defined processes and technology-enabled follow-up.

Provider-based changes
  1. Shift focus from remediation to continuous performance improvement.

  2. Use decision support, second opinions, and feedback tools to enhance diagnostic thinking.

  3. Employ coaching and reflective practices to develop clinicians’ diagnostic skills.

Patient engagement
  1. Emphasize communication of diagnostic rationale, uncertainty, and shared diagnostic decision-making.

  2. Develop structured communication based on principles of shared decision-making and communicating uncertainty, utilizing tools like “Be the Expert on You” and “60 Seconds to Improve Diagnostic Safety.”

  3. Use digital tools to identify and address diagnostic gaps from the patient’s perspective.

AI and diagnostic excellence
  1. Use AI to scale monitoring for diagnostic errors.

  2. Surface critical information, reduce alert fatigue, and identify vulnerable diagnostic moments.

  3. Emphasize human oversight using measures of diagnostic excellence and gradual implementation to ensure safety and efficacy.

Move from measurement to understanding of diagnostic errors

Key to improving diagnostic safety and framing effective Diagnostic Excellence Programs is a clear understanding of the prevalence of and harms related to diagnostic error [7], [8]. Evidence is emerging to describe the scope of diagnostic errors in ambulatory, emergency department, and inpatient settings [9], [10], [11], [12], [13], [14]. Current epidemiologic evidence is based on studies which have employed diverse methods, producing a wide range of estimates of prevalence. On the low end of estimates are papers that estimated diagnostic error by examining data from previously published studies [15]. These estimates are likely low compared to later studies largely because the primary studies did not apply a systematic approach to identifying or adjudicating events, likely leading to under-detection, while later studies directly examined charts or administrative data. Alternately, data from studies of unexpected findings found at autopsy estimate the rate as high as 20 %, depending on the era of the study [9], [12], [16], [17], [18]. Direct review of medical records using structured tools and rigorous adjudication methods can detect errors with high sensitivity and specificity, and such reviews place error rates closer to 25 % among hospitalized patients [9], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21].

As a result of heterogeneous approaches to measuring diagnostic processes and errors, the mantra of diagnostic error measurement can be distilled down to the statement: ‘if you look for them, you will find them.’ The weight of evidence suggests the rate of diagnostic errors, and the potential opportunity for Diagnostic Excellence Programs to make a real difference in patient care, is much higher than estimated 10 years ago.

As the field moves beyond characterizing the epidemiology of errors, Diagnostic Excellence Programs will need to simultaneously increase the scope and reduce the burden of monitoring for diagnostic errors. Continuous, systematic surveillance of diagnostic errors, their causes, and their harms is a necessary step to informing and driving improvements, much as it is for all other safety and quality gaps [22]. In the ambulatory setting, Kaiser Permanente Southern California has leveraged its electronic health information system to create a patient safety surveillance system that supports timely diagnosis of conditions ranging from cancer to Chlamydia infections, from infant hearing impairment to chronic kidney disease by closing the loop on symptoms, test results, and referrals [23], [24]. Alternatively, approaches using administrative data to compare initial symptoms and diagnoses to eventual (or final) diagnoses have been proposed to screen for diagnostic errors more efficiently [11]. Although this approach is promising for speeding measurement, few data describe its applicability outside a few settings and conditions, thereby limiting its operational utility [21]. Automation of error screening, use of sampling methods on a large scale [25], and case identification (likely leveraging AI, as we will discuss) will be key in moving beyond narrow and relatively small groups of patients to a broader understanding of diagnostic opportunities within the health system.
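As an illustration of this kind of administrative-data screen, the core logic of a symptom-disease pair analysis can be sketched in a few lines of Python. The symptom and disease labels, the 30-day look-forward window, and the record layout here are hypothetical placeholders for illustration only, not details taken from the SPADE work [11].

```python
from datetime import date, timedelta
from typing import NamedTuple

class Encounter(NamedTuple):
    patient_id: str
    visit_date: date
    diagnosis: str  # coded diagnosis; a real screen would use ICD code groups

# Hypothetical symptom->disease pair; real programs curate validated pairs.
SYMPTOM = "dizziness"
DISEASE = "stroke"
WINDOW = timedelta(days=30)  # illustrative look-forward window

def flag_possible_misses(encounters: list[Encounter]) -> list[tuple[Encounter, Encounter]]:
    """Return (symptom visit, later disease visit) pairs within the window,
    flagging patients whose initial symptom visit was followed by a serious
    diagnosis soon after -- candidates for diagnostic-error review."""
    flagged = []
    by_patient: dict[str, list[Encounter]] = {}
    for e in sorted(encounters, key=lambda e: e.visit_date):
        by_patient.setdefault(e.patient_id, []).append(e)
    for visits in by_patient.values():
        for i, first in enumerate(visits):
            if first.diagnosis != SYMPTOM:
                continue
            for later in visits[i + 1:]:
                if (later.diagnosis == DISEASE
                        and later.visit_date - first.visit_date <= WINDOW):
                    flagged.append((first, later))
    return flagged
```

Flagged pairs are screening signals, not confirmed errors; cases would still go to structured chart review and adjudication.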

Diagnostic Excellence Programs will need to systematically identify processes associated with diagnostic outcomes, such as a lack of timely access to diagnostic tests and procedures or staffing shortages that lead to unsafe conditions. By utilizing tools such as the DEER Taxonomy to classify diagnostic process improvement opportunities [26], [27], [28] and diagnosis-focused root cause analyses [29], health systems can elucidate the factors contributing to diagnostic error and design interventions that will result in diagnostic process improvement. It is also essential to characterize drivers of diagnostic excellence; approaches such as the “success cause analysis” allow institutions to learn from strategies that produce favorable diagnostic outcomes and to hardwire and reinforce those processes [30].

Along with broader measurement approaches, Diagnostic Excellence Programs also require more expansive understandings of contributors to diagnostic errors. Most research studies have had difficulty separating systems-driven diagnostic issues (for example, the contribution of electronic health record [EHR] design) from issues such as anchoring bias. Few studies have explicitly measured actual cognitive load or have been able to grapple effectively with the range of clinical scenarios, team structures, or task loads that influence how clinicians work; these will need to be key considerations for programs that want to create a complete picture of diagnostic performance.

System-based interventions to improve diagnosis

Optimal organizational approaches for quality improvement are well understood [31] and include leadership endorsement, presence of clinical champions, measurement capabilities, communication and outreach tools, and appropriate resources. Robust Diagnostic Excellence Program implementation toolkits (MeasureDx [6]) grounded in organizational theory are available, but the degree to which they are in current use or their utility in comparison to existing improvement programs is as yet unknown.

Going forward, we believe that the field can advance most rapidly by extending previous organizational change models to implement Diagnostic Excellence Programs. Like all healthcare outcomes, diagnostic outcomes are the product of systems of care – systems that include people, organizations, physical environments, tools and technology, and delineated tasks [32], [33]. While the clinician is an essential element of the work system, a decade of systems safety literature suggests that systems improvements are required to realize broad gains in diagnostic quality and safety.

System-based interventions begin with robust monitoring and measurement [34], which in turn involve the issues of validity and scaling of measurement we discussed earlier [22], [23], [24]. Human and patient-centered design principles underlie effective programs, with the Systems Engineering Initiative for Patient Safety (SEIPS) and SEIPS 2.0 models providing important frameworks for integrating Human Factors and Ergonomics (HFE) in healthcare quality and patient safety improvement [32], [33].

Systems may want to focus improvement efforts within defined diagnoses or conditions (e.g., recognition of stroke in adults or appendicitis in children) for which there are clearly defined processes and outcome measures, and multifactorial systems opportunities for improvement. Emerging research in symptom-diagnosis pairs can form the basis of these measures [11]. Working within defined areas may make leveraging technology to support closed-loop follow-up of unexpected or abnormal actionable imaging, laboratory, or vital sign changes more feasible, for example. Improving the diagnostic process around imaging may include structured entry of recommendations by radiologists, prompting referring practitioners to explicitly disagree, agree, or modify these recommendations [35], and creating and enforcing policies and procedures for escalating levels of contact to ensure diagnostic closure. Laboratory and vital sign abnormalities could be linked to early warning systems that accurately identify hospitalized patients at risk for clinical deterioration as signs of a possible diagnostic error. Such signals could trigger a diagnostic pause, escalate for team evaluation, or prompt cross-check/second opinion programs [36], which may assist individual providers’ performance [11], [26], [27], [28], [29], [30], [35].
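The escalating-contact policy for ensuring diagnostic closure can be pictured as a small decision ladder. The thresholds and contact steps below are invented for illustration; a real ladder would be set by the institution's diagnostic safety governance, not taken from the cited work.

```python
def escalation_step(days_open: int, acknowledged: bool) -> str:
    """Return the next contact step for an open actionable-imaging
    recommendation with no documented referrer response.

    Thresholds and the contact ladder are hypothetical placeholders.
    """
    if acknowledged:
        return "closed"  # referrer explicitly agreed, disagreed, or modified
    if days_open <= 7:
        return "EHR in-basket reminder"
    if days_open <= 14:
        return "direct message to ordering clinician"
    if days_open <= 21:
        return "phone call from radiology"
    return "escalate to service chief and patient outreach"
```

The point of encoding the ladder explicitly is that every open recommendation has a defined next step, so no finding can silently fall out of the loop.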

Health systems have a role in designing EHRs and team structures that facilitate diagnosis, aiming to reduce cognitive load resulting from complex workspaces, high volumes of tasks, and interruptions [37]. Communication across the care system – regarding follow-up, diagnostic uncertainty, and reasoning – is a key driver of diagnostic performance. Systems can support diagnostic excellence by attempting to flag patients with diagnostic uncertainty or to structure communication across interdisciplinary teams about deteriorating patients [38]. Systems have a core role in evidence-based EHR redesign and training clinicians in EHR best practices, with strong evidence suggesting that thoughtful EHR redesign can improve performance on diagnostic tasks [39], [40], [41]. Specific focus areas should include chart synthesis, ways to heighten the salience of unexpected or essential information, and designing clinician-centered decision support, many of which will be the target of artificial intelligence (AI) tools. Finally, the volume and type of communication between teams is often driven by infrastructure supported by health systems; Diagnostic Excellence Programs can help guide communication systems that can reduce interruptions and improve cognitive performance, while also improving team functions [42].

Team structure may also influence clinicians’ diagnostic capabilities – the mechanism of action may be by reducing workload through shared models of care (e.g., APP-MD teams) [43] or, more likely, through models that leverage principles of collective intelligence. Collective intelligence models may include scheduled multidisciplinary meetings (e.g., tumor board), single clinician or non-clinician consultation models (e.g., e-consults or external second opinions [36]), or virtual collaboration (e.g., mobile applications) [44].

Clinician-focused interventions to support diagnostic processes

Earlier thinking attributed gaps in diagnostic reasoning to deficiencies in professional conduct or clinical knowledge, leading to the development of peer-review programs. However, evidence developed in the last decade strongly supports the idea that most diagnostic errors are related to gaps in clinician diagnostic processes such as assessment or decision-making [14], and that these problems are influenced by cognitive biases – systematic and predictable shortcuts in our thinking that can lead to error [45]. Environmental context, team structure, cognitive load, and health system resources driven at the system level can uncover or magnify cognitive biases. Emerging theories such as situativity [46], [47], [48], and increasing recognition of the role of resilience and Safety II concepts [49] have helped expand understanding drivers of diagnostic errors and are well-positioned to inform features of system-wide interventions, as well as programs focused on improving individual clinicians’ performance.

There is a broad and active literature examining possible approaches to improving clinician (primarily physician and advanced practice provider) cognitive processes involved with diagnosis, with interventions such as decision support or guides (as mentioned above) being one aspect of helping physicians’ diagnostic accuracy during a clinical encounter. Approaches representing a separate step include diagnostic feedback, prompted diagnostic reflection, and collective intelligence through second opinion or cross-check programs [36], [50]. While this literature suggests a wide range of approaches, few as yet have been tested in multicenter studies or using rigorous trial designs.

Effective Diagnostic Excellence Programs must simultaneously address system factors (such as team structures) and individual clinician factors that shape diagnostic performance and resilience. While tools such as alerts or second opinion programs may be organized at the system level, their level of effect is among providers; this contrasts with monitoring and measurement programs, which, while organized at the system level, do not target individuals.

Cognitive forcing questions, such as a diagnostic time-out or pause (e.g., “what else could this be?”), may counteract cognitive bias [51], [52], [53], [54], [55] but are challenging to integrate into routine practice even with training or awareness of diagnostic metacognition. Rather, pauses may be best timed to specific events, such as return visits in the outpatient setting [56] or a structured peer-peer cross-check [36] at the end of an ED shift. Cross-check, time out, and second opinion models have many parallels to debriefs and checklist programs but differ in their focus on the cognitive processes of diagnosis, rather than the system-based checks needed to ensure process adherence.

The development and refinement of clinical decision-making is a career-long endeavor for clinicians and can impact diagnostic accuracy; coaching and self-reflection approaches on performance, mainly after the fact, are key approaches. Clinicians can, in theory, calibrate their diagnosis and diagnostic decisions once the final diagnosis is established, though in practice, they often are unaware of patient outcomes. Programs that prompt clinicians to review patients’ outcomes after an encounter may support improved diagnosis; automated capabilities to assist in gathering this information are available [57], [58], [59], though longer-term adoption as a part of reflective clinical practice remains a considerable challenge. One recent model focusing on case review and reflection is represented by CalibrateDx [60], an approach that includes the use of a structured tool to guide physicians’ self-directed case reviews, reflection on clinical performance as part of those reviews, and guidance around how to apply learnings from case reviews in future practice. CalibrateDx is currently being tested in several settings with results pending.

It is critical to point out that the clinician-development component of Diagnostic Excellence Programs we propose here must be fundamentally different than remediation or peer-review programs employed as part of regulatory or medical board activities at many hospitals. Peer review and remediation programs are predominantly corrective and used primarily for severe performance problems. To improve diagnosis, the focus must shift towards enhancing performance generally and for all providers via consistent, relatively frequent approaches aligned with a standard of care that seeks to improve provider performance as a part of professional growth, rather than as a punitive or purely corrective measure.

Interventions to improve patient engagement in the diagnostic process are critical

The NAM report emphasized that an optimal diagnosis is one that is accurate, timely, and communicated to the patient. Patient engagement in the diagnostic process shares common lineage with well-developed fields such as shared decision making, where communication around treatment choices has traditionally been a focus, but which has broadened to consider diagnostic steps as well [61]. Patient-centered communication may not only support accurate diagnosis but also prevent inappropriate testing and promote diagnostic stewardship [62]. Communicating the rationale for and understanding of a diagnosis while also conveying uncertainty is also core to a truly shared diagnostic process and has a strong relationship to system factors (such as how EHRs present data) and physician expertise [63], [64], [65], [66], and has been highlighted as part of international diagnostic excellence recommendations [67].

Progress in this area has been greatly facilitated by the development and growth of the leading patient safety organizations, including the Partnership for Patient Safety (P4Ps) [68]; the Pulse Center for Patient Safety, Education & Advocacy [69]; the Community Improving Diagnosis in Medicine (CIDM) Patient Engagement Committee [70]; and the World Health Organization (WHO) Patients for Patient Safety Program [71]. In addition, this work has been advanced by the growth of patient-family advisory councils (PFACs), which consist of patients and family members who have received care at an organization and administrators, clinicians, and staff. PFACs provide a mechanism to seek and learn from the patient and family perspective, promote a culture of patient- and family-centered care (PFCC), and guide PFCC implementation [72].

Engagement frameworks and tools for patients and families have been developed and are part of ongoing evaluation. The Agency for Healthcare Research and Quality-sponsored “Toolkit for Engaging Patients to Improve Diagnostic Safety” contains two strategies, “Be the Expert on You” and “60 Seconds to Improve Diagnostic Safety” [73]. The first helps prepare patients and caregivers to communicate their health stories to clinicians clearly and concisely through written prompts and is discussed elsewhere in this issue. The second strategy prompts clinicians to conduct reflective listening, without interruption, for 1 min at the start of a patient encounter. Both strategies are being evaluated as part of an AHRQ/RAND study [4] that also includes small-group diagnostic reflection/calibration and institution-level measurement strategies. Initiatives are underway to adapt these for the inpatient setting as part of the Achieving Diagnostic Excellence through Prevention and Teamwork (ADEPT) study [25], leveraging tools developed in a prior single-center Patient Safety Learning Lab study [53]. These include a Patient Diagnosis Questionnaire (PDQ), where patients and their caregivers are asked several questions about their experience with the diagnostic process shortly after being admitted. These questions explore patients’ understanding of their diagnosis, whether they think it is correct, whether they are improving, and if there are any parts of their health story that the medical team may be overlooking. ADEPT is also working to adapt outpatient tools to the inpatient setting using structured communication techniques such as the SHARE framework: Summarizing, Hypothesizing together, Aligning with the patient’s experience, Reviewing for red flags, and Encouraging dialogue.

Electronic tools may also aid in understanding diagnostic opportunities and increase shared decision-making. One tool (OurDX) [74], co-designed with patients and families, helps identify, describe, and analyze patient-reported diagnostic breakdowns and provide that information to the inpatient treatment team. Preliminary results show potential to surface diagnostic blind spots. In all these efforts, the goals are to enhance bidirectional communication between patients and clinicians, develop strategies to support and evaluate communication about diagnosis (including communication of diagnostic uncertainty), promote systems that identify patients’ diagnostic questions and concerns, and encourage shared decision-making in the diagnostic process.

AI and the future of diagnostic excellence

The future of AI as a core aspect of the diagnostic process is months away, not years. A first opportunity is in the role of commercially available AI tools, including medical-specific tools such as Open Evidence or general “reasoning” large language models such as o3, Claude 4 Opus, and Gemini 2.5 Pro, to support clinical decision-making by patients and clinicians. Uptake of commercially available tools is well underway but is somewhat separate – more an educational and reinforcing step – from an integrated Diagnostic Excellence Program.

Measuring diagnostic errors and opportunities is hugely time-intensive, and it is here where AI is likely to have its first significant impact – by summarizing charts and records to identify cases and events where a diagnostic gap exists. AI tools may use diagnosis-based algorithms [11] or risk-based ones to find populations of patients where diagnostic problems are most likely. In this way, AI-enabled measurement [75] will permit assessment at scale and facilitate broader improvement activity.
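As a sketch of what risk-based case-finding feeds into, one scalable building block is weighted sampling of charts for manual review, so that encounters carrying prior-risk markers (e.g., ICU transfer, death, or unplanned return) are over-represented among cases pulled for adjudication. The weighting scheme and the sampling approach below are assumptions for illustration, not methods described in the article.

```python
import math
import random

def sample_charts_for_review(charts, risk_weights, k, rng=random):
    """Draw k distinct charts for diagnostic-error review, with selection
    probability proportional to a prior-risk weight.

    Uses the Efraimidis-Spirakis weighted-reservoir key (Exp(1)/weight,
    keep the k smallest), which samples without replacement. The idea of
    a per-chart risk weight is a hypothetical placeholder here.
    """
    keyed = sorted(
        (rng.expovariate(1.0) / w, chart)
        for chart, w in zip(charts, risk_weights)
    )
    return [chart for _, chart in keyed[:k]]
```

An AI-derived risk score could supply the weights, while the sampling step keeps the review workload fixed regardless of how large the screened population grows.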

AI methods can also support more streamlined workspaces, where vital data is surfaced earlier or more noticeably. AI-derived decision support in the context of diagnostic excellence can be judicious in the timing and amount of diagnostic feedback given to reduce or avoid alert fatigue. With more complex data and analytic tools, it is feasible to identify the “moments” of diagnostic vulnerability and resilience to support diagnostic excellence.

With current technology, we can easily imagine ambient AI transcribing and synthesizing conversations with patients and team members, accessing the entirety of the medical record to find key data, examining outside sources of information to guide evidence-based care, and providing care and communication recommendations to the team, patient, and family. This end-to-end AI-enabled experience contains several constituent tasks that are bugaboos in clinical care – processing complex data, clear communication, and synthesis of thinking into a coherent plan. Each step requires development and validation steps, and fundamental questions about the balance of automation versus human cross-checking are necessary at each stage (for example, the need to review source data underlying summarized notes to ensure information is not hallucinated or misinterpreted).

Summary: making meaningful changes in diagnostic performance

Healthcare delivery systems have ample evidence to design their own optimal Diagnostic Excellence Programs (Figure 1). Efforts by the Leapfrog Group to pilot measures of organizational approaches to measure and reduce diagnostic errors could be catalytic in making Diagnostic Excellence Programs a high institutional priority [76], but the need for immediate improvement is self-evident. Hospitals already have large programs that track safety incidents, adverse events, and mortality, so leveraging existing infrastructure and people to effect change is a natural place to start for this essential work. Expanding and scaling diagnostic excellence monitoring across entire physical and virtual health systems should be a first goal.

Figure 1:
Towards an integrated diagnostic excellence program that monitors and improves diagnostic safety.

A second key step is to embrace the principle that to improve diagnosis, you must also identify opportunities to enhance the clinical performance of individuals and teams. Clinical performance is deeply impacted by system-based issues such as EHR design or workload. Still, it is also strongly influenced by cognitive pitfalls and biases, which can be independent from system-based factors and require a more supportive and improvement-focused program than the one we currently have.

Third, clinical performance improvement should be approached as an ongoing aspect of clinical practice rather than a periodic and high-stakes activity. From this viewpoint, a peer-review model is not optimal for ongoing clinical performance improvement; rather, frequent low-intensity and meaningful opportunities for diagnostic coaching, feedback, and recalibration will be key. Meaningful programs will be able to strike a balance between the psychological safety needed to truly reflect on cases or hear feedback, and specific and actionable feedback that may point out opportunities for improvement.

The final step is to anticipate the central role of artificial intelligence at the core of each step and to make the necessary investments in the programs and people needed to ensure AI tools are deployed safely and effectively. We cannot use AI to fix the staffing or census problems, for example, but we can use AI to reduce cognitive load and improve communication between each other and our patients. Importantly, we can move healthcare and diagnostic excellence into this new era by understanding the safety gaps and creating teams to monitor performance.


Corresponding author: Andrew Auerbach, Division of Hospital Medicine, Department of Medicine, University of California San Francisco, San Francisco, CA, USA, E-mail:

Award Identifier / Grant number: R18HS29366-03, K08HS028682.

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: Drs. Auerbach, Raffel, and Schnipper are supported by AHRQ R18 HS29366-03. Dr. Rasooly is supported by AHRQ K08HS028682.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

References

1. National Academies of Sciences, Engineering, and Medicine. Improving diagnosis in health care. Washington, DC: National Academies Press; 2015.

2. Centers for Disease Control and Prevention. Core elements of hospital diagnostic excellence (DxEx). https://www.cdc.gov/patient-safety/hcp/hospital-dx-excellence/index.html [accessed July 2025].

3. The Joint Commission Journal on Quality and Patient Safety 50th anniversary article collections: diagnostic excellence. Jt Comm J Qual Patient Saf 2024;50:817–18. https://doi.org/10.1016/j.jcjq.2024.09.003.

4. Agency for Healthcare Research and Quality. Implementing diagnostic excellence across systems (IDEAS). https://www.ahrq.gov/diagnostic-safety/ideas-project/index.html [accessed July 2025].

5. Scott, IA, Crock, C. An organisational approach to improving diagnostic safety. Aust Health Rev 2023;47:261–7. https://doi.org/10.1071/ah22287.

6. Agency for Healthcare Research and Quality. Measure Dx: a resource to identify, analyze, and learn from diagnostic safety events. https://www.ahrq.gov/diagnostic-safety/tools/measure-dx.html [accessed July 2025].

7. Shojania, KG, Dixon-Woods, M. Estimating deaths due to medical error: the ongoing controversy and why it matters. BMJ Qual Saf 2017;26:423–8. https://doi.org/10.1136/bmjqs-2016-006144.

8. Hunter, MK, Singareddy, C, Mundt, KA. Framing diagnostic error: an epidemiological perspective. Front Public Health 2024;12:1479750. https://doi.org/10.3389/fpubh.2024.1479750.

9. Custer, JW, Winters, BD, Goode, V, Robinson, KA, Yang, T, Pronovost, PJ, et al. Diagnostic errors in the pediatric and neonatal ICU: a systematic review. Pediatr Crit Care Med 2015;16:29–36. https://doi.org/10.1097/pcc.0000000000000274.

10. Dhaliwal, G, Shojania, KG. The data of diagnostic error: big, large and small. BMJ Qual Saf 2018;27:499–501. https://doi.org/10.1136/bmjqs-2018-007917.

11. Liberman, AL, Newman-Toker, DE. Symptom-disease pair analysis of diagnostic error (SPADE): a conceptual framework and methodological approach for unearthing misdiagnosis-related harms using big data. BMJ Qual Saf 2018;27:557–66. https://doi.org/10.1136/bmjqs-2017-007032.

12. Winters, B, Custer, J, Galvagno, SMJr., Colantuoni, E, Kapoor, SG, Lee, H, et al. Diagnostic errors in the intensive care unit: a systematic review of autopsy studies. BMJ Qual Saf 2012;21:894–902. https://doi.org/10.1136/bmjqs-2012-000803.

13. Auerbach, AD, Astik, GJ, O’Leary, KJ, Barish, PN, Kantor, MA, Raffel, KR, et al. Prevalence and causes of diagnostic errors in hospitalized patients under investigation for COVID-19. J Gen Intern Med 2023;38:1902–10. https://doi.org/10.1007/s11606-023-08176-6.

14. Auerbach, AD, Lee, TM, Hubbard, CC, Ranji, SR, Raffel, K, Valdes, G, et al. Diagnostic errors in hospitalized adults who died or were transferred to intensive care. JAMA Intern Med 2024;184:164–73. https://doi.org/10.1001/jamainternmed.2023.7347.

15. Gunderson, CG, Bilan, VP, Holleck, JL, Nickerson, P, Cherry, BM, Chui, P, et al. Prevalence of harmful diagnostic errors in hospitalised adults: a systematic review and meta-analysis. BMJ Qual Saf 2020;29:1008–18. https://doi.org/10.1136/bmjqs-2019-010822.

16. Shojania, KG, Burton, EC, McDonald, KM, Goldman, L. Changes in rates of autopsy-detected diagnostic errors over time: a systematic review. JAMA 2003;289:2849–56. https://doi.org/10.1001/jama.289.21.2849.

17. Schwanda-Burger, S, Moch, H, Muntwyler, J, Salomon, F. Diagnostic errors in the new millennium: a follow-up autopsy study. Mod Pathol 2012;25:777–83. https://doi.org/10.1038/modpathol.2011.199.

18. Cifra, CL, Jones, KL, Ascenzi, JA, Bhalala, US, Bembea, MM, Newman-Toker, DE, et al. Diagnostic errors in a PICU: insights from the morbidity and mortality conference. Pediatr Crit Care Med 2015;16:468–76. https://doi.org/10.1097/pcc.0000000000000398.

19. Dalal, AK, Plombon, S, Konieczny, K, Motta-Calderon, D, Malik, M, Garber, A, et al. Adverse diagnostic events in hospitalised patients: a single-centre, retrospective cohort study. BMJ Qual Saf 2025;34:377–88. https://doi.org/10.1136/bmjqs-2024-017183.

20. Newman-Toker, DE, Peterson, SM, Badihian, S, Hassoon, A, Nassery, N, Parizadeh, D, et al. Diagnostic errors in the emergency department: a systematic review. Bethesda, MD: Agency for Healthcare Research and Quality (AHRQ); 2022. https://doi.org/10.23970/AHRQEPCCER258.

21. Mane, KK, Rubenstein, KB, Nassery, N, Sharp, AL, Shamim, EA, Sangha, NS, et al. Diagnostic performance dashboards: tracking diagnostic errors using big data. BMJ Qual Saf 2018;27:567–70. https://doi.org/10.1136/bmjqs-2018-007945.

22. Perry, MF, Melvin, JE, Kasick, RT, Kersey, KE, Scherzer, DJ, Kamboj, MK, et al. The diagnostic error index: a quality improvement initiative to identify and measure diagnostic errors. J Pediatr 2021;232:257–63. https://doi.org/10.1016/j.jpeds.2020.11.065.

23. Imley, T, Kanter, MH, Timmins, R, Adams, AL. Creating a safety net process to improve colon cancer diagnosis in patients with rectal bleeding. Perm J 2022;26:21–7. https://doi.org/10.7812/tpp/22.034.

24. Danforth, KN, Smith, AE, Loo, RK, Jacobsen, SJ, Mittman, BS, Kanter, MH. Electronic clinical surveillance to improve outpatient care: diverse applications within an integrated delivery system. EGEMS 2014;2:1056. https://doi.org/10.13063/2327-9214.1056.

25. Schnipper, JL, Raffel, KE, Keniston, A, Burden, M, Glasheen, J, Ranji, S, et al. Achieving diagnostic excellence through prevention and teamwork (ADEPT) study protocol: a multicenter, prospective quality and safety program to improve diagnostic processes in medical inpatients. J Hosp Med 2023;18:1072–81. https://doi.org/10.1002/jhm.13230.

26. Griffin, JA, Carr, K, Bersani, K, Piniella, N, Motta-Calderon, D, Malik, M, et al. Analyzing diagnostic errors in the acute setting: a process-driven approach. Diagnosis 2021;9:77–88. https://doi.org/10.1515/dx-2021-0033.

27. Schiff, GD, Hasan, O, Kim, S, Abrams, R, Cosby, K, Lambert, BL, et al. Diagnostic error in medicine: analysis of 583 physician-reported errors. Arch Intern Med 2009;169:1881–7. https://doi.org/10.1001/archinternmed.2009.333.

28. Schiff, GD, Kim, S, Abrams, R, Cosby, K, Lambert, B, Elstein, AS, et al. Diagnosing diagnosis errors: lessons from a multi-institutional collaborative project. In: Advances in patient safety: from research to implementation (volume 2: concepts and methodology). Bethesda, MD: Agency for Healthcare Research and Quality; 2005.

29. Graber, ML, Castro, GM, Danforth, M, Tilly, JL, Croskerry, P, El-Kareh, R, et al. Root cause analysis of cases involving diagnosis. Diagnosis 2024;11:353–68. https://doi.org/10.1515/dx-2024-0102.

30. Parkash, V, Musser, L, Krouss, M, Baer, H, Bajaj, K. Success cause analysis: learning from what works to advance safety. Institute for Healthcare Improvement. https://www.ihi.org/library/blog/success-cause-analysis-learning-what-works-advance-safety [accessed July 2025].

31. Li, S-A, Jeffs, L, Barwick, M, Stevens, B. Organizational contextual features that influence the implementation of evidence-based practices across healthcare settings: a systematic integrative review. Syst Rev 2018;7:72. https://doi.org/10.1186/s13643-018-0734-5.

32. Carayon, P, Wetterneck, TB, Rivera-Rodriguez, AJ, Hundt, AS, Hoonakker, P, Holden, R, et al. Human factors systems approach to healthcare quality and patient safety. Appl Ergon 2014;45:14–25. https://doi.org/10.1016/j.apergo.2013.04.023.

33. Carayon, P, Wooldridge, A, Hoonakker, P, Hundt, AS, Kelly, MM. SEIPS 3.0: human-centered design of the patient journey for patient safety. Appl Ergon 2020;84:103033. https://doi.org/10.1016/j.apergo.2019.103033.

34. Singh, H, Graber, ML, Kissam, SM, Sorensen, AV, Lenfestey, NF, Tant, EM, et al. System-related interventions to reduce diagnostic errors: a narrative review. BMJ Qual Saf 2012;21:160–70. https://doi.org/10.1136/bmjqs-2011-000150.

35. Kapoor, N, Khorasani, R. Beyond the AJR: the need for high-reliability systems to create and track actionable follow-up recommendations in radiology reports. AJR Am J Roentgenol 2023;220:905. https://doi.org/10.2214/ajr.22.28579.

36. Freund, Y, Goulet, H, Leblanc, J, Bokobza, J, Ray, P, Maignan, M, et al. Effect of systematic physician cross-checking on reducing adverse events in the emergency department: the CHARMED cluster randomized trial. JAMA Intern Med 2018;178:812–9. https://doi.org/10.1001/jamainternmed.2018.0607.

37. Knees, M. Cognitive load theory and its impact on diagnostic accuracy. Agency for Healthcare Research and Quality; 2025. https://www.ahrq.gov/diagnostic-safety/resources/issue-briefs/dxsafety-cognitive-load.html [accessed July 2025].

38. Ipsaro, AJ, Patel, SJ, Warner, DC, Marshall, TL, Chan, ST, Rohrmeier, K, et al. Declaring uncertainty: using quality improvement methods to change the conversation of diagnosis. Hosp Pediatr 2021;11:334–41. https://doi.org/10.1542/hpeds.2020-000174.

39. Ahmed, A, Chandra, S, Herasevich, V, Gajic, O, Pickering, BW. The effect of two different electronic health record user interfaces on intensive care provider task load, errors of cognition, and performance. Crit Care Med 2011;39:1626–34. https://doi.org/10.1097/ccm.0b013e31821858a0.

40. Stephenson, LS, Gorsuch, A, Hersh, WR, Mohan, V, Gold, JA. Participation in EHR based simulation improves recognition of patient safety issues. BMC Med Educ 2014;14:224. https://doi.org/10.1186/1472-6920-14-224.

41. Mohan, V, Garrison, C, Gold, JA. Using a new model of electronic health record training to reduce physician burnout: a plan for action. JMIR Med Inform 2021;9:e29374. https://doi.org/10.2196/29374.

42. Sloane, JF, Donkin, C, Newell, BR, Singh, H, Meyer, AND. Managing interruptions to improve diagnostic decision-making: strategies and recommended research agenda. J Gen Intern Med 2023;38:1526–31. https://doi.org/10.1007/s11606-022-08019-w.

43. Knees, M. Impact of clinician care team model on risk of diagnostic errors among adults who transferred to intensive care or died [abstract]. https://shmabstracts.org/abstract/impact-of-clinician-care-team-model-on-risk-of-diagnostic-errors-among-adults-who-transferred-to-intensive-care-or-died/ [accessed 1 Jul 2025].

44. Sims, MH, Bigham, J, Kautz, H, Halterman, MW. Crowdsourcing medical expertise in near real time. J Hosp Med 2014;9:451–6. https://doi.org/10.1002/jhm.2204.

45. Croskerry, P. From mindless to mindful practice – cognitive bias and clinical decision making. N Engl J Med 2013;368:2445–8. https://doi.org/10.1056/nejmp1303712.

46. Merkebu, J, Battistone, M, McMains, K, McOwen, K, Witkop, C, Konopasky, A, et al. Situativity: a family of social cognitive theories for understanding clinical reasoning and diagnostic error. Diagnosis 2020;7:169–76. https://doi.org/10.1515/dx-2019-0100.

47. Holmboe, ES, Durning, SJ. Understanding the social in diagnosis and error: a family of theories known as situativity to better inform diagnosis and error. Diagnosis 2020;7:161–4. https://doi.org/10.1515/dx-2020-0080.

48. Durning, SJ, Artino, AR. Situativity theory: a perspective on how participants and the environment can interact: AMEE guide no. 52. Med Teach 2011;33:188–99. https://doi.org/10.3109/0142159x.2011.550965.

49. Choi, JJ. What is diagnostic safety? A review of safety science paradigms and rethinking paths to improving diagnosis. Diagnosis 2024;11:369–73. https://doi.org/10.1515/dx-2024-0008.

50. Singh, H, Connor, DM, Dhaliwal, G. Five strategies for clinicians to advance diagnostic excellence. BMJ 2022;376:e068044. https://doi.org/10.1136/bmj-2021-068044.

51. Prakash, S, Sladek, RM, Schuwirth, L. Interventions to improve diagnostic decision making: a systematic review and meta-analysis on reflective strategies. Med Teach 2019;41:517–24. https://doi.org/10.1080/0142159x.2018.1497786.

52. Yale, S, Cohen, S, Bordini, BJ. Diagnostic time-outs to improve diagnosis. Crit Care Clin 2022;38:185–94. https://doi.org/10.1016/j.ccc.2021.11.008.

53. Garber, A, Garabedian, P, Wu, L, Lam, A, Malik, M, Fraser, H, et al. Developing, pilot testing, and refining requirements for 3 EHR-integrated interventions to improve diagnostic safety in acute care: a user-centered approach. JAMIA Open 2023;6:ooad031. https://doi.org/10.1093/jamiaopen/ooad031.

54. Trowbridge, RL. Twelve tips for teaching avoidance of diagnostic errors. Med Teach 2008;30:496–500. https://doi.org/10.1080/01421590801965137.

55. Graber, ML, Kissam, S, Payne, VL, Meyer, AN, Sorensen, A, Lenfestey, N, et al. Cognitive interventions to reduce diagnostic error: a narrative review. BMJ Qual Saf 2012;21:535–57. https://doi.org/10.1136/bmjqs-2011-000149.

56. Huang, GC, Kriegel, G, Wheaton, C, Sternberg, S, Sands, K, Richards, J, et al. Implementation of diagnostic pauses in the ambulatory setting. BMJ Qual Saf 2018;27:492–7. https://doi.org/10.1136/bmjqs-2017-007192.

57. El-Kareh, R, Hasan, O, Schiff, GD. Use of health information technology to reduce diagnostic errors. BMJ Qual Saf 2013;22:ii40–51. https://doi.org/10.1136/bmjqs-2013-001884.

58. El-Kareh, R, Roy, C, Williams, DH, Poon, EG. Impact of automated alerts on follow-up of post-discharge microbiology results: a cluster randomized controlled trial. J Gen Intern Med 2012;27:1243–50. https://doi.org/10.1007/s11606-012-1986-8.

59. Mathews, BK, Fredrickson, M, Sebasky, M, Seymann, G, Ramamoorthy, S, Vilke, G, et al. Structured case reviews for organizational learning about diagnostic vulnerabilities: initial experiences from two medical centers. Diagnosis 2020;7:27–35. https://doi.org/10.1515/dx-2019-0032.

60. Agency for Healthcare Research and Quality. Calibrate Dx: a resource to improve diagnostic decisions.

61. Yuan, J, Xu, F, Sun, Y, Ren, H, Chen, M, Feng, S. Shared decision-making in the management of pulmonary nodules: a systematic review of quantitative and qualitative studies. BMJ Open 2024;14:e079080. https://doi.org/10.1136/bmjopen-2023-079080.

62. Epstein, RM, Franks, P, Shields, CG, Meldrum, SC, Miller, KN, Campbell, TL, et al. Patient-centered communication and diagnostic testing. Ann Fam Med 2005;3:415–21. https://doi.org/10.1370/afm.348.

63. Burns, A, Donnelly, B, Feyi-Waboso, J, Shephard, E, Calitri, R, Tarrant, M, et al. How do electronic risk assessment tools affect the communication and understanding of diagnostic uncertainty in the primary care consultation? A systematic review and thematic synthesis. BMJ Open 2022;12:e060101. https://doi.org/10.1136/bmjopen-2021-060101.

64. Cox, CL, Miller, BM, Kuhn, I, Fritz, Z. Diagnostic uncertainty in primary care: what is known about its communication, and what are the associated ethical issues? Fam Pract 2021;38:654–68. https://doi.org/10.1093/fampra/cmab023.

65. Dahm, MR, Cattanach, W, Williams, M, Basseal, JM, Gleason, K, Crock, C. Communication of diagnostic uncertainty in primary care and its impact on patient experience: an integrative systematic review. J Gen Intern Med 2023;38:738–54. https://doi.org/10.1007/s11606-022-07768-y.

66. Moulder, G, Harris, E, Santhosh, L. Teaching the science of uncertainty. Diagnosis 2023;10:13–8. https://doi.org/10.1515/dx-2022-0045.

67. World Health Organization. Diagnostic errors: technical series on safer primary care. https://iris.who.int/bitstream/handle/10665/252410/9789241511636-eng.pdf [accessed July 2025].

68. Partnership for Patient Safety. https://p4ps.net/ [accessed July 2025].

69. Pulse Center for Patient Safety Education and Advocacy. https://www.pulsecenterforpatientsafety.org/ [accessed July 2025].

70. Coalition to Improve Diagnosis in Medicine, board of directors. https://www.improve-dx.org/board-directors [accessed July 2025].

71. World Health Organization. Patients for patient safety. https://www.who.int/initiatives/patients-for-patient-safety [accessed July 2025].

72. Institute for Patient- and Family-Centered Care. Effective patient and family advisory councils. https://www.ipfcc.org/bestpractices/sustainable-partnerships/engaging/effective-pfacs.html [accessed July 2025].

73. Bradford, A, Singh, H. Toolkit for engaging patients to improve diagnostic safety. Agency for Healthcare Research and Quality. https://www.ahrq.gov/diagnostic-safety/tools/engaging-patients-improve.html [accessed July 2025].

74. Bell, SK, Harcourt, K, Dong, J, DesRoches, C, Hart, NJ, Liu, SK, et al. Patient and family contributions to improve the diagnostic process through the OurDX electronic health record tool: a mixed method analysis. BMJ Qual Saf 2024;33:597–608. https://doi.org/10.1136/bmjqs-2022-015793.

75. Zimolzak, AJ, Wei, L, Mir, U, Gupta, A, Vaghani, V, Subramanian, D, et al. Machine learning to enhance electronic detection of diagnostic errors. JAMA Netw Open 2024;7:e2431982. https://doi.org/10.1001/jamanetworkopen.2024.31982.

76. Leapfrog Group. Recognizing excellence in diagnosis. https://www.leapfroggroup.org/sites/default/files/Files/Recognizing%20Excellence%20in%20Diagnosis%20Report.pdf.

Received: 2025-08-01
Accepted: 2025-08-18
Published Online: 2025-09-16

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
