Direct observation of depression screening: identifying diagnostic error and improving accuracy through unannounced standardized patients

  • Alan Schwartz, Steven Peskin, Alan Spiro and Saul J. Weiner
Published/Copyright: March 18, 2020

Abstract

Background

Depression is substantially underdiagnosed in primary care, despite recommendations for screening at every visit. We report a secondary analysis, focused on depression, of a recently completed study that used unannounced standardized patients (USPs) to measure and improve provider behaviors, documentation, and subsequent claims for real patients.

Methods

Unannounced standardized patients presented incognito in 217 visits to 59 primary care providers in 22 New Jersey practices. We collected USP checklists, visit audio recordings, and provider notes after visits; provided feedback to practices and providers based on the first two visits per provider; and compared care and documentation behaviors in the visits before and after feedback. We obtained real patient claims from the study practices and a matched comparison group and compared the likelihood of visits including International Classification of Diseases, 10th Revision (ICD-10) codes for depression before and after feedback between the study and comparison groups.
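
For readers who want to see the shape of the claims comparison, it amounts to a difference-in-differences logistic regression. The following is a minimal sketch, assuming a hypothetical one-row-per-visit claims extract with illustrative column and file names (has_depression_code, study_group, post_feedback, claims.csv); the study's actual covariate adjustment, matching procedure, and any clustering by provider are not reproduced here.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical claims extract: one row per visit, with 0/1 indicator
    # columns for the outcome, study-group membership, and the postfeedback
    # period. Names are illustrative, not from the study.
    claims = pd.read_csv("claims.csv")

    # Logistic regression with a group x period interaction: the interaction
    # term asks whether depression coding rose more from prefeedback to
    # postfeedback in study practices than in matched comparison practices.
    model = smf.logit(
        "has_depression_code ~ study_group * post_feedback", data=claims
    ).fit()

    print(np.exp(model.params))      # odds ratios; the interaction term is the key estimate
    print(np.exp(model.conf_int()))  # 95% confidence intervals on the odds-ratio scale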

Results

Providers significantly improved their rate of depression screening following feedback [adjusted odds ratio (AOR), 3.41; 95% confidence interval (CI), 1.52–7.65; p = 0.003]. In some cases, expected behaviors were documented even though they had not been performed. The proportion of claims by actual patients that included depression-related ICD-10 codes increased significantly more from prefeedback to postfeedback in the study group than in the matched comparison group (interaction AOR, 1.41; 95% CI, 1.32–1.50; p < 0.001).
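
To make the interaction estimate concrete, it corresponds to the group-by-period coefficient in a logistic model of the following general form (a sketch; the covariates behind the adjusted estimates are omitted):

    \log\frac{p}{1-p} = \beta_0 + \beta_1\,\text{group} + \beta_2\,\text{post} + \beta_3\,(\text{group} \times \text{post}),
    \qquad \text{AOR}_{\text{interaction}} = e^{\beta_3} \approx 1.41

That is, the odds of a claim carrying a depression-related ICD-10 code grew roughly 41% more from prefeedback to postfeedback in the study group than in the comparison group.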

Conclusions

Using USPs, we found significant performance issues in diagnosis of depression, as well as discrepancies in documentation that may reduce future diagnostic accuracy. Providing feedback based on a small number of USP encounters led to some improvements in clinical performance observed both directly and indirectly via claims.

Acknowledgments

Support for this project was provided by a grant from the Robert Wood Johnson Foundation to the American College of Physicians and the Institute for Practice and Provider Performance Improvement, Inc. We gratefully acknowledge the assistance of the American College of Physicians and Horizon Blue Cross Blue Shield of New Jersey staff, particularly Kathleen Feeney, Elizabeth Rubin, Cari Miller, and Yan Zhang. However, the findings do not represent official statements of any of these organizations, and any errors are the authors’.

  1. Author contributions: All the authors have accepted responsibility for the entire content of this submitted manuscript and approved submission.

  2. Research funding: Grant from the Robert Wood Johnson Foundation, Funder Id: http://dx.doi.org/10.13039/100000867, Grant Number: 73791 to the American College of Physicians and the Institute for Practice and Provider Performance Improvement, Inc.

  3. Employment or leadership: A. Schwartz, S. J. Weiner, and A. Spiro conducted this work as employees and board members of the Institute for Practice and Provider Performance Improvement, Inc., with support from the Robert Wood Johnson Foundation and the American College of Physicians. A. Spiro was also an employee of Blue Health Intelligence during the period when the parent study was performed. S. Peskin is an employee of Horizon Blue Cross Blue Shield of New Jersey, and neither he nor Horizon was compensated for their participation.

  4. Honorarium: None declared.

  5. Competing interests: The funding organization(s) played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the report for publication.

  6. Disclosures: None declared.

Received: 2019-12-26
Accepted: 2020-02-26
Published Online: 2020-03-18
Published in Print: 2020-08-27

©2020 Walter de Gruyter GmbH, Berlin/Boston
