
Lights and shadows of artificial intelligence in laboratory medicine

  • Giuseppe Lippi and Mario Plebani
Published/Copyright: February 24, 2025

Although a universally accepted definition does not exist, artificial intelligence (AI) is commonly described as the use of computational systems to simulate human intelligence, with the purpose of performing a vast array of tasks that require reasoning, learning and decision-making [1]. The term “artificial intelligence” was originally coined by John McCarthy and colleagues in the mid-1950s, when it was defined as “the science and engineering of making intelligent machines that exhibit critical thinking comparable to humans” [2]. Since then, remarkable technological advancements have enhanced the power and use of AI tools, making them an integral part of both personal and professional domains. Key components of AI include machine learning (ML), which encompasses algorithms designed to improve autonomously through experience, and deep learning (DL), a subset of ML that uses neural networks with multiple layers to analyze complex data. Neural networks themselves are computational models that replicate the structure of the human brain, enabling capabilities such as pattern recognition and predictive modeling [3]. Generative AI, a specific subset of AI, uses DL models, and is hence primarily focused on generating novel content, including text (e.g., large language models, LLMs), images, music and so forth [3].

In healthcare, AI systems have been developed to improve efficiency and accuracy, particularly in tasks such as diagnosing diseases from imaging data and processing large amounts of clinical and diagnostic information [4]. Due to the extensive use of technology and the potential to generate an immense amount of clinical information (commonly referred to as “big data”) in the form of laboratory test results, AI has gained significant momentum in the field of laboratory medicine [5]. A recent survey conducted by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) revealed considerable interest in AI training among laboratory professionals, even though only a minority (approximately 25 %) of surveyed laboratories reported having active AI projects in place [6].

Regardless of the varying perceptions among laboratory professionals, it is indisputable that AI will continue to play an increasingly prominent role in the organization and operations of medical laboratories. Several AI tools are already widely deployed in clinical laboratories, such as ML models for optimizing sample processing and quality control management [7], automated verification and validation modules integrated with the laboratory information system (LIS) [8], digital morphology techniques in laboratory hematology [9], tools for assisting diagnostic reasoning across various medical fields [10], and tools for supporting the creation and editing of documents. Additionally, DL models are playing an essential role in managing and interpreting the massive datasets generated through genomics and proteomics research.
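To make the autoverification concept above concrete, the following is a minimal sketch of a rule-based autoverification step of the kind that might sit between the analyzer and the LIS. All analyte names, technical limits and delta-check thresholds here are hypothetical illustrations, not clinical recommendations.

```python
# Hypothetical "technical" validation limits per analyte: results outside
# these bounds are held for human review.
REFERENCE_LIMITS = {
    "potassium_mmol_L": (2.5, 6.5),
    "glucose_mg_dL": (40, 450),
}

# Hypothetical delta-check thresholds: maximum plausible change versus the
# patient's previous result.
DELTA_CHECK = {
    "potassium_mmol_L": 1.5,
    "glucose_mg_dL": 120,
}


def autoverify(analyte, value, previous=None):
    """Return 'release' if the result passes all rules, else 'hold for review'."""
    low, high = REFERENCE_LIMITS[analyte]
    if not (low <= value <= high):
        return "hold for review"  # outside technical limits
    if previous is not None and abs(value - previous) > DELTA_CHECK[analyte]:
        return "hold for review"  # delta check failed
    return "release"
```

In practice such rules are combined with many more checks (instrument flags, critical values, moving averages); the point of the sketch is only that results passing every rule are released automatically, while anything anomalous is escalated to a laboratory professional.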

A paradigmatic example of how AI can support clinical decision-making was provided by a recent single-blind randomized clinical trial, in which physicians specializing in family, internal or emergency medicine were randomized to either use conventional resources alone or to have access to the same resources supplemented with an LLM [10]. Although the use of the LLM did not significantly improve clinical reasoning compared to conventional resources alone (median diagnostic reasoning score per case: 76 vs. 74 %; p=0.60), the median diagnostic reasoning score for the LLM alone was 92 %, exceeding the scores of both physician cohorts by 16 percentage points or more.

Although it is increasingly clear that AI will become an integral part of the organization and activities of medical laboratory services worldwide, significant challenges remain (Table 1). The primary issue is the limited flexibility of AI in interpreting laboratory data. In contrast to human cognition, which can draw on intuitive judgment and gestalt reasoning [11], AI systems currently lack the adaptive capacity to interpret data with equivalent contextual understanding. The same principle applies to image recognition. While DL models can be trained to identify digital images of prototype cell types or other components in body fluids, the inherent heterogeneity in the presentation of these elements – especially the variability of abnormalities across different physiological or pathological states – limits the accuracy of classification. This challenge persists whether specialized digital cell imaging systems or more general AI tools, such as ChatGPT, are used [12]. In all these cases, the current generation of AI tools still needs interactive learning frameworks and human oversight for reviewing the initial classification. Analogously, the interpretation of human symptoms and individual laboratory data by AI often relies on predefined algorithms designed to identify patterns in existing data and conditions. Nevertheless, diseases can present in many diverse ways, and the range of possible laboratory abnormalities associated with different conditions can vary widely, making it challenging to always provide accurate or comprehensive interpretations [5, 13].
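The human-oversight pattern described above is often implemented as confidence-based triage: a classifier's output is accepted only when its confidence is high, and everything else is routed to a morphologist for manual review. The sketch below uses a hypothetical confidence cut-off and stand-in class probabilities rather than a real DL model.

```python
# Hypothetical confidence cut-off below which a classification must be
# reviewed by a human expert.
REVIEW_THRESHOLD = 0.90


def triage(prediction):
    """Accept or escalate a classifier output.

    `prediction` maps candidate cell classes to softmax-like probabilities
    (stand-in values here, not output of a real model).
    """
    best_class = max(prediction, key=prediction.get)
    if prediction[best_class] >= REVIEW_THRESHOLD:
        return "auto-classified: " + best_class
    return "flag for human review"
```

For example, `triage({"blast": 0.55, "lymphocyte": 0.45})` would be escalated, reflecting exactly the kind of ambiguous morphology for which the editorial argues human oversight remains indispensable.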
Additional limitations include the potential for overreliance on AI systems, which could ultimately displace human reasoning; limited flexibility and adaptability, leading to errors in interpreting complex or rare laboratory abnormalities, as previously exemplified; the lack of standardization, as different AI tools may yield inconsistent answers to the same clinical question [14]; regulatory challenges, which may slow integration into routine laboratory practice; the inherent cost of acquiring and regularly updating the software; the complexity of human–AI interaction; the lack of transparency and explainability, owing to the often opaque nature of the algorithms; accountability concerns when errors occur; and ethical and legal issues related to privacy, data security and potential bias in decision-making [15]. Finally, an external validation set is always needed to ensure the reproducibility and generalizability of any AI tool or algorithm.
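The external-validation requirement mentioned above can be sketched in a few lines: a model tuned on an internal dataset is re-evaluated, unchanged, on data from another site, and only the external accuracy is taken as an estimate of generalizability. The single-analyte threshold "model" and both datasets below are hypothetical.

```python
def fit_threshold(data):
    """Pick the cut-off maximizing accuracy on (value, label) pairs."""
    def accuracy(t):
        return sum((v >= t) == bool(y) for v, y in data) / len(data)
    return max(sorted(v for v, _ in data), key=accuracy)


def evaluate(data, threshold):
    """Accuracy of the fixed threshold on an independent dataset."""
    return sum((v >= threshold) == bool(y) for v, y in data) / len(data)


# Hypothetical (value, label) pairs from the developing laboratory...
internal = [(2.1, 0), (2.4, 0), (3.0, 0), (5.2, 1), (6.1, 1), (7.0, 1)]
# ...and from an external site, never used for tuning.
external = [(2.0, 0), (3.4, 0), (4.9, 1), (5.5, 1), (6.5, 1), (3.0, 0)]

t = fit_threshold(internal)
internal_acc = evaluate(internal, t)  # optimistic: same data used for tuning
external_acc = evaluate(external, t)  # honest estimate of generalizability
```

In this toy example the internal accuracy is perfect while the external accuracy is lower, illustrating why performance reported only on development data tends to overstate how an AI tool will behave elsewhere.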

Table 1:

The lights and shadows of artificial intelligence (AI) in laboratory medicine.

Advantages
  1. Possibility to streamline processes and activities by automating routine tasks

  2. Autoverification and autovalidation of laboratory data, enhancing efficiency and accuracy

  3. Assistance in digital image recognition for improved diagnostic accuracy

  4. Support in interpretation of laboratory data, aiding clinical decision-making

  5. Facilitated creation and editing of documents

  6. Management and analysis of big data


Limitations

  1. Limited flexibility and adaptability in handling unexpected or complex data

  2. Potential errors in interpreting complex or rare laboratory abnormalities

  3. Overreliance on AI, potentially diminishing human clinical judgment

  4. Lack of standardization, leading to inconsistent results between different AI tools

  5. Regulatory requirements, which may slow down AI integration

  6. High costs related to setup, maintenance and training for AI systems

  7. Complexity in human–AI interaction

  8. Lack of transparency and explainability in AI decision-making processes

  9. Poor accountability for errors made in data interpretation

  10. Ethical and legal concerns, including data privacy issues and algorithmic bias

  11. Need for external validation to ensure reproducibility and generalizability

In conclusion, while AI holds promise to revolutionize laboratory medicine by enhancing efficiency, diagnostic accuracy and data management, it also carries significant challenges. The integration of AI into routine laboratory practice must be approached with caution, addressing key unresolved issues such as risk of overreliance, limited flexibility and potential for bias and errors. Regulatory, ethical and legal concerns must also be addressed to ensure that AI is deployed transparently and accountably. As AI continues to evolve, its role in laboratory medicine services will expand, but its successful implementation will require active collaboration among laboratory professionals, technology developers, policymakers and healthcare managers.


Corresponding author: Prof. Giuseppe Lippi, MD, Section of Clinical Biochemistry, University Hospital of Verona, Piazzale L.A. Scuro, 10, 37134 Verona, Italy, E-mail:

  1. Research ethics: Not applicable.

  2. Informed consent: Not applicable.

  3. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  4. Use of Large Language Models, AI and Machine Learning Tools: None declared.

  5. Conflict of interest: The authors state no conflict of interest.

  6. Research funding: None declared.

  7. Data availability: Not applicable.

  8. Article Note: A translation of this article can be found here: https://doi.org/10.1515/almed-2025-0039.

References

1. Amisha, Pathania, M, Rathaur, VK. Overview of artificial intelligence in medicine. J Fam Med Prim Care 2019;8:2328–31. https://doi.org/10.4103/jfmpc.jfmpc_440_19.

2. McCarthy, J, Minsky, ML, Rochester, N, Shannon, CE. A proposal for the Dartmouth Summer Research Project on artificial intelligence, August 31, 1955. AI Mag 2006;27:12–4.

3. Hulsen, T. Literature analysis of artificial intelligence in biomedicine. Ann Transl Med 2022;10:1284. https://doi.org/10.21037/atm-2022-50.

4. Bekbolatova, M, Mayer, J, Ong, CW, Toma, M. Transformative potential of AI in healthcare: definitions, applications, and navigating the ethical landscape and public perspectives. Healthcare (Basel) 2024;12:125. https://doi.org/10.3390/healthcare12020125.

5. Cadamuro, J, Cabitza, F, Debeljak, Z, De Bruyne, S, Frans, G, Perez, SM, et al. Potentials and pitfalls of ChatGPT and natural-language artificial intelligence models for the understanding of laboratory medicine test results. An assessment by the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM) Working Group on Artificial Intelligence (WG-AI). Clin Chem Lab Med 2023;61:1158–66. https://doi.org/10.1515/cclm-2023-0355.

6. Cadamuro, J, Carobene, A, Cabitza, F, Debeljak, Z, De Bruyne, S, van Doorn, W, et al. A comprehensive survey of artificial intelligence adoption in European laboratory medicine: current utilization and prospects. Clin Chem Lab Med 2025;63:692–703. https://doi.org/10.1515/cclm-2024-1016.

7. You, J, Seok, HS, Kim, S, Shin, H. Advancing laboratory medicine practice with machine learning: swift yet exact. Ann Lab Med 2025;45:22–35. https://doi.org/10.3343/alm.2024.0354.

8. Guidi, GC, Poli, G, Bassi, A, Giobelli, L, Benetollo, PP, Lippi, G. Development and implementation of an automatic system for verification, validation and delivery of laboratory test results. Clin Chem Lab Med 2009;47:1355–60. https://doi.org/10.1515/CCLM.2009.316.

9. Kratz, A, Lee, SH, Zini, G, Riedl, JA, Hur, M, Machin, S. Digital morphology analyzers in hematology: ICSH review and recommendations. Int J Lab Hematol 2019;41:437–47. https://doi.org/10.1111/ijlh.13042.

10. Goh, E, Gallo, R, Hom, J, Strong, E, Weng, Y, Kerman, H, et al. Large language model influence on diagnostic reasoning: a randomized clinical trial. JAMA Netw Open 2024;7:e2440969. https://doi.org/10.1001/jamanetworkopen.2024.40969.

11. Cervellin, G, Borghi, L, Lippi, G. Do clinicians decide relying primarily on Bayesians principles or on Gestalt perception? Some pearls and pitfalls of Gestalt perception in medicine. Intern Emerg Med 2014;9:513–9. https://doi.org/10.1007/s11739-014-1049-8.

12. Negrini, D, Pighi, L, Tosi, M, Lippi, G. Evaluating the accuracy of ChatGPT in classifying normal and abnormal blood cell morphology. Clin Chem Lab Med 2025. https://doi.org/10.1515/cclm-2024-1469. [Epub ahead of print].

13. Meyer, A, Soleman, A, Riese, J, Streichert, T. Comparison of ChatGPT, Gemini, and Le Chat with physician interpretations of medical laboratory questions from an online health forum. Clin Chem Lab Med 2024;62:2425–34. https://doi.org/10.1515/cclm-2024-0246.

14. Lippi, G, Mattiuzzi, C, Favaloro, EJ. Reliability of generative artificial intelligence in identifying the major risk factors for venous thrombosis. Blood Coagul Fibrinolysis 2024;35:354–5. https://doi.org/10.1097/MBC.0000000000001322.

15. Pennestrì, F, Banfi, G. Artificial intelligence in laboratory medicine: fundamental ethical issues and normative key-points. Clin Chem Lab Med 2022;60:1867–74. https://doi.org/10.1515/cclm-2022-0096.

© 2025 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
