Abstract
Despite substantial gains facilitated by Artificial Intelligence (AI) in recent years, it has to be applied very cautiously in sensitive domains like medicine due to the lack of explainability of many methods in this field. We aim to provide a system to overcome these issues of medical AI applications by means of our concept of medical operational AI detailed in this paper. We make use of various methods of AI and utilize knowledge graphs in particular. The latter are continuously updated by medical experts based on medical literature such as peer-reviewed papers and standard online sources such as UpToDate. We thoroughly derive a multi-level system tackling the corresponding challenges. In particular, its design encompasses (i) holistic diagnostic assistance on a macro level, (ii) predictions and detailed suggestions for specific medical domains on a micro level, as well as (iii) AI-based optimizations of the overall system on a meta level. We detail practical merits of medical operational AI and discuss the state of the art beyond our solution.
Introduction
Despite the substantial gains facilitated by AI in recent years, the corresponding intricate systems often come with a substantial drawback: a lack of explainability. In general, so-called Explainable Artificial Intelligence (XAI) makes AI intelligible to its (human) users [1]. Especially in sensitive domains such as medicine, AI is required to provide not only an accurate prediction but also transparent reasons for that prediction [2]. We aim to overcome these issues of AI applications in sensitive medical domains. In further pursuing our research vision [cf. 3, 4], we develop a system explicating how AI can be used in medical domains. In particular, that system’s design encompasses (i) holistic diagnostic assistance on a macro level, (ii) predictions and detailed suggestions for specific medical domains on a micro level, as well as (iii) AI-based optimizations of the overall system on a meta level.
We detail the subsequent approach towards a sensitive application of AI methods in medical practice as follows:
We provide a short overview of XAI – one of the prerequisites of applying AI in a sensitive domain such as medicine.
We outline the concept of medical operational AI and develop a multi-level approach for applying AI in medical practice.
We outline our current state of implementing that multi-level approach in medical practice.
We discuss the state of the art and compare it to our solution.
Materials and methods
We rely on various forms of AI in our contributions, which we consider a family of systems aiming for human behavior or intelligence [5]. (Throughout the remainder of this paper, we refrain from using the term Machine Learning (ML) adjacent to AI for the sake of comprehensibility.) Often enough, these AI systems rely on concepts of artificial neural networks and deep learning [cf. 6]. Despite their strengths, their output and decisions are usually opaque and non-interpretable [7]. Interpretability enables the user of an AI model to make sense of its constituent components and thus helps to understand the calculations and logical decisions made to produce an output based on a certain input.
We use the term Medical Operational AI to differentiate our approach from both regular medical operations as well as medical research AI. While nowadays regular medical operations are usually digitized or to some degree even digitally automated, their focus rather lies on documentation of medical processes and communication among different branches of the medical system. In part due to the sensitive nature of the medical domain, applications of state-of-the-art AI technologies are often limited to smaller clinical trials and research projects. Medical Operational AI aims to bridge that gap between state-of-the-art AI and regular medical operations.
We make use of the method of a knowledge graph [8, 9] to represent medical knowledge and its influence on a plethora of diagnoses. In general, a knowledge graph aims to represent a body of knowledge by representing its concepts via nodes connected via their respective interrelations [8]. Then, reasoning on such a knowledge graph means to uncover new relations [10]. In order to do so, we use a rule-based system. Rule-based systems rely on a processing paradigm analogous to "IF-THEN rules" of medical guidelines. Thus, they not only resemble existing decision guidelines in the medical domain, but are also considered explainable by design, since these rules explicate how a decision is made and which data is involved.
Ji et al. [11] provide an overview of different processing paradigms for knowledge graphs, like knowledge representation learning, knowledge acquisition, temporal knowledge graphs, and knowledge aware applications. We consider medical operational AI on both a knowledge graph as well as patient data a special case of knowledge reasoning as a part of knowledge acquisition, since it aims to discover hidden relations between that data and medical graph entities such as diseases. In this case, knowledge reasoning provides an answer to questions like:
Which disease is most likely given the provided patient data?
Such a knowledge reasoning process interrelates patient data with information from the knowledge graph.
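To make this reasoning paradigm concrete, the following minimal Python sketch evaluates a handful of IF-THEN rules against patient data and aggregates them into a ranked list of candidate diagnoses. All entities, thresholds, and weights are illustrative assumptions and do not reproduce the medicalvalues rule set.

```python
# Minimal sketch of rule-based knowledge reasoning over patient data.
# All rules, thresholds, and weights are illustrative assumptions.

patient = {"hemoglobin_g_dl": 10.2, "mcv_fl": 72, "ferritin_ng_ml": 8}

# Each rule links a condition on patient data to a candidate diagnosis.
rules = [
    {"if": lambda p: p["hemoglobin_g_dl"] < 12, "then": "anaemia", "weight": 1.0},
    {"if": lambda p: p["mcv_fl"] < 80, "then": "iron-deficiency anaemia", "weight": 1.0},
    {"if": lambda p: p["ferritin_ng_ml"] < 15, "then": "iron-deficiency anaemia", "weight": 2.0},
]

def reason(patient, rules):
    """Aggregate the weights of all applicable rules per candidate diagnosis."""
    scores, applied = {}, {}
    for rule in rules:
        if rule["if"](patient):
            scores[rule["then"]] = scores.get(rule["then"], 0.0) + rule["weight"]
            applied.setdefault(rule["then"], []).append(rule)
    ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranking, applied

ranking, applied = reason(patient, rules)
print(ranking)  # [('iron-deficiency anaemia', 3.0), ('anaemia', 1.0)]
```

Because every applied rule is retained alongside the score, the output remains traceable to the underlying data, which is the explainable-by-design property described above.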
To ensure the correctness of our medical knowledge graph, multiple medical experts (internal to medicalvalues GmbH as well as external) are part of this process, extracting knowledge from scientific sources and detecting errors. Among these sources are peer-reviewed papers, PubMed (https://pubmed.ncbi.nlm.nih.gov/), UpToDate (https://www.uptodate.com/), etc. In order to aid medical experts in this process, we use Natural Language Processing (NLP) [12, 13] to extract medical knowledge from identified scientific sources. NLP can be seen as the intersection of AI and linguistics and can be used to summarize texts, answer specific questions, or highlight specific parts of texts. Common NLP tasks used to extract information from texts include Named Entity Recognition (NER), Relation Extraction (RE), or Question Answering [14].
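As a sketch of how NLP can assist the experts in this extraction step, the snippet below runs a token-classification pipeline from the Hugging Face transformers library over a sentence; the model identifier is a hypothetical placeholder and would have to be replaced by a validated biomedical NER model.

```python
from transformers import pipeline

# The model name below is a hypothetical placeholder for a biomedical NER model.
ner = pipeline(
    "token-classification",
    model="some-org/biomedical-ner-model",
    aggregation_strategy="simple",
)

sentence = ("Decreased ferritin and a low mean corpuscular volume "
            "suggest iron-deficiency anaemia.")

for entity in ner(sentence):
    # Each prediction carries a label, a confidence score, and character offsets,
    # which the medical experts review before anything enters the knowledge graph.
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```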
Furthermore, we utilize Automated Machine Learning (AutoML) [15], [16], [17], [18], which automatically selects the right AI method for a particular problem. In particular, AutoML automates the search for an apt AI method in a specific scenario and integrates that search with the individual learning procedures.
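To illustrate the kind of search AutoML automates, the sketch below selects among two candidate models by cross-validation on a toy dataset; real AutoML frameworks [15], [16], [17], [18] additionally tune hyperparameters, preprocessing, and ensembles, and both the dataset and the candidate set are placeholders.

```python
from sklearn.datasets import load_breast_cancer  # placeholder dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

# Score each candidate and keep the best one -- the core loop AutoML automates.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```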
Results
We designed a multi-level system for medical operational AI and identified the following three levels (summarized by Table 1), where each one of them comes with its own objectives and challenges:
Table 1: Medical operational AI on macro-, micro- and meta-level.
| | Macro level | Micro level | Meta level |
|---|---|---|---|
| Goal | (Diagnostic) predictions w.r.t. comprehensive body of medical knowledge | AI for specialized medical scenarios | AI for optimizing AI usage on micro- and macro-level |
| Methods | Heuristic, rule-based reasoning on knowledge graph | Scenario-specific selection of AI methods | NLP, missing link detection [3], AutoML |
| Data | Full data on patient and their treatment | Scenario-specific data on patient and their treatment | Full data on patient and their treatment, data on used AI models, AI model usage metrics |
| Examples | Differential diagnosis of patient with fatigue | Predicting future development of hemoglobin levels during cancer treatment | Detecting missing links in the knowledge graph on macro-level |
Macro Level At the macro level, medical operational AI is designed to make predictions for a given patient based on comprehensive medical knowledge. This way, multi-modal data (e.g. symptoms, anamnestic data, laboratory and radiological findings) is processed in a structured way that is transparent to the user.
Micro Level Micro level medical operational AI refers to applications specialized for predictions on a smaller scale. For example, these micro level applications include predictions of laboratory results to enable anticipatory clinical actions.
Meta Level At the meta level, we use AI to ease the creation, updating, as well as maintenance of AI systems on both other levels. Thus, the meta level in particular contributes to automating medical operational AI itself as well as its applications. Beyond the aspect of automation, the meta level also acts as a layer interfacing AI with medicine and medical research.
In the following three subsections, we specify these different levels of medical operational AI in detail and provide information on the methods used as well as required technologies.
Macro level: AI on entirety of medical knowledge
We define the entire body of medical knowledge as a huge set of highly interrelated entities, from high-level conceptualizations such as symptoms and diagnoses down to individual patient data such as age, gender, or laboratory results. Among their interrelations are causal ones like "smoking cigarettes increases the risk of lung cancer" and those explicating similarities like "leukemia is a type of cancer". This representation method is not only intuitive to humans, but also resembles the above-mentioned knowledge graphs [11] – a typical machine-readable knowledge representation. Therefore, we propose to use knowledge graphs to represent the medical body of knowledge in a format intelligible both to humans and machines. A machine and its medical operational AI in particular are then able to (i) make sense of patient data, (ii) interrelate it with that knowledge graph, and (iii) provide a prediction like diagnostic suggestions and recommended further examinations.
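Such a graph can be represented with standard tooling; the sketch below encodes the two example relations as typed edges and shows how a patient observation is interrelated with the graph by following outgoing relations (node and relation names are illustrative only).

```python
import networkx as nx

# Nodes are medical concepts; each edge carries a relation type.
kg = nx.MultiDiGraph()
kg.add_edge("smoking cigarettes", "lung cancer", relation="increases_risk_of")
kg.add_edge("leukemia", "cancer", relation="is_a")
kg.add_edge("low hemoglobin", "anaemia", relation="indicates")

# Interrelating patient data means matching an observation to a graph node
# and following its outgoing relations.
observation = "low hemoglobin"
for _, target, data in kg.out_edges(observation, data=True):
    print(f"{observation} --{data['relation']}--> {target}")
```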
Figure 1 shows an exemplary excerpt from a medical knowledge graph on path-based anaemia diagnostics. It may encompass several different rules on different levels of medical observation such as physical examinations, laboratory and imaging results. Each rule impacts the final diagnosis to a different degree and assesses the influence of various findings, pre-existing diseases, laboratory results, etc. A diagnosis node assigns the corresponding ICD-10-coded disease. Every connected bit of information comes with an assigned scientific source transparently communicating the underlying medical literature. Finally, this path is framed by two path nodes managing meta information such as the path’s medical review status.

Figure 1: Excerpt from the medical knowledge graph showing a diagnostic subgraph for anaemia.
We designed heuristic procedures to cope with the heterogeneous and usually incomplete nature of medical data [cf. 19], where a qualified majority of rules needs to apply for a justified diagnosis. In other words, we propose to infer new information about a patient and their relation to the entities of a medical knowledge graph by means of the heuristic rules and relations the latter embodies.
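A minimal sketch of such a qualified-majority heuristic is given below; the two-thirds threshold, the rules, and the handling of missing values are assumptions made for illustration and do not reproduce the deployed procedure.

```python
def qualified_majority(rules, patient, threshold=2 / 3):
    """Fire only the rules whose inputs are present in the (possibly incomplete)
    patient data; justify the diagnosis if a qualified majority of them applies."""
    applicable, fired = 0, 0
    for required_keys, predicate in rules:
        if all(key in patient for key in required_keys):  # tolerate missing data
            applicable += 1
            if predicate(patient):
                fired += 1
    return applicable > 0 and fired / applicable >= threshold

# Illustrative rules for one diagnosis: (required inputs, condition).
anaemia_rules = [
    (("hemoglobin_g_dl",), lambda p: p["hemoglobin_g_dl"] < 12),
    (("mcv_fl",), lambda p: p["mcv_fl"] < 80),
    (("ferritin_ng_ml",), lambda p: p["ferritin_ng_ml"] < 15),
]

# Ferritin is missing, so the decision rests on the two applicable rules.
print(qualified_majority(anaemia_rules, {"hemoglobin_g_dl": 10.1, "mcv_fl": 72}))  # True
```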
Given the outlined architecture, interpretability is already baked into medical operational AI on the macro level. Furthermore, we require every piece of information encoded in the knowledge graph to be equipped with a medical identifier based on standards like Logical Observation Identifiers Names and Codes (LOINC), International Statistical Classification of Diseases and Related Health Problems 10 (ICD-10), etc. to foster interpretability. Finally, additional contextual information is included in the knowledge graph by referencing the scientific literature each graph entity and relation stems from.
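For illustration, the snippet below attaches such standard identifiers to two example graph entities; the concrete code assignments are given only as examples and are curated by the medical experts in practice.

```python
# Illustrative graph entities annotated with standard medical identifiers.
# The concrete codes are examples; assignments are expert-curated in practice.
entities = {
    "hemoglobin measurement": {"loinc": "718-7"},    # Hemoglobin [Mass/volume] in Blood
    "iron-deficiency anaemia": {"icd10": "D50.9"},   # Iron deficiency anaemia, unspecified
}

for name, codes in entities.items():
    print(name, codes)
```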
Although the perspective of a medical knowledge graph is intuitive and comprehensible to experts and non-experts alike, medicine’s deep scientific roots require its knowledge to be initially represented via papers, books, and other forms of scientific literature. Therefore, a continuous process is required that takes in literature and translates it into knowledge graph entities as well as relations. While this process offers potential for automation, medical experts must be involved in order to validate the produced graph and its accordance with the respective literature. Furthermore, patient data can be involved in the process to recommend optimizations of the graph and suggest previously unknown connections [3] to medical experts.
This concludes our proposal for medical operational AI on a macro level.
Micro level: AI for specialized scenarios
Beyond the big picture of interrelating patient data with the entirety of the medical body of knowledge, we are currently working on a second level of medical operational AI, i.e. the micro level. There, AI provides insights on a detailed level, where the focus lies on the distinct requirements of certain medical domains, the diagnosis of specific diseases, and the needs of different kinds of medical practitioners.
Since micro level medical operational AI refers to a plethora of different scenarios, we exemplify it by means of two distinct scenarios.
Predicting future laboratory results Over the course of a particular therapy or differential diagnosis, values like laboratory results are often measured several times in order to monitor a temporal progression. Here, AI and regression methods in particular can provide assistance by projecting the current series of measurements into the future and predicting upcoming values.
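A minimal regression sketch of this idea follows; the hemoglobin series and the linear-trend assumption are purely illustrative, and a deployed model would additionally require uncertainty estimates and clinical validation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Days since therapy start and measured hemoglobin values (illustrative data).
days = np.array([[0], [7], [14], [21], [28]])
hemoglobin = np.array([13.1, 12.4, 11.8, 11.1, 10.6])

model = LinearRegression().fit(days, hemoglobin)

# Project the series two weeks into the future.
future_days = np.array([[35], [42]])
print(model.predict(future_days).round(1))  # approximately [9.9 9.3]
```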
Assistance in imaging-based diagnostics Besides physical examination and measuring laboratory results, specific imaging procedures (e.g. visual recognition of intestinal polyps) are an important part of medical diagnoses. The interpretation of the resulting images is a non-trivial task, where AI can provide valuable suggestions to the respective physician. Computer vision AI can provide suggestions via image classification or the identification of suspicious regions within an image [6, 20, 21]. Due to the abundance of solutions in that area of image-based diagnostics, we currently focus only on the produced imaging results, e.g. “ulceration of the colon is present”.
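As a sketch of the computer-vision side, the snippet below classifies a single endoscopy frame with a pretrained network; the ResNet backbone, the ImageNet weights, and the file name are stand-ins, since a clinical model would have to be fine-tuned and validated on domain-specific, labeled images.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained backbone as a stand-in; a clinical model would be fine-tuned on
# labeled endoscopy images (e.g. "polyp present" vs. "no polyp").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("colonoscopy_frame.png").convert("RGB")  # hypothetical file
with torch.no_grad():
    probabilities = model(preprocess(image).unsqueeze(0)).softmax(dim=1)
print(probabilities.topk(3))
```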
Meta level: AI for medical operational AI
On a meta level, we use AI to ease processes induced on the micro and macro level. Especially on the macro level, AI relies on a structured medical knowledge base in order to infer diagnostic predictions from patient data. Likewise, that knowledge helps to increase the reliability of predictions on the micro level. Therefore, aiming for the best possible outcomes requires access to an up-to-date, reliable, and well-defined knowledge base (as obtained on the macro level, cf. Subsection 3.1).
Maintaining medical knowledge
Before integration into the medical domain, new connections within the medical knowledge graph undergo an elaborate human-governed review (cf. Subsection 3.1). That process of translating medical research literature into machine-readable knowledge can be divided into three steps:
Identify relevant literature containing medical knowledge.
Verify documents for validity.
Extract and structure knowledge in machine-readable format.
Due to the time-consuming nature of this process, we propose to use AI to automate parts of it, while keeping medical experts in control. Over the course of the resulting medical knowledge extraction procedure, medical experts need to stay in the loop by reviewing and adapting (i) the document corpus, (ii) the annotation of documents, as well as (iii) the resulting structured knowledge.
Detecting missing links
We regard the process of medical research as one uncovering missing links and entities (such as diseases, biomarkers, and diagnostic pathways) within the body of medical knowledge, and thus within a medical knowledge graph. Given our medical knowledge graph, we use AI to automatically uncover potentially missing links, too. Still, that process of acquiring knowledge should (i) be strictly separated from the application of knowledge and (ii) include supervision and control by human medical experts. This way, the application of medical operational AI systems can help to detect missing links within a medical knowledge graph and therefore support medical research.
The application of medical operational AI systems produces a high amount of valuable data. With the help of these data, it is possible to detect previously unknown links and consequently improve the system. Approaches that combine the knowledge base with patient data automate the detection of possibly important links [3]. In prior work, we already showed on a real-world dataset that bringing together knowledge and patient data enables revealing previously unknown connections [22]. Consequently, no new data is added to the knowledge base before it has been reviewed by multiple human experts.
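A minimal illustration of such automated candidate-link detection is given below, using a simple neighbourhood-overlap score on an undirected projection of a toy graph; the approach in [3] is considerably more elaborate, and all node names here are illustrative. In line with the review requirement above, high-scoring pairs are only suggestions for expert review and never enter the knowledge base directly.

```python
import networkx as nx

# Undirected projection of a small illustrative knowledge graph.
g = nx.Graph()
g.add_edges_from([
    ("low ferritin", "iron-deficiency anaemia"),
    ("low MCV", "iron-deficiency anaemia"),
    ("low ferritin", "restless legs syndrome"),
    ("fatigue", "iron-deficiency anaemia"),
    ("fatigue", "restless legs syndrome"),
])

# Jaccard coefficient over non-adjacent node pairs as a crude link score;
# high-scoring pairs are surfaced as suggestions for medical experts.
candidates = sorted(nx.jaccard_coefficient(g), key=lambda triple: triple[2], reverse=True)
for u, v, score in candidates[:3]:
    print(f"{u} -- {v}: {score:.2f}")
```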
Application examples
In the previous part of this section, we described a systematic approach to tackle the arising challenge of medical operational AI. In this section, we present exemplary results on how that approach implemented by medicalvalues GmbH performs in diagnostic scenarios. The following descriptions represent our current stage of development as of April 2023.
The medicalvalues GmbH employs a group of medical experts constantly translating medical scientific literature into new or updated parts of the knowledge graph. Table 2 shows high-level statistics resulting from this translation process and explicates the amount of medical knowledge present. The current state of the knowledge graph already covers a substantial part of the entire medical knowledge, although the latter’s sheer size and velocity of growth are a continuous challenge.
Table 2: Statistics on the current state of the knowledge graph at medicalvalues GmbH.
| Prominent entity type | Amount |
|---|---|
| Diseases | 2,232 |
| Findings & other symptoms | 2,911 |
| Imaging results & procedures | 1,095 |
| Laboratory results | 1,540 |
| Patient base data | 150 |
| Scientific sources | 1,591 |
| All entities | 19,517 |
| All relations | 41,687 |
For our first example, we use a standard scenario from [23] for iron-deficiency anaemia, illustrated by Figure 2. Here, a wide set of input data enables our medical operational AI to assess various disease risks on a macro level. These data are shown in a comprehensible way (if possible in the format of zlog values [24]) in relation to the correct reference ranges (if applicable) w.r.t. age, gender, ethnicity, etc. Based on decreased readings of hemoglobin, erythrocyte mean corpuscular volume, and mean corpuscular hemoglobin content, a diagnosis of microcytic hypochromic anaemia is shown as likely. The corresponding pros and cons are shown in the user interface (cf. Figures 2 and 3). The latter laboratory finding can be further broken down to the correct diagnosis of suspected iron-deficiency anaemia, as shown by Figure 3. The confidence of the diagnosis is shown by the colored progress bar, indicating the need for further diagnostics w.r.t. iron deficiency, which is supported by the list of suggested diagnostics (e.g. ferritin).

Figure 2: Anemia example: excerpt of the diagnostic expert as part of the medicalvalues diagnostic intelligence.

Figure 3: Detailed explainable diagnosis of iron-deficiency anaemia.
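The zlog values mentioned above follow Hoffmann et al. [24]; for reference, a minimal implementation of the transformation is sketched below, with an illustrative haemoglobin reading and reference interval.

```python
import math

def zlog(value, lower_limit, upper_limit):
    """zlog value following Hoffmann et al. [24]: the measurement is
    log-transformed and standardized such that the reference interval
    maps approximately onto [-1.96, +1.96]."""
    mu = (math.log(lower_limit) + math.log(upper_limit)) / 2
    sigma = (math.log(upper_limit) - math.log(lower_limit)) / 3.92
    return (math.log(value) - mu) / sigma

# Illustrative haemoglobin reading of 9.8 g/dL against a 12-16 g/dL reference range.
print(round(zlog(9.8, 12.0, 16.0), 2))  # approximately -4.7, clearly decreased
```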
Often enough, diagnosing a patient is not as simple as spotting the decreased level of hemoglobin in order to diagnose anaemia. Therefore, we apply macro-level AI to support differential diagnosis and compare the pros and cons of various diseases. This way, the respective physician is able to weigh different potential diagnoses and come to an informed conclusion. Figure 4 illustrates such an example [cf. 25], where polymyalgia rheumatica seems to be the most likely diagnosis, while other potential diagnoses are still left to be investigated. All these investigations are supported by a list of suggested further diagnostics.

Figure 4: Differential diagnosis example: excerpt of the diagnostic expert as part of the medicalvalues diagnostic intelligence.
Finally, a physician comes to a conclusion and determines a diagnosis for a patient. Here, we support the task of preparing a medical report by providing an AI-based tool to automatically write a non-binding text proposal. The respective physician is then able to check that report proposal, (optionally) adapt it, and use it for the final judgement. Figure 5 illustrates such a case with an exemplary diabetes patient.

Figure 5: Automatic report generation example: excerpt of the diagnostic expert as part of the medicalvalues diagnostic intelligence.
Besides the Diagnostic Expert App shown previously, various other medicalvalues applications offer different perspectives and analysis results based on macro-level AI, like a chat-bot assisting in (among other features) differential diagnosis or the investigation of causes of diseases and conditions. In contrast, micro-level AI as described earlier offers medical assistance focused on distinct medical scenarios. As such, specific applications for different medical domains are required. In any case, the medicalvalues Diagnostic Data Studio, based on the Jupyter Notebook technology (cf. https://jupyter.org/), enables medical practitioners and medical AI researchers to access custom AI models (written in R or Python) and easily apply them to their patient data via medicalvalues’ libraries and software (cf. Figure 6). More micro-level AI applications are yet to come.

Figure 6: Screenshot of the diagnostic data studio.
Finally, our meta-level AI enables medical experts to uncover hidden scientific links among medical concepts and provides early alerts about new scientific progress in the medical research domain.
Discussion
Since meta- and micro-level AI is currently still in early development at medicalvalues, we focus on discussing our macro-level AI systems in this section. Yu et al. [26] provide an extensive review of AI in the healthcare domain. They investigate various upcoming technologies in the field of AI and (possible) medical applications. Although they briefly mention the common issue of black-box AI systems and the subsequent lack of explainability, their coverage of this particular topic remains shallow. In contrast, Tjoa and Guan [2] study this issue in detail and emphasize the pressing nature of that problem with a list of pithy questions:
Who is accountable if things go wrong?
Can we explain why things go wrong?
If things are working well, do we know why and how to leverage them further?
We regard medical decision support systems like INTERNIST-1, INTERNIST-2, and their successor CADUCEUS [27, 28] as early, prominent examples of macro-level AI systems. These systems encompass hundreds of diseases and thus guide clinical diagnostics. Unfortunately, the lack of recent studies indicates a halt to their development, and the missing public data on their medical databases and code inhibits further comparison to our systems.
In recent years, the following two companies also came up with offerings in the field of medical decision support. Infermedica (cf. https://infermedica.com/) offers software supporting physicians’ diagnostic processes. Their basic concept aligns with that of medicalvalues – modeling medical correlations to create predictive systems. A key difference is the scope of data points implemented in their solution. Infermedica is only capable of integrating some data points and lacks support for more complex laboratory data. They are taking a less comprehensive approach than medicalvalues: focusing more on basic medical evaluations and purposing their systems for routine diagnostics. Smart Blood Analytics (cf. https://www.smartbloodanalytics.com/) offers diagnostic prediction models based on AI. While they are developing AI-based medical intelligence similar to medicalvalues, macro-level AI is not part of their portfolio. There is no interface or service offer for physicians, and therefore no system for recommending further diagnostic approaches.
Recently, Abu-Salih [29] studied domain-specific knowledge graphs in a survey and elucidated a broad adoption of knowledge graphs in the healthcare domain. Here, we extend our discussion of subsequent medical applications of knowledge graphs from an earlier publication [3].
Similar approaches to our medical knowledge graph are discussed in [30], [31], [32], [33], where knowledge graphs of symptoms and diseases are constructed from Electronic Medical Records (EMR), medical literature, and further sources by means of relation extraction. However, while those approaches build a graph of medical concepts, the medicalvalues knowledge graph uses more complex diagnostic pathways and rules [3]. In recent years, some approaches have been proposed covering particular, constituent fields of medicine [34] or diseases [35]. In contrast, we regard the broader and larger coverage of a variety of diseases as a distinguishing factor of the medicalvalues knowledge graph.
In conclusion, we proposed a multi-level system for so-called medical operational AI and discussed its current implementation at medicalvalues GmbH. While this system is three-fold, encompassing the meta, micro, and macro levels, we focused our discussion of practical results on the latter, the macro level. There, we have shown exemplary results of integrated medical diagnostics and elucidated how AI aids the respective physicians during their decision-making process. Finally, we discussed these results and highlighted the current state of the art.
- Research funding: None declared.
- Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.
- Competing interests: Authors state no conflict of interest.
- Informed consent: Not applicable.
- Ethical approval: Not applicable.
References
1. Samek, W, Montavon, G, Vedaldi, A, Hansen, LK, Müller, K, editors. Explainable AI: interpreting, explaining and visualizing deep learning, vol 11700 of Lecture Notes in Computer Science. Cham: Springer; 2019. https://doi.org/10.1007/978-3-030-28954-6.
2. Tjoa, E, Guan, C. A survey on explainable artificial intelligence (XAI): toward medical XAI. IEEE Trans Neural Netw Learn Syst 2021;32:4793–813. https://doi.org/10.1109/tnnls.2020.3027314.
3. Heilig, N, Kirchhoff, J, Stumpe, F, Plepi, J, Flek, L, Paulheim, H. Refining diagnosis paths for medical diagnosis based on an augmented knowledge graph. CoRR 2022;abs/2204.13329.
4. Stumpe, F, Kirchhoff, J. Diagnoseunterstützung durch künstliche Intelligenz für Labordaten. In: Pfannstiel, MA, editor. Künstliche Intelligenz im Gesundheitswesen: Entwicklungen, Beispiele und Perspektiven. Wiesbaden: Springer Gabler; 2022. pp. 505–19. https://doi.org/10.1007/978-3-658-33597-7_23.
5. Joshi, AV. Machine learning and artificial intelligence. Cham: Springer; 2020. https://doi.org/10.1007/978-3-030-26622-6.
6. LeCun, Y, Bengio, Y, Hinton, GE. Deep learning. Nature 2015;521:436–44. https://doi.org/10.1038/nature14539.
7. Roscher, R, Bohn, B, Duarte, MF, Garcke, J. Explainable machine learning for scientific insights and discoveries. IEEE Access 2020;8:42200–16. https://doi.org/10.1109/access.2020.2976199.
8. Ehrlinger, L, Wöß, W. Towards a definition of knowledge graphs. In: Martin, M, Cuquet, M, Folmer, E, editors. Joint proceedings of the posters and demos track of the 12th international conference on semantic systems (SEMANTiCS 2016) and the 1st international workshop on semantic change & evolving semantics (SuCCESS’16), Leipzig, Germany, vol 1695 of CEUR Workshop Proceedings. CEUR-WS.org; 2016. https://ceur-ws.org/Vol-1695/paper4.pdf.
9. Lan, Y, He, S, Liu, K, Zeng, X, Liu, S, Zhao, J. Path-based knowledge reasoning with textual semantic information for medical knowledge graph completion. BMC Med Inf Decis Making 2021;21:335. https://doi.org/10.1186/s12911-021-01622-7.
10. Chen, X, Jia, S, Xiang, Y. A review: knowledge reasoning over knowledge graph. Expert Syst Appl 2020;141:112948. https://doi.org/10.1016/j.eswa.2019.112948.
11. Ji, S, Pan, S, Cambria, E, Marttinen, P, Philip, SY. A survey on knowledge graphs: representation, acquisition, and applications. IEEE Trans Neural Netw Learn Syst 2021;33:494–514. https://doi.org/10.1109/tnnls.2021.3070843.
12. Nadkarni, PM, Ohno-Machado, L, Chapman, WW. Natural language processing: an introduction. J Am Med Inf Assoc 2011;18:544–51. https://doi.org/10.1136/amiajnl-2011-000464.
13. Chowdhary, KR. Natural language processing. New Delhi: Springer India; 2020. pp. 603–49. https://doi.org/10.1007/978-81-322-3972-7_19.
14. Song, B, Li, F, Liu, Y, Zeng, X. Deep learning methods for biomedical named entity recognition: a survey and qualitative comparison. Briefings Bioinf 2021;22. https://doi.org/10.1093/bib/bbab282.
15. He, X, Zhao, K, Chu, X. AutoML: a survey of the state-of-the-art. Knowl Base Syst 2021;212:106622. https://doi.org/10.1016/j.knosys.2020.106622.
16. Santu, SKK, Hassan, MM, Smith, MJ, Xu, L, Zhai, C, Veeramachaneni, K. AutoML to date and beyond: challenges and opportunities. ACM Comput Surv 2022;54:175:1–175:36. https://doi.org/10.1145/3470918.
17. Thornton, C, Hutter, F, Hoos, HH, Leyton-Brown, K. Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms. In: Dhillon, IS, Koren, Y, Ghani, R, Senator, TE, Bradley, P, Parekh, R, et al., editors. The 19th ACM SIGKDD international conference on knowledge discovery and data mining, KDD 2013. Chicago, IL, USA: ACM; 2013. pp. 847–55. https://doi.org/10.1145/2487575.2487629.
18. Hutter, F, Kotthoff, L, Vanschoren, J, editors. Automated machine learning: methods, systems, challenges. The Springer Series on Challenges in Machine Learning. Cham, Switzerland: Springer; 2019. https://doi.org/10.1007/978-3-030-05318-5.
19. Wagholikar, KB, Sundararajan, V, Deshpande, AW. Modeling paradigms for medical diagnostic decision support: a survey and future directions. J Med Syst 2012;36:3029–49. https://doi.org/10.1007/s10916-011-9780-4.
20. Esteva, A, Chou, K, Yeung, S, Naik, N, Madani, A, Mottaghi, A, et al. Deep learning-enabled medical computer vision. NPJ Digit Med 2021;4:1–9. https://doi.org/10.1038/s41746-020-00376-2.
21. Nawaz, W, Ahmed, S, Tahir, A, Khan, HA. Classification of breast cancer histology images using ALEXNET. In: International conference image analysis and recognition. Cham, Switzerland: Springer; 2018. pp. 869–76. https://doi.org/10.1007/978-3-319-93000-8_99.
22. Ramesh, A, Dhariwal, P, Nichol, A, Chu, C, Chen, M. Hierarchical text-conditional image generation with CLIP latents. CoRR 2022;abs/2204.06125.
23. Pottgießer, T, Ophoven, S, Schorb, E. 80 Fälle Innere Medizin: aus Klinik und Praxis. Elsevier Health Sciences; 2019.
24. Hoffmann, G, Klawonn, F, Lichtinghagen, R, Orth, M. The zlog value as a basis for the standardization of laboratory results. LaboratoriumsMedizin 2017;41:20170135. https://doi.org/10.1515/labmed-2017-0135.
25. Klein, R, Schwarzbach, J. 100 Fälle Allgemeinmedizin, 4th ed. München: Elsevier; 2023.
26. Yu, KH, Beam, AL, Kohane, IS. Artificial intelligence in healthcare. Nat Biomed Eng 2018;2:719–31. https://doi.org/10.1038/s41551-018-0305-z.
27. Banks, G. Artificial intelligence in medical diagnosis: the INTERNIST/CADUCEUS approach. Crit Rev Med Inf 1986;1:23–54. http://europepmc.org/abstract/MED/3331578.
28. Miller, RA, Pople, HE, Myers, JD. INTERNIST-I, an experimental computer-based diagnostic consultant for general internal medicine. In: Reggia, JA, Tuhrim, S, editors. Computer-assisted medical decision making. New York, NY: Springer New York; 1985. pp. 139–58. https://doi.org/10.1007/978-1-4612-5108-8_8.
29. Abu-Salih, B. Domain-specific knowledge graphs: a survey. J Netw Comput Appl 2021;185:103076. https://doi.org/10.1016/j.jnca.2021.103076.
30. Chen, IY, Agrawal, M, Horng, S, Sontag, D. Robustly extracting medical knowledge from EHRs: a case study of learning a health knowledge graph. In: Pacific symposium on biocomputing; 2019. https://doi.org/10.1142/9789811215636_0003.
31. Ernst, P, Meng, C, Siu, A, Weikum, G. KnowLife: a knowledge graph for health and life sciences. In: International conference on data engineering. Chicago, IL, USA: IEEE Computer Society; 2014. pp. 1254–7. https://doi.org/10.1109/ICDE.2014.6816754.
32. Rotmensch, M, Halpern, Y, Tlimat, A, Horng, S, Sontag, D. Learning a health knowledge graph from electronic medical records. Sci Rep 2017;7:1–11. https://doi.org/10.1038/s41598-017-05778-z.
33. Wang, M, Zhang, J, Liu, J, Hu, W, Wang, S, Li, X, et al. PDD graph: bridging electronic medical records and biomedical knowledge graphs via entity linking. In: International semantic web conference. Springer; 2017. pp. 219–27. https://doi.org/10.1007/978-3-319-68204-4_23.
34. Liu, P, Wang, X, Sun, X, Shen, X, Chen, X, Sun, Y, et al. HKDP: a hybrid knowledge graph based pediatric disease prediction system. In: International conference on smart health. Cham, Switzerland: Springer; 2016. pp. 78–90. https://doi.org/10.1007/978-3-319-59858-1_8.
35. Chai, X. Diagnosis method of thyroid disease combining knowledge graph and deep learning. IEEE Access 2020;8:149787–95. https://doi.org/10.1109/access.2020.3016676.
© 2023 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.