
Transforming healthcare with machine learning

  • M. Mahalakshmi, S. Sujatha, S. Banumathi, V. Purushothaman and K. Suresh Kumar

Abstract

Deep learning (DL) models in materials science are revolutionizing the design and characterization of materials, but their complexity and opaque internal workings make them hard to interpret. In this chapter, we address a critical issue of current DL frameworks: interpretability, with a focus on post hoc and intermediate processing techniques that make model predictions more readable. One of the main challenges is that the relationship between input and output is often unclear, because DL methods operate in high-dimensional nonlinear spaces. The chapter separately discusses interpretable model components, visualization methods, and local explanation methods such as Shapley values and local interpretable model-agnostic explanations (LIME). It then examines trade-offs such as that between model accuracy and interpretability, since performance gains from more complex and sophisticated models typically come at the cost of interpretability for end users. Perspectives for further studies are described, with an emphasis on combining domain knowledge with machine learning models under domain-oriented interpretability protocols for materials science. This work should help close the gap between abstract DL schemes and tangible materials science applications, thus encouraging the broader acceptance of AI technologies in materials science and engineering research and future application development.
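
The local explanation methods named above can be illustrated with a short sketch. The following example is not taken from the chapter; it is a minimal, hypothetical illustration of post hoc Shapley-value attribution applied to a synthetic surrogate regressor standing in for a trained materials-property model, using scikit-learn and the shap library. The feature names and data are invented for demonstration only.

```python
# Minimal sketch (assumption, not from the chapter): Shapley-value attribution
# for a surrogate materials-property regressor on synthetic descriptor data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
import shap

rng = np.random.default_rng(0)

# Hypothetical composition/processing descriptors and a target property.
feature_names = ["mean_atomic_radius", "electronegativity_diff",
                 "valence_electrons", "anneal_temp_K"]
X = rng.normal(size=(200, len(feature_names)))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Black-box surrogate standing in for a trained DL property predictor.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer attributes each prediction to the input descriptors,
# making the input-output relationship inspectable per sample.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

for i, row in enumerate(shap_values):
    contribs = sorted(zip(feature_names, row), key=lambda t: -abs(t[1]))
    print(f"sample {i}: " + ", ".join(f"{n}={v:+.3f}" for n, v in contribs))
```

A LIME-based variant could be obtained by replacing the explainer with lime.lime_tabular.LimeTabularExplainer; in either case the per-feature contributions give end users a local, inspectable account of an otherwise opaque prediction, which is the trade-off the chapter weighs against raw model accuracy.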

