
Explainable AI: introducing trust and comprehensibility to AI engineering

Nadia Burkart, Danilo Brajovic and Marco F. Huber

Published/Copyright: September 3, 2022

Abstract

Machine learning (ML) is attracting rapidly growing interest due to continuous improvements in performance. ML is used in many different applications to support human users. The representational power of ML models allows them to solve difficult tasks, but it also makes the resulting models impossible for humans to understand. This leaves room for errors and limits the full potential of ML, since such models cannot be applied in critical environments. In this paper, we propose employing Explainable AI (xAI) for both model and data set refinement in order to introduce trust and comprehensibility. Model refinement uses xAI to provide insights into the inner workings of an ML model, to identify limitations, and to derive potential improvements. Similarly, xAI is used in data set refinement to detect and resolve problems in the training data.
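
To make the two refinement steps concrete, the following minimal Python sketch (an illustrative example using scikit-learn, not code from the paper) applies one model-agnostic xAI technique, permutation feature importance, to inspect a trained random forest (model refinement) and then flags low-confidence training samples as candidates for review (data set refinement). The dataset and the confidence threshold are placeholder assumptions chosen only for illustration.

# Illustrative sketch: xAI-based model and data set refinement (assumptions noted above).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Placeholder dataset standing in for an application-specific data set.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model refinement: check which features the trained model actually relies on.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = np.argsort(result.importances_mean)[::-1][:5]
print("Most influential features:", top, result.importances_mean[top])

# Data set refinement: flag training samples the model assigns low probability
# to their own label; these are candidates for label or measurement errors.
# The 0.6 threshold is a heuristic placeholder.
proba = model.predict_proba(X_train)[np.arange(len(y_train)), y_train]
suspicious = np.where(proba < 0.6)[0]
print("Training samples to review:", suspicious)

Implausibly important (or unimportant) features point to model limitations, while the flagged samples give a starting point for cleaning the training data before retraining.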


Funding statement: This work was supported by the Baden-Wuerttemberg Ministry for Economic Affairs, Labour and Tourism (Projects KI-Fortschrittszentrum “Lernende Systeme und Kognitive Robotik” and Competence Center KI-Engineering CC-KING).

About the authors

Nadia Burkart

Nadia Burkart received her Bachelor's degree (2011) and Master's degree (2013) in business informatics from the University of Applied Sciences Karlsruhe. In 2013 she started as a research scientist at Fraunhofer IOSB in Karlsruhe in the field of decision support systems. Since 2021 she has been leading the research group Applied Explainable AI at Fraunhofer IOSB, where she works on various projects on explainable machine learning solutions in several domains. In addition to her project work, she completed her PhD thesis in the field of explainable machine learning in 2021.

Danilo Brajovic

Danilo Brajovic received a Bachelor's degree in computer science and Master's degrees in cognitive science and in computer science from Tübingen University in 2017, 2020, and 2021, respectively. He currently works at the Center for Cyber Cognitive Intelligence (CCI) at Fraunhofer IPA in Stuttgart, Germany. His research focuses on safe AI in industrial applications.

Marco F. Huber

Marco Huber received his diploma, Ph.D., and habilitation degrees in computer science from the Karlsruhe Institute of Technology (KIT), Germany, in 2006, 2009, and 2015, respectively. From June 2009 to May 2011, he led the research group Variable Image Acquisition and Processing at Fraunhofer IOSB, Karlsruhe, Germany. Subsequently, he was a Senior Researcher with AGT International, Darmstadt, Germany, until March 2015. From April 2015 to September 2018, he was responsible for product development and data science services of the Katana division at USU Software AG, Karlsruhe, Germany. At the same time, he was an adjunct professor of computer science at KIT. Since October 2018 he has been a full professor at the University of Stuttgart. He is also director of the Center for Cyber Cognitive Intelligence (CCI) and of the Department for Image and Signal Processing at Fraunhofer IPA in Stuttgart, Germany. His research interests include machine learning, planning and decision making, image processing, data analytics, and robotics. He has authored or co-authored more than 100 publications in various high-ranking journals, books, and conferences, and holds two U.S. patents and one EU patent.


Received: 2022-02-07
Accepted: 2022-07-21
Published Online: 2022-09-03
Published in Print: 2022-09-27

© 2022 Walter de Gruyter GmbH, Berlin/Boston
