Junior fellows and distinguished dissertation of the GI & AI for crisis
Ricardo Usbeck, Angelie Kraft and Patrick Westphal
In this issue, we introduce four recent Gesellschaft für Informatik (GI) junior fellows and highlight one distinguished dissertation. These scholars share insights into their research journeys and key contributions to their fields. Additionally, we present a response to our Call for Papers on AI for Crisis, demonstrating how AI-driven approaches can enhance resilience in times of uncertainty.
In his article “Natural Language Processing for Social Good: Contributing to Research and Society,” Daniel Braun illustrates how AI – especially NLP – is changing our world. His appointment as a GI Junior Fellow emphasizes the importance of navigating the responsibilities associated with the growing use of AI. Daniel, a professor in Marburg, highlights key aspects such as trustworthiness, in particular how trustworthy annotated data shapes the development of AI. His article effectively demonstrates how AI in law works and how we – as computer scientists – can influence society.
In the second article, Franziska Boenisch introduces herself in “A Self-Portrayal of GI Junior Fellow Franziska Boenisch: Trustworthy Machine Learning for Individuals” as part of our series on recent GI junior fellows. Franziska is a tenure-track faculty member at the CISPA Helmholtz Center for Information Security, where she co-leads the SprintML lab focused on secure, private, robust, interpretable, and trustworthy machine learning. The author highlights her perspective on user-centered machine learning privacy and how federated learning, differential privacy methods, and even single neurons leak private data. As AI increasingly integrates into our daily lives, safeguarding our privacy is more important than ever.
Mareike Lisker’s article aims to demonstrate how academic careers can be unpredictable and subject to change. She argues that increasing expectations of digital literacy burden users with managing their own data, even though pervasive tracking systems leave them ill-equipped to do so. Think about how little users know about cookies. The article then connects this topic to Lisker’s academic journey, tracing her path from her Master’s research to her current PhD project on content moderation on decentralized platforms. This shows how our research field is not only diversifying its participants and topics but is also becoming increasingly interdisciplinary, which promises to make us, as researchers, more resilient in a rapidly changing world.
Bettina Finzel’s article, “Toward Trustworthy AI with Integrative Explainable AI Frameworks,” explores the pressing challenge of ensuring AI systems are reliable, transparent, and interpretable, particularly in high-stakes domains like healthcare. As AI regulations, such as the European AI Act, continue to shape the landscape, Bettina proposes an integrative Explainable AI (XAI) framework that merges interpretability, interactivity, and robustness to enhance human-AI collaboration. The article highlights methods for evaluating AI models, mitigating bias, and fostering interdisciplinary cooperation to create AI systems that are not only technically sound but also ethically and socially responsible.
The distinguished dissertation “Scalable SAT Solving and Its Application” by Dominik Schreiber, a Young Investigator Group Leader at KIT, focuses on simplifying and accelerating the process of solving SAT problems. The author notably leveraged powerful computing systems, such as supercomputers and cloud computing, to enhance these solutions. The author’s system, MALLOB, has established itself as a world-leading automated reasoning system. This article offers an overview of the research, its key findings, future impact, and some personal reflections from the author.
Finally, we add a paper on how to generate data for AI processes. As some of the previous articles highlight, we arguably live in times of crisis. Responding to our Call for Papers on AI for Crisis, Felix Brei et al. present their approach, “Queryfy: From Knowledge Graphs to Questions Using Open Large Language Models.” Large Language Models (LLMs) can assist with the translation of natural language into SPARQL queries. Thus, questions such as “Which of our suppliers are based in countries with ongoing armed conflicts?” can be answered, and systems built on such datasets can drive the resilience of companies and organizations forward.
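To make this concrete, here is a minimal, hypothetical sketch of what such a question/query pair and its execution could look like. The ontology URIs, the endpoint URL, and the query itself are illustrative assumptions, not material from the Queryfy paper.

```python
# Hypothetical sketch: a natural-language question / SPARQL query pair of the
# kind such a dataset could contain, and how the query might be executed
# against a company knowledge graph. All URIs and the endpoint URL below are
# placeholders for illustration only.
import requests

sample = {
    "question": "Which of our suppliers are based in countries with ongoing armed conflicts?",
    "query": """
        SELECT DISTINCT ?supplier ?country WHERE {
          ?supplier a <http://example.org/ontology/Supplier> ;
                    <http://example.org/ontology/basedIn> ?country .
          ?conflict a <http://example.org/ontology/ArmedConflict> ;
                    <http://example.org/ontology/location> ?country ;
                    <http://example.org/ontology/status> "ongoing" .
        }
    """,
}

# Execute the query against a (placeholder) SPARQL endpoint via plain HTTP.
response = requests.get(
    "http://example.org/sparql",  # placeholder endpoint; replace with a real one
    params={"query": sample["query"], "format": "application/sparql-results+json"},
    timeout=30,
)
for binding in response.json()["results"]["bindings"]:
    print(binding["supplier"]["value"], "is based in", binding["country"]["value"])
```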
As the editor, I want to share a few words: My time with Information Technology has recently ended, and I am grateful that this positive special issue, which highlights how technology – particularly AI – can benefit the world, is my final editorial contribution. Thank you for reading Information Technology.
Sapere Aude!
About the authors

Prof. Dr. Ricardo Usbeck studied Computer Science at the Martin Luther University Halle-Wittenberg (B.Sc. 2010, M.Sc. 2012). In 2017, he completed his Ph.D. at the University of Leipzig. Academic post-doctoral stays at Paderborn University (2017–2019) and Fraunhofer IAIS (2019–2021) followed. From 2021 to 2023, he was an assistant professor with tenure track for Semantic Systems at the University of Hamburg. Since 2023, he has been a full professor of Information Systems, especially Artificial Intelligence (AI) and Explainability, at Leuphana University Lüneburg, Germany. His research interests range from knowledge graphs to large language models, focusing on sustainable, explainable, and robust AI methods. From 2022 to March 2025, he was co-editor-in-chief of it – Information Technology, the oldest German journal in information technology, founded in 1959.

Angelie Kraft is an interdisciplinary researcher in Natural Language Processing (NLP) and AI Ethics. She obtained an M.Sc. in Intelligent Adaptive Systems from the University of Hamburg (2021), a B.Sc. in Computer Science from the University of Bremen (2018), and a B.Sc. in Psychology from the University of Mannheim (2015). She is currently pursuing her doctorate at the University of Hamburg. She is a research associate at Leuphana University Lüneburg in the Artificial Intelligence and Explainability Lab led by Prof. Dr. Ricardo Usbeck. Her research focuses on the ethical and epistemic limitations of language models, particularly in relation to questions of fairness and factual fidelity.

Patrick Westphal is a researcher in semantic technologies as well as logic-based symbolic and sub-symbolic AI. He spent several years in the private sector working for software development and scientific computing companies in Germany and abroad. After obtaining his B.Sc. in Computer Science from the University of Cooperative Education in Leipzig, he graduated with an M.Sc. in Computer Science at Leipzig University. In the following period, he gained experience in various national and EU research projects at the Institute for Applied Informatics (InfAI) in Leipzig and Fraunhofer IAIS in Dresden/Bonn. He works as a research associate at the Hamburger Informatik Technologie-Center (HITeC) in close collaboration with Prof. Usbeck’s group at Leuphana University and is pursuing his doctorate at Leipzig University. His research interests include logic-based formalisms for knowledge representation and reasoning, symbolic and sub-symbolic machine learning techniques, and semantics- and knowledge-driven data integration.
Acknowledgements
This issue was supported by outstanding reviewers who responded quickly and accurately. I also want to thank the Gesellschaft für Informatik for allowing us to publish these articles.
© 2025 the author(s), published by De Gruyter, Berlin/Boston
This work is licensed under the Creative Commons Attribution 4.0 International License.
Articles in the same Issue
- Frontmatter
- Editorial
- Junior fellows and distinguished dissertation of the GI & AI for crisis
- Self-Portrayals of GI Junior Fellows
- Natural Language Processing for Social Good: contributing to research and society
- A self-portrayal of GI Junior Fellow Franziska Boenisch: trustworthy machine learning for individuals
- Between computer science and philosophy, and: on the (im-)possibility of digital literacy
- Toward trustworthy AI with integrative explainable AI frameworks
- Distinguished Dissertations
- On the dissertation “Scalable SAT Solving and its Application”
- Research Article
- Queryfy: from knowledge graphs to questions using open Large Language Models