Reflection with AI
- Type: Bachelor/Master Thesis
- Date: Immediately or by agreement
- Supervisor:
Motivation
The proliferation of AI-assisted systems in high-stakes domains has created a critical gap between AI capabilities and human understanding. While these systems can process vast amounts of information at scale, users often lack the mechanisms to actively engage with, question, and learn from AI-generated outputs. This is particularly acute in domains like disinformation detection, where the consequences of poor decision-making are substantial and the AI's underlying reasoning is rarely transparent.
Objectives
This thesis investigates how interactive, explainable AI systems can support more informed decision-making through understanding and learning. Within the EKILED project, we develop an AI assistant that helps users identify misinformation by combining Large Language Model (LLM)-based detection with Explainable AI (XAI) mechanisms. LLMs detect disinformation at scale, while XAI techniques make the underlying patterns and reasoning transparent, enabling users to understand the AI's logic in the moment and refine their own decision-making strategies over time.
We offer thesis opportunities within this use case to explore how XAI explanations can be designed to support both immediate understanding and longitudinal learning from AI-assisted decision-making. Additionally, broader research questions in reflective AI and human-AI collaboration are open for investigation.
Profile
- Interest in interdisciplinary research of human interaction with AI systems
- Technical skills: basic understanding of the technological foundations of LLMs; programming skills (depending on the approach)
- Self-driven, open learning attitude and curiosity
- Good English skills
Contact
We offer an exciting research topic with strong relevance to both academia and practice, close supervision, and the opportunity to develop theoretical, methodological, and practical skills. If you are interested, please send a current transcript of records, a short CV, and a brief motivation (2–3 sentences) to Julian Benz (julian-david.benz@kit.edu).
Literature
- Kosmyna, N., Hauptmann, E., Yuan, Y. T., et al. (2025). Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. arXiv preprint arXiv:2506.08872.
- Förster, M., Broder, H. R., Fahr, M. C., Klier, M., & Fink, L. (2025). Tell me more, tell me more: the impact of explanations on learning from feedback provided by Artificial Intelligence. European Journal of Information Systems, 34(2), 323-345.
- Förster, M., Schröppel, P., Schwenke, C., Fink, L., & Klier, M. (2024). Choose Wisely: Leveraging Explainable AI to Support Reflective Decision-Making. International Conference on Information Systems.