Explainable Artificial Intelligence (XAI)
Recent Submissions
Item
Quantifying Visual Properties of GAM Shape Plots: Impact on Perceived Cognitive Load and Interpretability (2025-01-07)
Kruschel, Sven; Bohlen, Lasse; Rosenberger, Julian; Zschech, Patrick; Kraus, Mathias
Generalized Additive Models (GAMs) offer a balance between performance and interpretability in machine learning. The interpretability of GAMs is expressed through shape plots, which represent the model's decision-making process. However, the visual properties of these plots, e.g., the number of kinks (local maxima and minima), can increase their complexity and the cognitive load imposed on the viewer, compromising interpretability. Our study, including 57 participants, investigates the relationship between the visual properties of GAM shape plots and the cognitive load they induce. We quantify various visual properties of shape plots and evaluate their alignment with participants' perceived cognitive load, based on 144 plots. Our results indicate that the number-of-kinks metric is the most effective, explaining 86.4% of the variance in users' ratings. We develop a simple model based on the number of kinks that provides a practical tool for predicting cognitive load, enabling the assessment of one aspect of GAM interpretability without direct user involvement. (An illustrative kink-counting sketch appears after the item list below.)

Item
Unlocking Empowerment: An Empirical Study on the Impact of Explainable AI in Mental Health Apps (2025-01-07)
Bottesch, Sven; Terhorst, Yannik; Förster, Maximilian
It is anticipated that apps based on artificial intelligence (AI) will be instrumental in mitigating the global shortage in mental healthcare. One important purpose of such apps is to encourage users' self-help. This study examines the potential role of explainable AI (XAI) in mental health apps. We build on the mental health literature to conceptualize the potential effects of explanations in terms of patient empowerment. We implement an online experiment with a fully instantiated mental health app based on a real-world dataset. The randomized between-subjects experiment is conducted with 409 participants to test the effects of feature importance and counterfactual explanations on patient empowerment, intention to use, and intention to act. Our results show that providing counterfactual explanations alongside AI-generated predictions of depression risk in a mental health app can significantly increase users' empowerment and intention to use.

Item
A Review of Reasoning in Artificial Agents Using Large Language Models (2025-01-07)
Naidu, Nagraj; El-Gayar, Omar
The increasing sophistication and use of large language models (LLMs) in artificial agents highlight the need to investigate their reasoning capabilities and limitations. Understanding these aspects is crucial, given the integral role of reasoning in decision-making processes, which are central to a software or embodied agent. This paper presents a systematic review of the topic. We review the literature by selecting and analyzing highly cited papers using both PRISMA and snowballing. The gathered literature is categorized using a detailed framework of facets and categories. In the results section, we elaborate on our findings and illustrate the mapping through bubble-chart visualizations. The paper concludes by highlighting research gaps and suggesting directions for future studies.

Item
Clarity in Complexity: Advancing AI Explainability through Sensemaking (2025-01-07)
Gagnon, Elisa; Deregt, Anouk; Lapointe, Liette
This paper explores Explainable Artificial Intelligence (XAI) through a sensemaking lens, addressing the complexity of the extant literature and providing a comprehensive understanding of the process of explainability. Through an exhaustive review of relevant research, we develop a novel framework highlighting the dynamic interactions between AI systems and users in the co-construction of explanations. Based on a thorough analysis and theoretical synthesis of the extant literature, the framework shows how explainability emerges as a shared process between humans and machines rather than a one-sided output. The proposed framework offers valuable insights for enhancing human-AI interactions and contributes to the theoretical foundation of XAI. The findings pave the way for future research avenues, with implications for both academic investigation and practical applications in designing more transparent and effective AI systems.

Item
Introduction to the Minitrack on Explainable Artificial Intelligence (XAI) (2025-01-07)
Abedin, Babak; Song, Yang; Förster, Maximilian; Meske, Christian
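
To make the kink-count metric from the first item above concrete, here is a minimal sketch, assuming a shape plot is available as the shape function's values sampled at ordered feature values. This is not the authors' published code: the function name count_kinks and the flatness tolerance tol are illustrative choices. The idea is simply to count sign changes in consecutive differences, i.e., local maxima and minima.

```python
import numpy as np

def count_kinks(y, tol=1e-9):
    """Count kinks (local maxima and minima) in a sampled shape function.

    y   : shape-function values at ordered (increasing) feature values.
    tol : differences smaller than tol are treated as flat, so plateaus
          do not inflate the count. (Both names are illustrative, not
          from the paper.)
    """
    diffs = np.diff(np.asarray(y, dtype=float))
    slopes = np.sign(np.where(np.abs(diffs) < tol, 0.0, diffs))
    slopes = slopes[slopes != 0]                    # ignore flat segments
    return int(np.sum(slopes[1:] != slopes[:-1]))   # sign changes = kinks

# Example: one full sine period has one maximum and one minimum -> 2 kinks
x = np.linspace(0.0, 2.0 * np.pi, 200)
print(count_kinks(np.sin(x)))  # prints 2
```

Dropping near-zero differences before comparing signs keeps plateaus, such as the piecewise-constant segments produced by tree-based GAM shape functions, from being counted as extra kinks.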