Explainable Artificial Intelligence

Permanent URI for this collection: https://hdl.handle.net/10125/112428

Recent Submissions

  • Experience Over Explanation: Perceived Transparency in AI-Based Skin Cancer Detection
    (2026-01-06) Jaki, Paula; Knaus, Lukas; Benlian, Alexander
    Artificial intelligence (AI) is increasingly integrated into everyday life and holds great potential for high-stakes domains such as healthcare, for example in early skin cancer detection. However, user trust remains a major barrier to adoption, and prior research has largely treated explainable AI (XAI) approaches as universally applicable rather than accounting for individual differences. In this study, we investigate how three XAI formats (mechanism–modality pairings) shape user trust through perceived transparency in AI-powered skin cancer diagnostics. Using a between-subjects online experiment with 15 dermoscopic images, we show that the effect of XAI format on trust is fully mediated by perceived transparency, and this is significantly moderated by users’ AI experience. Notably, AI experience can reverse the effect, underscoring the importance of tailoring explanations to user backgrounds. These findings advance the understanding of how trust in AI can be more appropriately calibrated and provide guidance for designing personalized XAI in healthcare.
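    A minimal sketch of the moderated-mediation logic this abstract reports (XAI format -> perceived transparency -> trust, moderated by AI experience), written in Python with statsmodels. The file and column names (xai_trust_experiment.csv, xai_format, ai_experience, transparency, trust) are hypothetical placeholders, not the authors' pipeline.

```python
# Hedged sketch: two-step mediation check with a moderator, assuming a
# tidy per-participant dataset. All names below are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("xai_trust_experiment.csv")  # hypothetical data file

# Path a: does XAI format predict perceived transparency, and does the
# effect depend on prior AI experience (the reported moderation)?
model_a = smf.ols(
    "transparency ~ C(xai_format) * ai_experience", data=df
).fit()

# Paths b and c': full mediation shows up as a significant transparency
# coefficient while the direct format effect shrinks toward zero.
model_b = smf.ols(
    "trust ~ transparency + C(xai_format) + ai_experience", data=df
).fit()

print(model_a.summary())
print(model_b.summary())
```
    In practice, a bootstrapped index of moderated mediation would back the significance claims; the two OLS fits above only sketch the structure.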
  • Rationales Derived from Seeking the Stars: A Global XAI Approach for Star Rating Estimations of Reasoning LLMs Based on Textual Online Consumer Reviews
    (2026-01-06) Binder, Markus
    Online consumer reviews are a key element of e-commerce platforms, and large language models (LLMs) are widely used to process them. For instance, LLMs can detect reviews with inconsistencies between the sentiment in the review text and the star rating. This matters because such reviews harm consumer trust in the platform. However, the internal workings of LLMs are nontransparent, so there is strong demand for explainable AI (XAI) approaches in e-commerce. Recently proposed reasoning LLMs offer exciting opportunities here. In this study, we present a global XAI decision tree for star rating estimations of reasoning LLMs based on review texts, deriving features from their thinking process. Our proposed approach yields higher fidelity than alternatives and provides comprehensible insights into the star rating estimations of reasoning LLMs. In this way, the study helps e-commerce platforms develop more trustworthy purchasing experiences.
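    As a hedged sketch of the global surrogate idea described here, the Python below fits a shallow scikit-learn decision tree to reproduce an LLM's star-rating predictions from trace-derived features, then reports fidelity (agreement with the LLM, not with ground truth). The file and column names are hypothetical, and the paper's feature derivation from the thinking process is not reproduced.

```python
# Hedged sketch: global surrogate decision tree for an LLM's star ratings.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X = pd.read_csv("trace_features.csv")                # hypothetical features
y_llm = pd.read_csv("llm_predictions.csv")["stars"]  # the LLM's outputs

X_tr, X_te, y_tr, y_te = train_test_split(X, y_llm, random_state=0)

# A shallow tree stays human-readable, which is the point of a global
# explanation: a few interpretable splits approximating the black box.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

# Fidelity: how often the surrogate agrees with the model it explains.
fidelity = (surrogate.predict(X_te) == y_te).mean()
print(f"fidelity: {fidelity:.2%}")
print(export_text(surrogate, feature_names=list(X.columns)))
```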
  • A Dimensionality-Reduced XAI Framework for Roundabout Crash Severity Insights
    (2026-01-06) Chakraborty, Rohit; Das, Subasish
    Understanding the complex interaction of variables contributing to crash severity at roundabouts is essential for advancing data-driven traffic safety strategies. This study proposes a computational framework that integrates variable importance ranking, unsupervised clustering, and interpretable machine learning to uncover distinct patterns in roundabout crash data. This study applied Cluster Correspondence Analysis (CCA), a dimensionality reduction and clustering technique for categorical variables, to identify latent crash profiles from real-world crash data in Ohio. To enhance transparency and interpretability, it employed SHapley Additive exPlanations (SHAP) to quantify the impact of key features on predicted crash severity within each identified cluster. The analysis revealed heterogeneous patterns involving geometry, lighting conditions, road user characteristics, and vehicle types that differ significantly across clusters. This integrated approach demonstrates how interpretable AI methods can support a nuanced understanding of crash dynamics and guide safety interventions. The findings carry direct implications for adaptive traffic management, infrastructure design, and data-informed policy in roundabout safety. The adopted methodology also highlighted the utility of combining clustering and explainable AI to improve pattern recognition and feature attribution in complex categorical datasets.
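    A hedged sketch of the within-cluster SHAP step: attribute a tree model's severity predictions to encoded crash features for one cluster's rows. The filename and columns are hypothetical, the CCA clustering step is not shown, and severity is simplified to an ordinal 0-4 code (e.g., a KABCO-style scale) so a regressor suffices.

```python
# Hedged sketch: SHAP attributions for crash severity in one CCA cluster.
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

df = pd.read_csv("roundabout_crashes_cluster1.csv")  # one cluster's rows
X = pd.get_dummies(df.drop(columns=["severity"]))    # one-hot categoricals
y = df["severity"]                                   # ordinal 0-4 code

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
shap_values = shap.TreeExplainer(model).shap_values(X)

# Beeswarm summary: which encoded features (lighting, geometry, road
# user type, vehicle type, ...) push predicted severity up or down.
shap.summary_plot(shap_values, X)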
  • Interpretability and Control in Forecasting Support Systems
    (2026-01-06) Feddersen, Leif; Cleophas, Catherine
    Forecasting Support Systems (FSS) exemplify the explainable artificial intelligence (XAI) challenge: their black-box algorithms frequently invite mistrust and harmful overrides. We experimentally compared three FSS designs to evaluate how algorithmic interpretability and control—two commonly proposed remedies—affect human–AI collaboration. We juxtaposed an Opaque baseline against an Interpretable variant that visualizes time-series decomposition and a Control variant that lets users re-parameterize components. In a controlled experiment (n=197) using real-world retail data, plain interpretability reduced the frequency and volume of judgmental adjustments and yielded a small but significant accuracy gain over the baseline. Adding component-level control, however, increased adjustment variance, did not improve average accuracy, and produced heavier error tails; self-reports indicated lower intuitiveness and satisfaction, consistent with higher perceived cognitive load. We conclude that interpretability helps calibrate users’ adjustments, whereas powerful control options introduce considerable risk of overconfident tinkering—insights directly relevant for FSS interface design and organizational governance of AI-assisted forecasting.
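    One plausible reading of the Interpretable variant is a classical time-series decomposition shown to the user. The sketch below decomposes a synthetic weekly retail series with statsmodels; the data and the choice of an additive model are illustrative assumptions, not the study's interface.

```python
# Hedged sketch: the kind of decomposition view an interpretable FSS
# might show. The retail series here is synthetic.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

idx = pd.date_range("2023-01-02", periods=104, freq="W-MON")
rng = np.random.default_rng(0)
t = np.arange(104)
sales = pd.Series(
    100 + 0.5 * t                          # trend
    + 15 * np.sin(2 * np.pi * t / 52)      # yearly seasonality
    + rng.normal(0, 5, 104),               # noise
    index=idx,
)

# Plotting trend/seasonal/residual separately lets users inspect the
# forecast's structure without handing them re-parameterization knobs.
seasonal_decompose(sales, model="additive", period=52).plot()
plt.show()
```
    The study's finding suggests stopping at this view: exposing components aided calibration, while adding controls to re-parameterize them invited overconfident tinkering.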
  • The Role of Explainable AI (XAI) in Organizational Performance: An Empowerment Perspective
    (2026-01-06) Wu, Jun; Lui, Ariel Kam Ha; Song, Yiliao; Fatima, Samar; Boo, Yee Ling
    As explainable AI (XAI) becomes more prevalent in organizational decision-making, its impact on organizational performance remains underexplored and requires deeper investigation. This conceptual paper develops a theoretical model linking XAI to organizational performance through psychological empowerment. Drawing on empowerment theory, we propose that XAI enhances users’ perceived competence, autonomy, and impact, moderated by trust calibration. We present four propositions outlining the relationships between XAI, psychological empowerment, and organizational performance. Our model contributes to the XAI, information systems (IS), and decision analytics literature by offering a socio-technical lens and a foundation for future empirical validation.
  • Improving LLM Interpretability with User-Centric Chain-of-Thought Reasoning
    (2026-01-06) Schröppel, Philipp
    Advances in reasoning capabilities allow large language models (LLMs) to tackle increasingly complex problems, while reasoning traces—intermediate steps toward solutions—open up high-stakes applications by enabling human inspection of AI decision-making. However, current approaches prioritize model performance over human interpretability, limiting effective human-AI collaboration. In this study, we design and evaluate a human-centered approach that structures reasoning traces based on self-contained, verifiable steps, enabling users to independently assess and correct AI reasoning. Our approach uses XML-like tags to encode reasoning content and metadata, facilitating targeted feedback. Evaluation on mathematical reasoning tasks shows our approach maintains equivalent performance to standard Chain-of-Thought reasoning while enhancing interpretability. User studies demonstrate significant improvements in perceived usefulness and ease of use. This work advances understanding of how user-centric design of LLM outputs can better serve human collaboration needs in high-stakes AI deployments.
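    A minimal sketch of the XML-like tagging idea: each reasoning step is self-contained, carries metadata, and can be parsed for inspection. The tag and attribute names (step, id, verifiable, depends_on) are illustrative guesses, not the paper's actual schema.

```python
# Hedged sketch: parse an XML-tagged reasoning trace so each step can
# be inspected or corrected individually. The schema is hypothetical.
import xml.etree.ElementTree as ET

trace = """
<reasoning>
  <step id="1" verifiable="true">12 * 8 = 96</step>
  <step id="2" verifiable="true" depends_on="1">96 + 4 = 100</step>
  <step id="3" verifiable="false">So the answer is 100.</step>
</reasoning>
"""

root = ET.fromstring(trace)
for step in root.iter("step"):
    # Self-contained steps let a user verify or flag each one in
    # isolation and target feedback at the step that fails.
    print(step.get("id"), "verifiable:", step.get("verifiable"),
          "|", step.text.strip())
```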
  • Introduction to the Minitrack on Explainable Artificial Intelligence
    (2026-01-06) Brachten, Florian; Gagnon, Elisa; Abedin, Babak; Förster, Maximilian