Explainable Artificial Intelligence (XAI)
Permanent URI for this collection: https://hdl.handle.net/10125/107432
Recent Submissions
A Mythic Belief Regarding Trust in Artificial Intelligence: Uncovering the Role of Responsibility Perception for AI Use in Decision Makings (2024-01-03) Lee, Kyootai; Cho, Wooje; Woo, Han-Gyun

This study aims to analyze a mechanism of AI responsibility based on attribution theory. It also identifies a new concept, AI locus of control (AI-LOC), reflecting an individual’s belief about the degree to which AI determines decision performance. To this end, we built a website with embedded AI systems where participants longitudinally made corporate credit rating decisions. We created a dynamic panel dataset that includes participants’ decisions per task and their decision performance and attitudes per session. The results revealed that AI-LOC and trust in AI developed in parallel yet differed over time. AI-LOC positively influenced AI use, but trust in AI did not. We reason that individuals are likely to exhibit self-serving biases and adopt an egocentric, disengaged coping strategy in their decision-making with AI. This study contributes to understanding the psychological and behavioral aspects of AI use.

Inclusive and Explainable AI Systems: A Systematic Literature Review (2024-01-03) Girard, Amelie; Zowghi, Didar; Bano, Muneera; Riziou, Marian-Andrei

Explainable AI (XAI) plays a crucial role in enhancing transparency and providing rational explanations to support users of AI systems. Inclusive AI actively seeks to engage and represent individuals with diverse attributes who are affected by and contribute to the AI ecosystem. Both inclusion and XAI advocate for the active involvement of users and stakeholders during the entire AI systems lifecycle. However, the relationship between XAI and Inclusive AI has not been explored. In this paper, we present the results of a systematic literature review with the objective of exploring this relationship in the recent AI research literature. We identified 18 research articles on the topic. Our analysis focused on exploring approaches to: (1) human attributes and perspectives, (2) preferred explanation methods, and (3) human-AI interaction. Based on our findings, we identified potential future XAI research directions and proposed strategies for practitioners involved in the design and development of inclusive AI systems.

Introduction to the Minitrack on Explainable Artificial Intelligence (XAI) (2024-01-03) Klier, Mathias; Meske, Christian; Abedin, Babak; Rabhi, Fethi

Follow Me, Everything Is Alright (or Not): The Impact of Explanations on Appropriate Reliance on Artificial Intelligence (2024-01-03) Walter, Marie Christine

Artificial Intelligence (AI) has the potential to augment human decision making in an astonishing variety of domains. However, its opaque nature is a barrier to appropriate reliance on AI-based decision support. One possible solution stems from the research field of Explainable AI (XAI): creating automatically generated explanations to make the inner functioning of AI understandable to humans. Our research on XAI focuses on understanding the impact of explanations alongside confidence scores on appropriate reliance on AI-based decision support systems. To this end, we conducted a randomized, between-subjects online experiment with 126 participants performing an image classification task. We find that while XAI-based explanations alongside confidence scores improve AI users’ relative positive self-reliance, they simultaneously reduce users’ relative positive AI-reliance. Thus, explanations alongside confidence scores can help reduce AI overreliance but run the risk of causing AI underreliance. Our findings help advance the understanding of explanations as facilitators of appropriate reliance on AI systems.
