Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations
Date
2021-01-05
Starting Page
1274
Abstract
The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users’ trust in AI systems. Coherence is an essential property of explanations but is not yet addressed sufficiently by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations, which is applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. Results suggest that our approach produces explanations that are perceived as coherent as well as suitable to explain the factual situation.
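For orientation, the sketch below illustrates the general idea behind optimization-based counterfactual explanations: search for a minimally changed instance that the model classifies as the desired outcome. It is a generic, Wachter-style illustration in Python on a hypothetical linear classifier with made-up feature values and hyperparameters; it is not the approach proposed in the paper, which additionally optimizes for coherence and supports categorical and mixed data.

# Illustrative only: a minimal, Wachter-style counterfactual search on a toy
# linear classifier. The model, data, and hyperparameters are hypothetical and
# do not reproduce the paper's coherence-oriented approach.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=0.9, lam=5.0, lr=0.1, steps=500):
    """Search for x_cf near x whose predicted probability approaches `target`.

    Minimizes lam * (f(x_cf) - target)^2 + ||x_cf - x||^2 by gradient descent,
    trading off the change in prediction against proximity to the factual instance.
    """
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w  # gradient of (f(x_cf) - target)^2
        grad_dist = 2.0 * (x_cf - x)                         # gradient of ||x_cf - x||^2
        x_cf -= lr * (lam * grad_pred + grad_dist)
    return x_cf

if __name__ == "__main__":
    w, b = np.array([1.5, -2.0, 0.5]), -0.25   # toy linear classifier
    x = np.array([0.2, 0.8, 0.1])              # factual instance (predicted ~0.18)
    x_cf = counterfactual(x, w, b)
    print("factual prediction:       ", round(float(sigmoid(w @ x + b)), 3))
    print("counterfactual instance:  ", np.round(x_cf, 3))
    print("counterfactual prediction:", round(float(sigmoid(w @ x_cf + b)), 3))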
Keywords
Explainable Artificial Intelligence (XAI), black box explanations, design science, human-grounded evaluation, user-centric XAI
Extent
10 pages
Related To
Proceedings of the 54th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)