Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations

dc.contributor.author Förster, Maximilian
dc.contributor.author Hühn, Philipp
dc.contributor.author Klier, Mathias
dc.contributor.author Kluge, Kilian
dc.date.accessioned 2020-12-24T19:14:24Z
dc.date.available 2020-12-24T19:14:24Z
dc.date.issued 2021-01-05
dc.description.abstract The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users’ trust in AI systems. Coherence is an essential property of explanations but is not yet addressed sufficiently by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations, which is applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. Results suggest that our approach produces explanations that are perceived as coherent as well as suitable to explain the factual situation.
dc.format.extent 10 pages
dc.identifier.doi 10.24251/HICSS.2021.155
dc.identifier.isbn 978-0-9981331-4-0
dc.identifier.uri http://hdl.handle.net/10125/70767
dc.language.iso en
dc.relation.ispartof Proceedings of the 54th Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Explainable Artificial Intelligence (XAI)
dc.subject black box explanations
dc.subject design science
dc.subject human-grounded evaluation
dc.subject user-centric xai
dc.title Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations
prism.startingpage 1274
Files
Original bundle
Name: 0126.pdf
Size: 461.61 KB
Format: Adobe Portable Document Format