Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations

dc.contributor.author: Förster, Maximilian
dc.contributor.author: Hühn, Philipp
dc.contributor.author: Klier, Mathias
dc.contributor.author: Kluge, Kilian
dc.date.accessioned: 2020-12-24T19:14:24Z
dc.date.available: 2020-12-24T19:14:24Z
dc.date.issued: 2021-01-05
dc.description.abstract: The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users’ trust in AI systems. Coherence is an essential property of explanations but is not yet addressed sufficiently by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations, which is applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. Results suggest that our approach produces explanations that are perceived as coherent as well as suitable to explain the factual situation.
dc.format.extent: 10 pages
dc.identifier.doi: 10.24251/HICSS.2021.155
dc.identifier.isbn: 978-0-9981331-4-0
dc.identifier.uri: http://hdl.handle.net/10125/70767
dc.language.iso: English
dc.relation.ispartof: Proceedings of the 54th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Explainable Artificial Intelligence (XAI)
dc.subject: black box explanations
dc.subject: design science
dc.subject: explainable artificial intelligence
dc.subject: human-grounded evaluation
dc.subject: user-centric xai
dc.title: Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations
prism.startingpage: 1274

Files

Name: 0126.pdf
Size: 461.61 KB
Format: Adobe Portable Document Format