Assessing the Fidelity of Explanations with Global Sensitivity Analysis

dc.contributor.author Smith, Michael
dc.contributor.author Acquesta, Erin
dc.contributor.author Smutz, Charles
dc.contributor.author Rushdi, Ahmad
dc.contributor.author Moss, Blake
dc.date.accessioned 2022-12-27T18:56:45Z
dc.date.available 2022-12-27T18:56:45Z
dc.date.issued 2023-01-03
dc.description.abstract Many explainability methods have been proposed as a means of understanding how a learned machine learning model makes decisions and as an important factor in responsible and ethical artificial intelligence. However, explainability methods often do not fully and accurately describe a model's decision process. We leverage the mathematical framework of global sensitivity analysis techniques to reveal deficiencies of explanation methods. We find that current explainability methods fail to capture prediction uncertainty and make several simplifying assumptions that have significant ramifications for the accuracy of the resulting explanations. We show that the simplifying assumptions result in explanations that: (1) fail to model nonlinear interactions in the model and (2) misrepresent the importance of correlated features. Experiments suggest that failing to capture nonlinear feature interaction has a larger impact on the accuracy of the explanations. Thus, as most state-of-the-art ML models have nonlinear interactions and operate on correlated data, explanations should be used with caution.
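The abstract's first failure mode, additive explanations missing nonlinear interactions, can be illustrated with a standard global sensitivity analysis tool. The sketch below is not from the paper's experiments; it uses a toy model f(x1, x2) = x1 * x2 with independent uniform inputs and a Saltelli-style Monte Carlo estimator of first-order Sobol indices. For this model all output variance comes from the interaction term, so any purely additive (first-order) attribution reports near-zero importance for both features even though each is essential.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent sample matrices over the input space [-1, 1]^2,
# as used by Saltelli-style Sobol index estimators.
A = rng.uniform(-1, 1, size=(n, 2))
B = rng.uniform(-1, 1, size=(n, 2))

def f(x):
    """Toy model with a purely multiplicative (interaction-only) response."""
    return x[:, 0] * x[:, 1]

fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

# First-order Sobol index S_i: the share of output variance explained
# by feature i alone, estimated by resampling only column i.
S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]
    S.append(float(np.mean(fB * (f(ABi) - fA)) / var))

# Both first-order indices come out near 0, so the interaction share
# (1 - S_1 - S_2) is near 1: an additive explanation sees "no importance".
print(S, 1 - sum(S))
```

An additive surrogate explanation of this model would assign both features roughly zero attribution, which is exactly the kind of fidelity gap the paper's sensitivity-analysis framework exposes.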
dc.format.extent 10
dc.identifier.doi 10.24251/HICSS.2023.133
dc.identifier.isbn 978-0-9981331-6-4
dc.identifier.uri https://hdl.handle.net/10125/102763
dc.language.iso eng
dc.relation.ispartof Proceedings of the 56th Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Explainable Artificial Intelligence (XAI)
dc.subject artificial intelligence
dc.subject explainability
dc.subject fidelity
dc.subject machine learning
dc.subject sensitivity analysis
dc.title Assessing the Fidelity of Explanations with Global Sensitivity Analysis
dc.type.dcmi text
prism.startingpage 1085
Files
Original bundle
Name: 0105.pdf
Size: 543.18 KB
Format: Adobe Portable Document Format