Assessing the Fidelity of Explanations with Global Sensitivity Analysis

dc.contributor.author: Smith, Michael
dc.contributor.author: Acquesta, Erin
dc.contributor.author: Smutz, Charles
dc.contributor.author: Rushdi, Ahmad
dc.contributor.author: Moss, Blake
dc.date.accessioned: 2022-12-27T18:56:45Z
dc.date.available: 2022-12-27T18:56:45Z
dc.date.issued: 2023-01-03
dc.description.abstract: Many explainability methods have been proposed as a means of understanding how a learned machine learning model makes decisions and as an important factor in responsible and ethical artificial intelligence. However, explainability methods often do not fully and accurately describe a model's decision process. We leverage the mathematical framework of global sensitivity analysis techniques to reveal deficiencies of explanation methods. We find that current explainability methods fail to capture prediction uncertainty and make several simplifying assumptions that have significant ramifications for the accuracy of the resulting explanations. We show that these simplifying assumptions yield explanations that: (1) fail to model nonlinear interactions in the model and (2) misrepresent the importance of correlated features. Experiments suggest that failing to capture nonlinear feature interactions has the larger impact on the accuracy of the explanations. Thus, since most state-of-the-art ML models have nonlinear interactions and operate on correlated data, explanations should be used with caution.
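The abstract's first failure mode — additive explanations missing nonlinear interactions — can be illustrated with a toy global sensitivity analysis. The sketch below is not from the paper; it uses a hypothetical purely interactive model f(x1, x2) = x1 · x2 and Saltelli-style Monte Carlo estimators of Sobol indices. First-order indices (what an additive attribution can capture) come out near zero for both inputs, while total-order indices come out near one, showing that all of the output variance lives in the interaction term.

```python
import numpy as np

def sobol_indices(f, d, n, rng):
    """Estimate first-order (S_i) and total-order (ST_i) Sobol indices
    via Saltelli-style Monte Carlo with inputs ~ Uniform(-1, 1)."""
    A = rng.uniform(-1.0, 1.0, size=(n, d))
    B = rng.uniform(-1.0, 1.0, size=(n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S, ST = [], []
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only input i
        fABi = f(ABi)
        S.append(np.mean(fB * (fABi - fA)) / var)          # first-order
        ST.append(np.mean((fA - fABi) ** 2) / (2 * var))   # total-order
    return np.array(S), np.array(ST)

# Purely interactive model: no additive main effects at all.
f = lambda X: X[:, 0] * X[:, 1]
rng = np.random.default_rng(0)
S, ST = sobol_indices(f, d=2, n=100_000, rng=rng)
# S is near [0, 0] while ST is near [1, 1]: each input is critical,
# but only through its interaction with the other, which an
# additive feature attribution cannot represent.
```

The gap between S and ST is exactly the interaction share of the variance; for models where this gap is large, explanations built on additive assumptions understate feature importance.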
dc.format.extent: 10
dc.identifier.doi: 10.24251/HICSS.2023.133
dc.identifier.isbn: 978-0-9981331-6-4
dc.identifier.other: bf60a720-ab7d-4202-8291-89a5c25cb7fd
dc.identifier.uri: https://hdl.handle.net/10125/102763
dc.language.iso: eng
dc.relation.ispartof: Proceedings of the 56th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Explainable Artificial Intelligence (XAI)
dc.subject: artificial intelligence
dc.subject: explainability
dc.subject: fidelity
dc.subject: machine learning
dc.subject: sensitivity analysis
dc.title: Assessing the Fidelity of Explanations with Global Sensitivity Analysis
dc.type.dcmi: text
prism.startingpage: 1085

Files

Original bundle
Name: 0105.pdf
Size: 543.18 KB
Format: Adobe Portable Document Format