Model Interpretation and Explainability towards Creating Transparency in Prediction Models

dc.contributor.author Dolk, Daniel
dc.contributor.author Kridel, Donald
dc.contributor.author Dineen, Jacob
dc.contributor.author Castillo, David
dc.date.accessioned 2020-01-04T07:20:27Z
dc.date.available 2020-01-04T07:20:27Z
dc.date.issued 2020-01-07
dc.description.abstract Explainable AI (XAI) has a counterpart in analytical modeling, which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company in three stages: we execute and compare four different prediction methods, apply the best-known explainability techniques from the current literature to the model training sets to identify feature importance (FI) (the static case), and cross-check whether the FI set holds up under “what if” prediction scenarios for continuous and categorical variables (the dynamic case). We find inconsistency in FI identification between the static and dynamic cases. We summarize the “state of the art” in model explainability and suggest further research to advance the field.
dc.format.extent 10 pages
dc.identifier.doi 10.24251/HICSS.2020.120
dc.identifier.isbn 978-0-9981331-3-3
dc.identifier.uri http://hdl.handle.net/10125/63859
dc.language.iso eng
dc.relation.ispartof Proceedings of the 53rd Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Big Data and Analytics: Pathways to Maturity
dc.subject explainable ai
dc.subject explainable models
dc.subject prediction models
dc.title Model Interpretation and Explainability towards Creating Transparency in Prediction Models
dc.type Conference Paper
dc.type.dcmi Text
Files
Original bundle
Name: 0096.pdf
Size: 707.89 KB
Format: Adobe Portable Document Format
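The abstract describes a three-stage workflow: train and compare prediction models, derive feature importance (FI) on the training sets (the static case), and cross-check the FI ranking under “what if” prediction scenarios (the dynamic case). Below is a minimal, hypothetical sketch of that static-versus-dynamic comparison in Python; the synthetic dataset, the gradient-boosting model, and the choice of permutation importance are illustrative assumptions, since the paper's actual data and explainability methods are not specified in this record.

# Sketch of the static vs. dynamic FI comparison described in the abstract.
# The dataset, model, and FI technique here are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical loan data standing in for the credit card company's dataset.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Static case: feature importance computed on the training set.
static_fi = permutation_importance(model, X_train, y_train,
                                   n_repeats=10, random_state=0)
static_rank = np.argsort(static_fi.importances_mean)[::-1]

# Dynamic case: perturb each feature in a "what if" scenario and measure
# how much the predicted probabilities shift on held-out data.
base = model.predict_proba(X_test)[:, 1]
shift = []
for j in range(X_test.shape[1]):
    X_what_if = X_test.copy()
    X_what_if[:, j] += X_test[:, j].std()  # one-std-dev "what if" shock
    shift.append(np.abs(model.predict_proba(X_what_if)[:, 1] - base).mean())
dynamic_rank = np.argsort(shift)[::-1]

# Disagreement between the two rankings is the kind of static/dynamic
# inconsistency the abstract reports.
print("static FI ranking :", static_rank)
print("dynamic FI ranking:", dynamic_rank)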