Model Interpretation and Explainability towards Creating Transparency in Prediction Models

dc.contributor.authorDolk, Daniel
dc.contributor.authorKridel, Donald
dc.contributor.authorDineen, Jacob
dc.contributor.authorCastillo, David
dc.date.accessioned2020-01-04T07:20:27Z
dc.date.available2020-01-04T07:20:27Z
dc.date.issued2020-01-07
dc.description.abstractExplainable AI (XAI) has a counterpart in analytical modeling which we refer to as model explainability. We tackle the issue of model explainability in the context of prediction models. We analyze a dataset of loans from a credit card company in three stages: we execute and compare four different prediction methods; apply the best-known explainability techniques from the current literature to the model training sets to identify feature importance (FI), the static case; and finally cross-check whether the FI set holds up under "what if" prediction scenarios for continuous and categorical variables, the dynamic case. We found inconsistency in FI identification between the static and dynamic cases. We summarize the "state of the art" in model explainability and suggest further research to advance the field. (See the sketch after this record.)
dc.format.extent10 pages
dc.identifier.doi10.24251/HICSS.2020.120
dc.identifier.isbn978-0-9981331-3-3
dc.identifier.urihttp://hdl.handle.net/10125/63859
dc.language.isoeng
dc.relation.ispartofProceedings of the 53rd Hawaii International Conference on System Sciences
dc.rightsAttribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.urihttps://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subjectBig Data and Analytics: Pathways to Maturity
dc.subjectexplainable ai
dc.subjectexplainable models
dc.subjectprediction models
dc.titleModel Interpretation and Explainability towards Creating Transparency in Prediction Models
dc.typeConference Paper
dc.type.dcmiText
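
The static/dynamic feature-importance comparison described in the abstract can be illustrated with a minimal sketch. Everything in it is an assumption for illustration only, not the paper's actual pipeline: synthetic data stands in for the credit card loan dataset, a single gradient-boosting model stands in for the four prediction methods, scikit-learn's permutation importance stands in for the explainability techniques surveyed in the paper, and a one-standard-deviation feature shift stands in for the "what if" scenarios.

# Hedged sketch: static FI on the training set vs. a "what if"
# perturbation check at prediction time. Synthetic data and a single
# model are stand-ins for the paper's loan dataset and four methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for the credit card loan data.
X, y = make_classification(n_samples=2000, n_features=8,
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Static case: feature importance measured on the training set.
static_fi = permutation_importance(model, X_train, y_train,
                                   n_repeats=10, random_state=0)
static_rank = np.argsort(static_fi.importances_mean)[::-1]

# Dynamic case: shift one feature at prediction time ("what if")
# and measure how far the predicted probabilities move.
base = model.predict_proba(X_test)[:, 1]
shifts = []
for j in range(X_test.shape[1]):
    X_whatif = X_test.copy()
    X_whatif[:, j] += X_test[:, j].std()  # one-sigma what-if shift
    shifts.append(np.abs(model.predict_proba(X_whatif)[:, 1] - base).mean())
dynamic_rank = np.argsort(shifts)[::-1]

print("static FI ranking: ", static_rank)
print("dynamic FI ranking:", dynamic_rank)

A disagreement between the two rankings corresponds, in miniature, to the inconsistency between the static and dynamic cases that the abstract reports.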

Files

Original bundle
Name: 0096.pdf
Size: 707.89 KB
Format: Adobe Portable Document Format