Explainable Artificial Intelligence (XAI)

Recent Submissions

  • Assessing the Fidelity of Explanations with Global Sensitivity Analysis
    (2023-01-03) Smith, Michael; Acquesta, Erin; Smutz, Charles; Rushdi, Ahmad; Moss, Blake
    Many explainability methods have been proposed as a means of understanding how a learned machine learning model makes decisions and as an important factor in responsible and ethical artificial intelligence. However, explainability methods often do not fully and accurately describe a model's decision process. We leverage the mathematical framework of global sensitivity analysis to reveal deficiencies of explanation methods. We find that current explainability methods fail to capture prediction uncertainty and make several simplifying assumptions with significant ramifications for the accuracy of the resulting explanations. We show that these simplifying assumptions yield explanations that (1) fail to model nonlinear interactions in the model and (2) misrepresent the importance of correlated features. Experiments suggest that failing to capture nonlinear feature interactions has the larger impact on explanation accuracy. Thus, because most state-of-the-art ML models have nonlinear interactions and operate on correlated data, explanations should be used only with caution.
  • Introduction to the Minitrack on Explainable Artificial Intelligence (XAI)
    (2023-01-03) Abedin, Babak; Meske, Christian; Rabhi, Fethi; Klier, Mathias
  • Explaining Explainable Artificial Intelligence: An integrative model of objective and subjective influences on XAI
    (2023-01-03) Alarcon, Gene; Willis, Sasha
    Explainable artificial intelligence (XAI) is a new field within artificial intelligence (AI) and machine learning (ML). XAI offers a transparency into AI and ML that can close the information gap left by “black-box” ML models. Given its nascency, several taxonomies of XAI exist in the literature. The current paper incorporates these taxonomies into one unifying framework, which defines the types of explanations, types of transparency, and model methods that together inform the user's process of developing trust in AI and ML systems.
  • Transparent Artificial Intelligence and Human Resource Management: A Systematic Literature Review
    (2023-01-03) Votto, Alexis; Liu, Charles Zhechao
    As the expansion of Artificial Intelligence (AI) reaches various industries, Human Resource Management (HRM) has attempted to keep pace with the capabilities and challenges these technologies bring. When adopting AI, transparency in HRM decisions is increasingly demanded to establish ethical, unbiased, and fair practices within a firm. To this end, explainable AI (XAI) methods have become vital to achieving transparency in HRM decision-making. There has thus been growing interest in identifying successful XAI techniques, as evidenced by the systematic literature review (SLR) performed in this paper. Our SLR starts by mapping where AI is used within HRM. We then review the literature on XAI and accuracy, XAI design, accountability, and data-processing initiatives within HRM. The integrated framework we propose provides an avenue to bridge the gap between transparent HRM practices and AI, giving the industrial and academic communities better insight into where XAI could fit within HRM processes.