Explainable Artificial Intelligence (XAI)

Recent Submissions

  • Item
    What Your Radiologist Might be Missing: Using Machine Learning to Identify Mislabeled Instances of X-ray Images
    (2021-01-05) Rädsch, Tim; Eckhardt, Sven; Leiser, Florian; Pandl, Konstantin D.; Thiebes, Scott; Sunyaev, Ali
    Label quality is a common and consequential problem in contemporary supervised machine learning research. Mislabeled instances in a data set may not only degrade the performance of machine learning models but also make it more difficult to explain, and thus trust, their predictions. While extant research has focused primarily on improving label quality ex ante through better labeling processes, more recent work investigates machine learning-based approaches that identify mislabeled instances in training data sets automatically. In this study, we propose a two-stage pipeline for the automatic detection of potentially mislabeled instances in a large medical data set. Our results show that the pipeline successfully detects mislabeled instances, helping us to identify 7.4% of mislabeled instances of Cardiomegaly in the data set. With our research, we contribute to ongoing efforts regarding data quality in machine learning. (A minimal sketch of the underlying idea appears after this list.)
  • Item
    Reviewing the Need for Explainable Artificial Intelligence (xAI)
    (2021-01-05) Gerlings, Julie; Shollo, Arisa; Constantiou, Ioanna
    The diffusion of artificial intelligence (AI) applications in organizations and society has fueled research on explaining AI decisions. The explainable AI (xAI) field is rapidly expanding, with numerous ways of extracting information from and visualizing the output of AI technologies (e.g., deep neural networks). Yet we have a limited understanding of how xAI research actually addresses the need for explainable AI. We conduct a systematic review of the xAI literature and identify four thematic debates central to how xAI addresses the black-box problem. Based on this critical analysis of xAI scholarship, we synthesize the findings into a future research agenda to further the xAI body of knowledge.
  • Item
    Capturing Users’ Reality: A Novel Approach to Generate Coherent Counterfactual Explanations
    (2021-01-05) Förster, Maximilian; Hühn, Philipp; Klier, Mathias; Kluge, Kilian
    The opacity of Artificial Intelligence (AI) systems is a major impediment to their deployment. Explainable AI (XAI) methods that automatically generate counterfactual explanations for AI decisions can increase users’ trust in AI systems. Coherence is an essential property of explanations but is not yet sufficiently addressed by existing XAI methods. We design a novel optimization-based approach to generate coherent counterfactual explanations that is applicable to numerical, categorical, and mixed data. We demonstrate the approach in a realistic setting and assess its efficacy in a human-grounded evaluation. Results suggest that our approach produces explanations that are perceived as coherent as well as suitable for explaining the factual situation. (A generic sketch of optimization-based counterfactual search follows this list.)
  • Item
    AI-Assisted and Explainable Hate Speech Detection for Social Media Moderators – A Design Science Approach
    (2021-01-05) Bunde, Enrico
    To date, the detection of hate speech is still primarily carried out by humans, yet there is great potential for combining human expertise with automated approaches. However, identified challenges include low levels of agreement between humans and machines, owing to the algorithms’ lack of knowledge of, e.g., cultural and social structures. In this work, a design science approach is used to derive design knowledge and develop an artifact through which humans are integrated into the process of detecting and evaluating hate speech. For this purpose, explainable artificial intelligence (XAI) is utilized: the artifact provides explanatory information on why the deep learning model predicted that a text contains hate speech (a sketch of this kind of word-level explanation follows this list). Results show that the instantiated design knowledge, in the form of a dashboard, is perceived as valuable and that XAI features increase the perceived usefulness, ease of use, and trustworthiness of the artifact, as well as the intention to use it.
  • Item
    Introduction to the Minitrack on Explainable Artificial Intelligence (XAI)
    (2021-01-05) Meske, Christian; Abedin, Babak; Junglas, Iris; Rabhi, Fethi
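
Illustrative sketches

For “What Your Radiologist Might be Missing”, the abstract does not specify what the two stages of the pipeline are. The following is a minimal sketch of the general idea behind model-based mislabel detection: flag instances whose assigned label receives low out-of-fold predicted probability. The flag_suspect_labels helper, the logistic-regression model, and the 0.1 threshold are illustrative assumptions, not the authors' method; in the paper's setting, X would hold features extracted from the X-ray images (e.g., by a pretrained CNN).

```python
# Minimal sketch of model-based mislabel detection: flag instances whose
# out-of-fold predicted probability for their assigned label is low.
# The paper's actual two-stage pipeline is not specified in the abstract;
# the model, threshold, and synthetic data below are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def flag_suspect_labels(X, y, threshold=0.1):
    """Return indices of instances whose given label looks implausible."""
    clf = LogisticRegression(max_iter=1000)
    # Out-of-fold probabilities avoid the model memorizing its own labels.
    proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    # Probability the model assigns to each instance's *given* label.
    p_given = proba[np.arange(len(y)), y]
    return np.where(p_given < threshold)[0]

# Toy usage: synthetic features with a few deliberately flipped labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] > 0).astype(int)
y[:10] = 1 - y[:10]               # simulate mislabeled instances
print(flag_suspect_labels(X, y))  # flipped indices should dominate
```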
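For “Capturing Users’ Reality”, the abstract describes an optimization-based approach but not its objective function. Below is a generic gradient-based counterfactual search in the style of Wachter et al. (2017) for purely numerical features: gradient descent moves a copy of the instance until the classifier's output reaches a target probability, while a proximity penalty keeps it close to the original. The counterfactual function, the linear model (w, b), and all hyperparameters are illustrative assumptions; this does not implement the paper's coherence-aware method.

```python
# Generic counterfactual search: find x' close to x for which the
# classifier's predicted probability approaches a target value.
# NOT the coherence-aware method of the paper, only a baseline sketch
# for numerical features; the weights w and bias b are assumed given.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=0.9, lam=0.1, lr=0.05, steps=500):
    """Gradient-descend x' so that sigmoid(w.x'+b) ~ target, staying near x."""
    x_cf = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_cf + b)
        # Gradient of (p - target)^2 + lam * ||x' - x||^2 w.r.t. x'.
        grad = 2 * (p - target) * p * (1 - p) * w + 2 * lam * (x_cf - x)
        x_cf -= lr * grad
    return x_cf

w = np.array([1.5, -2.0, 0.5])
b = -0.2
x = np.array([0.1, 0.8, 0.0])           # factual instance, low probability
print(sigmoid(w @ x + b))               # original prediction (~0.16)
x_cf = counterfactual(x, w, b)
print(sigmoid(w @ x_cf + b), x_cf)      # prediction pushed toward 0.9
```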
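For “AI-Assisted and Explainable Hate Speech Detection”, the abstract does not state which XAI technique the dashboard uses. The sketch below shows one common way to produce the kind of word-level explanation such a dashboard could surface, using LIME on a text classifier. The TF-IDF plus logistic-regression pipeline and the four-example training set are stand-ins for the paper's deep learning model, chosen only so the sketch is self-contained.

```python
# Word-level explanation of a text classifier with LIME, the kind of
# explanative output an XAI moderation dashboard could surface.
# The pipeline and tiny training set are purely illustrative stand-ins
# for the paper's deep learning model.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["you are wonderful", "we love this community",
         "I hate you all", "those people are vermin"]
labels = [0, 0, 1, 1]                       # 1 = hate speech (toy data)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["neutral", "hate"])
exp = explainer.explain_instance("I hate those people",
                                 model.predict_proba, num_features=4)
# Each (word, weight) pair shows how a word pushed the prediction.
print(exp.as_list())
```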