Explainable Artificial Intelligence (XAI)

Recent Submissions

  • Item
    Visual Interpretability of Image-based Real Estate Appraisal
    (2022-01-04) Kucklick, Jan-Peter
    Explainability for machine learning is becoming increasingly important in high-stakes decisions such as real estate appraisal. While traditional hedonic house pricing models are fed with hard information based on housing attributes, soft information has recently also been incorporated to increase predictive performance. This soft information can be extracted from image data by complex models such as Convolutional Neural Networks (CNNs). However, these models are opaque, which precludes their use in high-stakes financial decisions. To overcome this limitation, we examine whether a two-stage modeling approach can provide explainability. We combine visual interpretability via Regression Activation Maps (RAM) for the CNN with a linear regression for the overall prediction. Our experiments are based on 62,000 family homes in Philadelphia, and the results indicate that the CNN learns aspects related to vegetation and quality aspects of the house from exterior images, improving the predictive accuracy of real estate appraisal by up to 5.4%.
  • Item
    Validation of AI-based Information Systems for Sensitive Use Cases: Using an XAI Approach in Pharmaceutical Engineering
    (2022-01-04) Polzer, Anna ; Fleiß, Jürgen ; Ebner, Thomas ; Kainz, Philipp ; Koeth, Christoph ; Thalmann, Stefan
    Artificial Intelligence (AI) is adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as the validation and auditing of AI remain unresolved. AI's opaqueness (i.e., its "black box" nature) makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support the validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates the understanding of the system's behaviour. We conducted a case study in pharmaceutical manufacturing, where strict regulatory requirements are present. The validation approach and an XAI prototype were developed through multiple workshops and were then tested and evaluated with interviews. Our approach proved suitable for collecting the required evidence for a software validation, but requires additional effort compared to a traditional software validation. AI validation is an iterative process, and clear regulations and guidelines are needed.
  • Item
    Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Workers Through Explainable Artificial Intelligence
    (2022-01-04) Schemmer, Max ; Kühl, Niklas ; Satzger, Gerhard
    While recent advances in AI-based automated decision-making have shown many benefits for businesses and society, they also come at a cost. It has long been known that a high level of automation of decisions can lead to various drawbacks, such as automation bias and deskilling. In particular, the deskilling of knowledge workers is a major issue, as they are the very people who should also train, challenge, and evolve AI. To address this issue, we conceptualize a new class of Decision Support Systems (DSS), namely Intelligent Decision Assistance (IDA), based on a literature review of two different research streams---DSS and automation. IDA supports knowledge workers without influencing them through automated decision-making. Specifically, we propose to use techniques of Explainable AI (XAI) while withholding concrete AI recommendations. To test this conceptualization, we develop hypotheses on the impacts of IDA and provide first evidence for their validity based on empirical studies in the literature.
  • Item
    Detecting and Understanding Textual Deepfakes in Online Reviews
    (2022-01-04) Kowalczyk, Peter ; Röder, Marco ; Dürr, Alexander ; Thiesse, Frédéric
    Deepfakes endanger business and society. Regarding fraudulent texts created with deep learning techniques, this may become particularly evident for online reviews. Here, customers naturally rely on truthful information about a product or service to adequately evaluate its worthiness. However, in light of the proliferation of deepfakes, customers may increasingly harbour distrust, thereby affecting a retailer's business. To counteract this, we propose a novel IT artifact capable of detecting textual deepfakes and then explaining their peculiarities using explainable artificial intelligence. Finally, we demonstrate the utility of such explanations for the case of online reviews in e-commerce.
  • Item
    An Interpretable Deep Learning Approach to Understand Health Misinformation Transmission on YouTube
    (2022-01-04) Xie, Jiaheng ; Chai, Yidong ; Liu, Xiao
    Health misinformation on social media devastates physical and mental health, invalidates health gains, and potentially costs lives. Deep learning methods have been deployed to predict the spread of misinformation, but they lack interpretability due to their black-box nature. To remedy this gap, this study proposes a novel interpretable deep learning model, Generative Adversarial Network based Piecewise Wide and Attention Deep Learning (GAN-PiWAD), to predict health misinformation transmission on social media. GAN-PiWAD captures the interactions among multi-modal data, offers unbiased estimation of the total effect of each feature, and models the dynamic total effect of each feature. Interpretation of GAN-PiWAD indicates that video description, negative video content, and channel credibility are key features that drive the viral transmission of misinformation. This study contributes to IS with a novel interpretable deep learning model that is generalizable to understanding human decisions. We provide direct implications for designing interventions to identify misinformation, control transmission, and manage infodemics.
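
The Regression Activation Maps (RAM) mentioned in the first item above follow a simple idea: when a CNN ends in global average pooling followed by a linear regression head, each spatial location's contribution to the predicted value can be recovered by weighting the final convolutional feature maps with the regression coefficients. The paper's own implementation is not reproduced here; the sketch below is a minimal, hypothetical illustration of that weighting step using NumPy, with made-up array shapes.

```python
import numpy as np

def regression_activation_map(feature_maps, reg_weights):
    """Sketch of a Regression Activation Map (RAM).

    feature_maps: (C, H, W) activations from the CNN's last conv layer
                  (hypothetical shapes for illustration).
    reg_weights:  (C,) coefficients of the linear regression head that
                  follows global average pooling.

    Returns an (H, W) heatmap: each channel's feature map is scaled by
    its regression coefficient and the channels are summed, showing
    which image regions push the predicted value up or down.
    """
    # Contract the channel axis: sum_c w_c * F_c[h, w] -> (H, W)
    ram = np.tensordot(reg_weights, feature_maps, axes=1)
    # Normalize to [0, 1] so the map can be overlaid on the input image
    ram -= ram.min()
    if ram.max() > 0:
        ram /= ram.max()
    return ram

# Illustrative usage with random activations
fm = np.random.rand(64, 7, 7)   # 64 channels, 7x7 spatial grid
w = np.random.randn(64)         # regression coefficients
heatmap = regression_activation_map(fm, w)  # shape (7, 7), values in [0, 1]
```

The normalization step is only for visualization; the raw weighted sum is what relates the map back to the regression output.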