Validation of AI-based Information Systems for Sensitive Use Cases: Using an XAI Approach in Pharmaceutical Engineering

Date
2022-01-04
Authors
Polzer, Anna
Fleiß, Jürgen
Ebner, Thomas
Kainz, Philipp
Koeth, Christoph
Thalmann, Stefan
Abstract
Artificial Intelligence (AI) is adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as the validation and auditing of AI remain unresolved. AI's opaqueness (i.e., its "black box" nature) makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support the validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates the understanding of the system's behaviour. We conducted a case study in pharmaceutical manufacturing, where strict regulatory requirements apply. The validation approach and an XAI prototype were developed through multiple workshops and then tested and evaluated through interviews. Our approach proved suitable for collecting the evidence required for a software validation, but it demands additional effort compared to a traditional software validation. AI validation is an iterative process, and clear regulations and guidelines are needed.
Keywords
Explainable Artificial Intelligence (XAI), artificial intelligence, IT auditing, pharmaceutical industry, software validation
Extent
10 pages
Related To
Proceedings of the 55th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International