Validation of AI-based Information Systems for Sensitive Use Cases: Using an XAI Approach in Pharmaceutical Engineering

Date
2022-01-04
Authors
Polzer, Anna
Fleiß, Jürgen
Ebner, Thomas
Kainz, Philipp
Koeth, Christoph
Thalmann, Stefan
Abstract
Artificial Intelligence (AI) is being adopted in many businesses. However, adoption lags behind for use cases with regulatory or compliance requirements, as the validation and auditing of AI remain unresolved. AI's opaqueness (i.e., its "black box" nature) makes validation challenging for auditors. Explainable AI (XAI) is the proposed technical countermeasure that can support the validation and auditing of AI. We developed an XAI-based validation approach for AI in sensitive use cases that facilitates the understanding of a system's behaviour. We conducted a case study in pharmaceutical manufacturing, where strict regulatory requirements apply. The validation approach and an XAI prototype were developed in multiple workshops and then tested and evaluated through interviews. Our approach proved suitable for collecting the evidence required for a software validation, but it demands additional effort compared to a traditional software validation. AI validation is an iterative process, and clear regulations and guidelines are needed.
Keywords
Explainable Artificial Intelligence (XAI), artificial intelligence, IT auditing, pharmaceutical industry, software validation