Managing Trustworthiness in Advanced Autonomous Systems
Date
2025-01-07
Starting Page
7046
Abstract
Cyber-physical systems employed within a decision-making framework require a measure of behavior assurance to characterize their function, and such an assurance measure must accommodate both safety and security considerations in the design and implementation of these components. A key measure relevant to the emergence of data-driven and learning-enabled components is trustworthiness. Trustworthiness is of paramount importance for all safety- and security-relevant systems, and of particular significance for autonomous systems operating without a human pilot or operator in the control-feedback loop. Moreover, emerging data-driven ML and AI-based approaches, which represent quantum jumps in tactical mission capability, demand a measure of confidence prior to their integration into existing decision frameworks. While we may be able to characterize an AI system's capabilities by observing its behavior, we cannot fully understand its limitations or latent negative capabilities that are not evident in the training data, opening the possibility of unintended behaviors and thus a degree of unpredictability. We present a systems engineering methodology, based on mission engineering coupled with system-theoretic analysis, that provides a formal representation of system-level security properties which can be mapped to lower-level subsystem specifications and verified using conventional approaches. We also address some of the underlying computing-platform requirements necessary to ensure safe, secure, trustworthy, and resilient autonomous operations. This paper focuses on factors that impact trustworthiness, methods to estimate and assess it, design approaches that can integrate and support enhanced trustworthiness, and methods to verify and validate trustworthiness-relevant system requirements. Relevant examples are provided in the context of an autonomous aircraft system and its subsystems.
Keywords
Artificial Intelligence Security: Ensuring Safety, Trustworthiness, and Responsibility in AI Systems, autonomy, certification, security, stpa-sec, trustworthiness
Extent
10
Related To
Proceedings of the 58th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International