A Conceptual Model of Trust in Generative AI Systems
Date
2025-01-07
Starting Page
7017
Abstract
Generative Artificial Intelligence (GAI) significantly impacts various sectors, offering innovative solutions in consultation, self-education, and creativity. However, the trustworthiness of GAI outputs is questionable due to the absence of theoretical correctness guarantees and the opacity of Artificial Intelligence (AI) processes. These issues, compounded by potential biases and inaccuracies, pose challenges to GAI adoption. This paper examines trust dynamics in GAI, highlighting its capability, distinct from traditional AI, to generate novel outputs and adapt over time. We introduce a model that analyzes trust in GAI through user experience, operational capabilities, contextual factors, and task types. This work aims to enrich the theoretical discourse and practical approaches in GAI, setting a foundation for future research and applications.
Keywords
Artificial Intelligence Security: Ensuring Safety, Trustworthiness, and Responsibility in AI Systems, generative AI, sector-specific applications, trust
Extent
10
Related To
Proceedings of the 58th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International