A Conceptual Model of Trust in Generative AI Systems

dc.contributor.author: Tahmasbi, Nargess
dc.contributor.author: Rastegari, Elham
dc.contributor.author: Truong, Minh
dc.date.accessioned: 2024-12-26T21:10:55Z
dc.date.available: 2024-12-26T21:10:55Z
dc.date.issued: 2025-01-07
dc.description.abstract: Generative Artificial Intelligence (GAI) significantly impacts various sectors, offering innovative solutions in consultation, self-education, and creativity. However, the trustworthiness of GAI outputs is questionable due to the absence of theoretical correctness guarantees and the opacity of Artificial Intelligence (AI) processes. These issues, compounded by potential biases and inaccuracies, pose challenges to GAI adoption. This paper examines trust dynamics in GAI, highlighting its capacity, unlike traditional AI, to generate novel outputs and adapt over time. We introduce a model that analyzes trust in GAI along four dimensions: user experience, operational capabilities, contextual factors, and task type. This work aims to enrich the theoretical discourse and practical approaches in GAI, setting a foundation for future research and applications.
dc.format.extent: 10
dc.identifier.doi: 10.24251/HICSS.2025.839
dc.identifier.isbn: 978-0-9981331-8-8
dc.identifier.other: 4febaa56-c13e-4318-8ba1-e9a602953750
dc.identifier.uri: https://hdl.handle.net/10125/109690
dc.relation.ispartof: Proceedings of the 58th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Artificial Intelligence Security: Ensuring Safety, Trustworthiness, and Responsibility in AI Systems
dc.subject: generative AI, sector-specific applications, trust
dc.title: A Conceptual Model of Trust in Generative AI Systems
dc.type: Conference Paper
dc.type.dcmi: Text
prism.startingpage: 7017

Files

Original bundle
Name: 0684.pdf
Size: 452.85 KB
Format: Adobe Portable Document Format