A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions

dc.contributor.author: Lockey, Steven
dc.contributor.author: Gillespie, Nicole
dc.contributor.author: Holm, Daniel
dc.contributor.author: Someh, Ida Asadi
dc.date.accessioned: 2020-12-24T20:08:30Z
dc.date.available: 2020-12-24T20:08:30Z
dc.date.issued: 2021-01-05
dc.description.abstract: Artificial Intelligence (AI) can benefit society, but it is also fraught with risks. Societal adoption of AI is recognized to depend on stakeholder trust in AI, yet the literature on trust in AI is fragmented, and little is known about the vulnerabilities faced by different stakeholders, making it difficult to draw on this evidence base to inform practice and policy. We undertake a literature review to take stock of what is known about the antecedents of trust in AI, and organize our findings around five trust challenges unique to or exacerbated by AI. Further, we develop a concept matrix identifying the key vulnerabilities to stakeholders raised by each challenge, and propose a multi-stakeholder approach to future research.
dc.format.extent: 10 pages
dc.identifier.doi: 10.24251/HICSS.2021.664
dc.identifier.isbn: 978-0-9981331-4-0
dc.identifier.uri: http://hdl.handle.net/10125/71284
dc.language.iso: English
dc.relation.ispartof: Proceedings of the 54th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Advances in Trust Research: Artificial Intelligence in Organizations
dc.subject: artificial intelligence
dc.subject: review
dc.subject: stakeholders
dc.subject: trust
dc.subject: vulnerabilities
dc.title: A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions
prism.startingpage: 5463

Files

Original bundle
Name: 0534.pdf
Size: 468.08 KB
Format: Adobe Portable Document Format