A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions

dc.contributor.author Lockey, Steven
dc.contributor.author Gillespie, Nicole
dc.contributor.author Holm, Daniel
dc.contributor.author Someh, Ida Asadi
dc.date.accessioned 2020-12-24T20:08:30Z
dc.date.available 2020-12-24T20:08:30Z
dc.date.issued 2021-01-05
dc.description.abstract Artificial Intelligence (AI) can benefit society, but it is also fraught with risks. Societal adoption of AI is recognized to depend on stakeholder trust in AI, yet the literature on trust in AI is fragmented, and little is known about the vulnerabilities faced by different stakeholders, making it difficult to draw on this evidence base to inform practice and policy. We undertake a literature review to take stock of what is known about the antecedents of trust in AI, and organize our findings around five trust challenges unique to or exacerbated by AI. Further, we develop a concept matrix identifying the key vulnerabilities to stakeholders raised by each of the challenges, and propose a multi-stakeholder approach to future research.
dc.format.extent 10 pages
dc.identifier.doi 10.24251/HICSS.2021.664
dc.identifier.isbn 978-0-9981331-4-0
dc.identifier.uri http://hdl.handle.net/10125/71284
dc.language.iso English
dc.relation.ispartof Proceedings of the 54th Hawaii International Conference on System Sciences
dc.rights Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject Advances in Trust Research: Artificial Intelligence in Organizations
dc.subject artificial intelligence
dc.subject review
dc.subject stakeholders
dc.subject trust
dc.subject vulnerabilities
dc.title A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions
prism.startingpage 5463
Files: 0534.pdf (468.08 KB, Adobe Portable Document Format)