Authors: Lockey, Steven; Gillespie, Nicole; Holm, Daniel; Someh, Ida Asadi
Dates: 2020-12-24; 2020-12-24; 2021-01-05
ISBN: 978-0-9981331-4-0
URI: http://hdl.handle.net/10125/71284
Abstract: Artificial Intelligence (AI) can benefit society, but it is also fraught with risks. Societal adoption of AI is recognized to depend on stakeholder trust in AI, yet the literature on trust in AI is fragmented, and little is known about the vulnerabilities faced by different stakeholders, making it difficult to draw on this evidence base to inform practice and policy. We undertake a literature review to take stock of what is known about the antecedents of trust in AI, and organize our findings around five trust challenges unique to or exacerbated by AI. Further, we develop a concept matrix identifying the key vulnerabilities to stakeholders raised by each of the challenges, and propose a multi-stakeholder approach to future research.
Extent: 10 pages
Language: English
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Part of: Advances in Trust Research: Artificial Intelligence in Organizations
Keywords: artificial intelligence; review; stakeholders; trust; vulnerabilities
Title: A Review of Trust in Artificial Intelligence: Challenges, Vulnerabilities and Future Directions
DOI: 10.24251/HICSS.2021.664