Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI
Date
2023-01-03
Starting Page
501
Abstract
Advancements in computing power and foundational modeling have enabled artificial intelligence (AI) to respond to moral queries with surprising accuracy. This raises the question of whether we trust AI to influence human moral decision-making, to date a uniquely human activity. We explored how a machine agent trained to respond to moral queries (Delphi; Jiang et al., 2021) is perceived by human questioners. Participants queried the agent, presented either as a humanlike robot or as a web client, to determine whether it was morally competent and could be trusted. Participants rated the moral competence and perceived morality of both agents as high, yet found the agent lacking because it could not provide justifications for its moral judgments. While both agents were also rated highly on trustworthiness, participants expressed little intention to rely on such an agent in the future. This work presents an important first evaluation of a morally competent algorithm integrated with a humanlike platform, which could advance the development of moral robot advisors.
Keywords
Human-Robot Interaction, ethics, human-agent teaming, human-likeness, morality, trust
Extent
10 pages
Proceedings of the 56th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International