Trust Violations in Human-Human and Human-Robot Interactions: The Influence of Ability, Benevolence and Integrity Violations
Date
2022-01-04
Abstract
The present work investigated the effects of trust violations on perceptions and risk-taking behaviors, and how those effects differ between human-human and human-machine collaborations. Participants were paired with either a human or a machine teammate in a derivation of a well-known trust game, in which the teammate committed one of three qualitatively different trust violations (i.e., an ability-, benevolence-, or integrity-based violation of trust). The results showed that ability-based trust violations had the largest impact on perceptions of ability; the other trust violations did not have differential impacts on self-reported ability, benevolence, or integrity, or on risk-taking behaviors, and none of these effects were qualified by whether the partner was a human or a robot. Additionally, participants engaged in more risk-taking behaviors over time when paired with a robotic partner than with a human partner.
Keywords
human-robot interaction, bias, distrust, trust
Extent
10 pages
Related To
Proceedings of the 55th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International