Title: Trust Violations in Human-Human and Human-Robot Interactions: The Influence of Ability, Benevolence and Integrity Violations
Authors: Alarcon, Gene; Capiola, August; Morgan, Justin; Hamdan, Izz Aldin; Lee, Michael
Date submitted: 2021-12-24
Date available: 2021-12-24
Date issued: 2022-01-04
ISBN: 978-0-9981331-5-7
URI: http://hdl.handle.net/10125/79412
DOI: 10.24251/HICSS.2022.082
Type: text
Extent: 10 pages
Language: eng
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Subjects: Human-Robot Interactions; bias; distrust; human-robot interaction; trust

Abstract: The present work investigated the effects of trust violations on perceptions and risk-taking behaviors, and how those effects differ in human-human versus human-machine collaborations. Participants were paired with either a human or a machine teammate in a derivation of a well-known trust game, in which the teammate committed one of three qualitatively different trust violations (i.e., an ability-, benevolence-, or integrity-based violation of trust). The results showed that ability-based trust violations had the largest impact on perceptions of ability; the other trust violations did not have differential impacts on self-reported ability, benevolence, or integrity, or on risk-taking behaviors, and none of these effects was qualified by being partnered with a human versus a robot. Additionally, over time participants engaged in more risk-taking behaviors when paired with a robotic partner than with a human.