Human-Robot Interactions

Recent Submissions

  • Item
    Extending the Affective Technology Acceptance Model to Human-Robot Interactions: A Multi-Method Perspective
    (2023-01-03) Jessup, Sarah; Willis, Sasha M.; Alarcon, Gene
    The current study sought to extend the Affective Technology Acceptance (ATA) model to human-robot interactions. We tested the direct relationship between affect and technology acceptance of a security robot. Affect was measured using a multi-method approach that included a self-report survey as well as sentiment analysis and response length of written responses. Results revealed that participants who experienced positive affect were more likely to accept the technology. However, the significance and direction of the relationship between negative affect and technology acceptance were measurement-dependent. Additionally, positive and negative sentiment words accounted for unique variance in technology acceptance after controlling for self-reported affect. This study demonstrates that affect is an important contributing factor in human-robot interaction research and that a multi-method approach allows for a richer, more complete understanding of how human feelings influence robot acceptance.
  • Item
    Human-Robot Interaction: Mapping Literature Review and Network Analysis
    (2023-01-03) Oberhofer, Viviana; Seeber, Isabella; Maier, Ronald
    Organizations increasingly adopt social robots as additions to real-life workforces, which requires knowledge of how humans react to and work with robots. The longstanding research on Human-Robot Interaction (HRI) offers relevant insights, but existing literature reviews are limited in their ability to guide theory development and to help practitioners sustainably employ social robots, because they lack a systematic synthesis of HRI concepts, relationships, and ensuing effects. This study offers a mapping review of the past ten years of HRI research. Through an analysis of 68 peer-reviewed journal articles, we identify shifting foci, for example towards more application-specific empirical investigations, and the most prominent concepts and relationships investigated in connection with social robots, for example robot appearance. The results offer Information Systems scholars and practitioners an initial knowledge base and nuanced insights into key predictors and outcome variables that can hinder or foster social robot adoption in the workplace.
  • Item
    Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI
    (2023-01-03) Momen, Ali; De Visser, Ewart; Wolsten, Kyle; Cooley, Katrina; Walliser, James; Tossell, Chad C.
    Advancements in computing power and foundational modeling have enabled artificial intelligence (AI) to respond to moral queries with surprising accuracy. This raises the question of whether we should trust AI to influence human moral decision-making, so far a uniquely human activity. We explored how a machine agent trained to respond to moral queries (Delphi; Jiang et al., 2021) is perceived by human questioners. Participants were tasked with querying the agent, presented as either a humanlike robot or a web client, to determine whether it was morally competent and could be trusted. Participants rated the moral competence and perceived morality of both agents as high, yet found them lacking because they could not provide justifications for their moral judgments. While both agents were also rated highly on trustworthiness, participants had little intention of relying on such an agent in the future. This work presents an important first evaluation of a morally competent algorithm integrated with a humanlike platform that could advance the development of moral robot advisors.
  • Item
    Introduction to the Minitrack on Human-Robot Interactions
    (2023-01-03) You, Sangseok; Robert, Lionel