Collaboration with Intelligent Systems: Machines as Teammates

Recent Submissions

  • Item
    Mechanisms of Common Ground in Human-Agent Interaction: A Systematic Review of Conversational Agent Research
    ( 2023-01-03) Tolzin, Antonia ; Janson, Andreas
    Human-agent interaction is increasingly influencing our personal and work lives through the proliferation of conversational agents (CAs) in various domains. These agents combine intuitive natural language interaction with personalization delivered through artificial intelligence capabilities. However, research on CAs as well as practical failures indicates that CA interaction often fails. To reduce these failures, this paper introduces the concept of building common ground for more successful human-agent interactions. Based on a systematic review, our analysis reveals five mechanisms for achieving common ground: (1) Embodiment, (2) Social Features, (3) Joint Action, (4) Knowledge Base, and (5) Mental Model of Conversational Agents. On this basis, we offer insights into grounding mechanisms and highlight the potential of considering common ground in different human-agent interaction processes. In doing so, we lay the groundwork for a deeper understanding of possible mechanisms of common ground in future human-agent interaction research.
  • Item
    Towards the Design of Hybrid Intelligence Frontline Service Technologies – A Novel Human-in-the-Loop Configuration for Human-Machine Interactions
    ( 2023-01-03) Li, Mahei ; Löfflad, Denise ; Reh, Cornelius ; Oeste-Reiß, Sarah
    The rapid adoption of innovative technologies confronts IT Service Management (ITSM) with incoming support requests of increasing complexity. As a consequence, job demands and turnover rates of ITSM support agents increase. Recent technological advances have introduced assistance systems that rely on hybrid intelligence to provide support agents with contextually suitable historical solutions that help them solve customer requests. Hybrid intelligence systems rely on human input to provide high-quality data for training their underlying AI models. Yet most agents have little incentive to label their data, which lowers data quality and leads to diminishing returns of AI systems due to concept drift. Following a design science research approach, we provide a novel human-in-the-loop design and hybrid intelligence system for ITSM support ticket recommendations that incentivizes agents to provide high-quality labels. Specifically, we leverage agents' need for instant gratification by immediately providing better results when they improve the labels of automatically labeled support tickets.
  • Item
    It Depends on the Timing: The Ripple Effect of AI on Team Decision-Making
    ( 2023-01-03) Yan, Bei ; Gurkan, Necdet
    Although artificial intelligence (AI) is increasingly used to facilitate team decision-making, little is known about how the timing of AI assistance may impact team performance. This study investigates that question with an online experiment in which teams completed a new product development task with assistance from a chatbot. The information needed to make the decision was distributed among the team members. The chatbot shared information critical to the decision in either the first or the second half of team interaction. The results suggest that teams assisted by the chatbot in the first half of the decision-making task made better decisions than those assisted by the chatbot in the second half. Analysis of team member perceptions and interaction processes suggests that having a chatbot at the beginning of team interaction may have generated a ripple effect in the team that promoted information sharing among team members.
  • Item
    Introduction to the Minitrack on Collaboration with Intelligent Systems: Machines as Teammates
    ( 2023-01-03) Seeber, Isabella ; Elson, Joel ; Waizenegger, Lena
  • Item
    The Effect of AI Teammate Ethicality on Trust Outcomes and Individual Performance in Human-AI Teams
    ( 2023-01-03) Schelble, Beau ; Lancaster, Caitlin ; Duan, Wen ; Mallick, Rohit ; Mcneese, Nathan ; Lopez, Jeremy
    This study improves the understanding of trust in human-AI teams by investigating the relationship between AI teammate ethicality and individual outcomes of trust (i.e., monitoring, confidence, fear) in AI teammates and human teammates over time. Specifically, a synthetic task environment was built to support a three-person team with two human teammates and one AI teammate (simulated by a confederate). The AI teammate performed either an ethical or an unethical action in three missions, and measures of trust in the human and AI teammates were taken after each mission. Results revealed that unethical actions by the AI teammate had a significant effect on nearly all of the measured outcomes of trust, and that levels of trust were dynamic over time for both the AI and human teammates, with the AI teammate recovering trust to Mission 1 levels by Mission 3. AI ethicality was mostly unrelated to participants' trust in their fellow human teammate but did decrease perceptions of fear, paranoia, and skepticism toward them. In addition, trust in the human and AI teammates was not significantly related to individual performance outcomes. Both findings diverge from previous trust research in human-AI teams that utilized competency-based trust violations.