Collaboration with Intelligent Systems: Machines as Teammates
Recent Submissions
Item: Working with ELSA – How an Emotional Support Agent Builds Trust in Virtual Teams (2022-01-04)
Hofeditz, Lennart; Harbring, Mareen; Mirbabaie, Milad; Stieglitz, Stefan
Virtual collaboration is an increasing part of daily life for many employees. Despite its many advantages, however, virtual collaborative work can lead to a lack of trust among team members, e.g., due to spatial separation and limited social interaction. Previous findings indicated that emotional support provided by a conversational agent (CA) can affect human-agent trust and perceived social presence. We developed an emotional support agent called ELSA and conducted a between-subject online experiment to examine how CAs can provide emotional support to increase trust among colleagues in virtual teams. We found that human-agent trust positively influences calculus-based trust among team members and increases team cohesion, whereas a CA's perceived anthropomorphism and social presence seem to be less important for trust among team members.

Item: How do Pedagogical Conversational Agents affect Learning Outcomes among High School Pupils: Insights from a Field Experiment (2022-01-04)
Waldner, Sarah; Seeber, Isabella; Waizenegger, Lena; Maier, Ronald
Pedagogical conversational agents (CAs) support formal and informal learning to help students achieve better learning outcomes by providing information, giving guidance, or fostering reflection. Even though the extant literature suggests that pedagogical CAs can improve learning outcomes, there is little empirical evidence on which design features drive this effect. This study reports on an exploratory field experiment involving 31 pupils at commercial high schools and finds that pupils achieved better learning outcomes when preparing for their tests with a pedagogical CA than without one. However, the drivers of this effect remain unclear: neither the use frequency of the design features nor the pupils' expectations towards the CA could explain the improvement in marks. For the subjective perception of learning achievement, by contrast, pupils' expectations were a significant predictor. These findings support the use of pedagogical CAs in teaching but also highlight that the drivers of better learning outcomes remain unknown.

Item: From Tools to Teammates: Conceptualizing Humans' Perception of Machines as Teammates with a Systematic Literature Review (2022-01-04)
Rix, Jennifer
The accelerating capabilities of systems brought about by advances in Artificial Intelligence challenge the traditional notion of systems as tools. Systems' increasingly agentic and collaborative character offers the potential for a new user-system interaction paradigm: teaming replaces unidirectional system use. Yet the extant literature addresses the prerequisites for this new interaction paradigm inconsistently, often without considering the foundations established in the human teaming literature. To address this gap, this study uses a systematic literature review to conceptualize the drivers of perceiving systems as teammates rather than tools. In doing so, it integrates insights from the dispersed and interdisciplinary field of human-machine teaming with established human teaming principles. The creation of a team setting and a social entity, as well as specific configurations of the machine teammate's collaborative behaviors, are identified as the main drivers of forming impactful human-machine teams.

Item: Do We Blame it on the Machine? Task Outcome and Agency Attribution in Human-Technology Collaboration (2022-01-04)
Jia, Haiyan; Wu, Mu; Sundar, S. Shyam
With the growing functionality and capability of technology in human-technology interaction, humans are no longer the only autonomous entity. Automated machines increasingly play the role of agentic teammates, and through this process, human agency and machine agency are constructed and negotiated. Previous research on "Computers are Social Actors" (CASA) and the self-serving bias suggests that humans might attribute more agency to the technology and less to themselves when the interaction outcome is undesirable, and vice versa. We conducted an experiment to test this proposition by manipulating the task outcome of a game co-played by a user and a smartphone app, and found partially contradictory results. Further, user characteristics, sociability in particular, moderated the effect of task outcome on agency attribution and affected user experience and behavioral intention. These findings suggest a complex mechanism of agency attribution in human-technology collaboration, with important implications for emerging socio-ethical and socio-technical concerns surrounding intelligent technology.

Item: Complex Problem Solving through Human-AI Collaboration: Literature Review on Research Contexts (2022-01-04)
Memmert, Lucas; Bittner, Eva
Solving complex problems has been proclaimed a major challenge for hybrid teams of humans and artificial intelligence (AI) systems. Human-AI collaboration brings immense opportunities for these complex tasks, in which humans struggle but which cannot be fully automated. Understanding and designing human-AI collaboration for complex problem solving is itself a wicked and multifaceted research problem. We contribute to this emergent field by reviewing to what extent existing research on instantiated human-AI collaboration already addresses this challenge. After clarifying the two key concepts (complex problem solving and human-AI collaboration), we perform a systematic literature review, extract the research contexts, and assess them against different complexity features. We thereby provide an overview of existing research contexts, guidance for designing new contexts suitable for studying complex problem solving through human-AI collaboration, and an outlook on further work on this research challenge.

Item: Can we Help the Bots? Towards an Evaluation of their Performance and the Creation of Human Enhanced Artifact for Emotions De-escalation (2022-01-04)
Palese, Biagio; Pickard, Matthew; Bartosiak, Marcin
We propose a hybrid-intelligence socio-technical artifact that identifies the threshold at which a chatbot requires human intervention in order to keep performing at a level adequate to the system's predefined objective (a minimal sketch of such a threshold-based hand-off follows this listing). We draw on the Yield Shift Theory of Satisfaction, Intervention Theory, and Nudge Theory to develop meta-requirements and design principles for this system, and we discuss the first iteration of implementing and evaluating the artifact's components.

Item: Introduction to the Minitrack on Collaboration with Intelligent Systems: Machines as Teammates (2022-01-04)
Derrick, Douglas; Seeber, Isabella; Elson, Joel; Waizenegger, Lena
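To make the threshold-based hand-off idea from "Can we Help the Bots?" concrete, the sketch below shows one minimal way such an escalation rule could look. It is a hypothetical illustration, not the artifact described in the paper: the `BotTurn` structure, the `needs_human` and `route` functions, and the threshold values are all invented assumptions for illustration.

```python
# Hypothetical sketch of a chatbot-to-human escalation threshold.
# All names, fields, and cutoff values are assumptions, not the paper's artifact.
from dataclasses import dataclass

@dataclass
class BotTurn:
    reply: str
    confidence: float      # bot's self-assessed confidence in [0, 1]
    user_sentiment: float  # estimated user sentiment in [-1, 1]; negative = frustrated

CONFIDENCE_FLOOR = 0.6  # assumed cutoff: below this, the bot is likely to fail
SENTIMENT_FLOOR = -0.4  # assumed cutoff: below this, the user needs de-escalation

def needs_human(turn: BotTurn) -> bool:
    """Escalate when the bot is unsure or the conversation turns negative."""
    return turn.confidence < CONFIDENCE_FLOOR or turn.user_sentiment < SENTIMENT_FLOOR

def route(turn: BotTurn) -> str:
    """Return the bot's reply, or a marker handing the conversation to a human."""
    if needs_human(turn):
        return "HUMAN_AGENT"  # hand off to a human teammate
    return turn.reply         # otherwise the bot keeps handling the conversation

# Example: low confidence plus a frustrated user triggers escalation.
print(route(BotTurn(reply="Try restarting.", confidence=0.4, user_sentiment=-0.7)))
# -> HUMAN_AGENT
```

The design choice this sketch illustrates is that the hand-off point is an explicit, tunable parameter of the socio-technical system rather than something left to the bot's implicit behavior, which matches the paper's framing of a pre-defined performance objective.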