Collaboration with Intelligent Systems: Machines as Teammates

Recent Submissions

  • Item
    Understanding the Effect of Expectation Disconfirmation on Trusting Intention in Human-AI Teams
    (2025-01-07) Cheng, Xusen; Zhang, Xiaoping; De Vreede, Triparna; De Vreede, Gj
    The presence of an AI teammate changes the traditional collaboration dynamics found in human-human teams. Trust is especially important for human members of human-AI teams (HATs) to cope with changes in collaborative environments. However, little research focuses on trust development in HATs. The process of trust development can be viewed as a process of initial trust expectations being met or unmet. Drawing upon expectation disconfirmation theory, this study investigates how trust expectation disconfirmation influences trusting intention and willingness to work with the focal teammate (AI or human). Results indicate that as positive disconfirmation increases or negative disconfirmation decreases, individuals' trusting intention toward the human teammate increases. However, both negative and positive disconfirmation are harmful to trusting intention toward the AI teammate. We also confirm the mediating effect of trusting intention on the impact of expectation disconfirmation on willingness to work with the focal teammate.
  • Item
    AI-ThinkLets for Brainstorming
    (2025-01-07) Schwabe, Gerhard; Katsiuba, Dzmitry; Specker, Richard; Dolata, Mateusz
    Digital Agents have the potential to be effective collaborators and to significantly improve team collaboration practices. This study introduces the concept of AI-ThinkLets, which are repeatable collaboration activities involving Digital Agents. Using the brainstorming process as an example, we demonstrate the applicability of AI-ThinkLets. We extend the existing OnePage ThinkLet with a Digital Agent by specifying its possible setup and functionality. This study provides insights for practitioners, orchestrators, and researchers. By integrating Digital Agents into established ThinkLets, we develop synergies that enhance the capabilities of human actors and enable new collaboration patterns.
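    As a rough illustration of the pattern described in this abstract, the sketch below shows how a Digital Agent could be interleaved into an OnePage brainstorming activity, reading the shared idea page and adding its own suggestions. This is an assumed Python sketch; the function names, the agent_suggest callable, and the interleaving schedule are illustrative and not taken from the paper.

      # Hypothetical sketch of an AI-ThinkLet for brainstorming: a Digital Agent
      # joins an OnePage activity and periodically contributes suggestions to the
      # shared idea page. All names and the scheduling rule are assumptions.

      def run_onepage_with_agent(question, human_ideas, agent_suggest, agent_every=3):
          """Collect human ideas and interleave Digital Agent suggestions on one shared page."""
          page = []
          for i, idea in enumerate(human_ideas, start=1):
              page.append(f"[human] {idea}")
              # After every few human contributions, the agent reads the page so far
              # and proposes a complementary idea.
              if i % agent_every == 0:
                  page.append(f"[agent] {agent_suggest(question, page)}")
          return page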
  • Item
    The Task Matters: The Effect of Perceived Similarity to AI on Intention to Use in Different Task Types
    (2025-01-07) Liang, Qingyu; Crowston, Kevin; You, Sangseok
    With the development of AI technologies, especially generative AI (GAI) such as ChatGPT, GAI is increasingly assisting people in various tasks. However, people may have different requirements for GAI when using it for different kinds of tasks. For instance, when brainstorming new ideas, people may want GAI to propose different ideas that supplement theirs with different problem-solving perspectives, but for decision-making tasks, they may prefer that GAI adopt a problem-solving process similar to their own and reach a similar, or even the same, decision as they would. We conducted an online experiment examining how perceived similarity between GAI and human task-solving influences people's intention to use GAI, mediated by trust, across four task types (creativity, planning, intellective, and decision-making tasks). We demonstrate that the effect of similarity on trust (and thus on intention to use GAI) depends on the type of task. This paper contributes to understanding the impact of task types on the relationship between perceived similarity and GAI adoption, with implications for the future use of GAI in various task contexts.
  • Item
    Fostering Innovation with Generative AI: A Study on Human-AI Collaborative Ideation and User Anonymity
    (2025-01-07) La Scala, Jérémy; Bartłomiejczyk, Natalia; Gillet, Denis; Holzer, Adrian
    Collaborative ideation is a key aspect of many innovation processes. However, a lack of proper support can hinder the process and limit participants' ability to generate innovative ideas. Thus, we introduce AI-deation, a digital environment for collaborative ideation. At the heart of the system is an AI collaborator powered by Generative Artificial Intelligence that participates in the ideation process by automatically suggesting new ideas. Moreover, the submitted ideas are processed by a Large Language Model acting as an idea editor, which strengthens the anonymity of the contributions to alleviate fears of judgment. We studied this system in a randomized experiment in which groups solved complex problems under two conditions: humans only, and humans supported by an AI collaborator. Results show that the idea editor effectively strengthened the participants' anonymity. Although the AI collaborator was itself more innovative, it did not significantly influence the participants' innovativeness.
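    The idea-editor step described in this abstract can be pictured as a small pipeline in which a Large Language Model rewrites each submitted idea before it is shared with the group, removing personal phrasing while preserving the content. The following sketch is an assumed Python illustration; the llm callable, the prompt wording, and the function names are placeholders rather than the authors' implementation.

      # Hypothetical sketch of an LLM-based "idea editor" that anonymizes submitted
      # ideas before they are shown to the group. The llm(prompt) callable stands in
      # for any text-generation backend; the prompt and names are assumptions.

      def anonymize_idea(raw_idea, llm):
          """Rewrite an idea in neutral wording while keeping its meaning."""
          prompt = (
              "Rewrite the following brainstorming idea in neutral, concise wording. "
              "Preserve its meaning but remove personal style, names, and "
              "first-person references:\n\n" + raw_idea
          )
          return llm(prompt).strip()

      def collect_ideas(submissions, llm):
          """Run every submission through the idea editor before sharing it anonymously."""
          return [anonymize_idea(idea, llm) for idea in submissions]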
  • Item
    Resistance to Generative AI: Investigating the Drivers of Non-Use
    (2025-01-07) Wells, Taylor; Steffen, Jacob; Hughes, Amanda; Richardson, Benjamin; Meservy, Tom; Schuetzler, Ryan
    While much research focuses on the adoption and benefits of generative AI (GenAI) technologies, this study explores the reasons behind the non-use of these tools after initial adoption. A mixed-methods approach was employed, beginning with an exploratory qualitative study to identify scenarios and reasons for non-use among experienced GenAI users. This was followed by a quantitative study analyzing the relationship between these reasons and non-use across various contexts. Findings highlight key concerns, such as output quality, ethical implications, and the loss of human connection, as significant predictors of non-use. Additionally, individual characteristics, like the need for social connectedness, significantly influence non-use behavior. These insights are critical for designers, organizations, and managers to understand and address barriers to GenAI usage, ultimately aiming to enhance the effective integration of these technologies in diverse settings.
  • Item
    I Trust You So You are Part of Our Team: the Influence of Group Trust and Trust in Social Robots on In-Group Perception
    (2025-01-07) Oberhofer, Viviana
    Social robots are technically capable of acting as robotic team members, enabled by advances in artificial intelligence. However, research has shown the importance of also perceiving robots as part of the in-group rather than as out-group members, who are often met with avoidance and resistance. Research lacks an understanding of the antecedents of in-group perception of robots and its interplay with trust. This study addresses this gap by conducting a between-subject lab experiment with 18 teams of three humans and one social robot. Our findings indicate that trust in the group can stimulate trust in the robot. Additionally, our findings demonstrate that trust skews the perception of the robot's actions, which are then perceived as more favorable for the group. This, in turn, increases the in-group perception of the social robot. This research contributes to social categorization and trust research in human-agent teaming and human-robot interaction.
  • Item
    Artificial Trailblazing - How Human-AI Collaboration Transforms Organizational Innovation Practices
    (2025-01-07) Zheng, Jingyu; Hong, Yvonne; Richter, Alexander
    Artificial Intelligence (AI) has the potential to profoundly disrupt many industries and domains, including innovation, yet comprehensive overviews of existing insights are lacking. This systematic literature review identifies seven key dimensions of organizational innovation practices that are significantly influenced by Human-AI Collaboration (HAIC), providing a structured analysis of its effects within organizational settings. Our review reveals that HAIC has the potential to broadly transform organizational innovation practices, indicating possibilities for similar effects in varied contexts and providing a direction for future research.
  • Item
    Investigating the Effects of Classification Model Error Type on Trust-relevant Criteria in a Human-Machine Learning Interaction Task
    (2025-01-07) Harris, Krista; Capiola, August; Johnson, Dexter; Alarcon, Gene; Jessup, Sarah; Willis, Sasha; Bennette, Walter
    Machine learning models have been critiqued for their opaqueness, so recent work has created models that accurately convey model confidence. High performance is the most important aspect of trust in the model; however, when performance drops, accurate decision confidence leads to higher trust outcomes. The current research expands upon this work by investigating how incorrect, low-confidence decisions differentially impact the trust process. Incorrect decisions were made either on stimuli the model was trained to classify or on stimuli outside those classification categories. In a between-subjects design, participants monitored low-performing models with varying low-confidence error types in an online image classification task. Results demonstrated that when the model used low confidence to flag incorrect stimuli it was not trained to classify, process perceptions increased, while decision time and task performance decreased. Our results extend the current framework regarding how model confidence influences the trust process. Implications, limitations, and future research are discussed.
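    One way to picture the setup in this abstract is a classifier that reports a confidence score with each decision and flags low-confidence predictions, including stimuli outside its training categories, for the human monitor rather than acting on them. The sketch below is an assumed Python illustration of that flagging logic; the threshold, names, and example values are not taken from the study.

      # Hypothetical illustration of confidence-aware classification: each decision
      # comes with a confidence score, and predictions below a threshold are flagged
      # for the human monitor. Threshold and names are illustrative assumptions.

      import numpy as np

      def classify_with_flag(probs, labels, threshold=0.6):
          """Return (label, confidence, flagged) for one vector of class probabilities."""
          confidence = float(probs.max())
          label = labels[int(probs.argmax())]
          flagged = confidence < threshold  # low confidence -> defer to the human
          return label, confidence, flagged

      # A stimulus outside the trained categories tends to produce a flat probability
      # vector, so it is flagged as a low-confidence decision.
      print(classify_with_flag(np.array([0.34, 0.33, 0.33]), ["cat", "dog", "bird"]))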
  • Item
    Introduction to the Minitrack on Collaboration with Intelligent Systems: Machines as Teammates
    (2025-01-07) Seeber, Isabella; Elson, Joel; Söllner, Matthias; Mullins, Ryan