Collaboration with Intelligent Systems: Machines as Teammates

Permanent URI for this collection: https://hdl.handle.net/10125/107404

Recent Submissions

  • Human-AI Collaboration for Brainstorming: Effect of the Presence of AI Ideas on Breadth of Exploration
    (2024-01-03) Memmert, Lucas; Bittner, Eva
    With the widespread adoption of generative large language models (GLMs) such as GPT-3 or ChatGPT for human-AI problem solving, understanding their effect on performance becomes important. Brainstorming is an established approach for generating ideas to solve problems. In this study, we investigate how AI ideas affect the brainstorming performance metric ‘flexibility’, which refers to the breadth of exploration or coverage of the topic. The foundation for our analysis is data from an experiment (n=52) in which individual participants brainstormed in two conditions: (1) human-only (baseline) and (2) human+AI (treatment). Participants in the treatment condition had access to ideas generated by OpenAI's GLM GPT-3.5. Results show significantly higher flexibility for the human+AI condition than for the human-only condition, with a large effect size. With our study, we contribute to the literature on electronic brainstorming and brainstorming with GLMs, as well as to the research challenge of human-AI collaboration.
  • To Be Credible or to Be Creative? Understanding the Antecedents of User Satisfaction with AI-Generated Content from a Cognitive Fit Perspective
    (2024-01-03) Yang, Bo; Sun, Yongqiang; Li, Qinwei
    Generative artificial intelligence (GAI) has the potential to fundamentally disrupt how content is produced and will become increasingly integrated into organizational and individual task-performing and decision-making. This study aims to investigate how individuals perceive and process AI-generated content. Specifically, we propose that perceived credibility and creativity are critical antecedents of user satisfaction via cognitive fit, and we examine the boundary conditions. We tested our hypotheses in an online scenario experiment with a sample of 548 participants. The results show that perceived credibility and creativity positively impact cognitive fit, which in turn affects user satisfaction with the outcome and the process. Furthermore, regarding the boundary conditions, the results indicate that a good match between the information values (i.e., credibility and creativity) and task types (i.e., routine vs. creative task) leads to cognitive fit, and that users perceive different levels of satisfaction when they have different task motivations (i.e., hedonic vs. utilitarian). Finally, we discuss theoretical contributions and practical implications.
  • AI Narratives: What Can They Tell Us About Individuals’ AI Literacy and Emotional Attitudes toward AI Assistants?
    (2024-01-03) Hammerschmidt, Teresa; Passlack, Nina; Stolz, Katharina; Posegga, Oliver
    How individuals understand Artificial Intelligence (AI) affects whether they can interact with AI assistants appropriately. To foster the appropriate use of AI assistants, individuals require realistic perceptions of what AI can and cannot do. However, these perceptions (which we refer to as AI narratives) depend on individuals’ AI literacy and their emotional attitudes toward AI assistants. To investigate individuals’ AI literacy and their emotional attitudes when dealing with AI assistants, we suggest developing a better understanding of their different AI narratives. Through a qualitative online survey, we explore differences in AI narratives among individuals with positive, ambivalent, or negative emotional attitudes toward AI and among those with low, medium, or high levels of AI literacy. This work provides two research-guiding propositions on an individual’s AI understanding and two recommendations for managing realistic AI perception-building.
  • Achieving Decisional Fit with AI-Aided Group Decisions: The Role of Intuitive Decision-Making Style in Predicting Perceived Fairness and Decision Acceptance
    (2024-01-03) Askay, David; Dhillon, Anuraj; Metcalf, Lynn
    As organizations integrate AI decision tools into their decision-making processes, there is a need to understand the factors that promote acceptance of decisions made with AI tools. This study draws from the theory of decisional fit and the design features of an AI platform to examine the relationship between decision-making styles, procedural fairness, and decision acceptance when teams collaborate with an AI decision aid to reach a decision. The results confirm the mediating relationship of procedural fairness between an intuitive decision-making style and decision acceptance. These results extend theory related to decision-making styles by identifying individual differences that predict procedural fairness and decision acceptance. Moreover, the study offers guidance to managers and organizations seeking to adopt and design AI decision aids.
  • Proactive and Reactive Help from Intelligent Agents in Identity-Relevant Tasks
    (2024-01-03) Goutier, Marc; Diebel, Christopher; Adam, Martin; Benlian, Alexander
    Enabled by artificial intelligence (AI), intelligent agents in information systems have developed from passive tools that only help in response to user prompts (i.e., reactive help) into intelligent agents that can help without requiring user requests (i.e., proactive help). Yet, it is unclear how users react to these different types of help and whether reactions depend on the task creating or reinforcing the users’ identity (i.e., identity-relevance). Against this backdrop, we drew on self-affirmation and identity theory and conducted a vignette-based online experiment (n = 135). Our results show that proactive (vs. reactive) help decreases users’ willingness to accept help because of users’ higher perceived self-threat (i.e., threat to their self-image). Identity-relevance of the task moderates this effect: high (vs. low) identity-relevance causes a greater increase in self-threat through proactive (vs. reactive) help. Our study contributes to a better understanding of help from intelligent agents and its implications for effective human-AI collaboration.
  • Power of Language Automation: The Potential for Closing the Loop in Responding to Online Customer Feedback
    (2024-01-03) Katsiuba, Dzmitry; Dolata, Mateusz; Schwabe, Gerhard
    Online customer feedback management is playing an increasingly important role for businesses. Quickly providing guests with good responses to their reviews can be challenging, especially as the number of reviews increases. To address these challenges, this paper explores the response process and the potential for AI augmentation in the formulation and quality assurance of responses. As part of a design science research approach, it proposes an orchestration concept for humans and AI in intelligent co-writing in the hospitality industry and a novel NLP-based solution that combines the advantages of humans and AI in one application. The evaluation of the developed artifact shows that it is currently not possible to close the loop and fully automate the response process. This study describes the necessary components and provides transferable design knowledge. It opens possibilities for practical applications of NLP and further IS research.
  • Introduction to the Minitrack on Collaboration with Intelligent Systems: Machines as Teammates
    (2024-01-03) Elson, Joel; Derrick, Douglas; Seeber, Isabella; Waizenegger, Lena