Human-AI Collaborations and Ethical Issues

Permanent URI for this collection: https://hdl.handle.net/10125/112406


Recent Submissions

  • Provisional, Contextual, and Verified: How Professionals Navigate Trust-Control Boundaries with AI
    (2026-01-06) Singh, Shivansh; Blomqvist, Kirsimarja
Contemporary AI systems require humans to continuously negotiate when to cede control and when to maintain oversight. While theoretical frameworks describe meaningful human control and variable autonomy, we lack understanding of how professionals implement these concepts in practice. Through critical incident analysis of 29 participants across seven professional domains, this study reveals four behavioral mechanisms through which users navigate trust-control boundaries: provisional control allocation, contextual control switching, active verification loops, and evolving mental models. These findings suggest that users often prioritize maintaining control capabilities alongside task performance considerations. This focus on control maintenance may help explain the performance paradox in human-AI collaboration, in which human-AI teams sometimes underperform either humans or AI working alone. The mechanisms reveal sophisticated strategies for preserving meaningful human control, with users often accepting reduced task efficiency to retain intervention capabilities. These insights provide design implications for systems that support users' natural control-maintenance behaviors rather than enforcing full automation.
  • Do Two AI Physicians Equal One Human Physician in Online Healthcare Consultations?
    (2026-01-06) Chen, Wenjing; Tong, Jingjing; Xu, Jingjun (David)
Advances in technology have made it possible for two artificial intelligence (AI) physicians to collaborate in interacting with depressed patients during online healthcare consultations. However, little is known about whether two AI physicians are perceived as better than one AI, or how two AIs can mitigate the weaknesses of one AI relative to one human in this context. Drawing on the stereotype content model, the wisdom-of-crowds effect, and the synergy effect, we build a research model to explain how physician type affects patients' intention to use the service. Results of an experiment show that, while one AI is less likely than one human physician to evoke perceived accuracy and caring, two AIs can mitigate these weaknesses of a single AI. Further, there is no difference between two AIs and one human physician in enhancing perceived accuracy and caring. Perceived accuracy and caring increase trust, thereby enhancing intention to use the service. This research contributes by examining the influence of two collaborating AI physicians.
  • Predicting Engagement in Human-Robot Teams via Node-Edge Co-Attention Dynamic Graph Neural Networks
    (2026-01-06) Li, Shaochun; Traeger, Margaret; Cook, Ryan; Abbasi, Ahmed; Zhang, Pengzhu
Human-AI collaboration increasingly depends on intelligent agents that can understand and influence human social behavior. In small-group settings, conversational dynamics shape team cohesion and task success. However, existing models in dynamic graph learning struggle with small-scale graphs, high-dimensional edge features, and multi-level predictions. This paper proposes a novel Node-Edge Co-Attention Dynamic Graph Neural Network (DyNEA) for engagement prediction in human-robot teams. Using data from 30 rounds of collaborative games involving participant conversations, our model jointly learns node-, edge-, and graph-level representations and makes predictions at each level. DyNEA outperforms baselines across multiple metrics. Our framework offers potential applications in human-AI collaboration, emotional support systems, and modeling cooperation dynamics.
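    To make the node-edge co-attention idea concrete, the sketch below shows a single message-passing layer that jointly updates node, edge, and graph-level representations, in the spirit of the model described above. It is an illustrative PyTorch reconstruction by the editor, not the authors' DyNEA implementation; the layer structure, dimensions, and the toy conversational graph are all assumptions.

```python
# Illustrative node-edge co-attention layer (editor's sketch, not DyNEA itself).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NodeEdgeCoAttentionLayer(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q_node = nn.Linear(dim, dim)           # nodes query their incident edges
        self.k_edge = nn.Linear(dim, dim)
        self.v_edge = nn.Linear(dim, dim)
        self.edge_update = nn.Linear(3 * dim, dim)  # edge conditioned on its two endpoints
        self.readout = nn.Linear(2 * dim, dim)      # graph-level summary

    def forward(self, x, edge_index, edge_attr):
        # x:          [N, dim] node features (team members)
        # edge_index: [2, E]   (source, target) indices
        # edge_attr:  [E, dim] edge features (e.g. utterance-level cues)
        src, dst = edge_index
        # Node update: each target node attends over its incoming edges.
        q = self.q_node(x)[dst]                      # [E, dim]
        k = self.k_edge(edge_attr)                   # [E, dim]
        v = self.v_edge(edge_attr)                   # [E, dim]
        score = (q * k).sum(-1) / x.size(-1) ** 0.5  # [E]
        alpha = torch.zeros_like(score)
        for node in dst.unique():                    # softmax per target node;
            mask = dst == node                       # a loop is fine for small graphs
            alpha[mask] = F.softmax(score[mask], dim=0)
        agg = torch.zeros_like(x)
        agg.index_add_(0, dst, alpha.unsqueeze(-1) * v)
        x_new = F.relu(x + agg)
        # Edge update: each edge is refreshed from its own state and both endpoints.
        e_new = F.relu(self.edge_update(
            torch.cat([edge_attr, x_new[src], x_new[dst]], dim=-1)))
        # Graph-level readout: pooled node and edge states.
        g = self.readout(torch.cat([x_new.mean(0), e_new.mean(0)], dim=-1))
        return x_new, e_new, g


# Toy usage: a 3-person team with directed conversational edges.
x = torch.randn(3, 16)
edge_index = torch.tensor([[0, 1, 2, 1, 2, 0], [1, 2, 0, 0, 1, 2]])
edge_attr = torch.randn(6, 16)
nodes, edges, graph = NodeEdgeCoAttentionLayer(16)(x, edge_index, edge_attr)
```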
  • AI Moral Patiency and Moral Dissonance: Almost Human, Almost Deserving Moral Treatment
    (2026-01-06) Dennis, Alan; Seymour, Mike; Kim, Antino; Yuan, Lingyao
As Artificial Intelligence (AI) agents become increasingly common in all walks of life, most users would agree that AI agents should behave ethically and morally towards their human users. This paper examines moral patiency (MP), the extent to which an entity is perceived as deserving moral consideration, a construct distinct from moral agency (the ability of an entity to act morally). We develop and validate a multi-dimensional scale capturing six positive and six negative factors indicating the extent to which someone ascribes MP to an AI agent. MP toward an AI agent was only weakly correlated with MP toward human agents. Interestingly, the MP factors related to trust in human agents were quite different from the MP factors related to trust in AI agents. Some users reported treating the AI fairly, following its advice, and protecting its security. Fewer participants reported engaging in negative MP behaviors. These findings highlight the risk of moral dissonance: the ethical confusion users experience when they want the AI to treat them morally but fail to perceive a need to reciprocate and treat the AI morally. We argue that MP, and the moral dissonance it may generate, is a foundational yet underexplored lens for understanding the evolving dynamics of human-AI interaction.
  • Explainable AI in Content Moderation: Global, Local, and Narrative Approaches to Trust and Plausibility
    (2026-01-06) Kim, David; Lee, Kyuhan; Suh, Jihae; Park, Jinsoo
We explored explainable AI explanations of varying scope in fake news detection and found that individuals trust AI when its explanation feels plausible. Drawing on the Heuristic–Systematic Model, we argue that plausibility primarily drives reliance. In an experiment, we compared seven increasingly detailed explanation formats, ranging from a bare flag to keyword lists, token-highlight views, and two levels of narrative, while holding model accuracy constant. Trust did not rise monotonically with added detail. Unedited token highlights lowered trust relative to the keyword list, pruning irrelevant tokens restored it, and the contextual narrative alone produced the highest trust. Mediation analysis confirmed the theory: the plausibility → trust path was β ≈ 0.55 (p < .001), and all indirect effects were significant. Our findings suggest that, in content-moderation settings, explanations that foreground plausibility through concise, context-anchored narratives can foster higher reviewer trust than simply exposing additional model internals.
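    As a concrete illustration of the mediation logic reported above (explanation format, perceived plausibility, trust), the sketch below runs a standard bootstrapped indirect-effect analysis on simulated data. The variable names, effect sizes, and sample size are placeholders chosen for illustration; this is not the study's data, model, or code.

```python
# Bootstrapped indirect effect for a simple mediation model (editor's sketch).
import numpy as np

rng = np.random.default_rng(0)
n = 300
condition = rng.integers(0, 2, n)                                     # 0 = keyword list, 1 = narrative
plausibility = 0.6 * condition + rng.normal(0, 1, n)                  # mediator
trust = 0.55 * plausibility + 0.1 * condition + rng.normal(0, 1, n)   # outcome


def ols_slopes(y, predictors):
    """OLS fit of y on the given predictors (intercept included); returns slopes only."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]


def indirect_effect(cond, med, out):
    a = ols_slopes(med, [cond])[0]        # condition -> mediator
    b = ols_slopes(out, [med, cond])[0]   # mediator -> outcome, controlling for condition
    return a * b


# Percentile bootstrap confidence interval for the a*b indirect effect.
boot = np.array([
    indirect_effect(condition[i], plausibility[i], trust[i])
    for i in (rng.integers(0, n, n) for _ in range(2000))
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(condition, plausibility, trust):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```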
  • From Prompts to Trustworthiness: Operationalizing Technological Transparency in Generative AI Systems
(2026-01-06) Chen, Jie; Sha, Xiqing; Zhang, Xiaohui; Huang, Yankun; Guo, Hong
Compared to traditional information systems, generative AI (GenAI) systems pose unique challenges for users' trustworthiness perceptions due to their opaque architectures and open-ended outputs. This study conceptualizes technological transparency as a multidimensional concept encompassing data source transparency, algorithmic transparency, and limitation transparency. We operationalize this concept through a prompt-based intervention that conditions a GenAI assistant to embed transparency cues directly into its responses. In a randomized experiment with undergraduate students using the assistant for career exploration, we compare a technologically transparent condition with a control condition, examining performance risk perception and trustworthiness perception as outcomes. Results show that the intervention significantly reduces users' performance risk perception and increases their trustworthiness perception of the system. Mediation analysis further reveals that performance risk perception fully mediates the effect of technological transparency on trustworthiness perception. Together, these findings advance theory by clarifying how technological transparency shapes user evaluations in GenAI contexts and offer a scalable, low-cost strategy for fostering trustworthy GenAI through prompt engineering.
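    To illustrate what a prompt-based transparency intervention of this kind might look like, the sketch below conditions an assistant with cues for the three transparency dimensions named above (data source, algorithmic, and limitation transparency). The prompt wording and the build_messages helper are hypothetical, not the study's materials.

```python
# Hypothetical prompt-based transparency intervention (editor's sketch).
TRANSPARENCY_SYSTEM_PROMPT = """\
You are a career-exploration assistant. In every answer:
1. Data source transparency: state, in general terms, what kinds of data your
   guidance draws on (e.g. occupational statistics, public career guides).
2. Algorithmic transparency: briefly note that your answers are generated by a
   large language model that predicts likely text rather than retrieving
   verified facts.
3. Limitation transparency: flag uncertainty explicitly and recommend that the
   user verify important details with authoritative sources.
"""


def build_messages(user_question: str, transparent: bool) -> list[dict]:
    """Assemble a chat request for the transparent vs. control condition."""
    system = (TRANSPARENCY_SYSTEM_PROMPT if transparent
              else "You are a career-exploration assistant.")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_question},
    ]


# Example: the transparent-condition request for one participant question.
messages = build_messages("Which careers fit a statistics major?", transparent=True)
```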
  • Introduction to the Minitrack on Human-AI Collaborations and Ethical Issues
    (2026-01-06) Kim, Dan; Yoon, Victoria; Chen, Xunyu; Abedin, Babak