Conversational AI and Ethical Issues
Permanent URI for this collection: https://hdl.handle.net/10125/107406
Recent Submissions
Exploring the Impact of Perceived Convenience, Autonomy, and Satisfaction on Citizens’ Continuance with Government Chatbots (2024-01-03)
Authors: Kim, Seongjin; Tang, Zhenya; Kim, Dan; Ahn, Hyunchul
Abstract: Chatbots are computer programs that utilize artificial intelligence techniques to simulate human-like conversations with users. Governments worldwide are increasingly employing them to engage with citizens, provide information and services, and support government activities. Employing the Information Systems Continuance Model and Resources Matching Theory as theoretical frameworks, this study explores the influence of perceived convenience, autonomy-related control, and satisfaction on citizens’ continuance with government chatbots. The findings indicate that citizens’ decisions to continue using government chatbots are directly affected by perceived convenience, autonomy-related control, and satisfaction, and indirectly influenced by expectation confirmation. Theoretical and practical implications for the use of chatbots in government contexts are discussed.

Ethical Tensions in Human-AI Companionship: A Dialectical Inquiry into Replika (2024-01-03)
Authors: Ciriello, Raffaele; Hannon, Oliver; Chen, Angelina Ying; Vaast, Emmanuelle
Abstract: The unfolding loneliness pandemic sees artificial intelligence (AI) companions emerge as a potential, albeit controversial, remedy offering emotional support to those suffering from social isolation. However, this also raises new and unique ethical issues regarding the personification of AI agents. Replika, an AI companion service with over 10 million users, is a case in point, facing both regulatory scrutiny and community pushback over the removal of its 'erotic roleplay' features. Through a dialectical inquiry, this paper explicates three salient ethical tensions in human-AI companionship: the Companionship-Alienation Irony, the Autonomy-Control Paradox, and the Utility-Ethicality Dilemma. We critically question the personification of AI agents and contribute insight into human-AI companionship dynamics, providing a basis for further inquiry into the emerging realm of artificial emotional intelligence (AEI). We also offer practical guidance for navigating these tensions as we move toward a future where such relationships may become prevalent.

Do Users Really Want “Human-like” AI? The Effects of Anthropomorphism and Ego-morphism on User’s Perceived Anthropocentric Threat (2024-01-03)
Authors: Kim, Joohee; Im, Il
Abstract: This paper explores the development of perceived anthropocentric threat (PAT) arising from the advancement of AI-based assistants (AIAs) beyond human capabilities. We highlight that while anthropomorphism offers valuable insights into human-AI interaction, it provides an incomplete understanding of advanced AIAs. To address this, we introduce the concept of ego-morphism, which emphasizes an AIA’s unique behavior and attributes, shifting the focus away from mere human resemblance. Building on prior research on anthropocentrism (the belief that humans are the center of the universe), we define PAT in terms of AI’s intelligence, autonomy, and ethical aspects. The results reveal that when users perceive an AIA as possessing its own ego, they are more likely to perceive PAT, particularly when the AIA violates ethical values. The findings offer new insights into the black-box phenomenon through the lens of ego-morphism and its association with PAT, and show that individuals favor AIAs that resemble humans as long as they exhibit a human-like understanding of values and norms.

Deconstructing Review Deception: A Study on Counterfactual Explanation and XAI in Detecting Fake and GPT-Generated Reviews (2024-01-03)
Authors: Chernyaeva, Olga; Hong, Taeho; Lee, One-Ki Daniel
Abstract: Our models not only deliver high-performing predictions but also illuminate the decision-making processes underlying those predictions. Experiments on five datasets demonstrate the framework’s ability to generate diverse and specific counterfactuals, enhancing deception detection capabilities and supporting assessments of review authenticity. The results advance the understanding of AI-generated review detection and mark a significant step forward in both the detection of deceptive reviews and the broader field of AI interpretability.

The Impact of Empathy Display in Language of Conversational AI: A Controlled Experiment with a Legal Chatbot (2024-01-03)
Authors: Brunswicker, Sabine; Zhang, Yifan; Rashidian, Christopher; Linna Jr., Dan W.
Abstract: The rise of ChatGPT has revealed the potential of chatbots and other conversational AI tools to assist humans in fields such as law and healthcare, where the best human experts can engage in empathetic conversations. The belief is that if chatbots can connect with humans on a social and emotional level, they can reduce the cognitive effort required to solve users’ problems while increasing satisfaction and trust. Although existing research has shown that empathy is crucial to the design and outcomes of human-AI conversations (effort, helpfulness, trust), it fails to separate the impact of empathy displayed in language from the AI’s underlying “cognitive” abilities, such as logical reasoning. To address this gap, this research develops and empirically tests a theory of empathy in the language displayed by conversational AI, explaining the relational outcomes of human-AI conversations in terms of cognitive effort, helpfulness, and trustworthiness. Using this theory, a chatbot is designed with syntactic and rhetorical linguistic elements that evoke empathy when providing legal services to tenants renting property. Through a randomized controlled experiment with a 2-by-3 factorial design, the effects of this empathetic chatbot on the three relational outcomes are examined and compared with a non-empathetic chatbot that maintains the same logic. A baseline condition providing non-conversational access to legal services via frequently asked questions (“FAQs”) is also implemented, and the subjects’ emotional state (anger) is manipulated as a moderating factor. The study involves 277 participants randomly assigned to one of six groups. The findings demonstrate significant main and interaction effects on trustworthiness, usefulness, and cognitive effort, indicating that subtle changes in language syntax and style can have substantial implications for the outcomes of human-AI conversations. These findings contribute to the growing literature on conversational AI and have practical implications for the design of conversational and generative AI.

Introduction to the Minitrack on Conversational AI and Ethical Issues (2024-01-03)
Authors: Kim, Dan; Yoon, Victoria; Yang, Kiseol; Thomas, Manoj
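
The counterfactual-explanation study above (Chernyaeva, Hong, and Lee) does not spell out its implementation, but the general idea of a counterfactual explanation for a fake-review classifier can be illustrated. The sketch below is a minimal, hypothetical example, not the authors' framework: a toy classifier is trained on a handful of invented reviews, and a greedy search removes the fewest words needed to flip its prediction, exposing which terms drive the "fake" label. All data, names, and the search strategy are illustrative assumptions.

```python
# Illustrative counterfactual explanation for a fake-review classifier.
# NOT the framework from the paper above; a minimal sketch of the idea:
# find a small edit to a review that flips the classifier's prediction.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented corpus (label 1 = fake/deceptive review).
reviews = [
    "absolutely amazing best product ever buy now",
    "incredible perfect flawless life changing must buy",
    "the battery lasts about six hours which is shorter than advertised",
    "decent build quality but the hinge feels loose after a month",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(reviews, labels)

def counterfactual(text, model):
    """Greedily drop words until the predicted class flips.
    Returns the edited text (or None) and the words removed."""
    original = model.predict([text])[0]
    tokens, removed = text.split(), []
    while tokens:
        # Try removing each remaining word; keep the removal that most
        # reduces the probability of the original class.
        candidates = [
            (" ".join(tokens[:i] + tokens[i + 1:]), tokens[i])
            for i in range(len(tokens))
        ]
        scored = [(model.predict_proba([c])[0][original], c, w) for c, w in candidates]
        _, best, word = min(scored)
        removed.append(word)
        tokens = best.split()
        if model.predict([best])[0] != original:
            return best, removed
    return None, removed

edited, removed = counterfactual("amazing perfect product must buy now", model)
print("removed words:", removed)
print("counterfactual review:", edited)
```

The removed words serve as the explanation: they are the terms the classifier relied on to call the review deceptive.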
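
Similarly, the empathy-display experiment (Brunswicker et al.) uses a 2-by-3 factorial design with an anger manipulation. As a rough, hypothetical illustration of how main and interaction effects on an outcome such as trustworthiness could be tested in that kind of design, the sketch below runs a two-way ANOVA on synthetic data; the group means, sample sizes, and variable names are assumptions, not the study's data or analysis.

```python
# Illustrative two-way ANOVA for a 2-by-3 factorial experiment.
# Synthetic data and hypothetical variable names; not the study's dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
conditions = ["empathetic", "non_empathetic", "faq"]  # 3 interface levels
anger_levels = ["high", "low"]                        # 2 emotional-state levels

rows = []
for cond in conditions:
    for ang in anger_levels:
        n = 46  # roughly 277 participants spread over six groups
        base = {"empathetic": 5.5, "non_empathetic": 4.8, "faq": 4.2}[cond]
        shift = -0.6 if ang == "high" else 0.0
        scores = rng.normal(base + shift, 1.0, n)  # assumed group means/SD
        rows += [{"condition": cond, "anger": ang, "trust": s} for s in scores]

df = pd.DataFrame(rows)

# Main effects of interface condition and anger, plus their interaction.
model = ols("trust ~ C(condition) * C(anger)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```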
