Do Users Really Want “Human-like” AI? The Effects of Anthropomorphism and Ego-morphism on User’s Perceived Anthropocentric Threat

Date
2024-01-03
Authors
Kim, Joohee
Im, Il
Starting Page
477
Abstract
This paper explores the development of a perceived anthropocentric threat (PAT) arising from the advancement of AI-based assistants (AIAs) beyond human capabilities. We argue that while anthropomorphism offers valuable insights into human-AI interaction, it provides an incomplete understanding of advanced AIAs. To address this, we introduce the concept of ego-morphism, which emphasizes an AIA's unique behaviors and attributes, shifting the focus away from mere human resemblance. Building on prior research on anthropocentrism (the belief that humans are the center of the universe), we define PAT in the context of AI's intelligence, autonomy, and ethical aspects. The results reveal that when users perceive an AIA as possessing its own ego, they are more likely to perceive PAT, particularly in cases where the AIA violates ethical values. The findings offer new insight into the black box phenomenon through the lens of ego-morphism and its association with PAT, and show that individuals favor AIAs resembling humans as long as they exhibit a human-like understanding of values and norms.
Keywords
conversational AI and ethical issues, anthropomorphism, artificial intelligence, ego-morphism, perceived anthropocentric threats, perceived intelligence
Extent
11 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International