AI, Emotions, Empathy, and Explainability

Permanent URI for this collection: https://hdl.handle.net/10125/112501

Recent Submissions

  • Designing Effective Empathy in AI Agents: An Empirical Study of User-Centric vs. Situation-Centric Approaches between Human and AI Agents
    (2026-01-06) Kim, Joohee; Im, Il
    As generative AI systems play an increasing role in emotional support, scholars have raised concerns about discomfort, inauthenticity, and expectancy violations resulting from AI's empathic responses. Drawing on verbal person-centeredness theory, we propose User-Centric Empathy (UCE: emotion-focused and validating) and Situation-Centric Empathy (SCE: context-focused and redirecting) to identify a more effective AI empathy approach. Across two experiments, we investigate how empathy type (UCE vs. SCE) and agent type (human vs. AI) interact to shape user experience. The results indicate that, when expressed by an AI agent, situation-centric empathy (SCE) emerges as the more appropriate empathy strategy, as it reduces discomfort and inauthenticity. Interestingly, when blame is attributed to one's own error rather than to external sources, the type of empathy expressed by the AI agent exerts no significant effect. These results highlight that the effectiveness of AI-delivered empathy depends less on mimicking human-like responses and more on adopting an appropriate empathy approach, showing that superficial mimicry cannot foster authentic relational outcomes.
  • Introduction to the Minitrack on AI, Emotions, Empathy, and Explainability
    (2026-01-06) Vaezi, Reza; Ghasemaghaei, Maryam; Jozani, Mohsen
  • Meaning Matters for Large Language Models
    (2026-01-06) Riemer, Kai; Peter, Sandra
    Large language models (LLMs) have achieved remarkable adoption. While AI providers position their systems as assistive helpers and knowledge tools, users increasingly employ them for open-ended interactions seeking life advice or creative exploration. This raises questions about the role of LLMs in such meaning-making activities, and the extent to which LLMs can access and encode meaning. In this conceptual essay, we apply Paul Ricoeur's hermeneutic philosophy to distinguish between structural and existential forms of meaning, revealing that LLMs can function as sophisticated conversational partners capable of engaging their vast “text” while lacking access to experientially grounded understanding. We come to interpret user prompting as genuine hermeneutic encounters that enable meaning-making. We further argue that hallucinations, the propensity of LLMs to generate plausible-sounding yet incorrect responses, represent inevitable architectural trade-offs rather than eliminable technical failures. Our framework suggests new directions for LLM design that embrace generative capabilities and establishes principles for responsible user engagement.