Learning Divide: AI, Human Intelligence, and Linguistic Justice
Permanent URI for this collection: https://hdl.handle.net/10125/112496
Recent Submissions
AI says You are a Happy Dishwasher – How Generative AI Systematically Misrepresents Neurominority Professionals (2026-01-06)
Lemke, Claudia; Bloomfield, Martin; Herfurth, Florian N.
Data-driven AI systems are shaping our perception of the potential for automation and human-based knowledge representation. As a reflection of the world, these systems also exhibit discernible patterns of discrimination, particularly against marginalized groups. Whereas gender and ethnicity are well-known categories of AI-driven bias, the plight of neurodivergent individuals (neurominorities) remains largely unexplored. Our study investigates how AI-generated portraits (N = 2,240 images, Stable Diffusion V2) represent dyslexic and non-dyslexic people in low- and high-paid occupations. Analysis of occupation-related images of female and male dyslexics reveals a significant neuronormative bias. Regardless of the occupational profile, images of dyslexic people show more expressions identified as negative basic emotions than those of their non-dyslexic counterparts, and depict people with dyslexia in low-paid occupations as happier than those in high-paid occupations. This previously undiscovered unfairness in AI systems necessitates greater awareness, as AI-generated images increasingly shape our digital lives, economy, and society.

Developing the PsyCogMetrics™ AI Lab to Evaluate Large Language Models and Advance Cognitive Science—A Three-Cycle Action Design Science Study (2026-01-06)
Jin, Zhiye; Li, Yibai; Joshi, K.D.; Deng, Xuefei; Lee, Emily
This study presents the development of the PsyCogMetrics™ AI Lab (https://psycogmetrics.ai), an integrated, cloud-based platform that operationalizes psychometric and cognitive-science methodologies for Large Language Model (LLM) evaluation. The study is framed as a three-cycle Action Design Science study: the Relevance Cycle identifies key limitations in current evaluation methods and unfulfilled stakeholder needs; the Rigor Cycle draws on kernel theories such as Popperian falsifiability, Classical Test Theory, and Cognitive Load Theory to derive deductive design objectives; and the Design Cycle operationalizes these objectives through nested Build–Intervene–Evaluate loops. The study contributes a novel IT artifact, a validated design for LLM evaluation, benefiting research at the intersection of AI, psychology, cognitive science, and the social and behavioral sciences.

Analyzing Information-Seeking Behaviors in a Hakka AI Chatbot: A Cognitive-Pragmatic Study (2026-01-06)
Lee, Chuhsuan; Chang, Chenchi; Lee, Hungshin; Hsu, Yunhsiang; Chen, Chingyuan
With many endangered languages at risk of disappearing, efforts to preserve them now rely more than ever on technology combined with culturally informed teaching strategies. This study examines user behaviors in TALKA, a generative AI-powered chatbot designed for Hakka language engagement, employing a dual-layered analytical framework grounded in Bloom’s Taxonomy of cognitive processes and dialogue act categorization. We analyzed 7,077 user utterances, each annotated according to six cognitive levels and eleven dialogue act types. These covered a variety of functions, such as asking for information, requesting translations, making cultural inquiries, and using language creatively. Pragmatic classifications further highlight how different types of dialogue acts—such as feedback, control commands, and social greetings—align with specific cognitive intentions. The results suggest that generative AI chatbots can support language learning in meaningful ways, especially when they are designed with an understanding of how users think and communicate. They may also help learners express themselves more confidently and connect with their cultural identity. The TALKA case provides empirical insights into how AI-mediated dialogue facilitates cognitive development, pragmatic negotiation, and socio-cultural affiliation in low-resource language learners. By focusing on AI-assisted language learning, this study offers new insights into how technology can support language preservation and educational practice.

Introduction to the Minitrack on Learning Divide: AI, Human Intelligence, and Linguistic Justice (2026-01-06)
Joshi, K.D.; Deng, Xuefei
