Socio-Economic Impacts of AI and Algorithmic Systems
Permanent URI for this collection: https://hdl.handle.net/10125/112516
Recent Submissions
From Notes to Bots: How Generative AI Impacts Human-Led Fact-Checking (2026-01-06)
Zhou, Yingxin; Hou, Jingbo
Human-led approaches, such as crowdsourced fact-checking, have been central to combating misinformation, but generative AI (GenAI) has recently emerged as an alternative, offering faster fact-check-like outputs. We examine how GenAI affects human-led fact-checking in the context of Grok (a GenAI chatbot) and Community Notes (X’s crowdsourced fact-checking system). Leveraging the rollout of Grok’s reply function, which enabled users to summon AI-generated fact-check-like replies, we find that its availability reduced user participation in Community Notes on both the demand and supply sides. Moreover, this disengagement effect is more pronounced among highly active contributors, who are critical to the sustainability of Community Notes. This study enhances our understanding of the interdependence between fact-checking approaches.

Tagging Lemons: The Strategic Use of AIGC Tags in Online Artwork Marketplaces (2026-01-06)
Liu, Xuan; Xue, Ling; Song, Peijian; Du, Tianwen
Many platforms now host both user-generated content (UGC) and AI-generated content (AIGC), managing them through tagging mechanisms. However, little is known about how creators use these tags and what market dynamics their use may entail. This study examines the consequences of adherence to a voluntary AIGC tagging policy implemented in an online artwork marketplace. Using image-based detection and a staggered difference-in-differences design, we identify opportunistic artists who strategically omit tags on low-quality AIGC artworks to misrepresent them as human-generated. We find that such behavior helps consumers distinguish high-quality AIGC artworks and artists, reducing sales of opportunistic artists’ artworks and thus mitigating adverse selection. We attribute this effect to consumers’ ability to detect speculative behavior. This explanation is corroborated by further computational image analysis. We also find that opportunistic behavior significantly lowers artwork quality, suggesting heightened moral hazard. These findings offer important theoretical and practical implications for platforms that manage AIGC.

Once One Fails, All Are Suspect: Understanding Error Generalization in AI (2026-01-06)
Dai, Lu; Wang, Zhongfeng; Chen, Liqi; Jin, Jia
Artificial intelligence (AI) systems are increasingly deployed in consumer-facing domains, where their occasional errors raise important questions about human responses. While prior research has examined trust and moral judgment following AI errors, little is known about how such errors generalize to perceptions of other AI systems or about the mechanisms that drive this process. To address this gap, we conducted four one-factor experiments across distinct contexts. Results consistently show that AI errors elicit broader error generalization than comparable human errors. This effect appears to stem from perceptions that AI lacks flexibility and the capacity to learn from errors. These findings highlight the psychological asymmetry in how people interpret AI versus human errors and underscore the need for human-AI interaction research to consider how a single AI error generalizes to perceptions of other systems, which may ultimately affect user engagement and technology adoption.

Introduction to the Minitrack on Socio-Economic Impacts of AI and Algorithmic Systems (2026-01-06)
Zhang, John

From Policy Documents to Audit Logic: A LLMs-based Framework for Extracting Executable Audit Rules (2026-01-06)
Tang, Bingxin; Zhuang, Jiajie; Wu, Zhiang; Lu, Hongru; Fang, Changjian
Generative Artificial Intelligence (GenAI), particularly Large Language Models (LLMs), has shown great potential across diverse fields. This study explores leveraging LLMs to efficiently parse complex medical policy documents and convert them into executable audit rules. Although focused on hospital auditing—a domain characterized by dense policies and well-defined audit logic—our approach is broadly applicable to public administration. First, we formally model hospital audit rules using Prolog syntax. Second, we propose a column compression method based on text chunking to reduce input size and improve LLM inference efficiency. Third, we develop prompt-based extraction combined with semantic alignment to enhance accuracy. Experimental results demonstrate that our method not only reproduces rules consistent with expert-crafted knowledge but also uncovers numerous novel, unexpected, yet valid audit rules. Compared to generic information extraction approaches, our framework yields significantly better performance. This work advances policy-document information extraction and offers significant practical value for expanding hospital audit coverage.
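To make the notion of an "executable audit rule" concrete, here is a minimal sketch of one such rule applied to billing claims. The rule, the field names (`drug_code`, `units_per_day`), and the threshold are hypothetical illustrations invented for this sketch, not rules from the paper; the paper itself encodes rules in Prolog syntax, whereas a Python predicate is used here purely for readability.

```python
# Hypothetical audit rule: claims for drug "X-101" must not exceed
# a daily dose cap. All identifiers and values here are invented
# illustrations, not actual hospital policy.

def violates_daily_dose_limit(claim, max_daily_units=4):
    """Flag a billing claim whose billed daily units exceed the cap."""
    return claim["drug_code"] == "X-101" and claim["units_per_day"] > max_daily_units

claims = [
    {"claim_id": 1, "drug_code": "X-101", "units_per_day": 6},
    {"claim_id": 2, "drug_code": "X-101", "units_per_day": 3},
    {"claim_id": 3, "drug_code": "Y-202", "units_per_day": 9},
]

flagged = [c["claim_id"] for c in claims if violates_daily_dose_limit(c)]
print(flagged)  # [1]
```

A rule in this form is executable in the sense the abstract describes: once extracted from a policy document, it can be run directly over claim records to expand audit coverage without manual review of each claim.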
