AI and Digital Discrimination
Permanent URI for this collection: https://hdl.handle.net/10125/112485
Recent Submissions
Responsible AI by Design: Embedding Diversity, Equity, and Inclusion Values into AI Development Practices (2026-01-06)
Falcon, Gaëlle; Carillo, Kévin
Responsible AI (RAI) has emerged as a critical concern in Information Systems research, centered on embedding principles such as fairness, accountability, and transparency into the development of AI systems. The integration of these principles, particularly those of diversity, equity, and inclusion (DEI), into everyday design practices remains inconsistent and under-theorized, leaving a gap between ethical aspiration and implementation. To address this, we draw on Value-Belief-Norm theory to model how personal DEI values translate into ethical development behavior. Using survey data from 194 AI professionals, we test a model in which personal values influence awareness of consequences, moral responsibility, and personal norms, which in turn predict the adoption of inclusive AI practices. All hypothesized relationships are supported, underscoring the importance of individual-level moral cognition in operationalizing RAI. This study advances IS scholarship by shifting the analytical focus from institutional governance to the cognitive and motivational mechanisms through which ethical AI is enacted in practice.

DeepSeek in China: AI Hiring or Bias Hiring? (2026-01-06)
Xu, Weishan; Wu, Chong; Wang, Yanjun; Merino, Hernán
Algorithmic hiring tools based on large language models (LLMs) are increasingly adopted, yet studies show that such systems replicate historical labor market biases. Prior research has largely focused on Western contexts, leaving limited understanding of how these issues manifest in China. This study evaluates DeepSeek, a leading Chinese LLM used in recruitment, to fill this gap. We combine linear regression with explainable machine learning techniques to quantify the influence of demographic and job-related factors on candidate scores. Results reveal systematic disparities, with applicants aged 35 and above, as well as female candidates, receiving lower predicted scores. These findings highlight entrenched inequities in China’s labor market, provide a novel perspective on international implicit bias research, and demonstrate how combined methods reveal complex bias patterns. Beyond its academic contributions, the study offers practical guidance for fairness-aware AI deployment and contributes to ongoing discussions on trustworthy AI and regulation.
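As a concrete illustration of the DeepSeek study's two-step method, the sketch below fits a linear regression of predicted candidate scores on demographic and job-related factors and then applies a model-agnostic explainability check. This is not the authors' code: the data are synthetic, the column names are hypothetical, and permutation importance stands in here for whatever explainable-ML technique the paper actually uses.

```python
# Minimal sketch (not the authors' code): regress LLM-predicted candidate
# scores on demographic and job-related factors, then probe feature influence
# with a model-agnostic explainability method. Columns and data are synthetic.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age_35_plus": rng.integers(0, 2, n),       # 1 = applicant aged 35 or above
    "female": rng.integers(0, 2, n),            # 1 = female candidate
    "years_experience": rng.integers(0, 20, n),
    "education_level": rng.integers(1, 5, n),
})
# Synthetic stand-in for the LLM's predicted score, with injected disparities
df["score"] = (60 + 2.0 * df["years_experience"] + 3.0 * df["education_level"]
               - 5.0 * df["age_35_plus"] - 4.0 * df["female"]
               + rng.normal(0, 3, n))

X, y = df.drop(columns="score"), df["score"]
model = LinearRegression().fit(X, y)

# Regression coefficients quantify each factor's marginal effect on the score
for name, coef in zip(X.columns, model.coef_):
    print(f"{name:>18}: {coef:+.2f}")

# Permutation importance as one explainable-ML check on the same question
imp = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, mean in zip(X.columns, imp.importances_mean):
    print(f"{name:>18}: importance {mean:.3f}")
```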
Algorithmic Fairness in NLP: Persona-Infused LLMs for Human-Centric Hate Speech Detection (2026-01-06)
Gajewska, Ewelina; Derbent, Arda; Chudziak, Jarosław A.; Budzynska, Katarzyna
In this paper, we investigate how personalising Large Language Models (Persona-LLMs) with annotator personas affects their sensitivity to hate speech, particularly regarding biases linked to shared or differing identities between annotators and targets. To this end, we employ Google’s Gemini and OpenAI’s GPT-4.1-mini models and two persona-prompting methods: shallow persona prompting and deeply contextualised persona development based on Retrieval-Augmented Generation (RAG), which incorporates richer persona profiles. We analyse the impact of using in-group and out-group annotator personas on the models’ detection performance and fairness across diverse social groups. This work bridges psychological insights on group identity with advanced NLP techniques, demonstrating that incorporating socio-demographic attributes into LLMs can address bias in automated hate speech detection. Our results highlight both the potential and limitations of persona-based approaches in reducing bias, offering valuable insights for developing more equitable hate speech detection systems.
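The two persona-prompting strategies named in the abstract above can be sketched as follows. This is an illustration, not the authors' pipeline: the Persona fields, the prompt wording, and the llm_classify stub (standing in for a call to Gemini or GPT-4.1-mini) are all assumptions.

```python
# Minimal sketch of the two persona-prompting strategies: "shallow" prompting
# injects only basic annotator demographics, while the RAG-style variant
# prepends retrieved background passages to build a richer persona profile.
from dataclasses import dataclass

@dataclass
class Persona:
    age: int
    gender: str
    group: str  # e.g., in-group or out-group relative to the post's target

def shallow_prompt(persona: Persona, post: str) -> str:
    # Shallow persona prompting: demographic attributes only
    return (f"You are a {persona.age}-year-old {persona.gender} annotator "
            f"({persona.group} relative to the post's target). "
            f"Label the post as HATE or NOT_HATE.\n\nPost: {post}")

def rag_prompt(persona: Persona, post: str, retrieved_docs: list[str]) -> str:
    # Deeply contextualised persona: retrieved passages enrich the profile
    context = "\n".join(f"- {d}" for d in retrieved_docs)
    return (f"Background on annotators like you:\n{context}\n\n"
            + shallow_prompt(persona, post))

def llm_classify(prompt: str) -> str:
    # Placeholder for a call to Gemini or GPT-4.1-mini; not implemented here
    raise NotImplementedError

post = "example post to annotate"
p = Persona(age=34, gender="female", group="in-group")
docs = ["Annotators from this community report frequent exposure to slurs."]
print(shallow_prompt(p, post))
print(rag_prompt(p, post, docs))
```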
Evidence for the Emergence of Compassionate Decision Making in the Presence of AI Advice (2026-01-06)
Rubin, Eran; Benbasat, Izak
The effect of Artificial Intelligence (AI) advice on decision-making processes is gaining increasing attention. In this research, drawing on the theoretical lenses of group identity theory and integrated threat theory, we posit that the societal well-being fears associated with increased AI use can result in enhanced compassion towards others. In an online experiment, we analyze whether the provision of AI advice in a recommendation system, as compared to advice from a human expert, can indeed produce a side effect of enhanced compassion. Our initial results support the hypotheses. The results of this research shed light on decision making with AI and have wide implications for working with AI and for the design of AI-based decision support systems.

Toward an AI Maturity Model in Healthcare: Identifying Core Dimensions and Critical Success Factors (2026-01-06)
Krey, Mike; Filipovic, Karlo; Said, Rosch; Saban, Luka; Uzdilli, Mustafa
Artificial Intelligence is increasingly recognized as a critical enabler for transforming hospital operations and improving healthcare delivery. However, the absence of healthcare-specific maturity models limits the systematic adoption of AI in clinical settings. This study addresses this gap by conducting a structured literature review of existing AI maturity models and critical success factors across domains. The analysis identifies six core dimensions: technology, data, strategy, people, organization, and regulations. These findings highlight the multifaceted nature of AI integration and underscore the need for a tailored approach in complex healthcare environments. By providing a conceptual foundation, this work advances the development of future AI maturity models to support healthcare leaders in assessing AI readiness, ensuring strategic alignment, and facilitating structured AI integration within hospital settings. Further empirical validation is needed to refine the framework for practical application.

Enhancing Fairness in Image Classification: Modified Loss Functions for Mitigating Race and Sex Bias (2026-01-06)
Trotter, Christina; Chen, Yixin; Walter, Charles
Artificial intelligence and machine learning have become ubiquitous in everyday life. With this ubiquity comes a need to ensure minimal algorithmic bias in these systems. Training data distributions can cause these systems to be unfair, negatively impacting users. One solution is to ensure fair, unbiased training sets; unfortunately, unbiased training data is not enough to fully mitigate the issue. In this work, we utilize modified loss functions to mitigate race and sex bias that can appear when training machine learning models on biased data. We used face images from the FairFace dataset for binary and categorical classification tasks. Although FairFace has better diversity than other available face image datasets, it remains biased due to uneven distributions of race-sex groups. We find that the modified loss functions work moderately well at mitigating data bias. In some cases, combining multiple loss functions yields improved results compared to using one alone (an illustrative sketch of such a combined loss appears after this list).

Introduction to the Minitrack on AI and Digital Discrimination (2026-01-06)
Moussawi, Sara; Kuruzovich, Jason; Modaresnezhad, Minoo
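As noted under "Enhancing Fairness in Image Classification" above, here is an illustrative PyTorch sketch of the kind of modified loss that work studies. The abstract does not give the authors' formulations, so the two terms below (inverse group-frequency reweighting and a per-group loss-gap penalty) and their weighted combination are assumptions chosen for illustration.

```python
# Illustrative sketch, not the paper's exact losses: cross-entropy reweighted
# by inverse race-sex group frequency, combined with a penalty on the gap
# between per-group mean losses. Group ids and weights here are toy values.
import torch
import torch.nn.functional as F

def group_weighted_ce(logits, labels, group_ids, group_counts):
    # Upweight examples from under-represented race-sex groups
    weights = 1.0 / group_counts[group_ids].float()
    weights = weights / weights.mean()  # normalize weights around 1.0
    per_example = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_example).mean()

def group_gap_penalty(logits, labels, group_ids, num_groups):
    # Penalize the spread between the worst and best per-group mean loss
    per_example = F.cross_entropy(logits, labels, reduction="none")
    means = [per_example[group_ids == g].mean()
             for g in range(num_groups) if (group_ids == g).any()]
    means = torch.stack(means)
    return means.max() - means.min()

# Toy usage: 8 examples, 2 classes, 4 race-sex groups
logits = torch.randn(8, 2, requires_grad=True)
labels = torch.randint(0, 2, (8,))
groups = torch.randint(0, 4, (8,))
counts = torch.bincount(groups, minlength=4).clamp(min=1)

# Summing the two terms mirrors the finding that combining multiple modified
# losses can outperform using one alone
loss = group_weighted_ce(logits, labels, groups, counts) + \
       0.5 * group_gap_penalty(logits, labels, groups, 4)
loss.backward()
print(float(loss))
```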
