AI and Digital Discrimination

  • Item
    Exploring Covert Bias in Large Language Models - Experimental Evidence of Racial Discrimination in Resume Creation and Selection
    (2025-01-07) Riemer, Kai; Peter, Sandra
    Fine-tuning efforts have led to progress in reducing overt, obvious gender and racial biases in the latest generation of large language models (LLMs). Here we study covert, non-obvious bias in LLM-based chat systems. We run a two-stage experiment in the hiring context consisting of resume creation and selection. We use ChatGPT-4o to create resumes for minority (ethnic) candidates and majority (baseline) candidates. After removing all identifying markers, we run pair-wise selection tests and find that resumes of majority candidates are stronger, winning contests 80% of the time. This suggests that racial markers lead to the encoding of biases in resume generation in imperceptible ways. Covert biases are difficult to spot and hard to address, but they deserve urgent attention, as the latest models are becoming increasingly capable of inferring user characteristics from conversations, potentially biasing content in unwanted and unexpected ways. We discuss implications and avenues for future research.
  • Item
    Multi-Objective Ensemble Machine Learning for Fairness
    (2025-01-07) Konak, Abdullah; Rodriguez, Alina
    With the proliferation of machine learning applications across industries such as hiring, finance, surveillance, and healthcare, concerns about the fairness and equity of artificial intelligence (AI) have intensified. Recent incidents highlighting biased AI predictions have underscored the urgent need to ensure fairness in these systems. This paper introduces Multi-Objective Ensemble Learning for Fairness (MELF), a novel approach that combines ensemble learning and multi-objective decision-making to train machine learning models that balance predictive performance against fairness metrics. MELF is adaptable across various datasets and machine learning algorithms and can be integrated with other fairness-aware training techniques. Computational experiments with decision tree and logistic regression algorithms demonstrate that MELF can enhance fairness without compromising predictive accuracy.
  • Item
    AI-fairness: The FairBridge Approach to Practically Bridge the Gap Between Socio-legal and Technical Perspectives
    (2025-01-07) Borghesi, Andrea; Ciatto, Giovanni; Matteini, Mattia; Calegari, Roberta; Sartori, Laura; Rebrean, Maria; Muller, Catelijne
    Addressing the need for AI systems free from discrimination requires a multidisciplinary approach that combines social, legal, and technical perspectives. Despite significant advancements in research and technical solutions, a gap remains between socio-legal and technical approaches. This paper proposes a meta-methodology -- namely, FairBridge -- to bridge this gap, offering a reference for defining AI fairness methodologies that integrate all three perspectives. The meta-methodology utilizes a questionnaire-based system where socio-legal and technical domain experts iteratively refine questions and responses, supported by automation.
  • Item
    Human Cognitive Bias Mitigation Approaches to Fairness within the Machine Learning Value Chain: A Review and Research Agenda
    (2025-01-07) Surles, Stephen; Noteboom, Cherie; El-Gayar, Omar
    This systematic review examines the influence of human cognitive biases on machine learning (ML) systems across the nine phases of the ML algorithmic value chain. Following the PRISMA guidelines, it synthesizes 19 studies on bias integration and management within ML, highlighting techniques to reduce bias and increase fairness. The review identifies key gaps: the unclear translation of human cognitive biases into ML biases, the absence of metrics to measure such biases, the re-introduction of biases during debiasing, and the critical need for human intervention. These findings prompt several research themes spanning human cognition and algorithmic bias. The theoretical implications are three-fold: extending bias concepts to human cognition, creating an agenda that associates cognitive biases with ML outcomes, and assessing the need for a new or extended discipline. Practically, it raises awareness of the role of human cognition in ML fairness, leading to improved methods for data handling.
  • Item
    Minimal Agents, Maximum Bias Insight
    (2025-01-07) Amin, Md Nur; Jesser, Alexander
    Language models often struggle to accurately evaluate and mitigate biases across different domains. This limitation stems from their reliance on static, context-agnostic evaluation methods that fail to capture the nuanced, context-dependent nature of biases. Our research introduces a multi-agent framework utilizing causal abductive reasoning to address these shortcomings. The approach coordinates collaborating agents for contextual coherence, stereotype detection, semantic evaluation, and causal plausibility; the agents refine their assessments through an adaptive multi-round negotiation and confidence-adjustment mechanism. Experimental results reveal that our framework significantly outperforms existing models in detecting and mitigating biases.
  • Item
    How does Semiotics Influence Social Media Engagement in Information Campaigns?
    (2025-01-07) Gurung, Mayor Inna; Agarwal, Nitin; Bhuiyan, Md. Monoarul Islam
    The rise of visually driven social media platforms like Instagram has transformed the way information and narratives are shared. This study explores the impact of social, cultural, and political (SCP) symbols in Instagram images on user engagement within Taiwan’s 2024 Election Anti-Disinformation Campaign. Specifically, it investigates the correlation between the presence of SCP symbols and user engagement, and further examines whether a higher semantic similarity between SCP symbols in user text and large language model (LLM) generated image descriptions (consistency) leads to increased likes. Additionally, the SEIZ epidemiological model is used to assess whether the presence of SCP symbols and consistency in messaging influence the spread of this content, measured by its infection rate. Our findings reveal that posts rich in SCP symbols and those with greater text-image alignment not only achieve higher engagement but also greater dissemination, as indicated by a faster infection rate. These results highlight the importance of multimodal analysis in understanding information campaigns. By integrating semiotics, LLMs, and epidemiological modeling, this study offers a robust framework for future research on social media information campaigns and provides valuable insights for strategic communication efforts aimed at countering disinformation.
  • Item
    Introduction to the Minitrack on AI and Digital Discrimination
    (2025-01-07) Moussawi, Sara; Modaresnezhad, Minoo; Deng, Xuefei; Kuruzovich, Jason
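
The 80% majority-candidate win rate reported in the resume-selection study above can be checked for statistical strength with a simple binomial tail test. The sketch below is illustrative only: it assumes a hypothetical 80 wins out of 100 pairwise contests, since the abstract reports the percentage but not the contest count, and uses only Python's standard library.

```python
from math import comb

def win_rate_p_value(wins, trials, p_null=0.5):
    """Upper binomial tail: probability of observing >= `wins`
    majority-candidate wins in `trials` pairwise contests under the
    null hypothesis of no bias (each side equally likely to win)."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(wins, trials + 1))

# Hypothetical counts matching the abstract's 80% figure.
rate = 80 / 100
p = win_rate_p_value(80, 100)
```

Even at this modest hypothetical sample size, such a lopsided win rate would be far beyond chance, which is why pairwise contest designs are a common way to surface covert bias.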
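The MELF abstract does not specify the method's internals; as a generic illustration of the accuracy-fairness trade-off it targets, the sketch below scores candidate ensemble members on accuracy and demographic-parity difference and keeps the Pareto front. All function names, predictions, and labels here are hypothetical, not taken from the paper.

```python
def demographic_parity_diff(y_pred, group):
    """|P(yhat=1 | group 0) - P(yhat=1 | group 1)| for binary predictions."""
    a = [p for p, g in zip(y_pred, group) if g == 0]
    b = [p for p, g in zip(y_pred, group) if g == 1]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def accuracy(y_pred, y_true):
    return sum(p == t for p, t in zip(y_pred, y_true)) / len(y_true)

def pareto_front(models, y_true, group):
    """Keep members not weakly dominated on (accuracy up, parity gap down)."""
    scored = [(accuracy(m, y_true), demographic_parity_diff(m, group), m)
              for m in models]
    return [s for s in scored
            if not any(o[0] >= s[0] and o[1] <= s[1] and o != s
                       for o in scored)]

# Toy predictions from three hypothetical ensemble members.
models = [[1, 1, 0, 0], [1, 0, 1, 0], [0, 0, 1, 1]]
front = pareto_front(models, y_true=[1, 1, 0, 0], group=[0, 0, 1, 1])
```

Here the first member is most accurate but least fair, the second is fairer but less accurate, and the third is dominated on both objectives, so the front keeps the first two: a multi-objective method then chooses among such non-dominated trade-offs rather than optimizing accuracy alone.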
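The SEIZ model used in the Instagram engagement study above is a standard compartmental model (Susceptible, Exposed, Infected, Skeptic) in which the growth of the infected compartment is the dissemination proxy the abstract calls the infection rate. A minimal forward-Euler sketch, with illustrative parameters not taken from the paper, looks like this:

```python
def seiz_step(state, params, dt=0.1):
    """One forward-Euler step of the SEIZ information-spread model.
    S: susceptible, E: exposed, I: infected (sharers), Z: skeptics."""
    S, E, I, Z = state
    N = S + E + I + Z
    beta, b, p, l, rho, eps = params  # contact, skeptic-contact, and transition rates
    dS = -beta * S * I / N - b * S * Z / N
    dE = (1 - p) * beta * S * I / N + (1 - l) * b * S * Z / N \
         - rho * E * I / N - eps * E
    dI = p * beta * S * I / N + rho * E * I / N + eps * E
    dZ = l * b * S * Z / N
    return (S + dS * dt, E + dE * dt, I + dI * dt, Z + dZ * dt)

# Illustrative run: 1000 users, 10 initial sharers, made-up parameters.
state = (990.0, 0.0, 10.0, 0.0)
params = (0.9, 0.1, 0.7, 0.3, 0.4, 0.2)
for _ in range(200):
    state = seiz_step(state, params)
```

The four derivative terms sum to zero, so the total population is conserved at every step; fitting `beta` and the other rates to observed post spread is what lets the study compare infection rates between symbol-rich and symbol-poor content.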