Gender and Technology

Permanent URI for this collection: https://hdl.handle.net/10125/112492

Recent Submissions

  • Influenceable HR Personnel: An Empirical Investigation of Social Influence when Recruiting Female IT Professionals
    (2026-01-06) Oehlhorn, Caroline; Irmer, Josephine; Meier, Marco; Maier, Christian
    Human resource (HR) personnel operate within a complex network of organizational stakeholders whose diverse perspectives can influence recruitment strategies, including those aimed at attracting female IT professionals. This study contributes to the existing body of research by identifying the key stakeholders who influence HR personnel’s decision-making. A fuzzy-set qualitative comparative analysis reveals that different configurations of influence shape HR personnel’s intention to recruit female IT professionals. Based on these findings, the study offers both research contributions and practical implications.
  • Time to Close the Gender Gap? Field Experimental Evidence on Generative AI and Gender Gaps in IT Job Applications
    (2026-01-06) Liang, Zhewen; Li, Zengxi; Lu, Angela; Liu, Ben
    The rapid emergence of generative artificial intelligence (GenAI) technologies has sparked debates about their potential to democratize technical work and reduce barriers to entry in IT careers. This study investigates whether possessing GenAI skills influences the gender gap in IT job applications through a field experiment on Upwork. Contrary to expectations that GenAI might level the playing field, we find preliminary evidence that GenAI actually widens the gender gap. While both men and women with GenAI skills show increased rates of applying for IT jobs, the effect is stronger for men, particularly among those with high GenAI proficiency. Our findings challenge optimistic narratives about GenAI’s democratizing potential and suggest that technological advances alone cannot address gender inequalities. These results alert policymakers to GenAI’s unintended consequence of widening gender gaps and inform the development of targeted interventions to mitigate inequalities.
  • Mind the Gap: Gender Differences in Generative AI Adoption at Work
    (2026-01-06) Zahs, Dominik; Schmodde, Lynn; Wehner, Marius
    Despite the growing relevance of generative AI in the workplace, a significant gender gap in its adoption persists. This study investigates why women are less likely than men to use generative AI tools at work and identifies predictors that explain this difference. Combining a cross-sectional survey (n = 200) with a one-week diary study (n = 76, 266 daily observations), we examine both the intention to use and actual daily use of generative AI. Across both studies, women reported lower usage intentions and spent significantly less time using generative AI. Drawing on the unified theory of acceptance and use of technology (UTAUT), we find that performance expectancy is the strongest predictor, particularly among women, followed by social influence. In contrast, effort expectancy and facilitating conditions appear less relevant. Additional factors such as AI literacy and job demands further explain AI use. Our results highlight the need for gender-sensitive interventions to reduce the gender gap in generative AI use.
  • Beyond Calibration: Rethinking Algorithmic Fairness through an Intersectional, Justice-Aware Lens
    (2026-01-06) Farayola, Michael; Tal, Irina; Saber, Takfarinas; Bendechache, Malika; Connolly, Regina
    As predictive algorithms increasingly guide high-stakes decisions in fields like criminal justice, healthcare, and finance, the concept of "fairness" often centers on model calibration: the alignment between predicted probabilities and observed outcomes. Calibration is typically treated as a reliable marker of objectivity and fairness. However, this paper argues that in contexts shaped by structural inequalities, including those based on gender, race, and class, calibration fails to account for deeper ethical and social implications. Drawing on research from algorithmic fairness, feminist technology studies, and intersectionality, we challenge the assumption that models calibrated to biased outcomes can be considered fair. This critique is especially urgent for individuals at the intersection of multiple marginalized identities, whose experiences with technology are often shaped by compounded, gendered harms that traditional fairness metrics fail to address. We propose a justice-aware framework for algorithmic fairness that acknowledges the historical and social contexts embedded in data and integrates technical interventions across the AI development lifecycle: before, during, and after model deployment. Rather than treating calibration as an ultimate standard for fairness, we argue it should be viewed as a single tool within a broader, intersectional approach. Our paper makes three key contributions: (1) a conceptual critique of calibration as a fairness metric, (2) a call for intersectional, multi-attribute fairness frameworks that account for gender and other identity factors, and (3) an argument for embedding fairness-enhancing tools within a broader socio-technical and justice-oriented framework that goes beyond mere technical performance to address systemic inequality.
    In sum, the paper offers a justice-aware framework that integrates technical fairness interventions with gender-conscious design, participatory governance, and socio-technical accountability, bridging the divide between algorithmic fairness and the lived realities of marginalized groups.
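    The calibration-versus-fairness tension this abstract describes can be shown with a minimal sketch (hypothetical numbers, not drawn from the paper): two groups whose risk scores are equally well calibrated on average can still face very different false-positive burdens under a single decision threshold.

    ```python
    # Minimal sketch (hypothetical data): average calibration can mask
    # unequal false-positive rates across groups.

    def calibration_gap(scores, outcomes):
        """Absolute gap between mean predicted score and observed positive rate."""
        return abs(sum(scores) / len(scores) - sum(outcomes) / len(outcomes))

    def false_positive_rate(preds, outcomes):
        """Share of true negatives that were predicted positive."""
        neg_preds = [p for p, y in zip(preds, outcomes) if y == 0]
        return sum(neg_preds) / len(neg_preds) if neg_preds else 0.0

    # Hypothetical risk scores and outcomes for two groups, A and B.
    scores_a, outcomes_a = [0.0, 0.0, 1.0, 1.0], [0, 0, 1, 1]  # confident scores
    scores_b, outcomes_b = [0.5, 0.5, 0.5, 0.5], [0, 0, 1, 1]  # uncertain scores

    # Both groups are perfectly calibrated on average (gap = 0.0 each)...
    gap_a = calibration_gap(scores_a, outcomes_a)  # 0.0
    gap_b = calibration_gap(scores_b, outcomes_b)  # 0.0

    # ...yet a single decision threshold of 0.5 burdens them very differently:
    preds_a = [int(s >= 0.5) for s in scores_a]
    preds_b = [int(s >= 0.5) for s in scores_b]
    fpr_a = false_positive_rate(preds_a, outcomes_a)  # 0.0: no innocent A flagged
    fpr_b = false_positive_rate(preds_b, outcomes_b)  # 1.0: every innocent B flagged
    ```

    The sketch illustrates the abstract's point that calibration is one lens among several: error-rate disparities, and the compounded harms they imply for intersecting marginalized groups, are invisible to it.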
  • Introduction to the Minitrack on Gender and Technology
    (2026-01-06) Connolly, Regina; Jafarijoo, Mina; Mcparland, Cliona