Artificial Intelligence and Digital Discrimination

Permanent URI for this collection: https://hdl.handle.net/10125/107493


Recent Submissions

  • Harnessing Large Language Models for Effective and Efficient Hate Speech Detection
    (2024-01-03) Svetasheva, Arina; Lee, Keeheon
    Hate speech presents a growing concern within online communities, posing threats to marginalized groups and undermining ethical norms. Although automatic hate speech detection (AHSD) methods have shown promise, there is still room for improvement. Recent advancements in Language Model Pretraining, exemplified by the introduction of ChatGPT-4, bring forth new possibilities for enhancing classification. In this study, we propose leveraging synthetic data generation to improve hate speech detection. Our findings demonstrate the effectiveness and efficiency of this approach in rapidly improving model performance, particularly in scenarios where obtaining sufficient amounts of hate speech data is challenging. Through our experiments, we establish that Large Language Models (LLMs) can proficiently serve as both data generators and annotators in the desired format, exhibiting performance comparable to, and even surpassing, that of humans. Moreover, we validate the applicability of LLMs in domains characterized by complex and highly abbreviated lexicons, such as the gaming industry.
  • AI Literacy in Adult Education - A Literature Review
    (2024-01-03) Wolters, Anna; Arz Von Straussenburg, Arnold F.; Riehle, Dennis M.
    The pervasiveness of Artificial Intelligence (AI) continues to increase, disrupting both individuals’ professional and social lives. In order to enhance public understanding of AI technologies, the concept of AI literacy has emerged in scientific discourse in recent years, drawing upon interdisciplinary research from various fields. While much of the existing research focuses on educational efforts for K-12 students, this paper explicitly addresses research on AI literacy for adult education. A systematic literature review was conducted to characterize existing research in this area, examining how AI literacy is understood and approached in higher education institutions, the relevant target groups, the primary research directions, and approaches for assessing individual competency levels. Based on this analysis, research gaps are identified and future research directions are proposed.
  • Contextualizing the Accuracy-Fairness Tradeoff in Algorithmic Prediction Outcomes
    (2024-01-03) Arhin, Kofi; Treku, Daniel
    Pervasively, organizations are using artificial intelligence (AI) to augment and automate business processes. Meanwhile, ethical concerns have been raised regarding the ability of algorithms to replicate existing human biases. To this end, a plethora of technical solutions have been proffered to address algorithmic discrimination. However, according to some studies, algorithms that prioritize fairness can be less accurate in their prediction outcomes, eliciting debates about the nature of the trade-off between accuracy and fairness in deploying fair algorithms. In this study, we explicate the contexts surrounding the so-called accuracy-fairness trade-off and make the empirical case for why, when, and how the trade-offs manifest in AI systems. Using Python-generated synthetic data for the flexibility of manipulating data features, we propose a classification framework to aid the understanding of the algorithmic accuracy-fairness trade-off. Besides the theoretical contribution, our study has practical implications for designing and implementing efficient and equitable AI systems.
  • Does a Fair Model Produce Fair Explanations? Relating Distributive and Procedural Fairness
    (2024-01-03) Yang, Yiwei; Howe, Bill
    We consider interactions between fairness and explanations in neural networks. Fair machine learning aims to achieve equitable allocation of resources --- distributive fairness --- by balancing accuracy and error rates across protected groups or among similar individuals. Methods shown to improve distributive fairness can induce different model behavior between majority and minority groups. This divergence in behavior can be perceived as disparate treatment, undermining acceptance of the system. In this paper, we use feature attribution methods to measure the average explanations for a protected group, and show that differences can occur even when the model is fair. We prove a surprising relationship between explanations (via feature attribution) and fairness (in a regression setting), demonstrating that under moderate assumptions, there are circumstances when controlling one can influence the other. We then study this relationship experimentally by designing a novel loss term for explanations called GroupWise Attribution Divergence (GWAD) and comparing its effects with an existing family of loss terms for (distributive) fairness. We show that controlling explanation loss tends to preserve accuracy. We also find that controlling distributive fairness loss tends to also reduce explanation loss empirically, even though it is not guaranteed to do so theoretically. We also show that there are additive improvements by including both loss terms. We conclude by considering the implications for trust and policy of reasoning about fairness as manipulations of explanations.
  • Does AI Disclosure in Discriminatory Pricing Backfire? The Moderating Role of Price Sensitivity and Explanation of Price Differences
    (2024-01-03) Peng, Xiao; Peng, Xixian; Xu, Jingjun (David)
    This research aimed to examine the moderating role of price sensitivity and explanation for price differences in the relationship between AI disclosure and consumers' revenge behavior, as well as to explore the potential mediating effect of inferred motives. A scenario-based lab experiment was conducted, involving 121 participants who engaged in an online airline ticket booking context. The findings of this study revealed that the positive impact of AI disclosure on revenge behavior was amplified among consumers with high price sensitivity, and this relationship was mediated by inferred motives. Additionally, the provision of explanations alongside AI disclosure was found to increase revenge behavior. These findings contribute to the understanding of consumers' psychological processes and revenge behavior within the context of discriminatory pricing empowered by AI. Moreover, the study offers practical implications for managers aiming to mitigate the negative consequences of discriminatory pricing.
  • What Is Ethical AI? – Design Guidelines and Principles in the Light of Different Regions, Countries, and Cultures
    (2024-01-03) Lier, Sarah; Gerlach, Jana; Breitner, Michael H.
    Artificial Intelligence (AI) has both positive and negative impacts on societies. Benefits such as human well-being, self-actualization, human agency, and social cohesion come with challenges of overuse, underuse, and misuse of AI systems, as well as social anxiety, ignorance, and erroneous data. An implementation of AI Ethics is expected to address these challenges. The literature includes general or specific guidelines for ethical AI, but country-, region-, and culture-specific categorizations are limited. We derive ethical AI key topics (KTs), design requirements (DRs), and design principles (DPs). We apply text mining and topic modeling analysis in a Design Science Research (DSR)-oriented approach. From 187 scientific publications, we deduce four KTs, 13 DRs, and 15 DPs. We identify four regions, countries, and cultures and apply cultural dimensions to prioritize the DPs. This ranking enables ethical AI realizations in different regions, countries, and cultures.
  • Public Perceptions, Critical Awareness and Community Discourse on AI Ethics: Evidence from an Online Discussion Forum
    (2024-01-03) Sengupta, Subhasree; Srivastava, Swapnil; Mcneese, Nathan
    As Artificial Intelligence (AI) becomes increasingly ingrained in society, ethical and regulatory concerns become critical. Given the vast array of philosophical considerations of AI ethics, there is a pressing need to understand and balance public opinion and expectations of how AI ethics should be defined and implemented, such that the voices of experts and non-experts alike are centered. This investigation explores the subreddit r/aiethics through a multi-methodological, multi-level approach. The analysis yields six conversational themes, sentiment trends, and emergent roles that elicit narratives associated with expanding implementation, policy, critical literacy, communal preparedness, and increased awareness of combining technical and social aspects of AI ethics. Such insights can help distill necessary considerations for the practice of AI ethics beyond scholarly traditions, and show how informal spaces (such as virtual channels) can and should act as avenues for learning, raising critical consciousness, bolstering connectivity, and enhancing narrative agency on AI ethics.
  • An Implementable Guideline for Developing Ethical AI Systems: The Evaluation of Child Abuse and Neglect Prediction
    (2024-01-03) Han, Yuzhang; Landau, Aviv; Kulkarni, Paritosh; Modaresnezhad, Minoo; Nemati, Hamid
    Artificial Intelligence (AI) is becoming a crucial part of our lives. Although AI applications, such as facial recognition, autonomous driving and ChatGPT, can benefit different industries, users are more and more concerned about the ethical issues associated with AI systems. As a result, various ethics frameworks and standards have been proposed for regulating AI systems. Nevertheless, existing ethics frameworks and standards are hardly actionable or implementable for AI developers. To fill this gap, the current study proposes an actionable ethics-aware guideline for AI developers, as well as a set of quality metrics for ethical AI systems. Further, we implement the guideline using numerous AI predictive models constructed on a national big data set that estimates children’s risk of experiencing abuse and neglect in the United States. Evaluation results indicate that the proposed guideline can effectively enhance the quality of predictive models in utility, ethicality and cost dimensions.
  • Introduction to the Minitrack on Artificial Intelligence and Digital Discrimination
    (2024-01-03) Moussawi, Sara; Deng, Xuefei; Kuruzovich, Jason
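
Svetasheva and Lee's abstract above describes using LLMs as both data generators and annotators, with multiple annotations aggregated in the desired format. A minimal sketch of such a pipeline is shown below; the templates, the keyword heuristic standing in for a real LLM call, and the majority-vote aggregation are all illustrative assumptions (with benign gaming-chat stand-ins for the actual hate-speech data), not the paper's method:

```python
import random
from collections import Counter

random.seed(0)

# Benign stand-in templates for synthetic example generation; the study
# generated real hate-speech examples with an LLM, not reproduced here.
TEMPLATES = {
    "toxic": ["{group} players are trash", "uninstall, {group} noob"],
    "benign": ["gg {group}, nice match", "{group} carried the team"],
}
GROUPS = ["support", "tank", "dps"]

def generate_synthetic(n):
    """Stand-in for LLM-based data generation: sample a label, fill a template."""
    out = []
    for _ in range(n):
        label = random.choice(list(TEMPLATES))
        text = random.choice(TEMPLATES[label]).format(group=random.choice(GROUPS))
        out.append((text, label))
    return out

def llm_annotate(text):
    """Hypothetical stub for one LLM annotation call (a keyword heuristic here),
    with 10% simulated annotator noise."""
    toxic_markers = ("trash", "noob", "uninstall")
    label = "toxic" if any(m in text for m in toxic_markers) else "benign"
    if random.random() < 0.1:
        label = "benign" if label == "toxic" else "toxic"
    return label

def annotate_majority(text, k=5):
    """Aggregate k independent annotations by majority vote."""
    votes = Counter(llm_annotate(text) for _ in range(k))
    return votes.most_common(1)[0][0]

data = generate_synthetic(200)
agree = sum(annotate_majority(t) == y for t, y in data) / len(data)
print(f"majority-vote agreement with generation labels: {agree:.2f}")
```

Even with noisy individual annotations, majority voting over a handful of calls yields near-perfect agreement with the generation labels in this toy setup, which is one way noisy LLM annotators can be made reliable.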
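
Arhin and Treku's abstract above studies the accuracy-fairness trade-off on Python-generated synthetic data. The effect can be sketched on toy data: a single global threshold is approximately accuracy-optimal, while per-group thresholds enforce demographic parity at some cost in accuracy. The data-generating process, thresholds, and metrics below are illustrative assumptions, not the paper's classification framework:

```python
import random

random.seed(0)

# Synthetic population: protected attribute a in {0, 1}; a single "score"
# feature is shifted upward for group 1, and the true label is drawn as
# Bernoulli(score), so score correlates with both group and outcome.
def make_data(n=5000):
    data = []
    for _ in range(n):
        a = random.randint(0, 1)
        score = random.gauss(0.6 if a == 1 else 0.4, 0.15)
        label = 1 if random.random() < score else 0
        data.append((a, score, label))
    return data

def evaluate(data, predict):
    """Return (accuracy, demographic-parity gap) for a predictor."""
    acc = sum(predict(a, s) == y for a, s, y in data) / len(data)
    rates = []
    for g in (0, 1):
        grp = [(a, s) for a, s, _ in data if a == g]
        rates.append(sum(predict(a, s) for a, s in grp) / len(grp))
    return acc, abs(rates[0] - rates[1])

data = make_data()

# Baseline: one global threshold (approximately Bayes-optimal here).
base = lambda a, s: 1 if s > 0.5 else 0

# "Fair" variant: per-group thresholds at each group's median score,
# which equalizes selection rates (demographic parity) by construction.
med = {}
for g in (0, 1):
    scores = sorted(s for a, s, _ in data if a == g)
    med[g] = scores[len(scores) // 2]
fair = lambda a, s: 1 if s > med[a] else 0

acc_base, dp_base = evaluate(data, base)
acc_fair, dp_fair = evaluate(data, fair)
print(f"baseline: accuracy={acc_base:.3f}, DP gap={dp_base:.3f}")
print(f"fair:     accuracy={acc_fair:.3f}, DP gap={dp_fair:.3f}")
```

Because group membership correlates with the outcome by construction, shrinking the demographic-parity gap necessarily moves predictions away from the accuracy-optimal threshold, which is the trade-off the abstract contextualizes.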
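
Yang and Howe's abstract above introduces GroupWise Attribution Divergence (GWAD), a loss term penalizing divergence between group-mean explanations. A toy analogue for a linear model, where the attribution of feature i on input x is w[i] * x[i], can be added as a penalty to an MSE objective; this is an illustrative reconstruction under that assumption, not the paper's actual definition of GWAD:

```python
import random

random.seed(1)

# Toy regression data with two features; group 1's mean for feature 0 is
# shifted, so a linear model's attributions (w[i] * x[i]) differ by group.
def make_row(group):
    x0 = random.gauss(1.0 if group else 0.0, 0.5)
    x1 = random.gauss(0.5, 0.5)
    y = 2.0 * x0 + 1.0 * x1 + random.gauss(0.0, 0.1)
    return (x0, x1), y, group

rows = [make_row(g) for g in (0, 1) for _ in range(200)]

def feat_mean(i, g):
    vals = [x[i] for x, _, grp in rows if grp == g]
    return sum(vals) / len(vals)

m_diff = [feat_mean(i, 0) - feat_mean(i, 1) for i in range(2)]

def attribution_divergence(w):
    # Group-mean attributions of a linear model differ by w[i] * m_diff[i];
    # penalize the squared difference summed over features.
    return sum((w[i] * m_diff[i]) ** 2 for i in range(2))

def train(lam, steps=1000, lr=0.01):
    """Gradient descent on MSE + lam * attribution-divergence penalty."""
    w, n = [0.0, 0.0], len(rows)
    for _ in range(steps):
        grad = [0.0, 0.0]
        for x, y, _ in rows:
            err = w[0] * x[0] + w[1] * x[1] - y
            grad[0] += 2.0 * err * x[0] / n
            grad[1] += 2.0 * err * x[1] / n
        for i in range(2):  # gradient of the divergence penalty
            grad[i] += lam * 2.0 * w[i] * m_diff[i] ** 2
        w = [w[i] - lr * grad[i] for i in range(2)]
    return w

w_plain = train(lam=0.0)
w_pen = train(lam=5.0)
print("divergence (plain):    ", attribution_divergence(w_plain))
print("divergence (penalized):", attribution_divergence(w_pen))
```

The penalized model shrinks the weight on the group-shifted feature, reducing the divergence between group-mean attributions, consistent with the abstract's point that explanation loss can be controlled directly as a training objective.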