Contextualizing the Accuracy-Fairness Tradeoff in Algorithmic Prediction Outcomes
Date
2024-01-03
Starting Page
6878
Abstract
Organizations are increasingly using artificial intelligence (AI) to augment and automate business processes. Meanwhile, ethical concerns have been raised about the tendency of algorithms to replicate existing human biases. To this end, numerous technical solutions have been proposed to address algorithmic discrimination. However, some studies find that algorithms prioritizing fairness can be less accurate in their prediction outcomes, prompting debates about the nature of the trade-off between accuracy and fairness in deploying fair algorithms. In this study, we explicate the contexts surrounding the so-called accuracy-fairness trade-off and make the empirical case for why, when, and how the trade-off manifests in AI systems. Using Python-generated synthetic data, which allow data features to be manipulated flexibly, we propose a classification framework to aid understanding of the algorithmic accuracy-fairness trade-off. Beyond its theoretical contribution, our study has practical implications for designing and implementing efficient and equitable AI systems.
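The trade-off the abstract describes can be illustrated with a minimal synthetic-data sketch. This is not the paper's actual experiment: the group structure, base rates, score model, and thresholds below are all illustrative assumptions. Two groups are given different base rates of the positive label; a single global threshold maximizes accuracy but yields unequal positive-prediction rates, while per-group thresholds enforcing demographic parity sacrifice some accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative synthetic data: a binary protected attribute whose two
# groups have different base rates of the positive label (0.3 vs 0.7),
# plus an unbiased noisy score of the true label.
group = rng.integers(0, 2, n)
base_rate = np.where(group == 0, 0.3, 0.7)
label = (rng.random(n) < base_rate).astype(int)
score = label + rng.normal(0.0, 1.0, n)

def evaluate(pred):
    accuracy = (pred == label).mean()
    # Demographic parity difference: gap in positive-prediction rates.
    dpd = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return accuracy, dpd

# Accuracy-first classifier: one global threshold on the score.
acc_a, dpd_a = evaluate((score > 0.5).astype(int))

# Fairness-first classifier: per-group thresholds that equalize
# selection rates at 50%, enforcing demographic parity by construction.
pred_f = np.zeros(n, dtype=int)
for g in (0, 1):
    mask = group == g
    pred_f[mask] = (score[mask] > np.quantile(score[mask], 0.5)).astype(int)
acc_f, dpd_f = evaluate(pred_f)

print(f"accuracy-first:  accuracy={acc_a:.3f}, parity gap={dpd_a:.3f}")
print(f"fairness-first: accuracy={acc_f:.3f}, parity gap={dpd_f:.3f}")
```

Because the groups' base rates genuinely differ here, closing the parity gap forces extra misclassifications in both groups; when the groups instead share a base rate and only the scores are biased, per-group thresholds can close the gap with little or no accuracy cost, which is why the trade-off is context-dependent.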
Keywords
Artificial Intelligence and Digital Discrimination, accuracy-fairness trade-off, algorithmic bias, label error, machine learning, synthetic data
Extent
10 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International