Towards a Quantitative Evaluation Framework for Trustworthy AI in Facial Analysis
Date
2024-01-03
Starting Page
7821
Abstract
As machine learning (ML) models are increasingly used in real-life applications, ensuring their trustworthiness has become a growing concern. Previous research has extensively examined individual perspectives on trustworthiness, such as fairness, robustness, privacy, and explainability. Investigating their interrelations is a natural next step toward an improved understanding of the trustworthiness of ML models. By conducting experiments within the context of facial analysis, we explore the feasibility of quantifying multiple aspects of trustworthiness within a unified evaluation framework. Our results indicate the viability of such a framework, achieved through the aggregation of diverse metrics into holistic scores. This framework can serve as a practical tool to assess ML models in terms of multiple aspects of trustworthiness, specifically enabling the quantification of their interactions and the impact of training data. Finally, we discuss potential solutions to key technical challenges in developing the framework and opportunities for transferring it to other use cases.
Keywords
Trustworthy Artificial Intelligence and Machine Learning, evaluation, facial analysis
Extent
10 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International