Generalized Loss-Function-Based Attacks for Object Detection Models
Date
2025-01-07
Starting Page
7007
Abstract
As artificial intelligence (AI) systems become increasingly integrated into daily life, the robustness of these systems, particularly object detection models, has gained substantial attention. Object detection is crucial in applications ranging from autonomous driving to surveillance. However, these models are vulnerable to adversarial attacks, which can deceive them into making incorrect predictions. This paper introduces a novel approach to generating inference-time adversarial attacks on object detection models using generalized loss functions. We present the Generalized Targeted Object Attacks (GTOA) and the Generalized Heuristic Object Suppression Technique (GHOST) algorithms, which perform targeted and vanishing attacks, respectively. Our method is highly adaptable, allowing attacks on any object detection model with minimal model adjustments. We demonstrate that our generalized loss function-based attacks are effective across various object detection models, highlighting the need for enhanced robustness in AI systems.
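The GTOA and GHOST algorithms themselves are not detailed in this abstract, but the general idea of a loss-based vanishing attack it describes can be sketched as projected gradient descent that suppresses a detector's confidence scores. Everything below is an illustrative assumption, not the paper's method: the function name `vanishing_attack`, the `score_fn`/`grad_fn` interface, the toy one-filter "detector", and all hyperparameter values are invented for this sketch.

```python
import numpy as np

def vanishing_attack(image, score_fn, grad_fn, eps=0.05, alpha=0.005, steps=40):
    """PGD-style vanishing attack (illustrative sketch, not GHOST itself):
    push the detector's total confidence score down while staying inside an
    L-infinity ball of radius eps around the original image."""
    x = image.copy()
    for _ in range(steps):
        g = grad_fn(x)                             # gradient of score w.r.t. pixels
        x = x - alpha * np.sign(g)                 # descend to suppress detections
        x = np.clip(x, image - eps, image + eps)   # project back into the eps-ball
        x = np.clip(x, 0.0, 1.0)                   # keep pixels in a valid range
    return x

# Toy stand-in "detector": confidence is a sigmoid of one linear filter response.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
score = lambda x: sigmoid(np.sum(w * x))
grad = lambda x: score(x) * (1.0 - score(x)) * w  # analytic sigmoid gradient

img = rng.uniform(0.4, 0.6, size=(8, 8))
adv = vanishing_attack(img, score, grad)
```

The same loop becomes a targeted attack (in the spirit the abstract attributes to GTOA) by descending on the loss of a chosen target label instead of the total detection score; only the loss function changes, which is the adaptability the abstract emphasizes.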
Keywords
Artificial Intelligence Security: Ensuring Safety, Trustworthiness, and Responsibility in AI Systems, adversarial attacks, inference-time attacks, machine learning security
Extent
10 pages
Related To
Proceedings of the 58th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International