Using LLMs to Adjudicate Static-Analysis Alerts
Date
2025-01-07
Starting Page
7554
Abstract
Software analysts routinely use static analysis to evaluate source code for potential vulnerabilities, but the volume of findings is often too large to review in full, leaving users to accept unknown risk. Large Language Models (LLMs) are a new technology with promising initial results for automating alert adjudication and generating rationales. This has the potential to enable more secure code, support mission effectiveness, and reduce support costs. This paper discusses techniques for using LLMs to process static-analysis output, initial tooling we developed, and our experimental results from tests using GPT-4 and Llama 3.
Keywords
Use of LLMs for Program Analysis and Generation, cybersecurity, llm, software
Extent
10 pages
Related To
Proceedings of the 58th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International