Using LLMs to Adjudicate Static-Analysis Alerts

dc.contributor.author: Flynn, Lori
dc.contributor.author: Klieber, Will
dc.date.accessioned: 2024-12-26T21:11:33Z
dc.date.available: 2024-12-26T21:11:33Z
dc.date.issued: 2025-01-07
dc.description.abstract: Software analysts use static analysis as a standard method to evaluate source code for potential vulnerabilities, but the volume of findings is often too large to review in full, forcing users to accept unknown risk. Large language models (LLMs) are a new technology with promising initial results for automating alert adjudication and generating rationales. This has the potential to enable more secure code, support mission effectiveness, and reduce support costs. This paper discusses techniques for using LLMs to handle static-analysis output, the initial tooling we developed, and our experimental results from tests using GPT-4 and Llama 3. (A sketch of this adjudication workflow follows the metadata fields below.)
dc.format.extent: 10
dc.identifier.doi: 10.24251/HICSS.2025.903
dc.identifier.isbn: 978-0-9981331-8-8
dc.identifier.other: 5358ac67-1c4d-447f-8364-dfc647a8ff15
dc.identifier.uri: https://hdl.handle.net/10125/109755
dc.relation.ispartof: Proceedings of the 58th Hawaii International Conference on System Sciences
dc.rights: Attribution-NonCommercial-NoDerivatives 4.0 International
dc.rights.uri: https://creativecommons.org/licenses/by-nc-nd/4.0/
dc.subject: Use of LLMs for Program Analysis and Generation
dc.subject: cybersecurity, llm, software
dc.title: Using LLMs to Adjudicate Static-Analysis Alerts
dc.type: Conference Paper
dc.type.dcmi: Text
prism.startingpage: 7554
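
The adjudication workflow described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual tooling: the prompt wording, the adjudicate_alert() helper, and the example alert are illustrative assumptions. It assumes the OpenAI Python client (openai >= 1.0) and uses GPT-4, one of the two models the abstract says were tested.

```python
# Hedged sketch: sending one static-analysis alert, with code context, to an
# LLM for a true/false-positive verdict plus rationale. Illustrative only;
# not the authors' implementation. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Assumed prompt format; the paper's actual prompts may differ.
PROMPT_TEMPLATE = """You are adjudicating a static-analysis alert.

Rule: {rule}
Location: {path}, line {line}
Code context:
{snippet}

Answer with exactly one of TRUE_POSITIVE or FALSE_POSITIVE,
followed by a one-paragraph rationale."""


def adjudicate_alert(rule: str, path: str, line: int, snippet: str) -> str:
    """Ask the model whether the alert is a true or false positive."""
    response = client.chat.completions.create(
        model="gpt-4",  # GPT-4 and Llama 3 are the models named in the abstract
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                rule=rule, path=path, line=line, snippet=snippet),
        }],
        temperature=0,  # keep adjudications as repeatable as possible
    )
    return response.choices[0].message.content


# Hypothetical alert, for illustration only.
verdict = adjudicate_alert(
    rule="CWE-476: NULL Pointer Dereference",
    path="src/parser.c",
    line=142,
    snippet='char *p = lookup(key);\nprintf("%s\\n", p);',
)
print(verdict)
```

In practice such a loop would iterate over every alert in a static-analysis tool's output, which is the volume problem the abstract describes.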

Files

Name: 0736.pdf
Size: 505.76 KB
Format: Adobe Portable Document Format