Use of LLMs for Program Analysis and Generation
Recent Submissions
Item: Using LLMs to Adjudicate Static-Analysis Alerts (2025-01-07)
Flynn, Lori; Klieber, Will
Software analysts use static analysis as a standard method to evaluate source code for potential vulnerabilities, but the volume of findings is often too large to review in its entirety, causing users to accept unknown risk. Large language models (LLMs) are a new technology with promising initial results for automating alert adjudication and generating rationales. This has the potential to enable more secure code, support mission effectiveness, and reduce support costs. This paper discusses techniques for using LLMs to handle static-analysis output, the initial tooling we developed, and our experimental results from tests using GPT-4 and Llama 3. (An illustrative sketch of this adjudication workflow appears after the listing below.)

Item: Introduction to the Minitrack on Use of LLMs for Program Analysis and Generation (2025-01-07)
Schmidt, Douglas; Sherman, Mark

Item: Accelerating Verification and Software Standards Testing (AVASST) with Large Language Models (LLMs) (2025-01-07)
Zhang, Shen; Karl, Ryan; Hindka, Yash; Robert, John
The recent explosion in large language model (LLM) technology has highlighted the challenges of using public generative Artificial Intelligence (AI) tools in classified environments, especially for software analysis. Currently, software analysis falls on the shoulders of static analysis (SA) tools and manual code review, which tend to provide limited technical depth and are often time-consuming in practice. We show that LLMs can be used in unclassified environments to rapidly develop tools that accelerate software analysis in classified environments. Through LLM assistance, our work has produced several promising avenues, and preliminary experimentation has shown significant time savings (~40%) and improved accuracy (~10%) for certain software analysis tasks.
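
As a rough illustration of the alert-adjudication workflow described in the first abstract above, the sketch below sends a single static-analysis alert plus the relevant source code to an LLM and asks for a verdict with a rationale. It is not the authors' tooling: the adjudicate_alert helper, the prompt wording, the verdict labels, and the use of an OpenAI-compatible chat client are all assumptions made for illustration.

# Minimal sketch (not the papers' implementation): ask an LLM to adjudicate one
# static-analysis alert as true positive or false positive and explain why.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def adjudicate_alert(alert: dict, code_snippet: str, model: str = "gpt-4") -> str:
    """Return the model's verdict and rationale for one static-analysis alert.

    The alert fields (checker, message, file, line) are a hypothetical schema.
    """
    prompt = (
        "You are reviewing a static-analysis alert.\n"
        f"Checker: {alert['checker']}\n"
        f"Message: {alert['message']}\n"
        f"Location: {alert['file']}:{alert['line']}\n\n"
        "Relevant source code:\n"
        f"{code_snippet}\n\n"
        "Answer with 'TRUE POSITIVE' or 'FALSE POSITIVE', then give a short rationale."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep adjudication as deterministic as the API allows
    )
    return response.choices[0].message.content

# Example use with a fabricated alert record:
# verdict = adjudicate_alert(
#     {"checker": "CWE-476", "message": "possible null dereference",
#      "file": "parser.c", "line": 128},
#     code_snippet=open("parser.c").read(),
# )
# print(verdict)

In practice, a tool along these lines would loop over the alerts exported by a static analyzer and record each verdict and rationale for human review; the specific prompt design and model choice (e.g., GPT-4 versus Llama 3) are exactly the kinds of variables the first paper evaluates.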