Accelerating Verification and Software Standards Testing (AVASST) with Large Language Models (LLMs)

Starting Page

7544

Abstract

The recent explosion in large language model (LLM) technology has highlighted the challenges of using public generative artificial intelligence (AI) tools in classified environments, especially for software analysis. Today, software analysis relies on static analysis (SA) tools and manual code review, which often provide limited technical depth and are time-consuming in practice. We show that LLMs can be used in unclassified environments to rapidly develop tools that accelerate software analysis in classified environments. With LLM assistance, our work has produced several promising approaches, and preliminary experiments show significant time savings (~40%) and improved accuracy (~10%) on certain software analysis tasks.

Extent

10

Related To

Proceedings of the 58th Hawaii International Conference on System Sciences

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International
