Detection of Contradictions and Inconsistencies in Regulatory Documents using Prompt-Engineering

Starting Page

1766

Abstract

Companies create regulatory documents, such as policies, standards, and guidelines, to define their processes and structures. Frequent updates to these documents can introduce inconsistencies and contradictions between the respective regulations, which can result in errors, delays, asset compromise, fraud, and non-compliance. Given the variety of document types and their thematic, structural, lexical, syntactic, and domain-specific differences, automated conflict detection remains challenging, especially because annotated data from practice is scarce. As an alternative to supervised approaches, this paper investigates whether a prompt-based classifier can detect contradictions and inconsistencies between regulatory texts and what level of accuracy it can achieve. An evaluation of three prompt variants and seven large language models on a real-world regulatory dataset shows that the detection accuracy of a prompt-based classifier incorporating 26 explicitly formulated rules (F1-score of 0.851) is only 1.29% lower than that of a supervised model trained on annotated data (F1-score of 0.862).
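The approach described in the abstract can be illustrated with a minimal sketch: a classifier that embeds a rule list into a zero-shot prompt for a pair of regulatory passages and maps the model's free-text answer onto a binary label. The rule texts, label vocabulary, and the overall prompt wording below are hypothetical placeholders, not the paper's actual 26 rules or prompt variants; any chat-completion API would supply the model response.

```python
# Sketch of a prompt-based contradiction classifier (assumed structure,
# not the paper's actual prompt). RULES stands in for the 26 detailed
# rules the paper's best-performing prompt variant includes.

RULES = [
    "1. Two passages contradict if one mandates what the other forbids.",
    "2. Two passages are inconsistent if they assign conflicting values "
    "(e.g., deadlines or thresholds) to the same obligation.",
    # ... the paper's classifier uses 26 such rules in total.
]

def build_prompt(passage_a: str, passage_b: str) -> str:
    """Assemble a zero-shot classification prompt from the rule list
    and the two regulatory passages to compare."""
    rules = "\n".join(RULES)
    return (
        "You are a compliance analyst. Decide whether the two regulatory "
        "passages below contradict or are inconsistent with each other.\n\n"
        f"Rules for your decision:\n{rules}\n\n"
        f"Passage A: {passage_a}\n"
        f"Passage B: {passage_b}\n\n"
        "Answer with exactly one word: CONFLICT or NO_CONFLICT."
    )

def parse_label(model_output: str) -> str:
    """Map the model's free-text answer onto a binary label,
    defaulting to NO_CONFLICT when the answer is ambiguous."""
    text = model_output.upper()
    if "NO_CONFLICT" in text:
        return "NO_CONFLICT"
    if "CONFLICT" in text:
        return "CONFLICT"
    return "NO_CONFLICT"
```

The prompt returned by `build_prompt` would be sent to each of the evaluated language models; `parse_label` then normalizes the response so that F1-scores can be computed against the annotated conflict pairs.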

Extent

10 pages

Type

Conference Paper

Related To

Proceedings of the 59th Hawaii International Conference on System Sciences

Rights

Attribution-NonCommercial-NoDerivatives 4.0 International
