Narrating Causal Graphs with Large Language Models

Date
2024-01-03
Authors
Giabbanelli, Philippe
Phatak, Atharva
Mago, Vijay
Agrawal, Ameeta
Starting Page
7530
Abstract
The use of generative AI to create text descriptions from graphs has mostly focused on knowledge graphs, which connect concepts using facts. In this work we explore the capability of large pretrained language models to generate text from causal graphs, where salient concepts are represented as nodes and causality is represented via directed, typed edges. The causal reasoning encoded in these graphs can support applications as diverse as healthcare and marketing. Using two publicly available causal graph datasets, we empirically investigate the performance of four GPT-3 models under various settings. Our results indicate that causal text descriptions improve with training data but, compared to fact-based graph descriptions, are harder to generate in zero-shot settings. The results further suggest that users of generative AI can deploy future applications faster, since training a model with only a few examples yields performance similar to fine-tuning on a large curated dataset.
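
The paper reports empirical results rather than code, but the task the abstract describes (nodes as concepts, directed typed edges as causal links, narrated by a few-shot GPT-3 model) can be illustrated with a short sketch. The Python below is a hypothetical example of how such a graph might be linearized into a few-shot prompt; the edge types, serialization format, and prompt wording are assumptions for illustration, not the authors' actual setup.

# Illustrative sketch only: linearizing a causal graph into a few-shot prompt.
# The edge types and prompt format are assumptions; the paper does not
# specify its exact serialization.

# A causal graph as a list of directed, typed edges: (cause, type, effect).
causal_graph = [
    ("physical activity", "increases", "energy expenditure"),
    ("energy expenditure", "decreases", "body weight"),
    ("body weight", "increases", "health risks"),
]

def linearize(edges):
    """Turn typed causal edges into a flat textual form for prompting."""
    return " | ".join(f"{cause} --{etype}--> {effect}"
                      for cause, etype, effect in edges)

# A hypothetical one-shot prompt: one (graph, narration) pair, followed by
# the graph to be narrated.
example_graph = [("smoking", "increases", "lung cancer risk")]
example_text = "Smoking raises the risk of lung cancer."

prompt = (
    "Describe the causal relationships in plain English.\n\n"
    f"Graph: {linearize(example_graph)}\n"
    f"Text: {example_text}\n\n"
    f"Graph: {linearize(causal_graph)}\n"
    "Text:"
)

print(prompt)  # This string would be sent to a GPT-3-style completion endpoint.

Serializing each edge as "cause --type--> effect" keeps the typed causality explicit in the prompt, so the in-context example can show the model how a typed edge maps to a natural-language causal statement.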
Keywords
Generative and Conversational AI in Information Systems Research and Education: Opportunities and Challenges, causal map, generative ai, gpt, pre-trained large-scale language model
Extent
10 pages
Related To
Proceedings of the 57th Hawaii International Conference on System Sciences
Rights
Attribution-NonCommercial-NoDerivatives 4.0 International