Construct Relation Extraction from Scientific Papers: Is It Automatable Yet?
Starting Page: 4672
Abstract
The process of identifying relevant prior research articles is crucial for theoretical advancements, but often requires significant human effort. This study examines the feasibility of using large language models (LLMs) to support this task by extracting tested hypotheses, which consist of related constructs, moderators or mediators, path coefficients, and p-values, from empirical studies using structural equation modeling (SEM). We combine state-of-the-art LLMs with a variety of post-processing measures to improve the relation extraction quality. An extensive evaluation yields recall scores of up to 79.2% in construct entity extraction, 58.4% in construct-mediator/moderator-construct extraction, and 39.3% in extracting the full tested hypotheses. We provide a manually annotated dataset of 72 SEM articles and 749 construct relations to facilitate future research. Our findings offer critical insights and suggest promising directions for advancing the field of automated construct relation extraction from scholarly documents.
Extent: 10
Type: Conference Paper
Related To: Proceedings of the 58th Hawaii International Conference on System Sciences
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
