Contextualized News in Corporate Disclosures: A Neural Language Approach

Date
2022
Authors
Siano, Federico
Abstract
I quantify and explain value-relevant news in textual disclosures using word context. I improve upon current methods by applying a new textual analysis approach—a BERT-based neural language model—to characterize disclosures as sequentially connected and interacting elements (rather than stand-alone words). I denote this enhanced measurement as contextualized, and I apply it to predicting the magnitude and direction of disclosure news. The contextualized text in earnings announcements (1) explains three times more variation in short-window stock returns than text measured using traditional narrative attributes or recent machine learning techniques, and (2) offers large incremental explanatory power relative to reported earnings modeled using traditional or machine learning methods. Contextualized disclosures also strongly predict future earnings, with most news arising from (a) word order (i.e., context), (b) text describing numbers, and (c) text at the beginning of disclosures. This study highlights the importance of contextualized disclosures for researchers, regulators, and practitioners.
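The abstract's central claim is that "contextualized" measurement treats disclosure words as sequentially connected and interacting elements, so word order itself carries news. The toy numpy sketch below is purely illustrative (it is not the paper's BERT model; the vocabulary, embeddings, and attention weights are all made-up assumptions): it shows that a bag-of-words average is blind to word order, while a self-attention layer with positional encodings, the core mechanism behind BERT-style models, produces a different vector for the same word when its context changes.

```python
import numpy as np

# Illustrative toy model (NOT the paper's BERT): shows why word order
# changes a "contextualized" representation but not a bag-of-words one.
rng = np.random.default_rng(0)

vocab = {"earnings": 0, "rose": 1, "sharply": 2}   # hypothetical vocabulary
d = 8
E = rng.normal(size=(len(vocab), d))               # static word-embedding table
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # random attention weights

def positional(n):
    """Sinusoidal positional encodings so token order enters the model."""
    pos = np.arange(n)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / (10000.0 ** (2 * (i // 2) / d))
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def self_attention(X):
    """Single-head self-attention: each position mixes in every other."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                # row-wise softmax
    return w @ V

def contextualize(tokens):
    ids = [vocab[t] for t in tokens]
    return self_attention(E[ids] + positional(len(ids)))

sent_a = ["earnings", "rose", "sharply"]
sent_b = ["sharply", "rose", "earnings"]

# Bag-of-words mean is identical for both orderings...
bow_a = E[[vocab[t] for t in sent_a]].mean(axis=0)
bow_b = E[[vocab[t] for t in sent_b]].mean(axis=0)

# ...but the contextualized vector for "earnings" is not.
ctx_a = contextualize(sent_a)[0]   # "earnings" at position 0
ctx_b = contextualize(sent_b)[2]   # "earnings" at position 2
```

Under this sketch, `bow_a` and `bow_b` are exactly equal, while `ctx_a` and `ctx_b` differ, mirroring the abstract's finding that word order (context) is a major source of measured news.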
Keywords
disclosure, earnings announcements, textual analysis, neural language models, machine learning