Contextualized News in Corporate Disclosures: A Neural Language Approach

dc.contributor.author Siano, Federico
dc.date.accessioned 2022-10-20T19:39:24Z
dc.date.available 2022-10-20T19:39:24Z
dc.date.issued 2022
dc.description.abstract I quantify and explain value-relevant news in textual disclosures using word context. I improve upon current methods by applying a new textual analysis approach—a BERT-based neural language model—to characterize disclosures as sequentially connected and interacting elements (rather than stand-alone words). I denote this enhanced measurement as contextualized, and I apply it to predicting the magnitude and direction of disclosure news. The contextualized text in earnings announcements (1) explains three times more variation in short-window stock returns than text measured using traditional narrative attributes or recent machine learning techniques, and (2) offers large incremental explanatory power relative to reported earnings modeled using traditional or machine learning methods. Contextualized disclosures also strongly predict future earnings, with most news arising from (a) word order (i.e., context), (b) text describing numbers, and (c) text at the beginning of disclosures. This study highlights the importance of contextualized disclosures for researchers, regulators, and practitioners.
dc.identifier.uri https://hdl.handle.net/10125/103979
dc.subject disclosure
dc.subject earnings announcements
dc.subject textual analysis
dc.subject neural language models
dc.subject machine learning
dc.title Contextualized News in Corporate Disclosures: A Neural Language Approach
Files
Original bundle
Name: HARC-2023_paper_50.pdf
Size: 1.44 MB
Format: Adobe Portable Document Format