The Role of Sentiment Shift: Measuring and Explaining Performance of Fake News Detection After LLM Laundering
Starting Page: 2838
Abstract
With their advanced capabilities, Large Language Models (LLMs) can generate highly convincing, contextually relevant fake news, contributing to the spread of misinformation. While fake news detection for human-written text has been studied extensively, detecting LLM-generated fake news remains under-explored. This paper augments existing datasets to measure how effectively detectors identify LLM-paraphrased fake news. By investigating which models excel at which tasks (detection, paraphrasing to evade detection, and paraphrasing for semantic similarity), we found that detectors struggled more to detect LLM-paraphrased fake news than human-written text. Further, inspecting LIME explanations suggested a possible sentiment shift, and deeper analysis revealed a worrisome trend for paraphrase-quality measurement: many samples exhibit a sentiment shift despite a high BERTScore.
Extent: 10 pages
Related To: Proceedings of the 59th Hawaii International Conference on System Sciences
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
