Decision Making Bias and Misinformation in Online Social Networks
Permanent URI for this collection: https://hdl.handle.net/10125/112448
Recent Submissions
Item: The Role of Sentiment Shift: Measuring and Explaining Performance of Fake News Detection After LLM Laundering (2026-01-06) Das, Rupak Kumar; Dodge, Jonathan
With their advanced capabilities, Large Language Models (LLMs) can generate highly convincing and contextually relevant fake news, contributing to the spread of misinformation. Although there is much research on detecting fake news in human-written text, detecting LLM-generated fake news remains under-explored. This paper augments existing datasets to measure the efficacy of detectors in identifying LLM-paraphrased fake news. By investigating which models excel at which tasks (detection, paraphrasing to evade detection, and paraphrasing for semantic similarity), we found that detectors struggled more to detect LLM-paraphrased fake news than human-written text. Further, upon inspecting LIME explanations, we observed a possible sentiment shift; digging deeper revealed a worrisome trend for paraphrase quality measurement: many samples exhibit a sentiment shift despite a high BERTScore.

Item: Credibility Staining and the Boundaries of Expertise: The Reputational Cost of Commenting on Polarized Topics (2026-01-06) Andrews, Emily; Walter, Nathan; Pusateri, Kimberly
This study introduces the concept of credibility staining: the reputational harm experts may experience when their commentary on polarizing issues undermines perceptions of their trustworthiness and expertise, even within their own domain. Such risks are increasingly relevant in today's digital media environment, where experts often offer opinions on contentious topics outside their area of expertise. We tested this phenomenon using a 2x2 factorial experiment in which participants viewed a fictitious marine biologist's tweets on both a domain-relevant issue (fishing zone expansion) and a domain-irrelevant, polarized issue (gun control). When the expert's stance on gun control conflicted with participants' beliefs, perceived credibility declined, even in the expert's area of specialization. These effects were strongest among highly partisan individuals. The findings highlight how ideological alignment shapes credibility judgments and underscore the reputational risks of epistemic trespassing in digital discourse. We discuss implications for science communication and public trust in expertise.

Item: Introduction to the Minitrack on Decision Making Bias and Misinformation in Online Social Networks (2026-01-06) Tilvawala, Khushbu; Hassna, Ghazwan; Sadovykh, Valeria; Peko, Gabrielle
