Unravelling the Origins of Infobesity: The Impact of Frequency on Intensity (2022-01-04)
Infobesity is characterized by information overload, whereby firms and decision makers collect more information than they need or can efficiently use. While recent studies have begun to unravel the antecedents of infobesity in organizations, there is a need to examine the relationship between the frequency and the degree of experiencing infobesity originating from enterprise systems. We use a research design that integrates inductive analytics and abductive discovery to uncover the interaction of multi-level antecedents of infobesity, and conclude that the rate at which firms encounter infobesity drives the perceived intensity of the overload they experience.
The Scamdemic Conspiracy Theory and Twitter’s Failure to Moderate COVID-19 Misinformation (2022-01-04)
During the past few years, social media platforms have been criticized for reacting slowly to users distributing misinformation and potentially dangerous conspiracy theories. Despite policies introduced specifically to curb such content, this paper demonstrates how conspiracy theorists have thrived on Twitter during the COVID-19 pandemic and managed to push vaccine- and health-related misinformation without getting banned. We examine a dataset of approximately 8,200 tweets and 8,500 Twitter users participating in discussions around the conspiracy term Scamdemic. Furthermore, a subset of active and influential accounts was identified, inspected more closely, and followed for a two-month period. The findings suggest that while bots are a lesser evil than expected, the primary problem is a failure to moderate the non-bot accounts that spread harmful content: only 12.7% of these malicious accounts were suspended, even after frequently violating Twitter’s policies using easily identifiable conspiracy terminology.
Social Media and Fake News Detection using Adversarial Collaboration (2022-01-04)
The diffusion of fake information on social media networks obscures public perception of events, news, and relevant content. Intentionally misleading news may promote negative online experiences and influence societal behavioral changes such as increased anxiety, loneliness, and inadequacy. Adversarial attacks aim to create misinformation in online information systems; this behavior can be viewed as an instrument to manipulate online social media networks for cultural, social, economic, and political gains. We present a method for testing a deep learning model, a long short-term memory (LSTM) network, using adversarial examples generated by a transformer model. The paper attempts to examine features in machine learning algorithms that propagate fake news. A further goal is to evaluate and compare the usefulness of generative adversarial networks against LSTM recurrent neural network algorithms in identifying fake news. A closer look at the mechanisms of implementing adversarial attacks in social media systems helps build robust intelligent systems that can withstand future vulnerabilities.
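The idea of probing a fake-news classifier with adversarial examples can be illustrated with a toy sketch. This is not the paper's LSTM/transformer pipeline; it assumes a hypothetical keyword-based detector (`naive_detector`, `FAKE_TRIGGERS` are illustrative names) and shows a character-level perturbation that evades token matching while leaving the text visually unchanged to a human reader:

```python
import re

# Hypothetical trigger terms a naive keyword-based detector might key on
# (illustrative only, not from the paper's dataset).
FAKE_TRIGGERS = {"miracle", "shocking", "hoax"}

def naive_detector(text: str) -> bool:
    """Flag text as fake if it contains any trigger term."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return any(t in FAKE_TRIGGERS for t in tokens)

def adversarial_perturb(text: str) -> str:
    """Character-level attack: insert a zero-width joiner inside each
    trigger word so tokenization splits it, while the rendered text
    looks identical to a human reader."""
    out = text
    for word in FAKE_TRIGGERS:
        out = re.sub(word, word[0] + "\u200d" + word[1:], out,
                     flags=re.IGNORECASE)
    return out

claim = "shocking miracle cure suppressed by doctors"
print(naive_detector(claim))                       # detector fires on original
print(naive_detector(adversarial_perturb(claim)))  # perturbation evades it
```

A real attack against an LSTM would instead use gradient- or transformer-guided word substitutions, but the failure mode is the same: a model keyed to surface features can be flipped by perturbations that preserve meaning, which is why the paper argues for studying such attacks when building robust detectors.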