Adversarial Behavior in Collaboration and Social Media Systems
Are Deep Learning-Generated Social Media Profiles Indistinguishable from Real Profiles? (2023-01-03)
In recent years, deep learning methods have become increasingly capable of generating near photorealistic pictures and humanlike text, to the point that humans can no longer recognize what is real and what is AI-generated. Concerningly, there is evidence that some of these methods have already been adopted to produce fake social media profiles and content. We hypothesize that these advances have made detecting generated fake social media content in the feed extremely difficult, if not impossible, for the average user of social media. This paper presents the results of an experiment where 375 participants attempted to label real and generated profiles and posts in a simulated social media feed. The results support our hypothesis and suggest that even fully generated fake profiles with posts written by an advanced text generator are difficult for humans to identify.
Toward Designing Effective Warning Labels for Health Misinformation on Social Media (2023-01-03)
Health misinformation on social media has become a major threat to users. To alleviate this issue, platforms such as Twitter have started labeling posts considered misinformation to warn users. However, the effectiveness of such labels on user perceptions and actions is not clear, as prior studies have not examined it. We aim to address this gap through a model that draws upon concepts from color theory and construal level theory and focuses on the impact of three misinformation label characteristics: the background color of the label, the abstractness of the message, and the assertiveness of the message language. We propose that effective warning labels will lead users to verify, avoid using, and avoid sharing labeled posts on social media. This paper provides important theoretical contributions and aids policymakers and platform providers by offering insights into what motivates users to take protective actions.