Authors: Roberts, Sarah; Wood, Stacy; Eadon, Yvonne
Dates: 2022-12-27; 2022-12-27; 2023-01-03
ISBN: 978-0-9981331-6-4
URI: https://hdl.handle.net/10125/102883
Abstract: Despite the growing prevalence of ML algorithms, NLP, algorithmically driven content recommender systems, and other computational mechanisms on social media platforms, some of their core and mission-critical gatekeeping functions remain deeply reliant on the persistence of humans-in-the-loop, both to validate the computational models in use and to intervene when those models fail. Perhaps nowhere is this human interaction with, and on behalf of, computation more critical than in social media content moderation, where human capacities for discretion, discernment, and the holding of complex mental models of decision trees and changing policy are called upon hundreds, if not thousands, of times per day. This paper presents the results of a qualitative, interview-based study of an in-house content moderation team (Trust & Safety, or T&S) at a mid-size, erstwhile niche social platform we call FanClique. Findings indicate that while the FanClique T&S team is treated well in terms of support from managers, respect and support from the wider company, and mental health services provided (particularly in comparison to other social media companies), the work of content moderation remains an extremely taxing form of labor that is not adequately compensated or supported.
Pages: 10
Language: eng
Rights: Attribution-NonCommercial-NoDerivatives 4.0 International
Track: Critical and Ethical Studies of Digital and Social Media
Keywords: content moderation; human-in-the-loop; platforms; social media
Title: "We Care About the Internet; We Care About Everything": Understanding Social Media Content Moderators' Mental Models and Support Needs
Type: text
DOI: 10.24251/HICSS.2023.252