(In)visible Content Moderation on Reddit and Twitch

Hibby Thach (they/them)
6 min read · Aug 22, 2022

--

A gif of a guy making a peace sign and disappearing, similar to how content can quickly disappear following content moderation.

I was so, so happy to receive the acceptance email from New Media & Society: a first-author publication in a high-impact journal in my field, before even starting my PhD! Content moderation is a topic I’ve been close to throughout my whole life, but one I only became familiar with academically in the past year and a half. Big shoutout to Oliver Haimson for being a great supporter and advisor during our work on this project, as well as to my other co-authors, Samuel Mayworm and Dan Delmonaco. This blog post summarizes our recent publication on this topic:

  • Thach H, Mayworm S, Delmonaco D, Haimson O. (In)visible moderation: A digital ethnography of marginalized users and content moderation on Twitch and Reddit. New Media & Society. July 2022. doi:10.1177/14614448221109804

What is content moderation?

Content moderation is ever-present on social media platforms. Ever had a post deleted for violating community guidelines? Ever heard about famous people having their accounts removed from various social media sites? What about the questions vetting your entry into a Facebook group (in my case, a Pokemon Plush market group)? All of this is part of content moderation. Most of the time, however, content moderation processes are meant to be invisible to the average user, as platforms try to protect users from offensive and inappropriate content in the name of platform-wide neutrality and “freedom.” To learn more about what content moderation is, read Tarleton Gillespie’s Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media, freely available in the linked tweet from Tarleton himself.

Content moderation and marginalized groups

Marginalized groups have it rough on social media platforms. TikTok purposefully moderated content from users deemed “ugly, poor, or disabled,” including queer creators. Salty, a 100% independent, membership-supported newsletter, released a 2020 report on algorithmic bias in content policing on Instagram, finding that various marginalized groups were policed at higher rates than the general population. Haimson et al. (2021) found that Black and transgender participants’ content was disproportionately removed from various platforms for unfair reasons. The list goes on, with source after source documenting the disproportionate content moderation and account removal of marginalized groups. Despite commitments to create a free space for all users, platforms’ neutral, one-size-fits-all approaches to content moderation end up harming the most marginalized by collapsing contexts, as shown in work by Schoenebeck, Haimson, and Nakamura (2020) and Caplan (2018).

(In)visibility

In our study, we conducted a digital ethnography of marginalized users on Reddit’s /r/FTM subreddit and Twitch’s “Just Chatting” and “Pools, Hot Tubs, and Beaches” categories, observing content moderation in real time. In practice, this meant perusing the posts, threads, and live-streams of these fieldsites over five weeks, taking fieldnotes and writing memos on every day we had observations. The “in real time” part refers to our attempt to pull back the curtain on content moderation processes by observing when posts, comments, streams, and accounts were removed or banned. On Reddit, removals were less immediate: we would only notice one if we had seen the original post or comment before it disappeared, and we could not see the decision-making process itself unless a user or moderator posted about the removal after the fact. On Twitch, removals happened during live-stream chats, where comments come in and are removed quite quickly, or became visible through streamers speaking about moderation decisions (their own or the platform’s) while on stream. See the below figures for some examples of “real-time” content moderation on Reddit and Twitch.

A Reddit post removed by the moderators of r/ftm
A Reddit moderator explaining why they removed a comment
Twitch streamer Alinity doing an unban appeal stream and asking the chat if a user should remain banned or not
A side-by-side comparison of a Twitch channel’s chat before and after comments were moderated.

While Reddit may not have Twitch’s live immediacy, it still provides an interesting comparison, showing how platforms’ content moderation processes can differ based on their available tools and features. Of note in the third figure, featuring Twitch streamer Alinity, is the Twitch phenomenon of the “unban appeal”: a stream in which the streamer asks users in the chat whether a previously banned user should be unbanned or stay banned. In these cases, users in the chat are let in on the behind-the-scenes content moderation processes that streamers and their moderation teams are involved in. A related case is the “back from my ban” genre of streams, such as Sukesha Ray’s stream detailing her ban from the platform and eventual return (read more about this in the open-access article). These streams occur whenever a streamer returns from a Twitch ban, usually announcing that they’re back and sometimes describing to their viewers what happened. Oftentimes, streamers call for Twitch to be held accountable for decisions they consider unfair.
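As a brief technical aside: our “real time” Reddit observations were done by hand, but for readers curious about the mechanics, the sketch below shows one way a researcher might notice removals programmatically. It is only an illustration, not part of our study’s method; it assumes placeholder Reddit API credentials, and it relies on the removed_by_category field that Reddit’s API attaches to submissions.

```python
# Illustrative sketch only -- not the method used in our study, which relied on
# manual observation, fieldnotes, and memos.
import time

import praw  # Python Reddit API Wrapper

# Placeholder credentials -- register a script app at reddit.com/prefs/apps.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="moderation-observation sketch by u/your_username",
)

subreddit = reddit.subreddit("ftm")

# Snapshot the newest posts, wait, then re-fetch each one to see whether it
# has since been removed by moderators or deleted by its author.
snapshot = {post.id: post.title for post in subreddit.new(limit=100)}
time.sleep(60 * 60)  # check back an hour later

for post_id, title in snapshot.items():
    post = reddit.submission(id=post_id)
    # removed_by_category is None for visible posts; otherwise it holds values
    # such as "moderator" or "deleted" (field name as exposed by Reddit's API).
    status = getattr(post, "removed_by_category", None)
    if status is not None:
        print(f"No longer visible ({status}): {post_id} - {title!r}")
```

Even a simple script like this surfaces the same asymmetry we saw in the ethnography: you can detect that something disappeared, but not why, unless a moderator chooses to say so.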

Additionally, content moderation processes are made visible through press coverage of notable content removals and account bans. For example, Twitch streamer Amouranth has been banned multiple times over the years, making news in gaming press outlets and across social media. On July 9, 2021, the news outlet Jezebel published a detailed article about the wave of transphobic platform abuse on Reddit, featuring quotes from several trans Reddit users (including moderators of trans-specific subreddits) about their experiences dealing with harassment. On April 2, 2021, the gaming news outlet Kotaku published an article on how Twitch’s new “Hot Tubs” category was sparking attention and conversation around women’s attire on the platform. Press coverage may not always result in changes to content moderation decisions or processes, but increased coverage makes more people aware of unfair and disproportionate moderation decisions.

Content moderation visibility helps when it holds powerful actors accountable. It harms when it exposes a platform’s users to harmful and offensive content, such as the transphobic abuse mentioned in the Jezebel article or the hate raids against marginalized streamers on Twitch. Content moderation visibility is thus a double-edged sword: content moderation processes should be made visible when visibility can help repair harms done, and kept invisible when invisibility shields users from further harm.

The future of content moderation visibility

Content moderation, as noted earlier, is a process designed to be invisible to the average user. However, as we’ve seen, external visibility can help reveal a platform’s wrongdoings, such as TikTok’s suppression of disabled, fat, queer, and Black users or Instagram’s content moderation practices disproportionately targeting women of color, queer people, and pole dancers. Scholars commonly recommend increased transparency in content moderation practices, as a lack of transparency disproportionately impacts marginalized users. To achieve more equitable social media content moderation for marginalized groups, we must consider the content moderation visibility that platforms’ features allow, and where and when content moderation should, and should not, be visible.

Moderation is an expression of power, one that is continually renegotiated between the moderator and the moderated. We argue that an ideal world of content moderation would allow marginalized users to call out inconsistencies and unfair treatment around content removals and account suspensions, while also providing automated and tiered governance systems that let moderators and users minimize their encounters with abusive and harmful content. This also requires that automated content moderation systems be improved so that they do not disproportionately target marginalized people.
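To make the idea of tiered governance a bit more concrete, here is a toy sketch of what user-adjustable moderation visibility could look like. This is an illustration for this post only, not a system described in the paper; the tiers, names, and rendering rules are all hypothetical.

```python
# Toy illustration of "tiered" moderation visibility -- not a system described
# in the paper; the tiers, names, and rendering rules here are hypothetical.
from dataclasses import dataclass
from enum import IntEnum


class Tier(IntEnum):
    VISIBLE = 0    # no moderation action
    LABELED = 1    # shown with a content warning label
    COLLAPSED = 2  # hidden behind a click-through
    REMOVED = 3    # removed by moderators; only a removal notice remains


@dataclass
class Post:
    author: str
    body: str
    tier: Tier  # assigned by human moderators and/or automated tools


def render(post: Post, user_threshold: Tier) -> str:
    """Render a post for a user who opted to see content up to user_threshold."""
    if post.tier is Tier.REMOVED:
        # Keep the moderation decision visible even when the content is not.
        return f"[removed by moderators] (post by {post.author})"
    if post.tier > user_threshold:
        return f"[collapsed: content from {post.author} - click to expand]"
    label = "[content warning] " if post.tier is Tier.LABELED else ""
    return f"{label}{post.author}: {post.body}"


posts = [
    Post("supportive_user", "Congrats on your appointment!", Tier.VISIBLE),
    Post("new_account", "(borderline comment)", Tier.LABELED),
    Post("troll_account", "(harassing comment)", Tier.REMOVED),
]

# A user who tolerates labeled content but not collapsed or removed material.
for post in posts:
    print(render(post, user_threshold=Tier.LABELED))
```

In a design like this, the moderation decision (the tier) stays visible even when the content itself does not, which is one way a platform might balance accountability with shielding users from harm.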

Finding the right balance between protection and open expression is a difficult task. But the responsibility for developing and implementing processes, procedures, and technologies to maintain that delicate balance lies not with a platform’s users but with the platforms themselves. Only by supporting adequate, judicious visibility of their content moderation processes can platforms evolve to meet this challenge.


Written by Hibby Thach (they/them)

An MA student in the Department of Communication at the University of Illinois at Chicago, studying identity, content moderation, and digital cultures.
