Content Moderation: A Chronic Problem for Platforms

"Facebook Press Conference" by Robert Scoble is licensed under CC BY 2.0

Content moderation has pushed platforms to the forefront of social and political discourse, owing to their role in perpetuating hate speech, extremism, and misinformation. Despite its importance, the efficacy of platforms’ moderation processes is consistently questioned.

The task of content moderation assumes that an objective platform is sorting through the subjective material of user-generated content. However, the methods of content moderation reveal that platforms are accountable for the outcomes. As intermediary hosts, platforms rely on their users to produce content; content moderation is therefore a core service they provide (Zolides, 2021). It is not a secondary or external feature but the primary “commodity” platforms offer (Gillespie, 2018, p. 13).


Roberts (2019) highlights how modern platforms have departed from pre-existing Internet cultures: the early Internet was a borderless, interconnected world that resisted regulation, now transformed into an arena of control by a handful of private conglomerates. Content moderation, therefore, is a “powerful mechanism of control” (Roberts, 2019, p. 14). Platforms exercise this control in different ways, both covertly and overtly, as the comparison between commercially moderated and user-moderated content demonstrates.

Twitch and the consequences of user moderation

Twitch is an interactive live-streaming service owned by tech giant Amazon that has become the most popular platform for video game streaming. On the platform, streamers can monetise their content through viewer subscriptions, donations or sponsorships (Zolides, 2021). This makes the platform distinctive, as streamers rely on both economic and social capital: the viewers who make up the Twitch community are also vital to the platform’s ecosystem (Zolides, 2021).

“Twitch Streamers Booth” by NotUbercow is licensed under CC BY-SA 2.0

Content moderation on Twitch relies primarily on the mechanisms available to the viewer, such as blocking or reporting content, and on the informal role of ‘moderators’ chosen by individual streamers. Moderation on the platform is therefore user-driven rather than commercial. While Twitch has a series of community guidelines and content policies, the platform depends on viewer agency; Zolides (2021, p. 3003) argues that content moderation on Twitch is based on the concept of “community management.” Twitch consists of multiple sub-communities, making it a “decentralised platform” whereby “rules are both created and enforced by the users” (Cook et al., 2021, p. 2).
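
To make this decentralised structure concrete, the short sketch below models (purely as an illustration with hypothetical names and simplified rules, not Twitch’s actual systems) how platform-wide guidelines and streamer-defined channel rules could be layered, with viewer reports feeding decisions made by streamer-appointed moderators.

```python
# Illustrative sketch only: a layered, per-channel moderation model.
# All names, rules and data structures are hypothetical.

from dataclasses import dataclass, field

# Platform-wide guidelines apply everywhere; each channel layers its own rules on top.
PLATFORM_GUIDELINES = {"hate_speech", "harassment", "sexually_suggestive"}

@dataclass
class Channel:
    name: str
    channel_rules: set = field(default_factory=set)   # rules created by the streamer
    moderators: set = field(default_factory=set)      # streamer-appointed volunteers
    reports: list = field(default_factory=list)       # viewer reports awaiting review

    def report(self, message_id: str, reason: str, reporter: str) -> None:
        """A viewer flags content; enforcement is deferred to channel moderators."""
        self.reports.append({"message": message_id, "reason": reason, "by": reporter})

    def review(self, moderator: str, report: dict) -> str:
        """A channel moderator decides, applying channel rules before platform rules."""
        if moderator not in self.moderators:
            return "ignored"  # only streamer-appointed moderators can act
        if report["reason"] in self.channel_rules:
            return "removed (channel rule)"
        if report["reason"] in PLATFORM_GUIDELINES:
            return "removed (platform guideline)"
        return "no action"

# Usage: a channel with its own rule set and one volunteer moderator.
chan = Channel("example_stream", channel_rules={"spoilers"}, moderators={"mod_alice"})
chan.report("msg42", "spoilers", reporter="viewer_bob")
print(chan.review("mod_alice", chan.reports[0]))   # -> removed (channel rule)
```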

However, a platform’s reliance on community-based, voluntary content moderation becomes problematic when its user base is demographically skewed. Twitch is widely considered a male-dominated platform, with 65% of its users being male (Kavanagh, 2019). In 2018, Twitch announced an overhaul of its community guidelines with the aim of removing sexually suggestive content from the platform. The policies primarily targeted streamer attire:

We’re updating our moderation framework to review your conduct in its entirety when evaluating if the intent is to be sexually suggestive. We’ll be looking at contextual elements such as the stream title, camera angles, emotes, panels, attire, overlays, and chat moderation. (Twitch, 2018)

While no specific reference to gender was made, it was female streamers who were heavily policed under the new guidelines. Viewers, equipped with the moderating mechanism of reporting content, took a vigilante-style approach and began policing female streamers based on the factors Twitch outlined: attire, camera angle, or the perception of sexually suggestive content (D’Anastasio, 2018). Zolides (2021) highlights that Twitch never clearly defined ‘sexually suggestive content’, instead emphasising the multiple contextual factors involved. This consequently empowers the viewer’s role in moderation, as enforcement is determined primarily by viewer discretion.

Drawing on Massanari’s (2015) analysis of Reddit, Twitch is also conducive to a “toxic technoculture”: the nature of its sub-communities, combined with dominant perceptions of masculinity, race and gender, excludes and punishes women (Massanari, 2015). Because Twitch only provides the guidelines and relies on the community to carry out moderation, it only superficially appears to be managing the community; in fact, it does so to “avoid certain forms of cultural liability,” which in turn “underplays their political power” (Zolides, 2021, p. 3002). Although seemingly taking a neutral role, Twitch enables the perpetuation of harmful ideologies on its platform.

AI & content moderation: the solution?

Commercial content moderation often requires human workers to screen content (Roberts, 2019). However, the role of humans in content moderation has been widely questioned, due to the traumatic and ethically fraught nature of the work.

Facebook is a centralised platform that relies on commercial content moderation – that is, decisions are made “based on the corporation’s rules and regulations” (Cook et al., 2021, p. 2). After Facebook’s reliance on human content moderators came into question, the company announced in 2020 that it would introduce more artificial intelligence (AI) into its content moderation processes (Vincent, 2020). The new process, built on “whole post integrity embeddings” (WPIE), uses AI to surface the most important posts for review first, rather than reviewing them chronologically, and is able to judge several elements of a post together (Facebook, 2019). However, Facebook has insisted that it will continue to use a hybrid approach, with human moderators still active in the process.
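
As a rough illustration of that shift from chronological to importance-first review, the sketch below (a minimal toy example with assumed names and scores, not Facebook’s actual WPIE system) reorders a moderation queue by a model’s predicted severity while leaving the final decision to a human moderator.

```python
# Illustrative sketch only: a review queue ordered by predicted severity rather
# than by arrival time, with humans still making the final call.

import heapq
from typing import Callable

def prioritise_queue(posts: list[dict], score: Callable[[dict], float]) -> list[dict]:
    """Return posts ordered by predicted severity (highest first), not chronologically."""
    # heapq is a min-heap, so negate the score to pop the most severe post first.
    heap = [(-score(p), i, p) for i, p in enumerate(posts)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

def toy_severity(post: dict) -> float:
    """Stand-in for a learned classifier that judges several elements of a post."""
    signals = {"violence": 0.9, "hate": 0.8, "spam": 0.3}
    return max((signals.get(tag, 0.0) for tag in post["flags"]), default=0.0)

# Hybrid workflow: the model only reorders the queue; a human still reviews each post.
queue = [
    {"id": 1, "flags": ["spam"], "time": "09:00"},
    {"id": 2, "flags": ["violence"], "time": "09:05"},
    {"id": 3, "flags": [], "time": "09:10"},
]
for post in prioritise_queue(queue, toy_severity):
    print(f"send post {post['id']} to a human moderator")  # post 2 comes first
```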


Facebook’s shift to AI for content moderation is part of a bigger trend, with several platforms doing the same. Gillespie (2020) highlights that AI has been hailed by these platforms as the solution that promises to solve content moderation. This is representative of ‘technocentric’ thinking that emerges from Silicon Valley culture, whereby technological, internally developed solutions are always prioritised (Gillespie, 2020). In doing so, platforms can “appease governance stakeholders,” but can also present “self-serving and unrealistic narratives about their technological prowess” (Gorwa et al., 2020, p. 2).

Platforms often boast about their progress on AI; however, the shortcomings of this moderation process are rarely publicly addressed. Algorithmic content moderation exhibits biases against people of colour – for example, Google’s AI technology for detecting hate speech was more likely to flag content posted by African Americans as toxic or offensive (Cao, 2019). Furthermore, Facebook’s AI moderation systems reportedly cannot interpret many languages other than English, highlighting the Western-centric view of the platform (Canales, 2021).
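
One way to make such bias findings concrete is to compare automated flag rates across author groups. The sketch below does this with entirely hypothetical audit data; it is not the methodology of the cited studies, only an illustration of the kind of disparity they report.

```python
# Illustrative sketch only: measuring whether an automated classifier flags one
# group's posts as "toxic" at a higher rate than another's. Data is hypothetical.

from collections import defaultdict

def flag_rates(posts: list[dict]) -> dict[str, float]:
    """Compute the share of posts flagged as toxic, broken down by author group."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for post in posts:
        total[post["group"]] += 1
        flagged[post["group"]] += int(post["flagged_toxic"])
    return {group: flagged[group] / total[group] for group in total}

# Toy audit data: comparable benign posts from two author groups (hypothetical).
audit = [
    {"group": "A", "flagged_toxic": True},
    {"group": "A", "flagged_toxic": True},
    {"group": "A", "flagged_toxic": False},
    {"group": "B", "flagged_toxic": False},
    {"group": "B", "flagged_toxic": True},
    {"group": "B", "flagged_toxic": False},
]
print(flag_rates(audit))  # -> {'A': 0.666..., 'B': 0.333...}: group A flagged twice as often
```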

The technocentric solution of AI fails to address content moderation issues not only because of the immense scale of content involved, but also because these solutions come from the source of the problem itself. Facebook’s WPIE technology, for example, was developed internally within the company (Facebook, 2019). This further entrenches the ideologies that emerge from the White, male-dominated, Western-centric spaces of Silicon Valley.

Accountability over regulation as the way forward

There has been consistent political debate on how platforms manage their content, particularly in the areas of hate speech and misinformation. In 2020, the UK government announced an online harms bill, which would require social media platforms to actively remove harmful content and prevent its spread (Hern, 2020). While it is a legislative attempt to hold platforms accountable for their moderation of content, it pushes platforms beyond their role as private entities: they would be “essentially required to assess the legality of user content as national authorities” (Mchangama, 2021). In practice, government involvement in regulation may hinder timely and effective content moderation.

Instead, governments should look to hold platforms accountable and require transparency about content moderation processes. A report by the Open Technology Institute recommends that platforms make an active effort to disclose their methods of content moderation to policymakers and other interest groups (Singh, 2019). This transparency would also limit the effects of technological biases entrenched within the cultures of these platforms – in particular, the assumed “impartiality” of AI-driven content moderation (Gorwa et al., 2020, p. 12). In the case of user-moderated platforms, transparency is required in the form of specific and measurable content policies, so that communities can moderate effectively and neutrally.

Analysing how content moderation occurs on different platforms highlights the persistent and evolving problem of how to moderate content effectively. Whether a platform relies on internal (commercial) or external (user-driven) moderation, it is clear that platforms are not neutral entities, and that their moderation approaches reflect the ‘Big Tech’ cultures that dominate these companies.

 
Creative Commons Licence
This work is licensed under a Creative Commons Attribution 4.0 International License.

References

Canales, K. (2021). Facebook’s AI moderation reportedly can’t interpret many languages, leaving users in some countries more susceptible to harmful posts. Business Insider Australia. Retrieved from https://www.businessinsider.com.au/facebook-content-moderation-ai-cant-speak-all-languages-2021-9

Cao, S. (2019). Google’s Artificial Intelligence hate speech detector has a ‘Black tweet’ problem. Observer. Retrieved from https://observer.com/2019/08/google-ai-hate-speech-detector-black-racial-bias-twitter-study/

Facebook. (2019). Community Standards report. Retrieved from https://ai.facebook.com/blog/community-standards-report/

Cook, C., Patel, E., & Wohn, D. (2021). Commercial versus volunteer: comparing user perceptions of toxicity and transparency in content moderation across social media platforms. Frontiers in Human Dynamics, 3. doi: 10.3389/fhumd.2021.626409

D’Anastasio, C. (2018). Self-appointed anti-boob police are trolling Twitch streamers. Kotaku. Retrieved from https://kotaku.com/self-appointed-anti-boob-police-are-trolling-twitch-str-1823776968

Facebook AI. (2020, November 20). AI now proactively detects 94.7 percent of hate speech [Facebook post]. Retrieved from https://www.facebook.com/FacebookAI/videos/417715849613549

Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. New Haven, US: Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 1-5. doi: 10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 1-15. doi: 10.1177/2053951719897945

Hern, A. (2020). Online harms bill: firms may face multibillion-pound fines for illegal content. The Guardian. Retrieved from https://www.theguardian.com/technology/2020/dec/15/online-harms-bill-firms-may-face-multibillion-pound-fines-for-content

Kavanagh, D. (2019). Watch and learn: The meteoric rise of Twitch. GWI. Retrieved from https://blog.gwi.com/chart-of-the-week/the-rise-of-twitch/

Massanari, A. (2015). #Gamergate and the Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. doi: 10.1177/1461444815608807

Mchangama, J. (2021). Rushing to judgment: Examining government mandated content moderation. Lawfare Institute. Retrieved from https://www.lawfareblog.com/rushing-judgment-examining-government-mandated-content-moderation

Roberts, S. (2019). Behind the screen: Content moderation in the shadows of social media. New Haven, US: Yale University Press.

Ruberg, B. (2021). “Obscene, pornographic, or otherwise objectionable”: Biased definitions of sexual content in video game live streaming. New Media & Society, 23(6), 1681-1699. doi: 10.1177/1461444820920759

Singh, S. (2019). Everything in moderation: An analysis of how Internet platforms are using artificial intelligence to moderate user-generated content. Open Technology Institute. Retrieved from https://www.newamerica.org/oti/reports/everything-moderation-analysis-how-internet-platforms-are-using-artificial-intelligence-moderate-user-generated-content/

Twitch. (2018). Twitch Community Guidelines Updates. Retrieved from https://blog.twitch.tv/en/2018/02/08/twitch-community-guidelines-updates-f2e82d87ae58/

Vincent, J. (2020). Facebook is now using AI to sort content for quicker moderation. The Verge. Retrieved from https://www.theverge.com/2020/11/13/21562596/facebook-ai-moderation

Zolides, A. (2021). Gender moderation and moderating gender: Sexual content policies in Twitch’s community guidelines. New Media & Society, 23(10), 2999-3015. doi: 10.1177/1461444820942483