Why Is Content Moderation a Challenging Task for Digital Platforms?

Yunrui Yang - TUT 05, Venessa Paech, Wed 10-12PM

Featured Image
“Social media” by Jason Howie is licensed under CC BY 2.0

Introduction

In the Web 2.0 era, platformisation thrives on flourishing user-generated content and the construction of decentralised online communities (Paech, 2021). Creative users around the world allow digital platforms to profit from a “greater diversity of voices” (Flew, 2021). However, platforms do not circulate only positive content: immoral, unethical and illegal material also spreads through them, which makes content moderation a crucial task. This essay argues that the commercial nature of platforms and the prejudices of their moderators are the main factors that prevent platforms from pursuing and achieving better moderation outcomes. Governments should therefore take on the role of guiding and supervising platforms as they carry out content moderation.

Figure 1. "Social Media Marketing Strategy." by Today Testing is licensed under CC BY-SA 4.0

Figure 1. “Social Media Marketing Strategy.” by Today Testing is licensed under CC BY-SA 4.0

Why Do Platforms Implement Content Moderation?

“platforms generally frame themselves as open, impartial, and noninterventionist…to avoid obligation or liability.” (Gillespie, 2018, p. 7)

Platforms label themselves as intermediaries that provide a public sphere in which users can enjoy freedom of speech (Schlesinger, 2020). However, this claim of absolute openness rests on “utopian notions of community and democracy”, an idealised fantasy that platforms project (Gillespie, 2018, p. 5). When pornographic, abusive, illegal and discriminatory content disrupts the order of networked publics, interferes with platforms’ operations and damages their business performance, platforms have to take measures to regulate user-generated content (Gillespie, 2018; boyd, 2010). Content moderation helps to balance a platform’s commercial purpose with its social responsibility, to optimise service quality and to protect users from harm. Tarleton Gillespie (2018) suggests that content moderation should be regarded as a central service of platforms rather than a peripheral one.

Source: Appen (2021)

 

Why Do Platforms Downplay Content Moderation?

Content moderation may cut into a platform’s profits by displeasing content producers and their target audiences. The development of an internet entrepreneurial culture provides fertile ground for profit-oriented platformisation (Castells, 2002). Platforms design algorithms to keep users “hooked on” their products for as long as possible (Zuckerberg, as cited in Absolute Motivation, 2018). These algorithms manage the visibility of content; in other words, they decide what is shown in order to attract users. For example, Facebook’s algorithm pushes material according to interest data drawn from users’ previous online behaviour (Bucher, 2012).
Figure 2. “Addicted To Social Media” by joey zanotti is licensed under CC BY 2.0
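The short sketch below illustrates the general idea behind such interest-based ranking: posts whose topics overlap most with a user’s past engagement are surfaced first. It is a minimal, hypothetical illustration in Python; the function names and data shapes are my own assumptions, not Facebook’s actual News Feed code.

```python
# Minimal, hypothetical sketch of interest-based content ranking.
# Not any platform's actual algorithm; names and data shapes are illustrative.
from collections import Counter

def rank_posts(posts, interaction_history):
    """Order posts by how strongly their topics match a user's prior behaviour.

    posts: list of dicts such as {"id": "p2", "topics": ["fitness", "travel"]}
    interaction_history: topics the user has previously engaged with
    """
    interest_profile = Counter(interaction_history)  # topic -> engagement count

    def visibility_score(post):
        # A post scores higher the more its topics match past engagement.
        return sum(interest_profile[topic] for topic in post["topics"])

    return sorted(posts, key=visibility_score, reverse=True)

posts = [
    {"id": "p1", "topics": ["politics"]},
    {"id": "p2", "topics": ["fitness", "travel"]},
    {"id": "p3", "topics": ["cooking"]},
]
history = ["fitness", "fitness", "travel", "politics"]
print([p["id"] for p in rank_posts(posts, history)])  # p2 ranks first
```

The point of the sketch is simply that nothing in such a scoring function asks whether content is ethical; it only asks whether content matches what keeps the user engaged.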

 

Unfortunately, unethical content also has its target audience. Commercial platforms are sometimes more concerned with whether a post is eye-catching and profitable than with whether it is morally right. Reddit, a social-news aggregation platform, is an example. Reddit allows any individual to create micro-communities called subreddits to share niche interests. The majority of its users are “young, white, cis-gendered, heterosexual males” interested in “computing, science, or fandom” (Massanari, 2017, p. 330). Reddit’s algorithm and platform policies encourage geek masculinity by motivating users to upvote anti-feminist and racist content (Massanari, 2017). Highly upvoted posts are displayed on the front page to attract and retain as much traffic as possible (a simplified sketch of this vote-driven ranking appears below). #Gamergate and The Fappening are two anti-feminist cases that spread widely across subreddits, indicating a “toxic technoculture” that panders to white geek tastes while ignoring and marginalising others (Massanari, 2017). They stayed on the front page for several days before being banned because they were “extremely profitable for Reddit’s coffers” (Massanari, 2017, p. 340).
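To make that mechanism concrete, here is a small sketch in the spirit of the “hot” formula from Reddit’s old open-source ranking code. The constants and details are indicative only; this is not a claim about how Reddit ranks content today, just a demonstration that a purely vote- and time-driven score never considers what a post actually contains.

```python
# Illustrative vote-based "hot" ranking, modelled loosely on the formula from
# Reddit's old open-source codebase. Constants are indicative, not definitive.
from math import log10

EPOCH = 1134028003  # reference timestamp used by the classic formula

def hot_score(upvotes, downvotes, posted_at):
    """Score a post: more net upvotes and a newer timestamp both raise it."""
    net = upvotes - downvotes
    order = log10(max(abs(net), 1))          # votes count logarithmically
    sign = 1 if net > 0 else (-1 if net < 0 else 0)
    recency = (posted_at - EPOCH) / 45000    # newer posts get a steady boost
    return round(sign * order + recency, 7)

# A heavily upvoted thread outranks everything else posted around the same
# time, whatever its content, until a moderator removes it.
now = 1_700_000_000
print(hot_score(50_000, 2_000, now) > hot_score(300, 10, now))  # True
```

Nothing in such a score penalises toxic content; only a later, manual decision to ban a thread does.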

“Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites” (Gorwa, 2019, p. 3)

Platforms like Reddit regard content moderation as an “unpleasant necessity” that might reduce their business payoff (Roberts, 2019). Removing users’ posts also risks deviating from their promise of openness. Although some platforms allow users to “flag” undesirable content, they stay “tight-lipped about how many users flag, what percentage of those who do flag provides the most flags, how often the platform decides to remove or retain content that’s been flagged, etc.” (Gillespie, 2018, p. 268). Much flagged material is simply filtered out of the feeds of the users who were offended rather than removed for everyone. When content moderation is at odds with a platform’s business performance, it is likely that moderation will be sacrificed.
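A minimal, hypothetical sketch of that distinction: flagged posts disappear only from the flagging user’s feed, while genuine removal, which affects everyone, requires a separate moderator decision. The function and variable names are my own; they do not describe any platform’s actual implementation.

```python
# Hypothetical sketch: flagging hides a post from the flagger only;
# removal by a moderator hides it from everyone.
def visible_posts(posts, user_id, flags, removed_ids):
    """Return the posts a given user actually sees.

    posts: list of dicts such as {"id": "p1", "text": "..."}
    flags: mapping of post id -> set of user ids who flagged it
    removed_ids: post ids a moderator has removed for all users
    """
    feed = []
    for post in posts:
        if post["id"] in removed_ids:
            continue  # removal affects every user
        if user_id in flags.get(post["id"], set()):
            continue  # flagging hides the post from the flagger alone
        feed.append(post)
    return feed

posts = [{"id": "p1", "text": "ok"}, {"id": "p2", "text": "offensive"}]
flags = {"p2": {"alice"}}
print(len(visible_posts(posts, "alice", flags, set())))  # 1: hidden for alice
print(len(visible_posts(posts, "bob", flags, set())))    # 2: still visible to bob
```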

Source: Absolute Motivation (2018)

How Do Moderators’ Prejudices Influence Content Moderation?

“When rules of propriety are crafted by small teams of people that share a particular worldview, they aren’t always well suited to those with different experiences, cultures, or value systems.” (Gillespie, 2018, p. 8)

In-house moderators at giant digital platforms are influenced by the “Californian Ideology” (Barbrook & Cameron, 1996). They are highly paid, high-tech labourers working in Silicon Valley. According to the Center for Employment Equity (2017), most of them are white, male and well-educated. This homogeneity may endow them with a communal prejudice that leads them to think from an insider (emic) perspective and to judge content through the lens of colonisation and Western tech-utopianism (Lusoli & Turner, 2021). Content can carry multiple connotations: users decode material according to their personal understanding, while moderators tend to judge it from their own perspectives. This raises the question of whether moderators can make objective judgements on behalf of users, or whether they are merely evaluating whether content is detrimental to people like themselves.

For instance, Facebook quickly deleted the photograph The Terror of War after it was posted. The photo shows several children fleeing a napalm attack during the Vietnam War; the naked girl at its centre is suffering napalm burns over her body (Gillespie, 2018). Facebook’s moderators removed the photo because it contained underage nudity and violence, ignoring its emotional and historical significance. They made the decision according to their communal values, without resonating with children who suffer the untold pain and unimaginable crimes caused by war.

Another example is that Tumblr banned the term #gay “because it is commonly associated with pornographic images, and thereby blocked all other non-pornographic content similarly tagged.” (Gillespie, 2018, p. 270) Tumblr’s moderators failed to consider the perspective of homosexual users.


Figure 3. “LGBTQ IMG 1724 (49142643082)” by Elvert Barnes is licensed under CC BY-SA 2.0  

Why Should Governments Intervene in Content Moderation?

 

“The likelihood of regulatory harmonization on a global scale is currently low.” (Flew et al., 2019, p. 45)

Undesirable political content on platforms can lead to moral panics and “public shock” that disrupt social harmony (Flew et al., 2019). When ethical concern and social responsibility fail to restrain commercial platforms and some of their users, governments should step in. Governments can tailor regulatory policies to national conditions. For example, Europe and South America commonly enforce policies based on “conditional liability”, while China and the Middle East apply a “strict liability” framework (Flew et al., 2019). Moreover, platforms sometimes fail to deal with anti-social content promptly. The UK House of Commons found that “Google failed to perform basic due diligence regarding advertising [containing] inappropriate and unacceptable content, some of which were created by terrorist organizations” (Flew et al., 2019). In such cases, the government should push platforms to remove that content as fast as possible or intervene directly to remove it.

“Platforms vary, in ways that matter both for the influence they can assert over users and for how they should be governed.” (Gillespie, 2018, p. 25)

I suggest that governments avoid bypassing platforms to constrain users directly, because doing so demands massive resources and can lead to ideological control (Kelty, 2014). China is a typical example. China’s media censorship enshrines “national cyber sovereignty” as a core principle (Flew et al., 2019, p. 13). The Chinese government has blocked giant foreign digital companies that might threaten its cultural hegemony and employs an estimated two million people to monitor and censor citizens’ online behaviour (Xu & Albert, 2017). Chinese authorities empower and encourage platforms to penetrate citizens’ daily lives (de Kloet et al., 2019). Platforms become panoptic spheres in which users are always visible to, and surveilled by, state power (Gillespie, 2014). Although such a policy is effective at regulating illegal content, it suppresses citizens’ freedom of speech. Overall, I suggest that governments ascertain the scale and scope of content moderation and authorise platforms to implement specific strategies. In this way, platforms can act as watchdogs that provide scrutiny “over the media as well as government”, while governments in turn scrutinise the platforms (Dutton, 2009; Flew, 2021).


Figure 4. “China Censorship22” by mikemacmarketing is licensed under CC BY 2.0

Conclusion

Government oversight of content moderation is crucial when platforms’ commercial nature impairs their social responsibility. The contradictions between the diversity of users and the homogeneity of moderators, and between platforms’ commercial interests and their social obligations, are the main obstacles to platforms implementing moderation well. Governments should therefore enact regulations and supervise how platforms carry them out. To avoid hegemonic control, governments should act as supervisors of platforms rather than direct regulators of users, while platforms should also take on the role of scrutinising government. As Gillespie (2018, p. 264) puts it, platforms “must… decide how to translate a new legal obligation [enacted by governments] into an actionable rule, react to the emergence of a category of content they would like to curtail, and respond to surges of complaints from users.”

 

References

Absolute Motivation. (2018). You Will Wish You Watched This Before You Started Using Social Media/ The Twisted Truth. Retrieved October 2021, from Youtube: https://www.youtube.com/watch?v=PmEDAzqswh8

Barbrook, R., & Cameron, A. (1996). The Californian Ideology. Science as Culture, 6(1), 44-72  https://doi.org/10.1080/09505439609526455

Boyd, D. (2010). Social network sites as networked publics: Affordances, dynamics, and implications. In A Networked Self: Identity, Community, and Culture on Social Network Sites (pp. 39–58). Routledge. https://www.danah.org/papers/2010/SNSasNetworkedPublics.pdf

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media & Society, 14(7), 1164–1180. https://doi.org/10.1177/1461444812440159

Castells, M. (2002). The culture of the Internet. In The Internet Galaxy: Reflections on the Internet, Business, and Society (pp. 36–63). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199255771.001.0001

Center for Employment Equity; University of Massachusetts, Amherst. (2017). Is Silicon Valley Tech Diversity Possible Now? https://www.umass.edu/employmentequity/silicon-valley-tech-diversity-possible-now-0.

de Kloet, J., Poell, T., Guohua, Z., & Yiu Fai, C. (2019). The platformization of Chinese society: Infrastructure, governance, and practice. Chinese Journal of Communication, 12(3), 249–256. https://doi.org/10.1080/17544750.2019.1644008

Dutton, W. H. (2009). The Fifth Estate emerging through the network of networks. Prometheus, 27(1), 1–15. https://doi.org/10.1080/08109020802657453

Flew, T. (2021). Week 6 – Governing the Internet: Content Moderation and Community Management. Retrieved October 2021, from Canvas: https://canvas.sydney.edu.au/courses/34089/pages/week-6-governing-the-internet-content-moderation-and-community-management?module_item_id=1160574

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2014). The Relevance of Algorithms. In Media Technologies: Essays on Communication, Materiality, and Society (pp. 167-193). Cambridge, Massachusetts: The MIT Press. https://doi.org/10.7551/mitpress/9780262525374.001.0001

Gillespie, T. (2018). All Platforms Moderate. In T. Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1-23)  New Haven, CT: Yale University Press. https://doi.org/10.12987/9780300235029

Gillespie, T. (2018). Governance by and through platforms. In The SAGE Handbook of Social Media (pp. 254–278). SAGE.

Gorwa, R. (2019). The platform governance triangle: conceptualising the informal regulation of online content. Internet Policy Review, 8(2), https://doi.org/10.14763/2019.2.1407.

Kelty, C. M. (2014). The Fog of Freedom. In T. Gillespie, P. J. Boczkowski, & K. A. Foot (Eds.), Media technologies: essays on communication, materiality, and society (pp. 196-220). Cambridge, Massachusetts: The MIT Press.    https://doi.org/10.7551/mitpress/9780262525374.001.0001

Lusoli, A., & Turner, F. (2021). “It’s an ongoing bromance”: Counterculture and cyberculture in Silicon Valley—An interview with Fred Turner. Journal of Management Inquiry, 30(2), 235–242. https://doi.org/10.1177/1056492620941075

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Paech, V. (2021). Week 6 – Governing the Internet: Content Moderation and Community Management [Lecture], Online Community Management. Retrieved October 2021, from Canvas: https://canvas.sydney.edu.au/courses/34089/pages/week-6-governing-the-internet-content-moderation-and-community-management?module_item_id=1160574

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33–72). Yale University Press. https://doi.org/10.12987/9780300245318

Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563. https://doi.org/10.1177/0163443720948003

Xu, B., & Albert, E. (2017). Media Censorship in China. Retrieved October 2021, from Council on Foreign Relations: https://www.cfr.org/backgrounder/media-censorship-china
