Content Moderation on Digital Platforms and the Government’s Role in Platform Regulation

"Automotive Social Media Marketing" by socialautomotive is licensed under CC BY 2.0

 

“Social Media” by MySign AG is licensed under CC BY 2.0.

 

Social media platforms are built from the chaos of the web (Gillespie, 2018). Content moderation is an essential form of regulation for commercial websites and social media platforms, ensuring online safety, compliance with the law and a positive online community (Roberts, 2019).

 

 

Content moderation is the analysis, screening, categorisation and approval or removal of digital information based on a platform’s policies. In general, content moderation aims to protect users from potentially harmful content such as illegal publications and images of violence, nudity and brutality, and to minimise anti-social behaviour, especially towards particularly vulnerable users such as children (Flew et al., 2019). Content moderation is also expected to enhance the protection of user privacy. Because content moderation necessarily involves removing inappropriate content and suspending users’ accounts in order to maintain a harmonious community, it has attracted various controversies over its disadvantages.

 

“Social Media Influence” by Intersection Digital is licensed under CC BY-NC 2.0.

The main criticisms are the sacrifice of freedom of speech, the debate over what counts as inappropriate content, and the potential loss of valuable speech. Attempts have been made to moderate content, such as Facebook’s updates to its community standards, the suspension of former U.S. president Donald Trump’s social media accounts, and the removal of the Napalm girl photograph from the Facebook account of the writer Tom Egeland. However, these well-known attempts at content moderation have raised heated debate about the necessity of moderating such content and about the nature of content moderation itself. In addition, as Flew et al. (2019) highlight, the large technology and telecommunications companies are the main actors regulating the platforms, and they must comply both with their own nations’ laws and with the laws of the other countries in which their businesses operate. In this context, government intervention to protect online safety and regulate content is crucial but also difficult.

 

Content moderation or freedom of speech?

According to Gillespie (2018), platforms moderate for multiple purposes: removing illegal content, improving users’ online experiences, avoiding criticism from the public and government, protecting copyright, and cooperating with advertisers. Platforms must moderate their content for all of these reasons, as it is their responsibility to present their best side to the public at large and to protect users who engage positively with the online community. As Gillespie (2018) emphasises, content moderation is a task that operators implement reluctantly; platform operators understand the trade-offs of moderation better than anyone, but they have to do it for the bigger picture. Moderation is not a new concept; it has existed from the very beginning. It remains hidden, preserving the fantasy of an open platform for users, in order to avoid legal and cultural criticism. Despite the good intentions behind content moderation, platforms still receive a great deal of criticism, since moderating content on social media platforms such as Twitter and Facebook undermines the freedom promised by the web. It is vital to note that neither social media platforms nor the web is utopian in nature. Absolute freedom of speech may in fact undermine freedom for all. By moderating content and trading off some users’ freedom of speech, platforms may be better able to improve and protect the interests of users in general.

 

 "facebook nonprofits" by cambodia4kidsorg is licensed with CC BY 2.0.
“facebook nonprofits” by cambodia4kidsorg is licensed with CC BY 2.0.

Attempts at content moderation: are they doing the right thing?

The photo “Napalm girl”, taken by Nick Ut, depicts the devastation of the Vietnam War. In the photo, a nine-year-old girl, naked and injured, runs desperately towards the soldiers for help. The writer Tom Egeland included this photo in a Facebook post in 2016 about images that dramatically changed the history of warfare (Gillespie, 2018). However, the photo was removed by Facebook, and Egeland argued that it does not amount to nudity or child pornography. Despite its appearance, the meaning behind the photo should be considered. The example of “Napalm girl” demonstrates the controversies of content moderation, as different stakeholders hold different views about what is appropriate to post and what is not. The content Egeland posted might be valuable; by removing his post, the trade-off may be greater than freedom of speech itself. On the other hand, for political, cultural and educational reasons, the removal of Egeland’s post may in fact have helped him avoid criticism and cyber-bullying.

 

“Donald Trump” by Gage Skidmore is licensed under CC BY-SA 2.0.

Furthermore, Facebook and Twitter banned Donald Trump’s social media accounts in January 2021, after the 2020 presidential election, because his posts supported violent movements and contained false information. Like many users who have been banned by platforms, Trump was furious, criticising the platforms as a “total disgrace and an embarrassment” to the U.S., whereas Facebook’s CEO Mark Zuckerberg responded that “the risks are simply too great” to allow Trump’s speech to continue. In a way, the decision to ban Trump’s accounts was difficult to make for political reasons. For the benefit of the general public and the platforms’ audiences, sacrificing Trump’s free speech is likely to bring peace and civility to the platforms. Moderators have to make difficult decisions about how, when and why to intervene (Gillespie, 2018).

 

Video: “Why Content Moderation Costs Social Media Companies Billions” (YouTube, 2021).

Government intervention in content moderation

“Platform governance” describes the legal, political, and economic relationships among media users, technology companies and governments (Gorwa, 2019). Government interventions have increased, challenging the power of the technology companies, as states seek to regulate content and operations on platforms (Flew et al., 2019). Laws concerning online content have been enacted by countries such as Australia, Germany and Singapore, although stakeholders who value freedom of speech may be concerned about the changes brought by government intervention (Keller, 2018, cited in Flew et al., 2019). These governments believe that certain online content must be removed for the welfare of their citizens and the cultural norms they endorse.

Since the 2016 U.S. election, public attention has focused on firms’ right to moderate content and to make decisions about users’ freedom of speech, while governments around the globe seek to improve freedom of speech for their citizens (Gorwa, 2019). It is important to note that many platforms are “data monopolies”, meaning that they control most online content activity and are therefore difficult for governments to intervene in (Finck, 2017, p. 20; Marsden, 2018, cited in Gorwa, 2019). On the other hand, there are voices against government intervention. As Dawson (2020) highlights, government intervention can be viewed as a form of “censorship” that restricts users’ freedom to express their opinions, and users may encounter political and cultural difficulties on online platforms. Dawson (2020) also uses China as an example of such “censorship”: Chinese users’ social media activities are under surveillance, and there are strict restrictions on what content may be shared and what is strictly forbidden. The Chinese government reserves the right to suppress any posts that “threaten its authority”. Like content moderation, government intervention should also consider how, when and why to intervene, as freedom of speech is a valuable opportunity that only some users from certain countries possess. Although online safety and the platform experience are important, the last thing that all parties want is to restrain freedom of expression.

 

“Social Media Monopoly” by clasesdeperiodismo is licensed under CC BY-NC-SA 2.0

Conclusion

Content moderation comes from the good intention of enhancing online safety and protecting users from potential harm. Social media platforms are not a fantasy world completely open to any idea; just as in reality, there is illegal, vicious and violent content. Platforms value the utopian notion of the online community and are eager to maintain it as a civilised place. Although content moderation may undermine freedom of speech, sacrifices have to be made for the benefit of most users. Government intervention may be necessary for online safety reasons, but such intervention should be implemented with care, as valuable voices may otherwise be muted.

 

 

(1302 words, including titles, excluding figures’ captions)

 

References

ABC News. (2021, January 7). Twitter, Facebook, Instagram and Snapchat lock out Trump in wake of ‘unprecedented’ and ‘violent’ protests. https://www.abc.net.au/news/2021-01-07/twitter-facebook-lock-donald-trump-account-for-policy-violations/13038816

ABC News. (2021, May 5). Trump labels Facebook a ‘total disgrace’ after his ban is upheld. https://www.abc.net.au/news/2021-05-05/donald-trump-remains-banned-from-facebook/100119542

Dawson, M. (2020, February 26). Why government involvement in content moderation could be problematic. Impakter. https://impakter.com/why-government-involvement-in-content-moderation-could-be-problematic/.

Facebook community standards. (n.d.). Transparency Center. https://transparency.fb.com/en-gb/policies/community-standards/

Facebook wants to create a ‘Supreme Court’ for content moderation. Will it work? (n.d.). PRI. https://www.pri.org/stories/2019-09-05/facebook-wants-create-supreme-court-content-moderation-will-it-work

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1

Gillespie, T. (2018). All Platforms Moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press.

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). https://doi.org/10.14763/2019.2.1407

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

YouTube. (2021). Why content moderation costs social media companies billions [Video]. https://www.youtube.com/watch?v=OBZoVpmbwPk

 

Content moderation by Linglu Yang is licensed under a Creative Commons Attribution 4.0 International License.