https://upload.wikimedia.org/wikipedia/commons/4/4c/Social-media-1.webp
Content moderation is part of managing digital platforms that host user-generated content. Because of the open and flexible nature of the internet, what users post cannot be fully monitored or guaranteed. Social media emerged from the highly disorganized web, and although it gives users a place to stay connected with the world, the darker side of human behaviour has surfaced there as well, in the form of misogyny, child pornography, racism and even fake news (Gillespie, 2018). Digital platforms have therefore taken on the role of content regulators, drawing up terms and rules to restrict or remove content that some users find distressing in order to maintain a better online environment.
Issues arising from content moderation and the controversies encountered in implementing it
For online platforms to maintain a healthy online environment, user behaviour needs to be regulated by rules. Yet according to Gillespie (2018), the standards to follow are vague, and the platform often sits in an awkward position as a content distribution agent. Because there are no clear legal restrictions on online distribution, digital platforms have to set their own standards to govern user behaviour (Gillespie, 2018). Turley (2020) argues that this subjectivity of censorship is why the cure is worse than the illness. Gillespie (2018) notes that:
“Unanticipated kinds of content or behaviour may be spotted first through the complaints of users, then formalized into new rules or written into existing ones. Moreover, changes can also come in response to outcries and public controversies. In these guidelines, we can see the scars of past challenges.”
Similarly, Turley (2021) argues that:
“There is no such thing as a content-neutral algorithm that removes only harmful disinformation — because behind each of those enlightened algorithms are people who are throttling speech according to what they deem to be harmful thoughts or viewpoints.”
This creates an unequal situation, as platforms can censor user-generated content at their own discretion. So the first question is whether platforms should be responsible for the content users post when there are no specific guidelines for managing it.
Moreover, it is difficult to calibrate how strictly a platform removes content: some would argue moderation is too restrictive, while others argue the opposite. Platform moderation combines AI and human reviewers. Automated tools make the process more efficient, but machines are, after all, machines: they are designed to scan for what looks illegal, while the essential judgments, such as hate speech and discriminatory content flagged through user complaints or reports, are left to human moderators (Roberts, 2016, as cited in Myers West, 2018). A simple sketch of this division of labour is given below.
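To make that division of labour concrete, here is a minimal Python sketch of such a hybrid pipeline. It is purely illustrative and not based on any real platform's system: the scoring function, thresholds and names (classifier_score, triage) are all hypothetical stand-ins for the automated scanning and human escalation described above.

# Hypothetical hybrid moderation triage: the machine flags what *looks* like a
# violation; borderline or user-reported posts are escalated to human review.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    user_reports: int  # how many times other users reported this post

def classifier_score(post: Post) -> float:
    """Stand-in for an ML model estimating how likely a post violates the rules (0.0-1.0)."""
    banned_terms = {"slur_example", "threat_example"}  # placeholder term list
    hits = sum(term in post.text.lower() for term in banned_terms)
    return min(1.0, 0.5 * hits + 0.1 * post.user_reports)

def triage(post: Post, auto_remove_at: float = 0.9, human_review_at: float = 0.4) -> str:
    """Decide the post's fate: automatic removal, human review, or no action."""
    score = classifier_score(post)
    if score >= auto_remove_at:
        return "auto-remove"      # machine is confident enough to act alone
    if score >= human_review_at or post.user_reports > 0:
        return "human-review"     # borderline or reported content goes to people
    return "keep"

if __name__ == "__main__":
    example = Post(post_id=1, text="some reported post", user_reports=3)
    print(triage(example))  # -> "human-review"

The point of the sketch is the hand-off: the automated step is cheap and scales, but anything ambiguous or reported falls through to the slower, context-sensitive human judgment that the literature above describes.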
In the absence of regulation, online platforms can descend into chaos, with uncontrolled public discourse and uninhibited content. Misogyny, pornography, racism and even terrorism can spread fear among much of the public (Massanari, 2016). The video of the beheading of a journalist by a terrorist group roughly a decade ago disturbed many people. Because platforms failed to remove it promptly, the video circulated like a virus at a time when the internet was not strictly regulated and harmful user content was not censored in time; the range of ages and regions of the people traumatized in its aftermath is hard to imagine.
It is a dilemma for digital platforms to balance competing moral perspectives against the execution of content removal. "The Napalm Girl" is a photograph taken during a historical event, the Vietnam War. Facebook removed the picture when it was first shared on the platform, claiming that it violated the nudity rules in its guidelines. That decision drew condemnation from many internet users and media commentators. Facebook later restored the picture and explained that it cares about its users and communities overall, but the line between an image of nudity and a record of a historical event is hard to draw (Gillespie, 2018). How users decode information cannot be anticipated, because they come from many different cultural backgrounds. On a moral level, removing this content deserves criticism, because the image is a reminder of the heartbreaking moments war has brought us. On the other hand, for a platform with such a diverse user base, it is reasonable to consider the people who might find the picture itself uncomfortable despite its historical significance (Gillespie, 2018). In this sense, digital platforms are caught between the moral and historical ethics of removing the content.
Furthermore, even though social media gives people a platform to share their thoughts and connect with others, online censorship is to some extent involved in controlling users and can violate the public's right to know and their privacy. In the documentaries The Great Hack (2019) and The Social Dilemma (2020), many former Facebook, Twitter and Google employees admit to using algorithms to steer users, and even society as a whole, or to using user data for political warfare. The frenzied, uncontrolled spread of fake news can cause social chaos. Fake news may benefit politicians, but only for a while: former US President Donald Trump, a significant producer of false information, had his Twitter account blocked, which shows that the practice is not sustainable. In such cases, the spread of misinformation violates the public's right to know the true story.
Should the government have a more significant role in enforcing content moderation?
I argue that government should intervene more in content moderation, because without sufficient government involvement, problems arise with platform censorship, for example social media companies abusing censorship or censoring unjustly on the basis of their own subjective judgment.
Hence, the government should be involved in these practices and help balance competing social values on digital platforms.
According to the UN Office of the High Commissioner for Human Rights ("Moderating online content: fighting harm or silencing dissent?", 2021), restrictions imposed by the State should be based on law, should be clear, and should be necessary, proportionate and non-discriminatory.
Section 230, also known as the 'safe harbour' provision, is a piece of US legislation that protects platforms: first, social media companies are not responsible for the public discourse of their users, since they only provide the platform on which ideas are shared (Gillespie, 2018); second, the safe harbour still applies if these platforms choose, in good faith, to censor what users share online (Gillespie, 2018).
An example of how government could cooperate with digital companies comes from Adam Schiff, a United States representative. He sent a message to the heads of Google, Twitter, and YouTube asking them to moderate anything seen as misleading or false information. He told the companies that they should restrict such content and that, "while taking down hazardously misleading information is a vital step", they also have to educate the users who accessed it by making the facts available (Turley, 2020).
Conclusion
This article has illustrated several issues in implementing content moderation. First, platforms can make subjective decisions when censoring content because no specific law has been issued by government, which may restrict free speech. Second, the relationship between moral values and censorship requires further research. Lastly, examining content may invade users' right to know and their privacy, and misinformation can still spread. In the end, I argue that government should participate more in content moderation, not only to promote sound social values but also to set clear rules with social media companies in order to create a safer and more equal online environment.
References
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. Retrieved 15 October 2021, from https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029-001.
Gillespie, T. (2018). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254-278). SAGE. Retrieved 15 October 2021.
Hackabee, M. (2021). Mockery of free speech [Video]. Retrieved 16 October 2021, from https://www.facebook.com/watch/?v=251082356981007&t=0.
Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807
Moderating online content: Fighting harm or silencing dissent? (2021). OHCHR. Retrieved 16 October 2021, from https://www.ohchr.org/EN/NewsEvents/Pages/Online-content-regulation.aspx.
Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383. https://doi.org/10.1177/1461444818773059
Turley, J. (2020). “China Was Right”: Academics and Democratic Leaders Call For Censorship Of Social Media and The Internet. JONATHAN TURLEY. Retrieved 15 October 2021, from https://jonathanturley.org/2020/05/04/china-was-right-academics-and-democratic-leaders-call-for-censorship-of-social-media-and-the-internet/.
Turley, J. (2021). Throttling free speech is not the way to fix Facebook and other social media. TheHill. Retrieved 16 October 2021, from https://thehill.com/opinion/technology/576062-throttling-free-speech-is-not-the-way-to-fix-facebook-and-other-social?fbclid=IwAR2ELrdfkEeCDI_hdTWv22zYCkLPN1yN__syIfV5SNkx1aRTlJpiCHCRfOo&rl=1.