Content Moderation on Digital Platforms: A Shackle on Freedom or a Safeguard for Audiences?

"meta_creation lab: inter-actors, attractors and the aesthetics of complexity" by dancetechtv is licensed under CC BY-SA 2.0

What is Content Moderation?

Content moderation is, at its core, data labeling. Content deemed inappropriate, or content that does not meet a company’s guidelines, is labeled as such and reviewed by content moderators (“Content Moderation & Social Media”, 2021). Content moderation is a controversial topic because much of the public views it as running counter to the First Amendment. However, private companies are not bound by the First Amendment (Samples, 2019). The controversy, therefore, is not about the legality of content moderation but about its morality.
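To make the labeling framing concrete, here is a minimal sketch of a flag-and-review pipeline. It assumes a simple keyword rule as a stand-in for the trained classifiers real platforms use, and the names BANNED_TERMS, label_post, and review_queue are illustrative inventions, not any platform’s actual system.

```python
# Minimal sketch of moderation as data labeling: a rule labels content,
# and flagged items go to a queue for human moderator review.
# BANNED_TERMS is a toy stand-in for a platform's community guidelines.

BANNED_TERMS = {"slur_a", "slur_b"}

def label_post(text: str) -> str:
    """Return 'flagged' if the post appears to violate guidelines, else 'ok'."""
    words = set(text.lower().split())
    return "flagged" if words & BANNED_TERMS else "ok"

review_queue = []  # flagged posts await a human moderator's decision
for post in ["hello world", "this contains slur_a"]:
    if label_post(post) == "flagged":
        review_queue.append(post)

print(review_queue)  # ['this contains slur_a']
```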

 

Challenges Faced in Content Moderation

Conversations always seem to focus on the biggest, US-based platforms. These platforms are colossal, and their policies affect billions of users. Pundits treat Facebook and YouTube as stand-ins for the entire landscape of platforms (Mansell & Steinmueller, 2020, Ch. 3).

How Does Content Moderation Affect Social Life?

Digital platforms have seen the rise of waves of feminist activist movements. Early studies of gender dynamics in the emerging digital economy failed to establish that technology reduces gender discrimination. Quite the reverse: technology may strengthen gender inequality. This raises the broader question of managing diversity from an intersectional perspective, viewing gender in its connection to other dimensions of inequality such as race, origin, and class.

Take the example of Elsagate. In 2017, an issue involving YouTube Kids, an app supposedly designed for children, caused a stir among parents. Several parents discovered that a significant amount of inappropriate content had spread across the platform and was readily available for unsuspecting kids to see. The suffix ‘-gate,’ conventionally attached to scandals, marks the episode for what it was: a scandal in which children were victimized with harmful content.

A study conducted in 2019 at the Pew Research Center showed that 72% of United States adults used at least one social media platform, and the majority of those users visited that platform at least once a week (Gallo & Cho, 2021, Pg. 4). This suggests that even much of the older population uses social media every week. However, younger people, who have had access to technology their whole lives and use it constantly, are most often the victims of misleading information.

As worry about moderation has grown, scholarly attention has evolved with it, shifting somewhat from specific controversies to larger, structural questions about how moderation is organized and enforced. Yet research still tends to be driven by high-profile incidents and individuals: the 2016 United States election foregrounded anxieties about disinformation (Napoli, 2018, Pg. 57); the Christchurch shooting pushed hate and domestic terrorism to the top of the agenda; the Covid-19 pandemic put misinformation and conspiracy back in front.


Photo by Annie Spratt on Unsplash 

 

What Are the Attempts to Implement Content Moderation?

Political

This tendency extends to policy-making. US and European policymakers have likewise centered on the most recent controversies and the most visible players (Flew et al., 2019, Pg. 37). It is not shocking that when the US Congress began to probe these questions, the first put in the hot seat was Mark Zuckerberg, followed by senior leaders from Google, Twitter, and YouTube (Flew et al., 2019, Pg. 37). As enormous and problematic as Facebook is, it has become disproportionately prominent in moderation debates: the favored object of analysis in framing the issue, the stand-in for all remaining platforms (Fuller, 2020, Pg. 13).

Content moderation is drawing attention around the world. While governments are asking private actors to take substantive measures to stop the circulation of inappropriate content on the web, analysts and scholars dispute the degree to which such actions endanger freedom of expression. There has, without a doubt, been a recent increase in harmful speech and abuse on online platforms, driving numerous nations to take action to regulate what is regularly referred to as “the global village” (Samples, 2019). This explains why content moderation is drawing so much attention in France. Recently, Laetitia Avia introduced a bill to tackle hateful speech on the web, and Emmanuel Macron invited Mark Zuckerberg to the Elysée Palace to discuss regulation, particularly the issue of hate speech on Facebook.

 

Sociocultural

Governments want to ban several kinds of content. First, there is unlawful content: racism, hate speech targeting gender or sexual orientation, and denial of crimes against humanity such as the Holocaust. Governments are also battling content that supports terrorist organizations. There is additionally the issue of pornography and sexual content on public social networks, particularly as it reaches adolescents. Another concern is the live streaming of incidents such as the Christchurch shootings. Finally, there is misinformation and deliberately fabricated propaganda designed to undermine electoral processes both inside and outside the countries involved.

Furthermore, many arrangements have been established to manage moderation or check online harms, some targeted at private businesses. In July 2000, a federal judge in San Francisco ordered the shutdown of Napster, the popular music-swapping website, stating that the online company encouraged “wholesale infringement” of music industry copyrights (“ABC News”, 2021). Regulation of this kind may sensibly have Facebook or YouTube in its sights, but in practice it is likely to apply to all platforms and user-content services. The outcome could further consolidate the power of the largest tech companies, those best equipped to handle the regulatory burdens. This has been a worry in related areas, for instance, privacy and copyright protection.


Photo by Nubelson Fernandes on Unsplash 

 

Should the Government Play a Greater Role in Enforcing Content Moderation?

Absolutely; inasmuch as governments are already part of the fight, they can still do more. Content moderation improves the safety of the online environment by monitoring content that is deemed inappropriate or that goes against company guidelines, and government intervention would help remove such content from platforms. However, banning or removing content or users on one platform would only push them to other platforms to share inappropriate content. To prevent this, not only should companies moderate content, but the government should act to stop users from posting inappropriate content on all public platforms. Germany offers an example of government intervention in content moderation: there, “social networks could pay up to $60 million in fines if hate speech isn’t removed within 24 hours” (Armstrong, 2017). Such intervention would signal zero tolerance to the public and motivate companies to remove inappropriate content.
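The mechanics of the German rule reduce to a deadline check. The sketch below is a hedged illustration under that reading, not the law’s actual text or any platform’s tooling; the function met_deadline and its fields are hypothetical.

```python
# Hedged sketch of the 24-hour takedown rule quoted above: given when a
# hate-speech report was filed and when (if ever) the content came down,
# decide whether the platform stayed inside the window. Names are hypothetical.

from datetime import datetime, timedelta
from typing import Optional

TAKEDOWN_WINDOW = timedelta(hours=24)  # per the rule quoted above

def met_deadline(reported_at: datetime, removed_at: Optional[datetime]) -> bool:
    """True if the content was removed within 24 hours of being reported."""
    if removed_at is None:
        # Content is still up: compliant only while the window has not elapsed.
        return datetime.now() - reported_at <= TAKEDOWN_WINDOW
    return removed_at - reported_at <= TAKEDOWN_WINDOW

reported = datetime(2021, 10, 1, 9, 0)
removed = datetime(2021, 10, 1, 20, 0)  # taken down 11 hours after the report
print(met_deadline(reported, removed))  # True
```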

Based on the information above, content moderation may seem reasonable to implement across all social media platforms. However, many people argue that it goes against the First Amendment (“First Amendment”, 2017), which “protects the freedom of speech, religion, and the press.” Although the First Amendment does not bind private companies, many view it as a right that should not be taken away under any circumstance.

Furthermore, the argument continues that private companies have the freedom to silence any opinion other than their own through content moderation, even when that opinion is not deemed ‘offensive’ (Kelty, 2014, Pg. 195). Those against content moderation therefore argue that such companies become monopolies that are not bound by the law.

 

Conclusion

It is not only the government but also the tech companies themselves that carry a presumption of legitimacy in fighting inappropriate content. These companies need to build legitimacy for how they moderate. Their owners might regret being summoned to the task, considering the political and social threats and difficulties it involves, but the task is unavoidable.

 

References 

ABC News. (2021). Napster Shut Down. Retrieved 15 October 2021, from https://abcnews.go.com/Technology/story?id=119627&page=1.  

Armstrong, P. (2017). Why Facebook’s Content Moderation Needs To Be Moderated. Forbes. Retrieved 14 October 2021, from https://www.forbes.com/search/?q=Why%20Facebook%27s%20Content%20Moderation%20Needs%20To%20Be%20Moderated.%E2%80%9D&sh=5e920df3279f

Content Moderation & Social Media. (2021). Lionbridge AI, Lionbridge Technologies. Retrieved 12 October 2021, from https://www.telusinternational.com/solutions/trust-safety-security/content-moderation-and-social?INTCMP=ti_lbai

First Amendment. (2017, December 4). A&E Television Networks. www.history.com/topics/unitedstates-constitution/first-amendment

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50.

Fuller, G. (2020). Sharing News Online: Commendary Cultures and Social Media News Ecologies, Fiona Martin and Tim Dwyer (2019). Australian Journalism Review, 42(2), 333-334. Retrieved 14 October 2021, from https://link.springer.com/book/10.1007%2F978-3-030-17906-9

Gallo, J. A., & Cho, C. Y. (2021). Social Media: Misinformation and Content Moderation Issues for Congress. Congressional Research Service Report, 46662. Retrieved 12 October 2021, from https://www.bespacific.com/social-media-misinformation-and-content-moderation-issues-for-congress/. 

Kelty, C. M. (2014). The fog of freedom. Media technologies: Essays on communication, materiality, and society, 195-220. 

Mansell, R., & Steinmueller, W. E. (2020). Advanced introduction to platform economics. Edward Elgar Publishing. Retrieved 13 October 2021, from https://books.google.co.ke/books?hl=en&lr=&id=Gr72DwAAQBAJ&oi=fnd&pg=PR1&dq=info:CvgpFQTQX1sJ:scholar.google.com/&ots=WQcD1dvDd1&sig=JwpewOcU7DgByrhNRRE2FTRxwGo&redir_esc=y#v=onepage&q&f=false   

Napoli, P. M. (2018). What if more speech is no longer the solution: First Amendment theory meets fake news and the filter bubble. Fed. Comm. LJ, 70, 55. 

Samples, J. (2019). Why the Government Should Not Regulate Content Moderation of Social Media. Cato Institute. Retrieved 12 October 2021, from https://www.cato.org/policy-analysis/why-government-should-not-regulate-content-moderation-social-media