With the development of the internet, people increasingly rely on it rather than face-to-face interaction for everyday activities such as socialising, shopping online and expanding their social circles, and this has greatly facilitated daily life. The emergence of Web 2.0 turned the internet into a participatory medium, increasing interaction between users through digital platforms such as Twitter, Instagram and LinkedIn. However, as digital platforms become more powerful, some problems need to be re-examined, and the extensive use of these platforms should be regulated. Content moderation is committed to eliminating illegal content on the internet, such as hate speech. When platforms moderate content inappropriately, their widespread use raises serious issues, such as the spread of misinformation, hate speech and conspiracy theories, as well as public indignation about violations of freedom of speech. To better moderate the content of digital platforms, the government should participate in content moderation by enacting legislation.
The History of Content Regulation:
Content moderation of media did not emerge suddenly with the rise of social media in recent years. According to research by Daskal et al. (2020), the earliest media regulatory systems appeared in European and American media organisations in the 1960s and 1970s, which created the position of media ombudsperson, responsible for supervising the fairness of news content and correcting misinformation in news reports. Later, the Communications Decency Act was promulgated in 1996. In the current Web 2.0 era, more direct interaction between users has made content moderation even more important.
Consequences of inappropriate content moderation:
Improper regulation leads to the dissemination of massive amounts of misinformation, which causes unnecessary social unrest. Information circulates on social media platforms at large scale and high speed. The outbreak of COVID-19 in 2020, for instance, caused social turmoil: because little was known about the virus and the public panicked at the beginning, suspicions and conspiracies flooded internet platforms almost instantly. The lack of platform regulation allowed misinformation, conspiracy theories and hate speech to spread widely, fuelling social panic. For example, misinformation claimed that enhancing the immune system could effectively resist the invasion of COVID-19; although this claim had no scientific basis, #immunebooster became a trending topic on Instagram during the pandemic (Wagner et al., 2020). Misleading information can cause the masses to follow blindly and, in serious cases, endanger their lives. Ingesting bleach and disinfectant, for example, not only fails to eliminate the virus but also threatens people's lives; the American Association of Poison Control Centers reported about 17,000 calls related to disinfectant poisoning (Hart, 2021). Insufficient content moderation on social media leads to social panic and even more serious consequences.
The efforts made by platforms and the government:
Over the past years, platforms and governments have made several attempts at content moderation on digital platforms. Alongside national laws such as the German Telemedia Act, the European Commission, Facebook, Microsoft, Twitter and YouTube signed the Code of Conduct against Illegal Online Hate Speech in 2016, which commits the signatories to removing and suppressing illegal hate speech within 24 hours (Oliva, 2020). Furthermore, YouTube has banned the monetisation of anti-vaccine videos, preventing people from profiting from misinformation propaganda (Yang et al., 2019). To tackle illegal content on digital platforms, many such measures have been implemented by platforms and governments.
The never-ending controversy surrounding social media regulation:
However, as social media platforms blur the boundaries between public and private communication, the debate about whether moderation damages human rights has never stopped. When digital platforms moderate content, parts of the public are dissatisfied with the regulatory system and believe it violates the right to freedom of speech. For many users of social media platforms, even offensive language is the norm. Based on a survey by Jorgensen and Zuleta (2020), approximately 62 percent of respondents in Denmark think that protecting freedom of speech is more important than preventing offensive posts. Most users argue that content moderation on social media platforms seriously harms the right to freedom of speech. For instance, Karadeglija and Platt (2021) mention that a congressman was accused of seriously damaging freedom of speech after deleting a social media post without authorization.
Moreover, a compulsory content supervision system may be counterproductive if the public comes to believe that platform deletions are discriminatory. Oliva (2020) indicates that hate speech targeting "black children" was not forcibly deleted, whereas posts aimed at "white men" were treated as hate speech and removed. Even after compulsory regulation of social media content by platforms and the government, public discontent with content regulation cannot be eliminated in the short term.
How to effectively regulate the content on social media platforms?
To better implement and constrain content moderation on digital platforms, the government should intervene and play an important role. Platforms can, to some extent, self-regulate to prevent fake news. For example, Twitter completely banned political advertising during the 2020 U.S. election (Rochefort, 2020), which helped prevent false political information from spreading in advance. Nevertheless, government regulation is more appropriate than platform self-regulation. Firstly, influential social media platforms such as Facebook can affect elections in a democratic country by spreading fake news and misinformation (Rochefort, 2020). Moderation by the platform itself may involve political bias, and a platform can selectively serve political advertising to users in exchange for benefits from politicians. During the 2018 Brazilian presidential election, Facebook deleted 196 pages and 87 personal profiles, many of them related to "Movimento Brasil Livre", a right-wing political movement (Oliva, 2020). Twitter has likewise been accused of using content moderation to influence U.S. presidential election results, for instance by protecting Clinton (West, 2018). Self-regulation by platforms cannot be as effective and objective as government regulation.
Secondly, a government legal framework to constrain social media content moderation can play a more visible role. The Communications Decency Act of 1996 restricts the spread of online erotic content (Napoli, 2019). The European Commission achieved the removal of nearly 70 percent of illegal content by implementing the Code of Conduct of 2016 (Alkiviadou, 2019). Moreover, in 2017 Germany enacted the Network Enforcement Act to tackle hate speech on social media platforms; platform operators who do not delete illegal content within 24 hours face fines of up to 50 million euros (Jorgensen & Zuleta, 2020). Compulsory laws requiring the removal of illegal content are more efficient at regulating the content of digital platforms.
Laws may stop terrorists from using social media:
“Lawmakers are working to combat terrorist use of social media” by WKYT via YouTube. https://www.youtube.com/watch?v=f0x3TVo1DWo
Social media platforms facilitate hate speech by terrorist organizations, and government intervention in moderating terrorist content is the only way to address it. Because of the internet's rapid reach and relative anonymity, terrorist organizations prefer to use social media to spread threats. The government bears more responsibility than the platforms for preventing such serious situations, and legislation is the only effective method of suppressing terrorist organizations on social media platforms (Azani & Liv, 2020). Governments counter organizations that use the internet for terrorist purposes by imposing obligations on third parties such as Facebook and Twitter. For instance, in 2016 Twitter suspended 235,000 accounts for violating its policy against the promotion of terrorism (Azani & Liv, 2020). Government intervention in regulating social media content can effectively reduce how often terrorist content appears on social media.
Conclusion:
To better regulate the content on digital platforms, government intervention is necessary. Digital platforms provide a place for people to communicate easily and facilitate daily life. However, the convenience and reach of the internet have also been used to spread disinformation and disrupt social order. Content moderation on digital platforms is essential for maintaining social peace, but its regulation provokes discontent and controversy over human rights, particularly freedom of speech. Although compulsory provisions made by the government may impinge on freedom of speech, they remain the best approach to governing social media content moderation.
Reference List:
Alkiviadou, N. (2019). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35. https://doi.org/10.1080/13600834.2018.1494417
Azani, E., & Liv, N. (2020). A Comprehensive Doctrine for an Evolving Threat: Countering Terrorist Use of Social Networks. Studies in Conflict and Terrorism, 43(8), 728–752. https://doi.org/10.1080/1057610X.2018.1494874
Daskal, E., Wentrup, R., & Shefet, D. (2020). Taming the Internet Trolls With an Internet Ombudsperson: Ethical Social Media Regulation. Policy and Internet, 12(2), 207–224. https://doi.org/10.1002/poi3.227
Hart, R. (2021, September 17). Americans Are Poisoning Themselves In Large Numbers With Bleach, Hand Sanitizer And Quack Covid Cures Like Ivermectin. Forbes. https://www.forbes.com/sites/roberthart/2021/09/17/americans-are-poisoning-themselves-in-large-numbers-with-bleach-hand-sanitizer-and-quack-covid-cures-like-ivermectin/?sh=7ba5b2f342b8
Jorgensen, R. F., & Zuleta, L. (2020). Private Governance of Freedom of Expression on Social Media Platforms: EU content regulation through the lens of human rights standards. Nordicom Review, 41(1), 51–67. https://doi.org/10.2478/nor-2020-0003
Karadeglija, A., & Platt, B. (2021, October 10). The first 100 days: Major battle over free speech, internet regulation looms when Parliament returns. National Post. https://nationalpost.com/news/politics/the-first-100-days-major-battle-over-free-speech-internet-regulation-looms-when-parliament-returns
Napoli, P. M. (2019). User Data as Public Resource: Implications for Social Media Regulation. Policy and Internet, 11(4), 439–459. https://doi.org/10.1002/poi3.216
Oliva, T. D. (2020). Content moderation technologies: Applying human rights standards to protect freedom of expression. Human Rights Law Review, 20(4), 607–640. https://doi.org/10.1093/hrlr/ngaa032
Rochefort, A. (2020). Regulating Social Media Platforms: A Comparative Policy Analysis. Communication Law and Policy, 25(2), 225–260. https://doi.org/10.1080/10811680.2020.1735194
Wagner, D. N., Marcon, A. R., & Caulfield, T. (2020). “Immune Boosting” in the time of COVID: selling immunity on Instagram. Allergy, Asthma, and Clinical Immunology, 16(1), 1–76. https://doi.org/10.1186/s13223-020-00474-6
West, S. M. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366–4383. https://doi.org/10.1177/1461444818773059
Yang, Y. T., Broniatowski, D. A., & Reiss, D. R. (2019). Government Role in Regulating Vaccine Misinformation on Social Media Platforms. JAMA Pediatrics, 173(11), 1011–1012. https://doi.org/10.1001/jamapediatrics.2019.2838