Moderation and regulation of media content on digital platforms

Should media content be regulated by the digital platforms themselves or by the government?

Introduction

Nowadays, with the rapid development of technology, digital media has become an indispensable part of daily life and society. Digital media platforms have built a bridge between people and the outside world; people have gradually come to rely on them and enjoy sharing their ideas there. At the same time, these platforms harbor various potential problems, such as hate speech, conspiracy theories, fake news, insulting content, and cyberbullying, which have prompted a reconsideration of how digital platforms should be supervised.

 

What are some specific examples of these issues?

  • Hate speech and conspiracy theories

‘Controls to manage fake news in Africa are affecting freedom of expression’ by Ashwanee Budoo-Scholtz is licensed under CC BY-SA 2.0.

Hate speech refers to the use of offensive or violent language against specific groups of people with common attributes, including sexist and racist language (Watanabe, Bouazizi, & Ohtsuki, 2018). Online hate speech and conspiracy theories, broadly defined as negative language targeting a specific identity, have become a serious problem on popular digital platforms. For example, in early 2020, the sudden spread of the novel coronavirus caused panic around the world. Because people knew little about this infectious disease, a great deal of misinformation, fake news, and conspiracy theories about COVID-19 spread on digital media platforms such as Twitter and Facebook. This large-scale transmission led to a so-called “epidemic” of information in cyberspace: an overload of content of uneven quality, spreading at unprecedented speed and carrying many potential adverse consequences (Uyheng & Carley, 2021).

 

  • Cyberbullying

Furthermore, cyberbullying has become an international phenomenon that occurs across a wide range of social groups and increasingly calls for legislative action. For example, in 2013 alone, 17% of German students had already become victims of cyberbullying (Pieper & Pieper, 2017). Online bullies inflict violence not only through words but also through pictures and videos. On the internet, the anonymity of perpetrators, the potential breadth of the audience, and the unlimited, time-independent availability of content all increase the intensity and scope of the harm, as perpetrators’ behavior is less restrained (Pieper & Pieper, 2017). In particular, mobile internet access through increasingly popular smartphones now enables victimization at any time and in any place. Victims, for their part, often cannot be sure who committed the violence or against whom to fight back.

Therefore, it is necessary to regulate social media from both political and social perspectives.

 

What attempts have been made so far to implement content moderation, and what were their effects?

Digital platforms’ attempts

In response to these issues, the digital platforms have attempted a series of content moderation measures. However, as Alkiviadou (2019) highlights, although digital platforms such as YouTube, Facebook, Instagram, and Twitter have all implemented internal content moderation approaches, such as the ‘content view’ method, and have signed up to the European Commission’s code of conduct on countering illegal hate speech, there are still flaws in their internal policies and implementation. Similarly, Watanabe, Bouazizi, and Ohtsuki (2018) point out that the sheer scale of these networks and sites makes it almost impossible for internal measures to control all the content.

‘Facebook, Twitter, Instagram fined $3.8M each for failing to obey new social media rule’ by Daily Sabah is licensed under CC BY-SA 2.0.

 

Controversies and issues

However, these implementations have also caused issues and controversies in society.

Riedl, Whipple and Wallace (2021) argue that content regulation raises civil liberties issues, such as the extent to which citizens want private actors like digital media platforms to regulate content, and how freedom of speech shapes content governance mechanisms. In 2019, Facebook announced that political actors were free to post any advertisements, effectively applying different standards of truth and speech norms to them than to ordinary users (Riedl, Whipple, & Wallace, 2021). Moreover, Lan Millstein of the University of Pennsylvania observes that when social media platforms supervise their own content, they may remove some users while leaving other, similarly illegal content untouched, all of which reflects subjective bias (Wall Street Journal, 2019).

 

In addition, the self-regulation of social platforms has limitations, such as low auditing efficiency and unclear definitions of the boundaries of acceptable speech. For example, during the 2019 Christchurch shootings in New Zealand, Facebook’s artificial intelligence was slow to review the footage because violence viewed from the perpetrator’s perspective fell outside the scope of the program. Extreme comments and videos cannot be deleted promptly when they cross the line, and the line defining terrorist speech is not clearly drawn (Riedl, Whipple, & Wallace, 2021).

 

Media platforms therefore face serious problems, yet the companies that operate them seem unable or unwilling to solve them. Through surveys of users, scholars have also found that many users support government regulation and believe the government should exercise greater oversight of social media platforms than it currently does (Wall Street Journal, 2019).

 

Should the government play a role in regulating digital platforms?

Compared with regulation by the digital media platforms themselves, government supervision is the better way to regulate media content.

Firstly, the government is a non-profit entity that works for the benefit of society as a whole rather than for specific organizations. This supports an unbiased approach to regulation and prevents any company from ‘owning’ information and using it for its own purposes.

Secondly, compared with regulation of digital media content by the platforms themselves, the government can protect the right to privacy and freedom of speech to the greatest extent possible. The government can also help foster ‘participatory democracy’ by providing means for the public to participate in democratic activities, for example testimony at hearings, responses to solicitations for comment on regulations, and interactions with elected officials (Bertot, Jaeger, & Hansen, 2012).

‘Crisis deepens in the international system and United Nations’ by Muhittin Ataman is licensed under CC BY-SA 2.0.

Thirdly, the government can implement regulation in a variety of ways, whereas other actors have limited power over social media platforms. The government can work with the legislature to enact specific laws against false information and inappropriate content on social media. More importantly, the role of government is not limited to national or federal governments but also extends to international bodies such as the United Nations (Bertot, Jaeger, & Hansen, 2012).

 

What are some examples of government supervision?

An example of the European Union regulating digital platforms

 

‘Regulating social media: Governments considering options’ by CBC News: The National

As the video shows, the Facebook data breach led to consequences in Canada. In response to these issues, however, the European Union did not follow the Canadian prime minister’s approach of leaving responsibility with the media companies; instead, it established a new law, the General Data Protection Regulation, which applies to media companies in any country whenever EU citizens’ information is involved.

 

‘Social media and content moderation in times of COVID-19’ by Rachel Griffin is licensed under CC BY-SA 2.0.

Another successful and widely supported example, from Germany, likewise shows the advantages of government supervision. In April 2017, Germany announced a social media bill to combat the spread of fake news, hate speech, and conspiracy theories on digital media. The bill, passed by the German parliament, requires social media companies such as Facebook and Twitter to remove fake news reports that incite hate and other ‘criminal’ content, or face fines of up to 50 million euros (Andorfer, 2018). It makes social networks responsible when users exploit their platforms to ‘spread hate crimes or illegal fake news’. This action has received support from citizens, who state that it pushes social media sites to comply with existing German laws governing hate speech and incitement (Andorfer, 2018).

 

 

Conclusion

Overall, in addressing the content moderation issues on digital platforms, including hate speech, conspiracy theories, fake news, and cyberbullying, government supervision has shown clear advantages and good results compared with the internal measures of the digital platforms themselves. At present, however, these problems have not been completely solved, and more prominent and targeted government resolutions and implementations will be needed in the future.

 

Word count: 1289

 

 

Reference list

Alkiviadou, N. (2019). Hate speech on social media networks: towards a regulatory framework? Information & Communications Technology Law, 28(1), 19–35. https://doi.org/10.1080/13600834.2018.1494417

Andorfer, A. (2018). Spreading like wildfire: Solutions for abating the fake news problem on social media via technology controls and government regulation. Hastings Law Journal, 69(5), 1409–1432. https://heinonline-org.ezproxy.library.sydney.edu.au/HOL/Page?handle=hein.journals/hastlj69&id=1410&collection=journals&index=

Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of polices on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly, 29(1), 30–40. https://doi.org/10.1016/j.giq.2011.04.004

Pieper, A. K., & Pieper, M. (2017). The insulting Internet: universal access and cyberbullying. Universal Access in the Information Society, 16(2), 497–504. https://doi.org/10.1007/s10209-016-0474-z

Riedl, M. J., Whipple, K. N., & Wallace, R. (2021). Antecedents of support for social media content moderation and platform regulation: The role of presumed effects on self and others. Information, Communication & Society, 1–18. https://doi.org/10.1080/1369118X.2021.1874040

Uyheng, J., & Carley, K. M. (2021). Characterizing network dynamics of online hate communities around the COVID-19 pandemic. Applied Network Science, 6(1), 20–20. https://doi.org/10.1007/s41109-021-00362-x

Wall Street Journal. (2019). Should the government regulate social media? Students debate regulating social media—scrub hate speech, make moderation neutral, or leave well enough alone? The Wall Street Journal, Eastern Edition. https://www.proquest.com/docview/2246487926/18D5E2C644E74EAFPQ/10?accountid=14757

Watanabe, H., Bouazizi, M., & Ohtsuki, T. (2018). Hate Speech on Twitter: A Pragmatic Approach to Collect Hateful and Offensive Expressions and Perform Hate Speech Detection. IEEE Access, 6, 13825–13835. https://doi.org/10.1109/ACCESS.2018.2806394