The Controversy of Platform Regulation: Content Moderation


The growth of the Internet has given rise to the platform economy and the platform society. People now communicate and interact largely through digital platforms, and at the same time extremist ideologies and hate speech have become commonplace. A permissive approach to content can, to some extent, attract larger audiences and bring platforms more profit and influence. However, as the user base grows and diversifies, platforms find themselves hosting users and communities with very different value systems, making it imperative for them to monitor content and resolve disputes. In most cases platforms do not produce content themselves, yet the content that circulates on them is still heavily shaped by their decisions.

Close-up of a smartphone screen showing social media application icons, including the Facebook app © 2021 by Piqsels is licensed under CC BY-NC-ND 4.0

In fact, there is no such thing as an unmoderated digital platform (Gillespie, 2019). Moderation has always existed; it has simply been denied and hidden to a considerable extent, both to avoid legal and ethical responsibility in the early stages of a platform's development and to maintain the illusion that platforms are fair, open, and unregulated (Gillespie, 2019). Open platforms emphasize new, expanded, and unrestricted intellectual and social opportunities, and they give audiences more chances to speak, shaping public discourse to some degree. However, as digital media have increasingly become a vehicle for terrorism, violent extremism, and hate, the Internet and digital platforms have had to adapt in order to monitor, block, and moderate the spread of such content.

There are no absolute standards for content moderation, and it is sometimes very difficult to distinguish the acceptable from the unacceptable. Facebook and the "Napalm Girl" controversy is a case in point (Ibrahim, 2017). When the photo was initially removed, Facebook defended the decision by arguing that it is difficult to distinguish between photographs of naked children that are permissible in one context but not in another. After widespread criticism from news outlets and media worldwide, the photo was reinstated. This was neither the first nor the last time a digital platform restored deleted content in the face of angry public accusations: back in 2008, Facebook was criticized for removing posts depicting women breastfeeding. The platforms' own standards for moderating and filtering content therefore deserve closer attention. Although the ban on the "Napalm Girl" photo was brief, the manipulation of history and collective memory on new media and social media platforms, whether through technological "intelligence" such as algorithms or through human censorship rather than editorial judgment, poses a broader ethical challenge as content circulates and accumulates at ever greater scale.

"Napalm Girl" (Kim Phuc fleeing a napalm attack in South Vietnam), photograph by Nick Ut, 1972

Moreover, the expansion and refinement of content moderation raises two issues that must be confronted:

On the one hand, platforms need to weigh freedom of expression against potential harm, and freedom of speech against the quality of information. Platform rule sets are expanding rapidly, and more content review is not always better review: audiences face ever greater restrictions on what they can access online, while those same restrictions can drive audiences away and thus reduce revenue (Regulation of and by Platforms, 2017).

On the other hand, the cost of moderation for platforms has risen steeply. There are currently three main approaches to regulating and reviewing content. The first is to hire large numbers of professionals to review content manually, which raises employment costs and exposes workers to harmful material. The second is artificial intelligence and automated review, which still falls short in the accuracy of its judgments. The third relies on mutual monitoring by users, who report content to the review team by flagging it, an approach that can be effective to some extent (Heldt & Dreyer, 2021).
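
To make the division of labour between these three approaches concrete, the sketch below shows a hypothetical hybrid moderation pipeline in Python: an automated classifier score and user flags are combined to decide whether a post is removed automatically, escalated to a human review queue, or left up. The function names, thresholds, and data fields are illustrative assumptions, not any platform's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds; real platforms tune these per policy area.
AUTO_REMOVE_SCORE = 0.95   # classifier confident enough to act alone
HUMAN_REVIEW_SCORE = 0.60  # uncertain cases go to human moderators
FLAGS_FOR_REVIEW = 3       # enough user flags also trigger human review

@dataclass
class Post:
    post_id: str
    text: str
    user_flags: int = 0            # how many users have flagged this post
    classifier_score: float = 0.0  # 0.0 = benign, 1.0 = clearly violating

def route_post(post: Post) -> str:
    """Decide what happens to a post under the hybrid pipeline.

    Returns one of: 'auto_remove', 'human_review', 'keep'.
    """
    if post.classifier_score >= AUTO_REMOVE_SCORE:
        return "auto_remove"   # automated review acts alone
    if post.classifier_score >= HUMAN_REVIEW_SCORE or post.user_flags >= FLAGS_FOR_REVIEW:
        return "human_review"  # escalate uncertain or heavily flagged content
    return "keep"              # below all thresholds, leave it up

if __name__ == "__main__":
    examples = [
        Post("a1", "clearly violating content", classifier_score=0.98),
        Post("b2", "borderline content", classifier_score=0.70),
        Post("c3", "benign but heavily flagged", classifier_score=0.10, user_flags=5),
        Post("d4", "ordinary post", classifier_score=0.05),
    ]
    for p in examples:
        print(p.post_id, "->", route_post(p))
```

Even in this toy form, the design choice is visible: automation handles only the clearest cases, while ambiguity and user disputes are pushed to costly human review, which is exactly where the expense and the accuracy trade-offs described above arise.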

In addition, despite several waves of heavy investment in artificial intelligence and human reviewers, no platform has succeeded in fully limiting the harm of third-party content. Even solutions built around content review cannot solve the problem completely, and massive spending on AI and human moderators has not stopped the spread of millions of harmful messages. Moreover, censorship comes at a cost: under moderation at this vast scale, combating harm and suppressing dissent become difficult to distinguish, and both interfere with audiences' freedom of expression and privacy (Regulation of and by Platforms, 2017).

Fighting Harm or Silencing Dissent?

In most cases, Internet platforms play an important role in managing users and content. Providers of Internet platforms, especially social media platforms, are often pressured by law enforcement agencies and private actors around the world to intervene against content that conflicts with those actors' interests (WSJ, 2019). They sometimes even receive conflicting demands for intervention as different stakeholders try to understand and influence how online content is managed. Unfortunately, the moderation of social media content lacks transparency, and moderation rules are often poorly targeted, leaving the freedom of expression and other legal rights of the platforms' audiences unprotected. In 2018, the Santa Clara Principles on Transparency and Accountability in Content Moderation were released by New America's Open Technology Institute, part of a coalition of organisations, advocates, and academic experts who support the right to free expression online; the coalition has pushed YouTube, Facebook, and Twitter to implement the Principles' recommendations on 'notifications' and 'appeals'. However, although all three companies have published transparency reports covering content removed for violating their terms of service, these reports fail to meet the detailed recommendations of the Santa Clara Principles, and their content lacks standardization.

As digital platforms expand their power, policymakers need to recognize the growing influence of commercial platforms and address the potential harms they pose to society, such as extremist ideology, terrorism, and fake news. Yet developments in this area can easily produce adverse effects, such as compromised user privacy and restricted free expression. Government regulation of digital platforms has been heavily criticised in public opinion. In many cases, the perspectives of communities of color, women, LGBTQ+ communities, and religious minorities face disproportionate enforcement, while the harms directed at them often go unaddressed. No one has the right to force others to agree with their ideas, let alone to disseminate those ideas, yet contempt for these principles is now commonplace across the political spectrum. Faced with the swollen power of digital platforms and the growing proliferation of harmful information online, governments should play a role in setting content moderation requirements for social media. If governments interfere excessively, however, digital platforms will be constrained and content producers will be limited. Social media companies currently enjoy strong liability protections and are largely self-regulating. Calls for government regulation are growing, but under such regulation platforms may be forced to remove content simply to absolve themselves of liability, blocking the flow of information to some extent.

In short, Internet censorship is a costly process that empowers one group of people to override another. The desire to bring government into controversies over online speech is understandable, but misguided. Censorship can even make potential audiences more sympathetic to the position advocated in the censored communication and increase their desire to hear it. What counts as an appropriate and well-developed Internet environment is still being worked out, and it requires mutual constraint and balance among audiences, digital platforms, and government agencies.

 

References

 

The Transparency Report Tracking Tool: How Internet Platforms Are Reporting on the Enforcement of Their Content Rules. (n.d.). New America.

 

Ibrahim, Y. (2017). Facebook and the Napalm Girl: Reframing the Iconic as Pornographic. Social Media + Society, 3(4). https://doi.org/10.1177/2056305117743140

 

Should the Government Regulate Social Media? Students debate regulating social media: scrub hate speech, make moderation neutral, or leave well enough alone? (2019). The Wall Street Journal, Eastern edition.

 

Bertot, J. C., Jaeger, P. T., & Hansen, D. (2012). The impact of polices on government social media usage: Issues, challenges, and recommendations. Government Information Quarterly, 29(1), 30–40. https://doi.org/10.1016/j.giq.2011.04.004

 

Heldt, A., & Dreyer, S. (2021). Competent Third Parties and Content Moderation on Platforms: Potentials of Independent Decision-Making Bodies From A Governance Structure Perspective. Journal of Information Policy, 11, 266–300. https://doi.org/10.5325/jinfopoli.11.2021.0266

 

Gillespie, T. (2019). All Platforms Moderate. In Custodians of the Internet (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029-001

 

Regulation of and by Platforms. (2017). In The SAGE Handbook of Social Media. SAGE Publications.