The Consequences of Content Moderation

Group 10, Jiachen Guo, Assignment 2

Executive Summary

While the internet has transformed the world into a global village, easing communication and enhancing business activity, it has also given rise to many problems. The sharing of harmful content over the internet is a concern that many have raised, particularly with respect to digital platforms. There have been calls to moderate shared content in order to weed out harmful material. However, content moderation creates problems of its own. This essay explores some of the problems that arise from content moderation.

Introduction

Digital platforms have proved to be both a blessing and a curse, presenting the human race with positives and negatives. These advantages and disadvantages arise from the content posted on the platforms. There is a great deal of helpful content, such as health advice from qualified doctors, credible academic resources, business tips, and humorous content that helps people relax. According to Hruska and Maresova (2020), social media is crucial in helping people acquire and spread information in domains such as politics, entertainment, business, and crisis management. At the same time, there is a great deal of harmful content on these platforms, including criminals preying on children, pornography, and hateful and extremist groups that recruit people to plan and execute violent activities (Ganesh & Bright, 2020). For example, Massanari (2017) demonstrates how the popular platform Reddit turned into a hub for anti-feminist activism.

It is impossible to weed out all the harmful content on digital platforms. However, there should be a deliberate effort to reduce it for the benefit of society. According to Gillespie (2018), platforms must moderate content to protect users from other users and groups from other groups, and to remove offensive, illegal, and vile content. Content moderation to reduce harmful content is a noble cause. However, issues and controversies arise when implementing content moderation strategies, and further concerns are raised when the idea of letting governments enforce content moderation on digital platforms is floated.

 

Figure 1. Countering Extremism on Social Media. Bharath Ganesh & Jonathan Bright, All Rights Reserved.

 

Effects of Content Moderation on Digital Platforms

While content moderation seems like a good idea that would benefit digital platforms and society, it has negative consequences for the platforms themselves. It is important to note that many people use certain digital platforms precisely for the content they provide, harmful or not. A large user base is how digital platforms raise revenue: their business models rest on the fact that many daily active users attract advertisers (Falch et al., 2009). A decrease in daily active users therefore shrinks a platform's revenue streams. Moderating content that some deem harmful and others do not can thus be detrimental to a platform: moderation that alienates a large part of the user base may push those users to competitors, and the platform loses revenue.

A good example of this is OnlyFans. Founded in 2016, the site was originally conceived as a subscription-based platform for exclusive content, with subscription money shared between creators and the platform. It did not take long, however, for the site to become synonymous with sex work as people selling nude pictures and sex videos flooded it. Shaw (2021) reports that the platform, with more than 130 million users, decided to bar sexually explicit videos. This move is bound to push content creators and users toward competitor sites that will let them make money and receive the content they want, and the exodus will reduce the revenue that the platform generates. The site nonetheless sees the move as necessary: it requires funding to expand, and investors are unwilling to fund platforms synonymous with sex work or pornography.

 

Figure 2. OnlyFans to Bar Sexually Explicit Videos

Apart from backlash from content creators and users, digital platforms also find it difficult to moderate content because of the enormous amount of data generated every day by billions of users worldwide. According to Gillespie (2020), the major challenges of implementing content moderation on digital platforms include the sheer volume of data created, the high frequency of violations by users, and the need for human judgment to decide whether particular content violates the platform's community guidelines. While using Artificial Intelligence (AI) to monitor content is attractive for its efficiency, it is bound to make many mistakes. For example, AI struggles to distinguish a joke from harmful content, since many jokes trade on material that some groups find offensive, and content creators would complain when their content was falsely flagged as harmful. Therefore, although it is far less cost-effective, human judgment remains essential to content moderation.

Figure 3. Challenges in AI Moderation.
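To make this division of labor concrete, the sketch below shows one common pattern, sometimes called human-in-the-loop moderation: an automated classifier acts on its own only in high-confidence cases and routes ambiguous posts to human reviewers. The scoring function, thresholds, and labels here are illustrative assumptions, not any particular platform's actual system.

```python
# Minimal sketch of a hybrid moderation pipeline. The classifier acts
# automatically only when it is confident; ambiguous items (e.g. edgy
# jokes) go to human review. All thresholds are hypothetical.

from dataclasses import dataclass


@dataclass
class Post:
    post_id: int
    text: str


def classify(post: Post) -> float:
    """Placeholder scorer: estimated probability (0.0-1.0) that the
    post violates community guidelines. A real system would call a
    trained model here."""
    return 0.5


def route(post: Post, remove_above: float = 0.95,
          allow_below: float = 0.20) -> str:
    """Decide what to do with a post based on model confidence."""
    score = classify(post)
    if score >= remove_above:
        return "remove"        # high confidence: act automatically
    if score <= allow_below:
        return "allow"         # high confidence it is benign
    return "human_review"      # ambiguous: a person must judge context


if __name__ == "__main__":
    print(route(Post(1, "an edgy joke the model cannot parse")))
    # -> "human_review"
```

The point of the thresholds is that the cheap automated path handles the bulk of clear-cut cases, while the expensive human path is reserved for exactly the contextual judgments Gillespie (2020) describes.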

 

Although content moderation is a taxing job, social media companies have been trying to ensure that only acceptable content is allowed onto, or stays up on, their platforms. This is necessary, as even Members of Congress have expressed concern over the spread of misinformation, that is, incorrect or inaccurate information, on the platforms (Congressional Research Service, 2021). Social media companies have implemented several strategies to curb undesirable content. Removing content is the most basic form of content moderation. To deter creators from repeatedly uploading undesirable content, platforms lock them out of their accounts for specified periods, and when creators persist, platforms permanently close their channels and bar them from opening new accounts. YouTube, for instance, uses a three-strikes system in which channels receive strikes for violating community guidelines; a channel that accumulates three strikes is permanently banned. These measures help keep creators in check, as they do not want to lose their followers.
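The escalation logic of such a policy is simple enough to sketch. The following is a minimal illustration of a three-strikes ledger, assuming hypothetical lockout durations of one and two weeks for the first and second strikes; it is not YouTube's actual implementation, which among other things lets strikes expire over time.

```python
# Minimal sketch of a three-strikes enforcement ledger: each violation
# adds a strike with a temporary lockout, and a third strike bans the
# channel permanently. Durations and structure are illustrative.

from datetime import datetime, timedelta

LOCKOUT = {1: timedelta(days=7), 2: timedelta(days=14)}  # per-strike lockouts
MAX_STRIKES = 3


class Channel:
    def __init__(self, name: str):
        self.name = name
        self.strikes = 0
        self.locked_until = None
        self.banned = False

    def add_strike(self, now: datetime) -> str:
        """Record one guideline violation and apply the next sanction."""
        if self.banned:
            return f"{self.name} is already permanently banned"
        self.strikes += 1
        if self.strikes >= MAX_STRIKES:
            self.banned = True            # third strike: permanent ban
            return f"{self.name} permanently banned"
        self.locked_until = now + LOCKOUT[self.strikes]
        return f"{self.name} locked out until {self.locked_until:%Y-%m-%d}"


if __name__ == "__main__":
    channel = Channel("example_channel")
    for _ in range(3):
        print(channel.add_strike(datetime(2021, 9, 1)))
```

The graduated sanctions matter: they give creators a chance to correct course before the irreversible penalty, which is what makes the deterrent credible without immediately destroying a channel's following.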

 

Bias in Content Moderation

While these content moderation tactics have proved effective, they have raised issues of their own. The most common complaint is that the moderators are biased. Since moderators are human beings prone to bias, some content creators claim that their views and content are deliberately and unfairly restricted. This is common in the political sphere, where content creators and users who hold certain political beliefs feel they are punished for them. Beyond that, one political side alleges double standards in how community guidelines are enforced: two users can post the same content, yet only one is punished, depending on their political stance. Critics argue that this amounts to censorship. Research by the Pew Research Center shows that most Americans think social media platforms censor political viewpoints (Vogels, Perrin, & Anderson, 2020). The study found that Republicans, far more than Democrats, feel that tech companies favor liberal viewpoints over conservative ones.

 

Figure 4. Social Media Platforms Censor Political Viewpoints. Emily A. Vogels, Andrew Perrin, and Monica Anderson, All Rights Reserved.

Bar-Tal (2017) notes that censorship prevents free access to information, freedom of expression, and the free flow of information. It is important to note that freedom of speech is protected and guaranteed by the First Amendment to the U.S. Constitution.

 

The Government’s Role in Content Moderation

Government involvement in people's private lives is generally frowned upon. Even so, an increased government role in content moderation might seem desirable, since governments have the resources to catch people engaging in criminal activity online. This would help protect children from sexual predators, drug peddlers, and extremists.

However, a danger lurks in letting the government determine which content should be allowed and which should be banned. China is a clear example of government online censorship and its association with authoritarianism. A study by Wang and Mark (2015) showed that Chinese internet users who scored high on authoritarian personality measures supported censorship. Any online political dissent is monitored and crushed by the Chinese government, which ensures that the Chinese Communist Party (CCP) maintains power and continues its authoritarian rule. With most of the world adopting democracy, government content moderation, which would amount to censorship, seems undesirable because it contradicts basic principles of freedom; giving even a democratic government the power to regulate speech risks a slide toward authoritarianism.

Conclusion

In summary, content moderation to reduce harmful content is a noble cause, but issues and controversies arise when implementing it, and further concerns arise over letting governments enforce it. Moderation that alienates a large part of a platform's users may drive them to competitors, costing the platform revenue, and platforms also struggle to moderate the enormous amount of data generated every day by billions of users worldwide. Common strategies include removing content, locking creators and users out of their accounts, issuing warnings, and ultimately deleting accounts permanently and banning their owners from opening new ones. A major problem with these strategies is the bias that accompanies their enforcement. Finally, governments should not have a role in content moderation, as such power risks sliding into authoritarian control.


References

Bar-Tal, D. (2017). Self-Censorship as a Socio-Political-Psychological Phenomenon: Conception and Research. Political Psychology, 38, 37–65. https://doi.org/10.1111/pops.12391

Congressional Research Service. (2021). Social Media: Misinformation and Content Moderation Issues for Congress.

Falch, M., Henten, A., Tadayoni, R., & Windekilde, I. (2009). Business Models in Social Networking.

Ganesh, B., & Bright, J. (2020). Countering Extremists on Social Media: Challenges for Strategic Communication and Content Moderation. Policy & Internet, 12(1), 6–19. https://doi.org/10.1002/poi3.236

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2). https://doi.org/10.1177/2053951720943234

Hruska, J., & Maresova, P. (2020). Use of Social Media Platforms among Adults in the United States—Behavior on Social Media. Societies, 10(1), 27. https://doi.org/10.3390/soc10010027

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Shaw, L. (2021, August 19). OnlyFans to block sexually explicit videos starting in October. Bloomberg. https://www.bloomberg.com/news/articles/2021-08-19/onlyfans-to-block-sexually-explicit-videos-starting-in-october

Vogels, E., Perrin, A., & Anderson, M. (2020, August 19). Most Americans Think Social Media Sites Censor Political Viewpoints. Pew Research Center: Internet, Science & Tech. https://www.pewresearch.org/internet/2020/08/19/most-americans-think-social-media-sites-censor-political-viewpoints/

Wang, D., & Mark, G. (2015). Internet censorship in China: Examining user awareness and attitudes. ACM Transactions on Computer-Human Interaction, 22(6), 1–22. https://doi.org/10.1145/2818997