Content Moderation: Issues, Controversies, and the Role of Government

Image: “Fake News – Computer Screen Reading Fake News” by Mike Mackenzie (mikemacmarketing), licensed under CC BY 2.0. Content moderation is necessary, but what are the issues and controversies?

Frequent social media users may have received notices that material they uploaded or posted violated a platform’s rules, and some may even have had their accounts suspended. These experiences are outcomes of content moderation, which generally refers to the activity of reviewing or screening user-generated content either before it is posted and distributed on a website or after it is flagged or reported by others (Roberts, 2019, p. 33). Content moderation is necessary because social media platforms are open not only to encouraging and inspiring content but also to negative content such as hate speech, violence, and obscenity. However, content moderation also raises issues and controversies, which are the focus of this essay. The role of government in content moderation will also be discussed.

Two Issues

The first issue is that those who set the rules and standards by which moderators review and screen user-generated material can exploit that power for private interests. Whether content moderation serves the betterment of the digital environment or the agendas and commercial considerations of politicians and companies depends on the purposes for which it is carried out. The stark reality is that content moderation, like many other services, is heavily commercialized; there are even companies that sell moderation services to help brands maintain a positive image and shield them from the spread of negative content (Roberts, 2019). As a result of commercial content moderation, those who pay for it get to decide its purpose. The problem is that consumer interests and social well-being are put at risk whenever they conflict with the interests of companies. Ideally, content moderators have their own moral standards, value judgements, and conscience, and could therefore act as gatekeepers of the common good. In practice, they have little discretion, since they must do what they are told in order to be paid. Worse, content moderators are often low-wage, low-status workers (Roberts, 2016, p. 1). They are not treated as critical to the company and therefore have little say in decision-making, let alone the ability to moderate against the company’s interests for the sake of social well-being.

Another issue is that although algorithms are used in content moderation and seem more objective than humans, they can over-block content, lack transparency, and be put to harmful uses. Gorwa, Binns and Katzenbach (2020), Parks (2019), and Roberts (2019) have found that AI, machine learning, and algorithms are increasingly used to make content moderation more efficient. It is indeed both theoretically and practically possible for platforms such as Facebook and Twitter to use algorithms to screen user-generated material and identify negative content such as hate speech more effectively and efficiently. However, the use of algorithms does not guarantee that moderation is transparent, fair, responsible, or objective. Gorwa, Binns, and Katzenbach (2020) point out that algorithms can over-block material and can also enable companies to identify content that is unfavorable to them more quickly and effectively. Moreover, since platforms like Facebook set the standards and rules themselves, they can be selective and opaque. Companies are often vague about which types of content, say racist or hateful material, they will tolerate and to what extent (Roberts, 2016, p. 5). They can publish moderation standards and rules to appear transparent while manipulating their algorithms for their own benefit. For example, Tufekci (2018) points out that people are drawn to more extreme content, and websites such as YouTube use recommendation algorithms to serve such content to users in order to keep them on the site, capture their attention, and sell that attention to advertisers for profit (see the video below for more). Such practices are both problematic and controversial, and they indicate that the lack of transparency in content moderation can open the door to troubling attempts: attempts to use content moderation to advance private or political agendas at the cost of freedom of expression and consumer rights, and through the suppression of marginalized and disadvantaged groups.

<iframe width="560" height="315" src="https://www.youtube.com/embed/iFTWM7HV2UI" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
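To make the over-blocking point concrete, here is a minimal, hypothetical sketch of the simplest kind of automated moderation: a keyword filter that flags any post containing a blocklisted term. The blocklist, example posts, and function name are invented for illustration and do not represent any real platform’s rules or code; the point is simply that context-blind matching sweeps up harmless and even newsworthy posts along with genuinely harmful ones.

```python
# Hypothetical illustration of naive keyword-based moderation.
# The blocklist and posts are invented; no real platform's rules are shown.

BLOCKLIST = {"attack", "kill", "nude"}

def flag_post(text: str) -> bool:
    """Return True if any blocklisted term appears anywhere in the text."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

posts = [
    "We will attack the other team at dawn",                      # ambiguous without context
    "Our goalkeeper killed it in the match today",                 # harmless slang, still flagged
    "Historic photo shows a nude child fleeing a napalm strike",   # newsworthy, still flagged
]

for post in posts:
    status = "FLAGGED" if flag_post(post) else "allowed"
    print(f"{status} | {post}")
```

Real moderation systems rely on machine-learning classifiers rather than fixed keyword lists, but the underlying trade-off is the same: rules tuned to catch as much harmful content as possible also catch material that a human reviewer would recognize as acceptable.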

Controversies

Attempts to implement content moderation can be controversial when they touch on rights and politics. One of the most controversial kinds is the attempt that aims to control negative content but ends up infringing on freedom of expression. In his book, Gillespie (2018) describes Facebook’s controversial decision to delete a journalist’s post that included the Pulitzer Prize-winning photo “Napalm Girl.” Facebook’s moderators removed the post on the grounds of underage nudity; the journalist criticized the decision and was suspended. The example demonstrates that the line between content moderation and the infringement of free expression can be quite blurry.

Another controversial kind of attempt is commercial content moderation that undermines consumers’ right to learn about products and services without undue influence. As discussed earlier, companies and brands use content moderation to manage their reputations and craft a positive image on social media platforms. Some attempts, however, go too far. For example, companies that provide content moderation services work for brands by steering consumers’ attention toward positive content (Roberts, 2019, p. 45). The partial, curated picture of user-generated information that results can mislead consumers and harm their rights.

Beyond these, attempts that disadvantage marginalized groups can be controversial as well. Gorwa, Binns, and Katzenbach (2020), for example, point out that some languages, such as English, and some regions, such as the Global North, receive more attention from platforms, with the result that the benefits of content moderation may not reach groups whose languages are less widely spoken or who live in the Global South. In another scenario, platforms let their political stances shape their moderation standards and rules in ways that suppress disadvantaged groups seeking to advance their own agendas. Massanari (2017), for example, found that Reddit’s algorithm, governance, and company culture supported anti-feminist activism and thereby helped suppress feminist voices.

The Role of Governments

Given the issues and controversies above, governments should play a greater role in content moderation. Because platforms and companies may sacrifice the public good and manipulate content moderation by being vague and selective in order to make profits, governments should hold them accountable when that happens. Controversies that infringe on rights also require government action. The controversy over freedom of expression, for example, could be addressed by introducing government guidelines developed in consultation with users. Finally, if governments fail to act, there will be consequences: their citizens are left unprotected, they lose support and face criticism for inaction, and they fall behind other countries in regulating tech companies such as social media platforms (MacCarthy, 2019, p. 1).

Conclusion

Content moderation is necessary for a better digital environment, but it has also produced issues and controversies that demand action. Because platforms and companies have a large say in setting the rules, standards, and purposes of content moderation, they can sacrifice public interests for private ones. Algorithms, moreover, are not perfect: they can over-block content and be used selectively behind a veil of opacity. Controversies arise when content moderation, whatever its stated purpose, ends up restricting freedom of expression, harming consumer rights, and pushing marginalized groups further into disadvantage. Governments should play a greater role in protecting users and holding platforms and companies accountable; otherwise they risk criticism for inaction and falling behind other governments in regulating these companies.

 

References

Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press.

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: technical and political challenges in the automation of platform governance. Big Data & Society, 1-15. https://doi.org/10.1177/2053951719897945

MacCarthy, M. (2019). A consumer protection approach to platform content moderation. SSRN Electronic Journal, 1-22. https://dx.doi.org/10.2139/ssrn.3408459

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Parks, L. (2019). Dirty data: Content moderation, regulatory outsourcing, and The Cleaners. Film Quarterly, 73(1), 11-18. https://doi.org/10.1525/fq.2019.73.1.11

Roberts, S. T. (2016). Commercial content moderation: Digital laborers’ dirty work. Media Studies Publications, 12, 1-12. https://ir.lib.uwo.ca/commpub/12

Roberts, S. T. (2019). Behind the screen: Content moderation in the shadows of social media. Yale University Press.

Tufekci, Z. (2018). YouTube, the great radicalizer. The New York Times. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html

 

 

 

About Joyce
I am Bowen Jia, but you can call me Joyce. I am majoring in media and management, and I love travel and music!