[Image: "Content Moderation" by Clickworker, licensed under CC BY 2.0 — https://creativecommons.org/licenses/by/2.0/]
As the Internet has become more advanced and more widely used, Internet users have gradually shifted from being recipients of content to creators of it. In addition to addressing the serious problems that emerged with the World Wide Web, such as perverse incentives, hacking and online fraud, Internet platforms need departments that moderate user-generated content in order to maintain a healthier online environment. Yet digital platforms face many problems with content control, and different platforms encounter different problems. In this essay, I describe the attempts made by digital platforms and governments to control content, and the controversies those attempts have caused.
Content moderation by platforms
In September 2016, Facebook removed a post featuring the photograph "The Terror of War", better known as the "Napalm Girl" image, because it shows a child's naked body. The question, as Gillespie (2018) frames it, is whether this photograph is the kind of obscenity that should never be seen, no matter how relevant, or a piece of history that must be shown, no matter how damaging it is.
https://time.com/4485344/napalm-girl-war-photo-facebook/
From the platform's perspective, the photo depicts a naked child in distress and was taken without her consent; it violated Facebook's policy and was therefore removed. But as a reviewer, should the platform have moderated the post at the moment it was made, or should it have intervened only after someone complained or reported it? It is also worth considering whether the appropriate response is to issue a warning or to remove content that does not comply with platform policy. Even if a platform is willing to uphold its policies, content moderation and monitoring demand so much manpower and so many resources that platforms cannot respond to every complaint promptly or moderate all information properly. Regulating from a content-control perspective is also very difficult. Platforms first need to determine what the problem is and why they are intervening. They then need to decide what is acceptable and what is not; balance offensiveness against importance; reconcile competing value systems; mediate when people intentionally or unintentionally hurt one another; respect the contours of political discourse and cultural taste; grapple with inequalities of gender, sexuality, race, and class; and attend to differences of nation, culture, and language (Gillespie, 2018).
On the Reddit platform, many users are anonymous, and Fox et al.'s (2015) findings suggest that anonymity can promote higher levels of sexism. Anonymous users can withhold identifiable information or simply invent an identity. This lack of connection to their offline identity may encourage users to act antisocially, because they can convince themselves that their online behaviour is not representative of who they are offline (Suler, 2004). Reddit's inaction against toxic technocultures, while helping the platform acquire new users, also contributes to unethical incidents on the Internet and represents Reddit's surrender to "Internet trolls" (Massanari, 2017). Reddit gives moderators some tools for handling the complexities of managing subreddits, such as removing offensive content and banning users, but moderators must also rely on third-party plugins, and most of these tools are considered inadequate and cumbersome. Because moderation is an unpaid position, few people are willing to do this time-consuming work. Further, the platform itself is reluctant to alienate a portion of its audience, because reduced traffic affects its revenue (Massanari, 2017).
As with the earlier problems of the World Wide Web, digital platforms struggle to regulate their content perfectly even when they try, which makes Suzor's (2019) argument attractive: a mature Internet regulatory team, or appropriate help from the state and government, can provide some regulation and control of platform content and help platforms maintain harmonious communities. In 2016, in the face of increasing criticism, Mark Zuckerberg issued a pointed statement that Facebook is not a "media company". The statement was both inaccurate and strategic: Zuckerberg and his colleagues did not want to take on the social and legal obligations that apply to media companies (Gillespie, 2018).
Government monitoring
With the advent of the Web 2.0 era, the Internet industry has tended toward monopoly. In particular, since the 2016 U.S. election and the Cambridge Analytica scandal, substantial concerns have been raised about platforms' involvement in the dissemination of "fake news" and alleged manipulation of electoral politics, about privacy breaches and data abuse, and about abuses of market power (Flew et al., 2019).
Media and communication policies are at the heart of many of the social and political issues facing society today, and Picard and Pickard's (2017) report describes how policymaking must weigh competing social demands in order to determine how to achieve the greatest social benefit. In a healthy democratic society, such principles should be openly debated among diverse constituencies, involving all members of society in defining them. But today's digital platforms are global, and there is still no consensus on who is best qualified to take on the role of regulating global digital media (Flew et al., 2019). Whether it is the "national cyber-sovereignty" proposed by China, the "hands-off" approach prevalent in the U.S. mainstream, or the more interventionist approach used by the European Union, each can only work within its own cultural context or country and cannot simply be applied to global digital media platforms (Flew et al., 2019). If different standards continue to be applied in different countries and regions, services will fragment along regional lines and users in different regions will have different, and often worse, experiences.

Moreover, since greater governmental control would only widen these regional differences in service, the government's current role in enforcing content moderation is arguably sufficient. It would, of course, be better if the process were a little more democratic; for example, some of the people involved in content moderation could be selected by community vote, rather than drawn only from government and platform backgrounds. The best balance between populism and democracy is for government, platform and community to govern together, while preventing excessive political interference (Schlesinger, 2020). When it comes to news, content moderation can follow Napoli's recommendation that social media platforms, content aggregators, policymakers and courts rethink their reliance on counterspeech and adopt stronger institutional commitments, and that content filtering grounded in the public interest can reduce the harm of fake news and other harmful speech (Napoli, 2019).
In order to protect revenue and save human resources, digital platforms certainly under-regulate their own content. But if national governments interfere excessively, services will not only become fragmented across regions, platform content will also become politicized. The best approach to content moderation is therefore one that combines government, platform and community.
References
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50.
Fox, J., Cruz, C., & Lee, J. (2015). Perpetuating online sexism offline: Anonymity, interactivity, and the effects of sexist hashtags on social media. Computers in Human Behavior, 52, 436–442. doi:10.1016/j.chb.2015.06.024
Gillespie, T. (2018). Custodians of the Internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press, pp. 1–23.
Massanari, A. (2017). #Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346.
Napoli, P. (2019). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), 57–104.
Suzor, N. P. (2019). Lawless: The secret rules that govern our digital lives. Cambridge University Press. Retrieved from https://www-cambridge-org.ezproxy.library.sydney.edu.au/core/books/lawless/lawless/F570386D439250352EC7D885576972AF/core-reader
Picard, R. G., & Pickard, V. (2017). Essential principles for contemporary media and communications policymaking. Reuters Institute for the Study of Journalism, University of Oxford. Retrieved from https://reutersinstitute.politics.ox.ac.uk/our-research/essential-principles-contemporary-media-and-communications-policymaking
Schlesinger, P. (2020) After the post-public sphere. Media, Culture & Society, 42(7–8), 1545–1563.
Suler, J. (2004). The online disinhibition effect. Cyberpsychology & Behavior, 7(3), 321–326. doi:10.1089/1094931041291295