“Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather. […] I declare the global social space we are building to be naturally independent of the tyrannies you seek to impose on us. You have no moral right to rule us nor do you possess any methods of enforcement we have true reason to fear.” (Barlow, as cited in Suzor, 2019)
Digital Platforms in the Modern Age
At the dawn of the internet, John Perry Barlow, one of the founders of the digital rights group the Electronic Frontier Foundation (EFF), believed that cyberspace should be free from any government influence and that all users should have the democratic ability to voice their opinions and be heard (Suzor, 2019, p.88). The modern internet, however, is far more centralized, dominated by a handful of corporations whose digital platforms act as internet intermediaries and search engines, such as Google, Facebook, and Twitter (Suzor, 2019, p.89). These digital platforms have anchored themselves in our daily lives: we have learned to rely on them for almost everything, from entertainment to communication and even education.
The Double-Edged Sword of Content Moderation
Digital platforms such as YouTube and Instagram have become some of the main ways we receive and share content. Inspired by the freedom the web promised, digital media platforms capitalize on that promise by providing spaces for some of the best and most social aspects of the web (Gillespie, 2017, p.254). However, with society’s rapidly growing dependence on digital platforms comes the issue of content moderation.
Although digital platforms are not legally required to moderate their platforms, most have chosen to police the content and behavior of users, both to keep the online community safe and for economic reasons (Gillespie, 2017, p.262). Content moderation, however, is a double-edged sword: too little policing can drive users away by fostering a toxic and potentially dangerous environment, while too much can make users feel confined and patrolled (Gillespie, 2017, pp.262-263). Platforms must therefore strike a balance between the two, providing a safe space for users while preserving the freedom of voice they have been promised.
Content Moderation and the Aftermath
Regulating the internet is no simple task: no method of moderation will ever be completely effective. This does not mean, however, that the internet is ungovernable (Flew et al., 2019). The majority of regulation and moderation today is carried out by large technology companies, mainly through automation such as AI moderation, manual review by frontline moderators, or a mix of both.
The central challenge of content moderation on digital platforms is scale. With 4.20 billion people online, an overwhelming amount of user-generated content is produced every day (Kemp, 2021). In response to this problem of scale, AI moderation has been implemented on many digital platforms such as YouTube and Twitter. These systems use tools such as filters to detect prohibited and harmful words and tags, determine whether an image or video shows too much bare flesh (suggesting, but not always indicating, pornography), and match, report, and remove copyrighted material (Roberts, 2019, p.37), as the sketch below illustrates. Although AI moderation has lifted some of the burden off manual moderation, it is not a one-size-fits-all solution; there will always be content that cannot be moderated through automation. Furthermore, this type of moderation is prone to errors and collateral damage: content that was never intended to be harmful is wrongly flagged as explicit or dangerous (Suzor, 2019). Gillespie (2020) argues that users may experience this form of moderation as an injustice, because automated systems cannot understand the intent and legitimacy of their content and thus fail to protect their freedom of speech. This suggests that machine-automated moderation will never be a complete solution to the problem of analyzing endless streams of content.
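To make the mechanics concrete, the sketch below shows, in purely illustrative Python, the kind of coarse, context-blind filtering described above: a keyword and tag blocklist for text, and hash matching against known prohibited or copyrighted files. The terms, placeholder hash, and function names are hypothetical assumptions, not any platform’s actual system.

```python
# Illustrative sketch only: toy versions of the automated filters described above.
# The blocklist, placeholder hash, and names are hypothetical, not a real platform's system.
import hashlib
import re

BLOCKED_TERMS = {"banned_term", "prohibited_tag"}          # hypothetical keyword/tag blocklist
KNOWN_HASHES = {"<sha256-of-a-known-prohibited-file>"}     # hypothetical fingerprints of known material

def flag_text(post_text: str) -> bool:
    """Flag a post if it contains any blocked term (coarse and blind to context or intent)."""
    words = set(re.findall(r"\w+", post_text.lower()))
    return bool(words & BLOCKED_TERMS)

def flag_upload(file_bytes: bytes) -> bool:
    """Flag an upload whose hash matches known prohibited or copyrighted material."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_HASHES
```

Rules of this kind scale cheaply, which is why platforms lean on them, but they cannot read context or intent, which is exactly the blind spot Suzor and Gillespie describe.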
Embedded video: VICE. (2021). The Horrors of Being a Facebook Moderator | Informer. https://www.youtube.com/watch?v=cHGbWn6iwHw
In an ideal world, AI moderation would detect and remove harmful content on its own. However, AI moderation is still incapable of judging content objectively, that is, of understanding the context and intent of the user (Dang et al., 2020). Many digital platforms therefore still rely on manual content moderation. In many instances, digital media companies employ a team of staff who oversee moderation, set the rules, and take on the most complex cases. This team usually recruits a large group of frontline content reviewers to handle content that automation cannot resolve: when AI moderation fails, the content is flagged and passed to a human moderator (a rough sketch of this flow follows below). This method of moderation has caused considerable controversy in recent years, mainly because of the psychological harm, such as anxiety and post-traumatic stress, that many manual content moderators experience from reviewing gruesome and violent posts (Gillespie, 2017). Yet according to Roberts (2016), manual content reviewers are indispensable: acting as digital gatekeepers, they play a significant role in ‘curating the flavor of a platform’ while guarding against content that might harm users and damage the company’s digital presence and reputation.
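As a rough illustration of this human-in-the-loop workflow (not any company’s actual pipeline; the thresholds, queue, and function names are assumptions for the sketch), an automated classifier might auto-action only the clearest cases and queue everything ambiguous for frontline reviewers:

```python
# Illustrative sketch of the escalation flow described above: automated checks
# handle clear-cut cases, and uncertain content is queued for human reviewers.
# Thresholds and names are hypothetical assumptions, not a real platform's values.
from queue import Queue

review_queue: Queue = Queue()  # items awaiting a frontline human reviewer

def route_content(item: dict, violation_score: float) -> str:
    """Route an item based on an automated classifier's confidence that it violates policy."""
    if violation_score >= 0.95:
        return "removed"        # clearly violating: actioned automatically
    if violation_score <= 0.05:
        return "approved"       # clearly benign: published automatically
    review_queue.put(item)      # ambiguous: escalate to a human moderator
    return "escalated"
```

The design keeps human judgment where context matters most, but, as Roberts notes, it also concentrates the most distressing material in the reviewers’ queue.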
The Government’s Hand in Content Moderation
Whether governments should be more involved in enforcing content moderation on social media is widely debated. In the past two years, governments worldwide have adopted some 40 new laws to regulate content on digital platforms. These laws emphasize content take-downs, provide limited judicial oversight, and depend heavily on AI moderation, thereby limiting users’ human rights, especially freedom of speech (“Moderating online content: fighting harm or silencing dissent?”, 2021). This can be seen in the Twitter shutdown of June 2021, when the Nigerian government suspended access to Twitter after the platform deleted a post by the president for violating its policies. Within hours, millions of residents were unable to access Twitter and were threatened with prosecution if they bypassed the block (“Nigeria’s Twitter ban: Government orders prosecution of violators”, 2021). Cases like Nigeria’s Twitter block are becoming more prevalent worldwide. Such acts are dangerous because they not only limit users’ access to information but can also affect work, health, and education (“Moderating online content: fighting harm or silencing dissent?”, 2021). Digital platform companies therefore can and must do better.
The Future of Moderation
The UN Human Rights Office has proposed a ‘five actions for a way forward’ framework for both companies and states. The framework argues that instead of adding content-specific restrictions, regulators should focus on improving content moderation processes. Civil society and experts should be involved in the design and assessment of regulations, and users should be given the opportunity to appeal against moderation decisions they consider unjust. Companies should also be more transparent about how they moderate content. This echoes Gorwa’s suggestion that the content moderation process should be made more transparent so that both users and experts can understand the patterns of governance they must follow (Gorwa et al., 2021). Finally, laws imposed by states should be clear, necessary, proportionate, and non-discriminatory.
The vision of a truly “open” platform built on notions of community and democracy is only a fantasy (Gillespie, 2018, p.5). Platforms can and must do better at moderating their content. A human-rights-based approach, such as the UN Human Rights Office’s ‘five actions for a way forward’ and Gorwa’s framework for improving AI moderation, would let them do so while forestalling government intervention. Governments, for their part, should not intervene in digital platforms, as such intervention can become a ploy to restrict users’ human rights and freedom of speech or to serve political gain. Although the online space is far from perfect, it is crucial that platforms continue to govern their own content, because they have promised users freedom of speech as well as freedom from government control.
References
Dang, B., Riedl, M., & Lease, M. (2020). But who protects the moderators? The case of crowdsourced image moderation. Retrieved 14 October 2021, from https://arxiv.org/abs/1804.10999
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal Of Digital Media & Policy, 10(1), 33-50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2018). Custodians of the Internet : Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.
Gillespie, T. (2017). Regulation of and by platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE Handbook of Social Media. SAGE.
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 205395172094323. https://doi.org/10.1177/2053951720943234
Gorwa, R., Binns, R., & Katzenbach, C. (2021). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society. Retrieved 10 October 2021.
Kemp, S. (2021). Digital 2021: Global overview report. DataReportal. Retrieved 6 October 2021, from https://datareportal.com/reports/digital-2021-global-overview-report
Moderating online content: fighting harm or silencing dissent?. Ohchr.org. (2021). Retrieved 14 October 2021, from https://www.ohchr.org/EN/NewsEvents/Pages/Online-content-regulation.aspx.
Nigeria’s Twitter ban: Government orders prosecution of violators. BBC News. (2021). Retrieved 6 October 2021, from https://www.bbc.com/news/world-africa-57368535.
Roberts, S. (2016). Commercial Content Moderation: Digital Laborers’ Dirty Work. Retrieved 13 October 2021, from https://ir.lib.uwo.ca/commpub/12.
Roberts, S. (2019). Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven: Yale University Press. https://doi.org/10.12987/978030024531
Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press. https://doi.org/10.1017/9781108666428
VICE. (2021). The Horrors of Being a Facebook Moderator | Informer [Video]. Retrieved 12 October 2021, from https://www.youtube.com/watch?v=cHGbWn6iwHw&list=WL&index=2.
Barlow’s Broken Dream: The Reality & Future of Content Moderation is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.