The Content Moderation Problem: The Enormity of Policing the Internet

"Social Media Keyboard" by Shahid Abdullah is used under Creative Commons License

With algorithmic content distribution becoming the norm on social media platforms such as TikTok, Facebook and Instagram, controversial and potentially dangerous content has proliferated dramatically on these sites. As a result, governments, platforms, stakeholders and platformed communities themselves face the difficult and complicated task of moderating this content. Whether it is overtly offensive content posted publicly or subtle ‘dog-whistles’ posted in private groups, moderating such material is proving to be one of the greatest challenges social media platforms face in the current era.

For the platforms themselves, the issues surrounding content moderation are wide-ranging. Firstly, these platforms must moderate content evenly and fairly around the world, which immediately poses a problem: what one culture considers appropriate, another may not. Facebook has been described as “the world’s most powerful editor” (Gillespie, 2018), a description that captures the platform’s unparalleled power to determine, worldwide, what is and is not considered appropriate for users to post, observe and consume as content. On top of this, many of these platforms use algorithms to automatically remove (or restrict) content, often without regard for its context.

Taken together, these factors show the mammoth task these platforms face. At the scale of their user bases, they must essentially find a way to automatically moderate content around the world at a level deemed appropriate for the majority of users. The issue this poses is best exemplified by the case of the Norwegian journalist Tom Egeland, who posted the Pulitzer Prize-winning photograph ‘Napalm Girl’ on Facebook as part of a series of photos. Despite the photo being historically significant and extremely well known around the world, “Egeland was suspended twice, first for twenty-four hours, then for three additional days” (Gillespie, 2018). This example encapsulates the problems platforms face when attempting to moderate: there is more to a piece of content than the content itself. The entire context must be considered before moderation decisions can be made, and algorithms do not yet have the subtlety or sophistication to understand such context.

With platforms themselves facing such difficulties, governments around the world are also struggling with moderation. Where in the past content was published via newspapers or similar outlets and was easily traceable, the proliferation of digital platforms such as 4chan and Reddit has made moderating content at a governmental level extremely difficult. Users can post anonymously from brand-new accounts via a VPN, making them essentially untraceable. This anonymity makes it difficult for both platforms and governments to enforce the moderation discussed above: a person may post content that is deemed problematic or inappropriate and subsequently be banned, but there is little to stop them from simply making new accounts and continuing to post the same material. As a result, incidents such as ‘Gamergate’ and ‘The Fappening’ emerged from the “toxic technocultures we see on Reddit” that, as Massanari (2016) argues, these kinds of anonymous interactions seem to cultivate. Indeed, the problems governments face when dealing with anonymous platforms are best highlighted by the QAnon conspiracy.

QAnon is a far-right conspiracy theory that originated on 4chan, where an anonymous poster known as ‘Q’ claimed that a group of Satanic pedophiles runs a global sex-trafficking operation and actively fought against Donald Trump during his term as President of the United States (Roose, 2021). Though it began as a seemingly niche internet conspiracy theory, it grew quickly and was one of the main groups responsible for the January 6 attack on the US Capitol building. As the conspiracy originated online, Twitter, YouTube and Facebook have all taken extreme steps to remove accounts that post content relating to the group, but, as discussed above, the ease of creating new accounts and the use of anonymous message boards mean the group still has a large presence around the world.

QAnon highlights that, while we as a society are quick to stamp out gatherings of hateful speech in person, such groups are far more difficult to police and erase in online spaces, especially with the emergence of encrypted messaging apps such as Telegram and WhatsApp.

In response to these challenges in tracking and moderating content online, the Australian Government introduced and passed legislation that “permits government enforcement agencies to force businesses to hand over user info and data even though it’s protected by cryptography” (Bocetta, 2021). This ability to access private content via a ‘backdoor’ was met with serious backlash. For one, it meant that companies operating apps in Australia had to build altered versions in order to comply with the Government’s demands. Secondly, it meant the Australian Government had a legal precedent for accessing any Australian citizen’s private messages. On top of this, in 2019 the Australian Government introduced and passed legislation that, in effect, “expects the providers of online content and hosting services to take responsibility for the use of their platforms to share abhorrent violent material” (Douek, 2019). Once again, this received intense scrutiny and backlash from both the Australian public and the tech sector, the main reason being, as detailed previously, that the sheer amount of content needing to be moderated, both public and private, is enormous. Algorithms are therefore employed to moderate this content, and material that is entirely appropriate for public viewing is often censored as a result. This leaves large platforms such as Facebook and Twitter in a difficult position in which the solutions to the problems they face are limited.

 

“The Australian Government expects the providers of online content and hosting services to take responsibility for the use of their platforms to share abhorrent violent material” – Douek

 

Though it is a very slippery slope for governments to become involved in moderating any content, let alone online content, one way in which both the Government and platforms could address these issues is by setting up an independent moderation hub, funded partly by the Government and partly by industry. This hub would be a centre where community managers and content moderators work full time to moderate Australian content more diligently and more fairly. Combined with a clearer framework for what does and does not need to be moderated online, this would be a manageable first step for the Government to intervene in moderation without encroaching on authoritarian governance, while also requiring large platforms to take some responsibility for the content posted on them and giving platforms and government a middle ground from which to moderate.

Despite this, government involvement in content moderation is always precarious. It can come across as fascist, and it can set a precedent for what can and cannot be said about the Government itself. As Australia does not have a constitutionally protected right to free speech, allowing the Government to decide what can and cannot be posted online is ultimately a very dangerous game to play.

It is clear, then, why content moderation, whether governmental or platform-led, is extremely complex. Although dangerous content may continue to proliferate, direct government intervention is ultimately not a viable moderation strategy. Instead, giving platforms the tools to moderate content more effectively, and the frameworks within which to do it, is a far more effective way to curb hateful and dangerous content online.

Group 14

Reference List:

Bocetta, S. (2021). Australia’s New Anti-Encryption Law Is Unprecedented and Undermines Global Privacy. FEE.org. Retrieved 13 October 2021, from https://fee.org/articles/australia-s-unprecedented-encryption-law-is-a-threat-to-global-privacy.

 

Bracewell, L. (2021). Gender, Populism, and the QAnon Conspiracy Movement. Frontiers In Sociology, 5. https://doi.org/10.3389/fsoc.2020.615727

 

Douek, E. (2019). Australia’s New Social Media Law Is a Mess. Lawfare. Retrieved 13 October 2021, from https://www.lawfareblog.com/australias-new-social-media-law-mess.

 

Gillespie, T. (2018). Custodians of the internet (1st ed., pp. 1-23). Yale University Press.

 

Massanari, A. (2016). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329-346. https://doi.org/10.1177/1461444815608807

 

Roose, K. (2021). What Is QAnon, the Viral Pro-Trump Conspiracy Theory? Nytimes.com. Retrieved 13 October 2021, from https://www.nytimes.com/article/what-is-qanon.html.

This work is licensed under a Creative Commons Attribution 4.0 International License.