Online content moderation – is there a ‘best’ approach?

Each day, around 5 billion YouTube videos are watched, 500 million tweets are posted, and 95 million posts are shared on Instagram (Burtea, 2020). As online platforms continue to dominate society’s digital communications sphere, and this immense volume of daily content keeps growing, the perennially controversial issue of moderation is more prevalent than ever.

As platforms are designed to invite and promote public discourse on a global scale, it is inevitable that explicit, harmful, and offensive content will force its way onto some users’ screens. Defined by Roberts as the organized practice of screening user-generated content posted to internet sites, social media, and other online outlets (Roberts, 2019), digital content moderation seeks to censor and remove these types of posts from the internet. This process, however, comes with a host of challenges and controversy.

Challenges associated with content moderation

A primary challenge for current systems of online content moderation is “adjusting to the balance that is required between a right to freedom of speech and reasonable protection of the public to harmful content online” (Ofcom, 2019). Whilst explicitly illegal or abusive posting is often strictly regulated, there are grey areas of content that fall outside these parameters but can still be sources of harm in online communities. Finding this balance is difficult, as public opinion and personal values vary so vastly between users.

“internet is freedom of speech” by BEE FREE – PGrandicelli [the social bee] is licensed under CC BY-NC-SA 2.0
Recently, in the aftermath of the 2020 US election, the sharing of potentially harmful and false information was a huge area of concern that online platforms sought to address through moderation. However, this regulation was met with backlash over alleged infringement of users’ right to freedom of expression. Most notably, former President Donald Trump was permanently banned from Twitter, alongside Instagram and Facebook. Following weeks of false claims about voter fraud, Trump was removed from the platform amid claims he had violated Twitter’s “Glorification of Violence” policy after several tweets praising those storming the Capitol as ‘patriots’ (Clayton, 2021). In retaliation, Trump turned to the official POTUS Twitter account, sharing:

“We will not be SILENCED! Twitter is not about FREE SPEECH. They are all about promoting a Radical Left platform where some of the most vicious people in the world are allowed to speak freely.”

It is cases like this that highlight just how delicately online platforms must approach moderation: any permanent or significant action taken against content that is not explicitly illegal at first glance can be met with backlash, and can potentially spark a bigger incident than the original content itself.

Methods of moderation

The methods of moderation that platforms actually use vary, and they have increasingly become the centre of conversation surrounding digital socialisation, as many users and governments alike are dissatisfied with the procedures currently in place.

Primarily, platforms such as Twitter, Facebook, and Instagram rely on a combination of manual and automated moderation, each of which comes with its own challenges.

The majority of traditional platform moderation relies on a significant amount of human labour, performed either internally or outsourced across the globe (Burtea, 2020). Working with reference to each company’s own content policy and internal documents outlining what to “approve, remove, or escalate” (Gillespie, 2018), workers sift through large quantities of content, trying their best to maintain a user-friendly platform. Whilst arguably the most effective form of moderation currently available, this manual effort is not without flaws. As the vast majority of moderation is performed after user content is published (post-moderation) (Burtea, 2020), there is often a significant window of time between a post being shared, reported, reviewed, and removed if deemed inappropriate.
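To make the “approve, remove, or escalate” workflow a little more concrete, here is a minimal Python sketch of a post-moderation review queue. It is illustrative only: the policy categories, field names, and decision logic are assumptions, not any platform’s actual guidelines.

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"    # content stays up
    REMOVE = "remove"      # content is taken down
    ESCALATE = "escalate"  # ambiguous case passed to a senior reviewer or policy team

@dataclass
class Report:
    post_id: str
    text: str
    reason: str  # why a user flagged the post

# Hypothetical policy buckets, standing in for a platform's internal guidelines.
CLEAR_VIOLATIONS = {"threat of violence", "doxxing"}
GREY_AREAS = {"political misinformation", "graphic but newsworthy"}

def review(report: Report) -> Decision:
    """One reviewer's pass over a single reported post (post-moderation)."""
    if report.reason in CLEAR_VIOLATIONS:
        return Decision.REMOVE
    if report.reason in GREY_AREAS:
        return Decision.ESCALATE  # context matters; escalate rather than decide alone
    return Decision.APPROVE

# The backlog a human team works through after the content is already live.
queue = [
    Report("p1", "...", "threat of violence"),
    Report("p2", "...", "political misinformation"),
    Report("p3", "...", "mildly rude joke"),
]
for r in queue:
    print(r.post_id, "->", review(r).value)
```

Even in this toy version, the delay is visible: every post in the queue is already public before anyone reviews it, which is exactly the timing gap described above.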

“Playing on the computer” by fd is licensed under CC BY-NC 2.0

This timing issue, alongside the sheer and increasingly overwhelming quantity of content being shared, has led platforms to embrace innovation and attempt to implement automated, or AI-based, moderation. By using software to sweep through thousands of posts, platforms are attempting to streamline their moderation process, as well as to reduce the impact on human moderators by “varying the level and type of harmful content they are exposed to” (Ofcom, 2019). Again, however, this method is far from perfect. Firstly, when gathering data for AI training, if the views of some online users are poorly represented, AI algorithms can learn to treat them ‘unfairly’ or inconsistently. This could potentially “affect the freedom of speech of smaller online communities and minority groups” (Ofcom, 2019).
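The way automated screening is typically paired with human review can also be sketched briefly. Everything in the snippet below is assumed for illustration: the toxicity_score function is a stand-in for a trained classifier, and the thresholds are invented rather than any platform’s real policy.

```python
def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; returns a probability-like score."""
    flagged_terms = {"slur", "threat"}  # hypothetical learned signals
    hits = sum(term in text.lower() for term in flagged_terms)
    return min(1.0, 0.5 * hits)

REMOVE_THRESHOLD = 0.9  # confident enough to act automatically
REVIEW_THRESHOLD = 0.5  # uncertain band: route to a human moderator

def triage(post: str) -> str:
    score = toxicity_score(post)
    if score >= REMOVE_THRESHOLD:
        return "auto-remove"
    if score >= REVIEW_THRESHOLD:
        return "human review"  # limits how much harmful content moderators must see
    return "publish"

for post in ["have a nice day", "this is a threat", "slur and threat"]:
    print(post, "->", triage(post))
```

The weakness discussed above sits inside toxicity_score: if the model’s training data under-represents certain communities or contexts, the scores, and therefore the automatic decisions, inherit that bias.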

Further, AI moderation can prove troublesome when online content requires a deeper understanding of context to interpret correctly. This kind of contextual confusion has already caused several controversies for popular platforms attempting to moderate content, even without the additional hurdle of artificial understanding. In September 2016, Facebook found itself on the receiving end of swarms of online backlash after controversially removing the iconic ‘Napalm Girl’ image, depicting a naked child running injured from a military attack, from its platform. After Facebook claimed that it was “difficult to create a distinction between allowing a photograph of a nude child in one instance and not others” (Kleinman, 2016), many users berated the platform, accusing its moderation guidelines of completely ignoring the cultural and historical significance of the image. This kind of deliberation over content falling into ‘grey areas’ has been an ongoing issue for digital platforms, and with the introduction of AI moderation, will continue to be so.

Calls for government regulation

As these platforms continue to struggle to maintain favour with the communities they host over moderation issues, pressure has built for national governments to step in and enforce their own regulatory policies, a movement that has already seen some nations take action. For example, in a recent Supreme Court filing, India has put forward a set of new regulations intended to make social media platforms more liable for the content they host, as well as to enforce traceability of content, ostensibly to enable accountability (Kumar, 2019). This would see India join a handful of other nations already moving to tighten social media regulation. However, these changes are not being met with total support from the public, with opinion divided on whether this kind of intervention will improve a currently turbulent situation.

For those supporting such involvement, a major recurring reason is that, with increased government regulation, fines or other more serious ramifications could be applied both to users in repeated breach of community guidelines and to the platforms themselves for failing to adhere to would-be national moderation standards. Further, government regulation would ensure that the ‘common good’ of the public sits at the centre of policy, as opposed to potential corporate self-interest (Samples, 2020).

“Justice Gavel” by toridawnrector is licensed under CC BY-SA 2.0

However, many argue that government regulation would cause more problems than it solves. In the US, for example, there has been an ongoing argument, primarily from conservative voters, for minimised government involvement, as they feel that conservative speech and values have been unfairly removed from social media platforms, a problem they fear would only worsen if political agendas were pushed through these platforms (Samples, 2020). Alongside this, there is an argument that previous government involvement in non-government industries has suppressed innovation and created monopolies, as well as driving small or new platforms out of the market due to the high costs of complying with some government regulation (Kumar, 2019).

Where does that leave us?

So, as digital platforms continue to grow and dominate the way we connect, so too do the challenges of moderating them. With millions of posts made daily, platforms are continually trying to keep up, innovating and evolving the way they operate to balance community safety with freedom of expression. Whether government involvement in these operations would prove beneficial is not yet clear, but there is no doubt that these debates will remain prevalent for as long as we rely on digital platforms and social media as heavily as we do now.

References

Burtea, I. (2020). Talking Tech | Content moderation and online platforms: An impossible problem? Clifford Chance. https://talkingtech.cliffordchance.com/en/industries/e-commerce/content-moderation-and-online-platforms–an-impossible-problem–.html

Clayton, J. (2021). Twitter ‘permanently suspends’ Trump’s account. BBC News. https://www.bbc.com/news/world-us-canada-55597840

Gillespie, T. (2018). All platforms moderate. In T. Gillespie (Ed.), Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300235029

Kleinman, Z. (2016). Fury over Facebook ‘Napalm girl’ censorship. BBC News. https://www.bbc.com/news/technology-37318031

Kumar, R. (2019). Government should not regulate social media. The Statesman. https://www.thestatesman.com/opinion/government-not-regulate-social-media-1502820180.html

Ofcom. (2019). Use of AI in online content moderation. Cambridge Consultants. https://www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf

Roberts, S. T. (2019). Understanding commercial content moderation. In Behind the screen: Content moderation in the shadows of social media (pp. 33-72). New Haven: Yale University Press. https://doi-org.ezproxy.library.sydney.edu.au/10.12987/9780300245318-003

Samples, J. (2020). Why the government should not regulate content moderation of social media. Cato Institute Policy Analysis, 865(1). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3502843

Creative Commons License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License