Social media content moderation: is it for the better?

Social media platforms create space for the public to gather to discuss, debate and distribute information. However, these platforms are owned by large corporations with their own commercial interests. While they are advertised as ‘open forums’, and users believe they control what they share and access, users in fact have very little understanding of, or influence over, how these platforms are mediated and governed. This idea is commonly referred to as the ‘black box’: a system whose workings are mysterious, where users can observe its inputs and outputs but the transition from one to the other is unknown (Pasquale, 2015). Content moderation is a case in point: it is carried out partly by humans but mainly through algorithmic control, about which very little is known.

Firstly, it is important to understand exactly what content moderation is. In its simplest form, content moderation exists to “monitor and regulate user-generated posts through implementing a set of pre-arranged rules and guidelines” (Walker, 2020). The reasons for content moderation differ from platform to platform, but essentially it ensures that users maintain a positive experience. The content that is moderated varies, but in most cases includes spam, disturbing imagery, violence, nudity and illegal activity, and how content is moderated is also particular to each platform, for example by banning an account, removing a post or imposing a temporary suspension. As Langvardt (2018) states, “the work of content moderators is indispensable”. Without content moderation the Internet would be a confronting place, and social media users would be inundated with spam and deeply disturbing content.

As the Internet continues to grow, there has been a marked shift away from the “attention economy” and towards the “creator economy”. Traditionally, the Internet was a place to find and gather information that had been professionally or scientifically sourced. As it has expanded, content creation has extended beyond professionals and can now be done by anyone, with almost 90% of Australia’s population using the Internet (“Social media statistics for 2022”, 2021). Users engage with interactive media platforms not just as consumers but as content creators, and social networking sites in particular thrive on user-generated content. With every consumer able to be a content creator, however, speech and conversation on these platforms are increasingly unmediated. As Barlow states, ‘we are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity’ (Barlow, 2019). Thus, as more and more people use social media to share their beliefs, values, opinions and ideas, there is a greater need for moderation if platforms are to maintain a positive experience for all users.

Content moderation has long been debated, and navigating between over-moderating and allowing freedom of speech has proven to be a difficult task. Written constitutions and bills of rights govern and protect freedom of speech as one of the most fundamental human rights (Barendt, 2005). As defined by the Oxford Dictionary, freedom of speech is the ‘power or right to express one’s opinions without censorship, restraint, or legal penalty’. Thus, when it comes to content moderation, there is a very fine line between moderating and restricting freedom of speech, as moderation essentially means censorship. The First Amendment protects individuals from government censorship; however, social media platforms are private companies and can individually monitor and censor what users post on their platforms.

Larger media platforms have implemented stricter content moderation in an attempt to avoid criticism. With limited government interference, individual platforms are forced to create their own rules and policies around content moderation. Each platform applies its own moderation practices very differently, catering to many different communities. What may be viewed as ‘offensive’ on one platform may be deemed ‘normal’ on another, which creates the challenge of maintaining consistency. For example, as Facebook and Twitter have gradually implemented stricter rules, users have turned to alternatives such as Parler or to more loosely moderated communities on Reddit. Users who deem Facebook ‘too strict’ may instead use a platform such as Reddit to express their views without fear of their content being removed or flagged.

The main issue with content moderation is that not everything fits neatly into a set of rules or guidelines, and decisions can be quite subjective. A recent example occurred when Facebook censored the famous historical photograph of nine-year-old Kim Phuc. The photo captured the young girl fleeing naked from a napalm attack during the Vietnam War. It was originally removed due to nudity, but was later reinstated after Zuckerberg (Facebook’s CEO) was accused of abusing his power by censoring the image. Facebook defended the initial removal, stating, “while we recognize that this photo is iconic, it’s difficult to create a distinction between allowing a photograph of a nude child in one instance and not others” (“Facebook backs down from ‘napalm girl’ censorship and reinstates photo”, 2016). This example reveals the inconsistency and controversy that content moderation can bring.

Furthermore, the task of moderating content is a confronting and often traumatic experience. Take, for example, a young woman, Sarah, working as a content moderator for Instagram. Every time a photo or comment is posted, it goes through algorithmic processes to determine its safety. If a photo is deemed unsafe, it is passed to the human moderators for review. A very explicit photo of a crime scene posted to Instagram would therefore be passed on to Sarah to decide whether it should be deleted. Recently, TikTok faced intense criticism after graphic footage of a violent attack spread across the platform; someone like Sarah would have been the one making those decisions. Just like Sarah, there are many content moderation workers globally who face the same trauma that she experiences. While it is shocking that people have to review this material, it is important to note that somebody has to take responsibility for it. The trauma experienced by people like Sarah could be avoided, but only at the cost of exposing the rest of the platform’s users to this confronting content. This is where the line between content moderation being a good thing and a bad thing becomes blurred.
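To make the workflow Sarah sits inside more concrete, the sketch below illustrates, in simplified Python, the two-stage process described above: an automated check scores each post, and anything it cannot confidently clear is queued for a human moderator. This is a minimal, purely illustrative sketch; every name, threshold and scoring rule is hypothetical and is not drawn from Instagram’s or any other platform’s actual systems, which remain, as noted earlier, a ‘black box’.

```python
# Illustrative sketch of a two-stage moderation flow (all names and rules hypothetical).
from dataclasses import dataclass, field
from typing import List


@dataclass
class Post:
    post_id: int
    text: str


@dataclass
class ModerationQueue:
    """Posts the automated stage could not confidently clear."""
    pending: List[Post] = field(default_factory=list)


# Hypothetical stand-in for an algorithmic classifier's vocabulary.
FLAGGED_TERMS = {"spam", "graphic violence"}


def automated_safety_score(post: Post) -> float:
    """Return a score between 0.0 and 1.0; lower means more likely to breach guidelines."""
    hits = sum(term in post.text.lower() for term in FLAGGED_TERMS)
    return max(0.0, 1.0 - 0.5 * hits)


def triage(post: Post, queue: ModerationQueue, threshold: float = 0.8) -> str:
    """Publish confidently safe posts; send everything else to human review."""
    if automated_safety_score(post) >= threshold:
        return "published"
    queue.pending.append(post)  # a human moderator like 'Sarah' reviews this item
    return "held for human review"


if __name__ == "__main__":
    queue = ModerationQueue()
    for p in [Post(1, "Holiday photos from Perth"),
              Post(2, "Buy followers now! spam spam")]:
        print(p.post_id, triage(p, queue))
    print("Awaiting human review:", [p.post_id for p in queue.pending])
```

In this toy version the algorithm only filters the obvious cases; everything ambiguous still lands in a human queue, which is precisely where the psychological burden described above accumulates.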

Can this issue be resolved? Can governments intervene to improve it? The issues of content moderation have long been debated, and many believe that, in order to create fairness rather than serve commercial gain, governments should take control. This raises the question: should governments have a greater role in enforcing content moderation restrictions on social media?

Social media platforms hold significant power in the online world, and some argue that they hold too much. But if governments intervene to exert greater control, the line between content moderation and free speech will once again be very fine. Ensuring that these platforms do not abuse their power should be the focus of governing bodies; as Langvardt says, ‘adapting that system to the new technological reality without betraying its values should be the central problem of free speech in the 21st century’ (Langvardt, 2018). In order to allow free speech while also mitigating the risks of social media, governments must seek to uphold free speech standards as far as possible.


REFERENCES

Barendt, E. (2005). Why protect free speech? In Freedom of speech (pp. 1-6). Oxford University Press.

Barlow, J. P. (2019). A declaration of the independence of cyberspace. Duke Law & Technology Review, 18(1), 5-7.

Facebook backs down from ‘napalm girl’ censorship and reinstates photo. (2016, September 10). The Guardian.

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 1-5. https://doi.org/10.1177/2053951720943234

Langvardt, K. (2018). Regulating online content moderation. The Georgetown Law Journal, 106(5), 1353-1388.

Pasquale, F. (2015). Introduction: The need to know. In The Black Box Society: The secret algorithms that control money and information (pp. 1-18). Harvard University Press.

Social media statistics for 2022. (2021). Retrieved October 10, 2021, from https://www.smperth.com/resources/social-media-statistics/

Walker, S. (2020). What is content moderation: our complete guide. Retrieved from the New Media Services website: https://newmediaservices.com.au/fundamental-basics-of-content-moderation/

West, S. M. (2018). Censored, suspended, shadow banned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383. https://doi.org/10.1177/1461444818773059