The Content Moderation Debate: Fighting danger or silencing dissent?

“Social Media” by magicatwork is licensed with CC BY 2.0. To view a copy of this license, visit https://creativecommons.org/licenses/by/2.0/ 

The moderation of digital content published to online platforms is an arduous task, one that demands equilibrium between a desire to build user communities that promote ongoing engagement and the enforcement of guidelines that keep those platforms safe and functional. As “custodians” of the internet, digital platforms that moderate content are tasked with the challenge of intervening when monitoring material, employing their own decision-making processes based upon differing parameters of ‘acceptable’ social conduct (Roberts, 2017, p. 44, as cited in Gillespie et al., 2020). Maneuvering the reins of moderation proves even more laborious for platforms when issues of inconsistent policy enforcement, censorship and wrongful regulation are introduced. These matters are further perpetuated by the absence of regulatory bodies that aim to stymie dangerous content, an absence that has produced an abundance of inconsistencies, identifiable within differing platforms’ approaches to monitoring content and creating policies that diminish the production of unacceptable online material.

The content moderation debate faces initial complications when platform users identify clear inconsistencies in the treatment of what is deemed inappropriate online, causing those individuals to question the validity of a platform’s self-regulatory policies and the ethics that determine them. Whilst the algorithms that platforms use to decode user content and surveil its appropriateness appear to “be neutral and wholly driven by user data”, they represent design decisions about “how the world is to be ordered” (Bucher, 2018, p. 67) and are therefore “selective, partial and constructed” (Gitelman and Jackson, 2013, as cited in Gerrard and Thornham, 2020). In countries like Australia, whilst the Australian Communications and Media Authority is tasked with ensuring media and communications legislation operates effectively, it does not regulate the content published on social media. The absence of administrative moderator teams has consequently allowed platforms to operate in a self-regulatory fashion, creating an abundance of selective inconsistencies that are identifiable within organisations’ policies. These inconsistencies became visible following the publication of an egregious Twitter post in 2020 by Chinese Foreign Ministry official Zhao Lijian, containing a digitally altered image of “an Australian soldier holding a bloodied knife to an Afghan child’s throat” (Burgess and Bladen, 2020). Whilst Twitter’s Synthetic and Manipulated Media Policy clearly states that “manipulated media that are likely to cause harm may not be shared” on the platform and are “subject to removal”, no action was taken to withdraw this “deeply outrageous” post (Morrison in Burgess and Bladen, 2020). Comparatively, when Indian technology writer Varun Krishnan published a comedic, altered image of a cat in a tiny business suit to Facebook in 2016, his post and account were immediately deleted from the networking site (Alazzeh, 2016) on the grounds of synthetic content production. These discrepancies foreground a requirement for platforms “to be transparent about how they moderate content” in order to overcome both this issue and public scrutiny of inconsistent enforcement (Hicks, 2021). Digital platforms that choose to moderate content must be thorough and consistent when enforcing their policies in practical contexts.

As concerns about online content moderation have evolved, digital networking platforms have been forced to navigate the growing public scrutiny and unpopularity of regulatory practices, particularly when users argue that their freedom of expression is being suppressed. Platforms, which grant users the “paired values of community and collectivity, with the imperative of personal freedom and empowerment” through the “sharing [of] expressive and communicative content” (van Dijck, 2018, p. 18), face condemnation when users feel they have been censored or erased. Platforms therefore often struggle to maintain equilibrium between their users’ “right to freedom of speech” and “the rights of individual users to privacy, security and freedom from bullying and harassment” (Macnamara, 2019). Such claims of content moderation acting as an oppressor of free speech were exhibited following Twitter’s permanent suspension of former U.S. President Donald Trump in January 2021. Banned for violating Twitter’s policy against the incitement of violence after encouraging the breach of the US Capitol (Segal, 2021), Trump argued that the platform had “gone too far in banning free speech” and “coordinated with the Democrats and the radical left” in order to “silence [him]!” (Trump, 2021). This view was shared by conservative Fox News anchor Tucker Carlson, who compared the suspension to Orwellian tyranny and an act of deep-state collusion (Carlson in Marantz, 2021). The United Nations Human Rights Office proposes an apt solution: regulatory bodies operating in different nations should provide users who feel silenced with “effective opportunities to appeal against decisions they consider to be unfair”, and independent courts should have “the final say over lawfulness of content” (Hicks, 2021). Online platforms now act as “engines of free speech” (Gillespie, 2018), providing digital toolboxes that amplify individual voices which can promote positivity or, in the worst cases, incite danger, thus calling for moderation to manage user experience.

Platforms further face challenges in establishing and enforcing a “content moderation regime” that is invariably applicable to all circumstances of inappropriate content online (Gillespie, 2018, p. 11). Screeners of digital platforms, who employ “an array of high-level cognitive functions and cultural competencies” (Roberts, 2019, p. 35) to determine the appropriateness of user content, are often challenged when their policies fail to account for “culturally valuable” material from local contexts with differing social norms (Gillespie, 2018, p. 11). On occasion, the rules enforced by media platforms have been known to cause offence by wrongfully removing or banning individuals’ content. This was evident in April 2015, when Facebook removed the Australian Broadcasting Corporation’s (ABC) trailer promoting the upcoming television program ‘8MMM’ for breaching its nudity policy. The program, an Indigenous show set in Australia’s Northern Territory, contained scenes of a traditional ceremony in which Aboriginal women painted in ochre displayed bare breasts, and was immediately flagged for “containing potentially offensive nudity” (Terzon, 2015). In response to the platform’s moderation, the show’s co-creator Trisha Morton-Thomas described the censorship as “utterly ridiculous” and “culturally insensitive” (Morton-Thomas in Terzon, 2015). This controversy further demonstrates the need for national, professional regulatory teams who hold expertise in platforms’ presumed audiences and their cultural knowledge (Roberts, 2019, p. 35). This is supported by the Office of the United Nations High Commissioner for Human Rights, which states that when faced with complex issues of moderation, “people should be making the decisions” on appropriateness, “not algorithms” (Hicks, 2021). To prevent similar circumstances in future, countries like Australia should revise the Broadcasting Services Act 1992 to ensure media platforms collaborate with the relevant statutory authority and oversee the removal of moderated content. Content moderation conducted by platforms can thus be problematic when it operates under ambiguous policies that do not account for diverse social or cultural philosophies.

The copious issues platforms face when enacting moderation clearly necessitate assistance from an external party. Government intervention, though a potentially useful tool, has proven to be a “disturbing and unfamiliar expansion” of authoritative power “into the private sphere” (Langvardt, 2017), as was visible earlier this year in the actions of the Nigerian government. Following the removal of a Twitter post from the account of the country’s President, Muhammadu Buhari, for violating company policy, authorities announced the indefinite suspension of the platform for its citizens. Using the nation’s major telecommunications companies to block millions from accessing the interface, Nigerian authorities also threatened to prosecute anyone who attempted to circumvent the ban (Nwaubani, 2021). It is argued here that each nation should appoint an independent, government-funded regulatory body composed of experienced moderators to oversee discourse and published content, whilst simultaneously designing and evaluating regulation codes alongside “civil society and experts” (Hicks, 2021). These teams should aim to reduce rates of exposure to content deemed unacceptable and possess professional knowledge of user guidelines. Furthermore, the existence of these groups within differing countries would ensure that all members hold linguistic and cultural competency within that nation’s context, and are thus accustomed to differing traditions and beliefs, avoiding the wrongful removal of historically significant or valuable material.

Platforms that moderate user content face numerous impediments that complicate their operations and allow issues of inconsistency, censorship and wrongful regulation to proliferate. Whilst government arbitration proves ineffectual, aiding platforms through the introduction of external regulatory bodies that operate across different nations is a viable and advantageous solution to counteract these concerns and better the digital landscape.

References

Alazzeh, D. (2016). You could get banned if you share this cat photo on Facebook. Retrieved 15 October 2021, from https://www.sbs.com.au/language/english/you-could-get-banned-if-you-share-this-cat-photo-on-facebook 
Burgess, K., & Bladen, L. (2020). PM shocked by Chinese govt’s ‘repugnant’ digitally altered image. Retrieved 15 October 2021, from https://www.canberratimes.com.au/story/7034039/pm-shocked-by-chinese-govts-repugnant-digitally-altered-image/
van Dijck, J. (2018). The culture of connectivity. New York: Oxford University Press.
Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266-1286. doi: 10.1177/1461444820912540
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press.
Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., et al. (2020). Expanding the debate about content moderation: Scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). doi: 10.14763/2020.4.1512
Gillespie, T. (2017). Governance by and through platforms. In J. Burgess, A. Marwick & T. Poell (Eds.), The SAGE handbook of social media (pp. 254-278). London: SAGE.
Hicks, P. (2021). Moderating online content: Fighting harm or silencing dissent? Retrieved 15 October 2021, from https://www.ohchr.org/EN/NewsEvents/Pages/Online-content-regulation.aspx
Langvardt, K. (2017). Regulating Online Content Moderation. SSRN Electronic Journal, 106, 1379. doi: 10.2139/ssrn.3024739
Macnamara, J. (2019). Digital and social media. In R. Tench & L. Yeomans, Exploring Public Relations PDF EBook : Global Strategic Communication (4th ed., pp. 35 – 59). Chicago: Pearson Education, Limited.
Marantz, A. (2021). The Importance, and Incoherence, of Twitter’s Trump Ban. Retrieved 15 October 2021, from https://www.newyorker.com/news/daily-comment/the-importance-and-incoherence-of-twitters-trump-ban 
Our synthetic and manipulated media policy | Twitter Help. (2021). Retrieved 15 October 2021, from https://help.twitter.com/en/rules-and-policies/manipulated-media 
Terzon, E. (2015). Aboriginal video pulled due to Facebook’s nudity guidelines. Retrieved 15 October 2021, from https://www.abc.net.au/news/2015-04-13/indigenous-video-pulled-facebook-nudity-rules/6388090 
Nwaubani, A. T. (2021). Viewpoint: Why Twitter got it wrong in Nigeria. Retrieved 15 October 2021, from https://www.bbc.com/news/world-africa-5817570