Trump, Trolls & Terrorists: Content Moderation in the Modern World

Source: "366 - 350: You can't shut me up" by yoshiffles is licensed with CC BY-ND 2.0.

Content moderation is a necessary governance mechanism to make digital platforms safer and more inclusive, but recent developments have seen social media companies employ it as a discretionary device to exert soft power and dictate the user experience to serve their own purposes.

Content moderation is the “detection of, assessment of, and interventions taken on content or behaviour deemed unacceptable by platforms” (Gillespie et al., 2020: 2). Content moderation was first introduced to the Internet as a way to prevent minors from accessing online pornography (Mikaelyan, 2021), but since then, digital platforms have permeated everyday life and become inextricably linked with social and political discourse.

With 456,000 tweets sent and 293,000 Facebook statuses updated every minute (Domo, 2017), guidelines that once suited the static environment of Web 1.0 are no longer adequate. Using notable controversies to demonstrate its purpose and mechanics, this essay will explore the evolution of content moderation from a tool to safeguard users to a tool of influence and control.

 

Free Speech on Digital Platforms

The sovereign right to free speech is itself a complicated notion, especially within the context of social media.

Reddit, a platform distinguished by its alternative culture (Massanari, 2017), has repeatedly found itself in the middle of free speech controversies. In a recent incident, a US woman sued Reddit for allowing her ex-boyfriend to share pornographic images of her as a 16-year-old, claiming the platform “knowingly benefits from lax enforcement of its content policies” (Robertson, 2021).

Digital platforms, due to their structural affordances, act as public spheres for people to engage in unmediated discussion. The issue facing digital platforms is whether to uphold the notion of the Internet as a bastion of free speech and accept the legal and financial risks that come with it.

Famously, former US President Donald Trump has taken exception to the platforms’ position as the gatekeepers of free speech online. In July 2021, Trump filed class-action lawsuits against Facebook, Twitter and Google for suspending his accounts, alleging they obstructed his right to free speech (Suciu, 2021).

Although Trump’s protests lack a legal basis, his argument does reveal the unprecedented power digital platforms hold in determining the extent of free speech. Responsible for setting and enforcing norms, digital platforms are empowered to remove a user’s content if it conflicts with their beliefs. As Gillespie (2018) notes, through content moderation, social media companies act as the de facto custodians of free speech online.

The Power of News

The conservative NGO Turning Point USA was accused of paying young Americans to disseminate misinformation about the threat of COVID-19 across social media (Stanley-Becker, 2020). Although Facebook and Twitter suspended the participating accounts, the incident highlighted how easily digital platforms can be exploited to spread misinformation.

A common criticism of digital platforms engaged in content moderation is that they are developing into media companies in their own right. The BBC reported that 40% of Australians get their news predominantly from social media (Mao, 2021). However, as the Turning Point USA controversy demonstrates, the threat of misinformation and ‘fake news’ has necessitated extensive content moderation measures.

Instagram stories mentioning COVID-19 and dubious claims made on other platforms are now automatically accompanied by verified information (Instagram, 2021). Platforms are justified in implementing stricter content standards in light of recent controversies, but the challenge they now face is how not to become involved in the stories themselves.

Able to censor and promote content at their discretion, digital platforms can influence social and political discourse in the same fashion as traditional media powers. Whether by altering their top-story algorithms or muting a post that conflicts with their values, digital platforms dictate what news users consume (Tandoc & Maitra, 2018).

In the context of mounting ‘fake news’, content moderation on digital platforms is paramount, but not to the extent that it influences major global events.

 

Profits Before People

In the most recent example of Facebook whistleblowing, former employee Frances Haugen told the US Congress that the company put “astronomical profits before people” (Knott, 2021). In her testimony, Haugen alleged that Facebook was aware of illegal activity and mental health harms on its platforms but failed to act because doing so would damage its profitability.

Source: Business Insider

On platforms such as Facebook and YouTube, advertising represents the majority of revenue, and adhering to the requests of advertisers is therefore a core priority. This is particularly problematic when considering that the content on these platforms is user-generated. Companies want to advertise on platforms they believe are user-friendly, and to ensure this, social media companies remove, restrict or demonetise content that violates community standards.

In January 2017, PewDiePie, one of YouTube’s largest creators, was sanctioned by the platform for using anti-Semitic language and imagery in his videos and removed from its Google Preferred advertising program (Hokka, 2021). Similarly, it was revealed that TikTok instructed its content moderators to filter out users deemed old, ugly or disabled because they “decrease the short-term new user retention rate” (Biddle et al., 2020).

The perennial struggle of digital platforms is that their economic viability is contingent on content moderation, leaving them to reconcile commercial interests with their open and egalitarian values.

 

Looking Inside the Black Box

The mechanics of content moderation, both human and algorithmic, consolidate the power of digital platforms by being deliberately subjective and opaque.

In its original form, content moderation was conducted by individuals authorised to remove and censor content. In Google’s early years, general counsel Nicole Wong and her policy team were responsible for content moderation, prompting The New York Times to concede that “Wong and her colleagues have arguably more influence over the contours of online expression than anyone else on the planet” (Rosen, 2008).

Source: “Content Moderation: What is it and why your business needs it” by TechAhead.

As a means to standardise decision-making and moderate content at scale, digital platforms are adopting algorithmic moderation. Whilst this makes the process more consistent, it lacks contextualised decision-making and produces many false positives and negatives (Gorwa et al., 2020). Algorithms, for example, may flag a video on breastfeeding as pornography, or news coverage of a terrorist attack as the act itself (Gillespie et al., 2020). In September 2020, YouTube announced it was reinstating many of its human moderators because its AI systems had removed more than double the usual volume of videos (Vincent, 2020).
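To make the false-positive problem concrete, below is a minimal, deliberately naive sketch of context-blind moderation. The blocklist, the moderate function and the example posts are all invented for illustration and bear no relation to any real platform’s classifier; the point is simply that a rule which matches terms without understanding context will remove a news report about an attack just as readily as a post celebrating one.

```python
# Illustrative only: a context-blind moderation rule with an invented blocklist.
# Real platforms use large machine-learned classifiers, not keyword matching.

FLAGGED_TERMS = {"attack", "weapon", "nude"}  # hypothetical blocklist


def moderate(text: str) -> str:
    """Remove a post if any blocklisted term appears, regardless of context."""
    words = {word.strip(".,!?:").lower() for word in text.split()}
    return "remove" if words & FLAGGED_TERMS else "keep"


if __name__ == "__main__":
    # A news report about a terrorist attack trips the same terms as the act
    # it describes, producing the kind of false positive discussed above.
    posts = [
        "Breaking: reporters cover the attack and the weapon recovered",
        "Lovely weather for a picnic today",
    ]
    for post in posts:
        print(f"{moderate(post):>6}  <- {post}")
```

Running the sketch marks the news report for removal while the innocuous post passes, which is precisely why platforms such as YouTube ultimately reinstated human review for borderline cases.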

Often described as a black box, content moderation attracts criticism from users because there is little explanation of its mechanics. Unless greater transparency is offered and ambiguous guidelines are clarified, digital platforms will continue to risk frustrated users abandoning their platforms.

 

How Should Governments Respond?

In response to the prevalence of online trolls, Australian Prime Minister Scott Morrison denounced the “lack of accountability” of digital platforms and foreshadowed future legislation (Visentin, 2021).

More needs to be done in the content moderation space to neutralise the discretionary authority and influence of social media companies. Many of the controversies listed above could be mitigated by establishing consistent and transparent content moderation standards. Unfortunately, there are many practical hurdles preventing greater government involvement in regulating social media.

Here are three reasons why governments struggle to enforce stricter content moderation:

  1. Authoritarianism

In autocratic nation-states such as China and Russia, governments rather than digital platforms perform content moderation, which doesn’t lead to a safer online experience, but one where content is spread or censored for purposes of national interest (O’Hara & Hall, 2018). In nations where free speech is protected, governments will be accused of overreach should they dictate content standards.

  2. Globality

The Internet does not fall under the jurisdiction of a single country. Governments can only legislate for digital platforms within their own jurisdiction, and whilst this may resolve certain issues, its impact will be isolated. Australia recently became the first country to force Google and Facebook to pay for the news content on their platforms. In immediate response, Facebook removed all news content from the Australian version of its platform, flexing its ability to circumvent regional legislation.

  3. Encryption

How do governments regulate encrypted platforms, such as WhatsApp and Telegram, without compromising the integrity of the platform and the privacy of the user? In certain countries, governments have had to pressure digital platforms to provide ‘backdoor access’ to encrypted messaging, such as in the aftermath of the San Bernardino shootings (Nakashima & Albergotti, 2021). Similar to the authoritarian model of the Internet, moderating content on encrypted platforms would lead to state censorship and surveillance (Gillespie et al., 2020).

Allowed to self-regulate, digital platforms are afforded significant leniency in deciding how to balance free and open expression with their own priorities. How governments will supervise content moderation is yet to be seen, but the need to temper the discretionary authority and influence of digital platforms is clear.

 

Trump, Trolls & Terrorists: Content Moderation in the Modern World © by Will Mallett is licensed under CC BY 4.0.

 

Bibliography

Biddle, S., Ribeiro, P. & Dias, T. (2020, March 16). Invisible Censorship. The Intercept. https://theintercept.com/2020/03/16/tiktok-app-moderators-users-discrimination/

Domo. (2017). Data Never Sleeps 5.0. https://www.domo.com/learn/infographic/data-never-sleeps-5

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

Gillespie, T., Aufderheide, P., Carmi, E., Gerrard, Y., Gorwa, R., Matamoros-Fernández, A., Roberts, S., Sinnreich, A. & West, S. (2020). Expanding the debate about content moderation: scholarly research agendas for the coming policy debates. Internet Policy Review, 9(4). https://doi.org/10.14763/2020.4.1512

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1). https://doi.org/10.1177/2053951719897945

Hokka, J. (2021). PewDiePie, racism and YouTube’s neoliberalist interpretation of freedom of speech. Convergence, 27(1), 142–160. https://doi.org/10.1177/1354856520938602

Instagram (2021, March 16). Helping People Stay Safe and Informed about COVID-19 Vaccines. https://about.instagram.com/blog/announcements/continuing-to-keep-people-safe-and-infaormed-about-covid-19

Knott, M. (2021, October 6). ‘Harms children’: Whistleblower testifies that Facebook puts ‘astronomical profits’ over people. Sydney Morning Herald. https://www.smh.com.au/world/north-america/whistleblower-says-facebook-puts-astronomical-profits-over-people-20211006-p58xjc.html.

Mao, F. (2021, February 18). How Facebook became so powerful in news. BBC News. https://www.bbc.com/news/world-australia-56109580

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

Mikaelyan, Y. (2021). Reimagining Content Moderation: Section 230 and the Path to Industry-Government Cooperation. Loyola of Los Angeles Entertainment Law Review, 41(2), 179.

Nakashima, E. & Albergotti, R. (2021, April 14). The FBI wanted to unlock the San Bernardino shooter’s iPhone. It turned to a little-known Australian firm. The Washington Post. https://www.washingtonpost.com/technology/2021/04/14/azimuth-san-bernardino-apple-iphone-fbi/

O’Hara, K., & Hall, W. (2018). Four Internets: The Geopolitics of Digital Governance (No. 206). Centre for International Governance Innovation. https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance

Robertson, A. (2021, April 25). Reddit faces lawsuit for failing to remove child sexual abuse material. The Verge. https://www.theverge.com/2021/4/25/22399306/reddit-lawsuit-child-sexual-abuse-material-fosta-sesta-section-230

Rosen, J. (2008, November 28). Google’s Gatekeepers. The New York Times. https://www.nytimes.com/2008/11/30/magazine/30google-t.html.

Stanley-Becker, I. (2020, September 15). Pro-Trump youth group enlists in secretive campaign likened to a ‘troll farm,’ prompting rebuke by Facebook and Twitter. The Washington Post. https://www.washingtonpost.com/politics/turning-point-teens-disinformation-trump/2020/09/15/c84091ae-f20a-11ea-b796-2dd09962649c_story.html

Suciu, P. (2021, July 7). Former President Trump Suing Over Social Media Bans – But Will It Actually Go Anywhere?. Forbes. https://www.forbes.com/sites/petersuciu/2021/07/07/former-president-trump-suing-over-social-media-bans–but-will-it-actually-go-anywhere/?sh=1ef2e955beec.

Tandoc, E. C., & Maitra, J. (2018). News organizations’ use of Native Videos on Facebook: Tweaking the journalistic field one algorithm change at a time. New Media & Society, 20(5), 1679–1696. https://doi.org/10.1177/1461444817702398

Vincent, J. (2020, September 21). YouTube brings back more human moderators after AI systems over-censor. The Verge. https://www.theverge.com/2020/9/21/21448916/youtube-automated-moderation-ai-machine-learning-increased-errors-takedowns

Visentin, L. (2021, October 7). Cowards palace: PM slams social media giants and anonymous trolls. Sydney Morning Herald. https://www.smh.com.au/politics/federal/coward-s-palace-pm-slams-social-media-giants-and-anonymous-trolls-20211007-p58y59.html