Algorithmic Extremism or Human Fallibility?

Black Mirror Tackles the Ethics & Accountability of Free Speech in 'Hated in the Nation'

Image: "#DeathTo Jo Powers Twitter profile" in 'Hated In the Nation', Brooker (2016) Black Mirror. All Rights Reserved.

You have over 200 new notifications

Including 96 mentions

This is the notification we see as Jo Powers sits transfixed before her computer, giggling with a sort of gallows humour as a torrential onslaught of hate and abuse fills her Twitter-esque news feed. “Adolf Hitler and @JoPowersWriter stand in front of you. You have one bullet left #dilemma” one user writes. The backlash is merely the first in a series of campaigns of public vitriol featured throughout ‘Hated in the Nation’, the penultimate episode of Black Mirror’s third season.

Image: “#DeathTo Jo Powers tweet” in ‘Hated in the Nation’, Brooker (2016) Black Mirror. All Rights Reserved.

Created by Charlie Brooker, the sci-fi anthology series Black Mirror has garnered critical acclaim for its masterful rendering of disjunct dystopian societies and prescient commentary on the potential trajectory of our digital technologies. ‘Hated in the Nation’ is no different in this regard, tackling questions of morality and accountability as a viral hashtag, #DeathTo, feeds on digital hate speech and public vilification to ultimately sentence the top trending user to a grotesquely painful death.

Image: “What is #DeathTo” in ‘Hated in the Nation’, Brooker (2016) Black Mirror. All Rights Reserved.

The premise is one with which we are all too familiar in today’s “platform society”: trial by social media, whereby judgement in the form of online public shaming is meted out by angry mobs aggregating around trending topics for perceived wrongdoing (Justine Sacco is one such infamous example; see Ronson, 2015). As Aitchison and Meckled-Garcia point out, “the aggregative public effect of social media”, “the speed with which information can be disseminated online”, and “the premium attached to being the first with a witty take or biting put-down” invariably result in a punishment that is dispensed without any form of due process and is entirely disproportionate to the original action in question (2021, p.5).

“Platform society” refers to the current social reality within which action – be it interpersonal, economic, or political – is largely channelled through a data-fuelled and algorithmically-organised “global online platform ecosystem” (van Dijck, Poell & de Waal, 2018, p.7).

Whilst the “vagaries of human nature” (Tufekci, 2020) do hold a degree of culpability in the growing prevalence of such digital lynchings – as ‘Hated in the Nation’ ultimately decrees in its final scenes, sentencing each and every one of the 387,000 users who took part in the #DeathTo game to the same grisly fate they had dispensed to their chosen victims – the role of platformisation in enabling such phenomena cannot be overlooked. Indeed, though “social media platforms have long framed themselves as open, impartial, and noninterventionist”, their performative impartiality obscures the reality that “everything on a platform is designed and orchestrated” (Gillespie, 2018, p.257). The spaces and structures of social media sites like Facebook and Twitter dictate precisely the forms of engagement possible, whilst their algorithmic modes of filtering and ranking content in turn decide what is given value and prominence in the public discourse they facilitate (Gillespie, 2018, p.257; van Dijck, Poell & de Waal, 2018, p.9). As Pasquale so succinctly states, the “power to include, exclude, and rank is the power to ensure that certain public impressions become permanent, while others remain fleeting” (2015, p.14).
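Pasquale’s point can be made concrete with a deliberately crude sketch. What follows is a toy illustration only – no platform publishes its ranking code, and every weight and attribute below is invented – but it shows how a feed ordered purely by interaction counts hands its top slot to whatever provokes the strongest reaction:

```python
# Toy engagement-based ranking; all weights and fields are invented for
# illustration and bear no relation to any platform's actual system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    replies: int

def engagement_score(post: Post) -> float:
    # Value a post purely by how much interaction it provokes.
    return post.likes + 2.0 * post.shares + 1.5 * post.replies

def rank_feed(posts: list[Post]) -> list[Post]:
    # The power to include, exclude, and rank, in one line: whatever
    # scores highest becomes the "permanent" public impression.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured take on a local policy question", likes=12, shares=1, replies=3),
    Post("Outraged pile-on against today's target", likes=40, shares=25, replies=60),
])
print([p.text for p in feed])  # the pile-on tops the feed
```

Nothing in such a function asks whether the reaction a post provokes is admiration or fury; it only measures, and rewards, intensity.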

“Disenfranchised by Misinformation” by Truthout.org | CC BY-NC-ND 2.0 license

With social media platforms occupying ever-expanding domains within our public discourse as powerful ‘information intermediaries’ (Bucher, 2018, p.119), their influence increasingly extends not just to the occupation of our attention, but to the shaping of our understanding of the world around us. Tufekci refers to this phenomenon as ‘algorithmic gatekeeping’, whereby “non-transparent algorithmic computational-tools dynamically filter, highlight, suppress, or otherwise play an editorial role — fully or partially — in determining: information flows through online platforms and similar media,” among other things (2015, p.207).
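What “playing an editorial role” looks like mechanically can likewise be sketched in a few lines. Again, this is hypothetical – the rules, thresholds, and field names are all invented – but it captures how filtering, suppression, and highlighting happen upstream of the reader, invisibly and without appeal:

```python
# Hypothetical gatekeeping pipeline; every rule and threshold is invented.
def gatekeep(posts: list[dict], blocked_terms: set[str],
             boosted_topics: set[str]) -> list[dict]:
    weighted = []
    for post in posts:
        if any(term in post["text"] for term in blocked_terms):
            continue  # filtered out: the reader never learns this existed
        weight = post["score"]
        if post["topic"] in boosted_topics:
            weight *= 2.0  # highlighted: quietly promoted up the feed
        elif post["score"] < 5:
            weight *= 0.1  # suppressed: demoted rather than deleted
        weighted.append((weight, post))
    # An editorial decision, end to end, with no editor in sight.
    return [post for _, post in sorted(weighted, key=lambda pair: pair[0], reverse=True)]
```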

Insofar as these platforms foster the dissemination and accentuation of harmful content – from hate speech, misogyny, and discrimination to the all-too-prevalent misinformation – should we not level questions of accountability at them?

“Trump’s Fake News” by outtacontext | CC BY-NC-ND 2.0 license

As Kasra aptly states, “even though they empower citizens to self-report, self-represent, and self-organize, they also misinform or advance malicious objectives that lead to what can be called a 21st-century manhunt designed to punish, humiliate, and control viewers” (2017, p.173). In fact, Massanari suggests that the design and “algorithmic politics” of platforms like Reddit “may enable and/or implicitly encourage these spaces to become hotbeds of misogynistic activism” (2017, p.330). Similarly, algorithmically driven content has been criticised for “facilitating the construction of ‘filter bubbles’” and the “widespread dissemination of false news stories, and inflammatory political advertisements” (Napoli, 2018, p.57). It is evident, then, that platforms, through their sheer reach and the speed and scale at which they operate, hold the potential not only to influence, but to critically harm, the contemporary information networks we rely upon as a functioning democratic society.

The storming of the US Capitol in January of this year is a chilling example of the very real repercussions that can occur when platforms allow public vitriol and misinformation to flourish (Frenkel, 2021). And while many of the players who orchestrated that attack have since been banned from Twitter and Facebook – including Donald Trump – questions of culpability were largely confined to the individual level, with suggestions of regulatory reform or external intervention ultimately eschewed in favour of the ideal of a free and open internet, purportedly capable of self-moderation and reform (Cellan-Jones, 2021; Purtill, 2021).

More recently, testimony from whistle-blower Frances Haugen, a former Facebook employee, alleged that “Facebook’s products harm children, stoke division and weaken our democracy” (in Segal, 2021). According to leaked internal research, Facebook is patently aware of the harms its algorithms inflict on the general public: Instagram’s negative impact on the mental health of teenage girls; the company’s inadequacy in combatting the proliferation of COVID-19 misinformation in its news feeds; and the finding that changes to the News Feed algorithm to prioritise ‘meaningful social interactions’ in fact favoured outrage and sensationalism, resulting in an increase in misinformation, toxicity, and violent content (Wells, Horwitz & Seetharaman, 2021; Schechner, Horwitz & Glazer, 2021; Hagey & Horwitz, 2021). And when these conflicts have been made apparent, the company has chosen profit over people time and time again, “trading off very small decreases in engagement for huge consequences in misinformation and hate speech and violence” (Haugen in Linebaugh & Knutson, 2021).
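Why would re-weighting a feed toward ‘meaningful social interactions’ favour outrage? A back-of-the-envelope sketch helps. The real formula is not public; the weights below are purely illustrative, loosely echoing the WSJ’s reporting that reactions and reshares were scored at several times the value of a plain like (Hagey & Horwitz, 2021):

```python
# Illustrative scoring in the spirit of the reported 'meaningful social
# interactions' change; the actual weights and formula are not public.
def msi_score(post: dict, like_w: int = 1, reaction_w: int = 5, reshare_w: int = 30) -> int:
    return (post["likes"] * like_w
            + post["angry_reactions"] * reaction_w
            + post["reshares"] * reshare_w)

calm = {"likes": 100, "angry_reactions": 2, "reshares": 3}      # widely liked, rarely shared
outrage = {"likes": 30, "angry_reactions": 50, "reshares": 20}  # divisive, heavily reshared
print(msi_score(calm), msi_score(outrage))  # 200 vs. 880: outrage wins the ranking
```

Under any weighting of this shape, content that is reshared in anger mechanically outranks content that is merely approved of – precisely the dynamic the leaked research describes.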

Such revelations make it patently apparent that the fragmentation and personalisation effected through growth-driven algorithms is not only allowing online harm to flourish, but undermining the core foundations of democratic public debate. As Napoli confirms, in today’s digitally-saturated media ecosystem we can no longer rest assured that counterspeech will prevail in allowing “truthful and accurate news and information to triumph over falsity” (2018, p.63). Indeed, while Facebook has professed that “We believe that engagement is more powerful than censorship in reforming prejudiced and bigoted opinions and voices, and are committed to amplifying campaigns which encourage positive dialogue and debate”, such sentiments are clearly not reflected in company policy once profit enters as a conflict of interest (in Napoli, 2018, p.65). Not to mention that engagement has, by the company’s own research, proven inadequate in the face of socially harmful content time and time again.

The question of responsibility here is complex, but ultimately not one that should be left to platforms alone to answer. Nor is this, as the internet-centric idealists framing current regulatory debates would have you believe, a ‘purely technical problem’ capable of being solved through tweaks to digital structures and edits to algorithmic code (Pasquale, 2015, p.8; Lusoli & Turner, 2021). And while we, the users who implicitly or explicitly allow malicious online behaviour to grow and spread, should, as Black Mirror suggests, bear a portion of the blame, the blame is not ours alone.

 

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

REFERENCES

Aitchison, G. & Meckled-Garcia, S. (2021). Against Online Public Shaming. Social Theory and Practice, 47(1), pp. 1–31. https://doi.org/10.5840/soctheorpract20201117109

Brooker, C. (Writer) & Hawes, J. (Director). (2016, Oct 21). Hated in the Nation (Season 3, Episode 6) [TV series episode]. In C. Brooker (Creator), Black Mirror. Netflix.

Bucher, T. (2018). Programming the News: When Algorithms Come to Matter. In If… Then: Algorithmic Power and Politics, Oxford University Press. DOI: 10.1093/oso/9780190493028.001.0001.

Cellan-Jones, R. (2021, Jan). Tech Tent: Did social media inspire Congress riot? BBC News. https://www.bbc.com/news/technology-55592752

Frenkel, S. (2021). The storming of Capitol Hill was organized on social media. The New York Times. https://www.nytimes.com/2021/01/06/us/politics/protesters-storm-capitol-hill-building.html

Gillespie, T. (2018). Governance of and by Platforms. In Burgess, J., Marwick, A. & Poell, T. (eds). The SAGE Handbook of Social Media (pp. 254–278). SAGE Publications Ltd. https://doi.org/10.4135/9781473984066.n15

Hagey, K. & Horwitz, J. (2021, Sept). The Facebook Files: Facebook Tried to Make Its Platform a Healthier Place. It Got Angrier Instead. Wall Street Journal. https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-11631654215?mod=article_inline

Kasra, M. (2017). Vigilantism, public shaming, and social media hegemony: The role of digital-networked images in humiliation and sociopolitical control. The Communication Review, 20(3), pp.172-188, https://doi.org/10.1080/10714421.2017.1343068

Linebaugh, K. & Knutson, R. (2021, Oct 3). The Facebook Files, Part 6: The Whistleblower [Audio Podcast]. WSJ Podcasts. https://www.wsj.com/podcasts/the-journal/the-facebook-files-part-6-the-whistleblower/b311b3d8-b50a-425f-9eb7-12a9c4278acd

Lusoli, A. & Turner, F. (2021). “It’s an Ongoing Bromance”: Counterculture and Cyberculture in Silicon Valley—An Interview with Fred Turner. Journal of Management Inquiry, 30(2), pp. 235–242.

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), pp. 329–346. https://doi.org/10.1177/1461444815608807

Napoli, P. M. (2018). What If More Speech Is No Longer the Solution? First Amendment Theory Meets Fake News and the Filter Bubble. Federal Communications Law Journal, 70(1), pp. 55–107.

Pasquale, F. (2015). Introduction–the need to know. In The Black Box Society: The Secret Algorithms That Control Money and Information (pp. 1-19). Cambridge, Mass: Harvard University Press.

Purtill, J. (2021, Jan). Storming of US Capitol and Donald Trump’s Twitter ban will be ‘tipping point’ for social media regulation, experts say. ABC News. https://www.abc.net.au/news/science/2021-01-13/capitol-storming-trump-ban-tipping-point-social-media-regulation/13052092

Ronson, J. (2015). How One Stupid Tweet Blew Up Justine Sacco’s Life. The New York Times Magazine. https://www.nytimes.com/2015/02/15/magazine/how-one-stupid-tweet-ruined-justine-saccos-life.html

Schechner, S., Horwitz, J. & Glazer, E. (2021, Sept). The Facebook Files: How Facebook Hobbled Mark Zuckerberg’s Bid to Get America Vaccinated. Wall Street Journal. https://www.wsj.com/articles/facebook-mark-zuckerberg-vaccinated-11631880296?mod=article_inline

Segal, E. (2021, Oct). Facebook Has Another Bad PR Day When Whistleblower Testifies At Senate Hearing. Forbes. https://www.forbes.com/sites/edwardsegal/2021/10/05/facebook-has-another-bad-pr-day-as-whistleblower-testifies-at-senate-hearing/?sh=157a715f73e0

Tufekci, Z. (2015). Algorithmic harms beyond facebook and google: Emergent challenges of computational agency. Colorado Technology Law Journal, 13(2), pp. 203-218. https://heinonline.org/HOL/P?h=hein.journals/jtelhtel13&i=227.

Tufekci, Z. (2020). This Social-Media Mob Was Good. The Atlantic. https://www.theatlantic.com/technology/archive/2020/05/case-social-media-mobs/612202/

van Dijck, J., Poell, T. & de Waal, M. (2018). The Platform Society as a Contested Concept. In The Platform Society. Oxford University Press. https://doi.org/10.1093/oso/9780190889760.003.0002

Wells, G., Horwitz, J. & Seetharaman, D. (2021, Sept). The Facebook Files: Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show. Wall Street Journal. https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-girls-company-documents-show-11631620739?mod=hp_lead_pos7&mod=article_inline