In the Web 2.0 landscape, the ostensibly open internet is embroiled in a contested paradigm in which content is increasingly obtained from proprietary media platforms identified as "Big Tech": Facebook, Apple, Amazon, Netflix, and Google (known as FAANG) (Flew et al., 2019). This phenomenon has aroused public concern that the tech giants' disruptive power acts as a digital catalyst, aggressively accelerating and exacerbating monopoly. The Economist termed this public resistance and animus the "techlash", a backlash against dominant corporate power (Flew, 2018).
Specifically, this article focuses on the public concerns behind the techlash, as intertwined with monopoly, across three dimensions: political, economic, and cultural. It then evaluates the extent to which these concerns can be eased by means of regulation, and critically analyzes the difficulties and controversies that obstruct regulatory measures formulated and undertaken by platforms, governments, and civil society organizations.
Public concerns behind techlash
Political concerns: fake news under filtering mechanisms penetrates political elections
Rampant fake news on platforms provides fodder for political propaganda that favors particular politicians. The manipulation of the 2016 US presidential election exemplifies how Facebook's content filtering highlighted fake news in the news feed that glamorized Donald Trump while smearing Hillary Clinton, stimulating voters to support Trump. Monopolistic platforms such as Facebook are both more capable of, and more preoccupied with, amplifying fabricated news, monetizing the enormous traffic generated by curious clicks (Flew et al., 2021). Undoubtedly, Facebook, as a monopolistic tech giant, has suffered a techlash for constructing political prejudice and circulating fake news that manipulates public opinion, almost a form of brainwashing (Pesce, 2017).
Taking the above example as an analytical point, the proliferation of fake news depends on mood management: displaying only content consistent with users' interests, whether true or not, which echoes Pariser's idea of the "filter bubble" (as cited in Bruns, 2019). This raises public concern that users develop a confirmation bias, blindly affirming their own views while becoming radically resistant to opposing ideas (Pesce, 2017). Such mind manipulation, intersecting with the filtering mechanism, indicates an ideological polarization that accelerates the collapse of consensus, a typical expression of epistemological crisis (Pesce, 2017). It results in the fragmentation of discursive diversity and the falsification of factual accuracy (Bruns, 2019). Beyond political transparency, public concerns about the tech giants also extend to the economic and cultural fields, which the following sections discuss.
Economic concerns: privacy breaches and transparency issues in data collection and sale for maximized monetization
Digital platforms' major income comes from selling advertising. This advertising is never a straightforward, undifferentiated promotion to diverse audiences; it is targeted, and this targeting triggers public concern. Platforms track users' activities through cookies to produce profiles of individual preferences, selling this intangible informational wealth to advertisers in exchange for tangible monetary returns (Gillespie, 2018). Notably, many audiences believe such data collection brazenly breaches privacy, yet it is unrealistic to ask monopolistic companies to stop tracking, since tracking underpins platform stickiness and the maximization of economic benefit. As an investigation by Germany's Federal Cartel Office evidenced, one punishment Facebook imposes on users who refuse data collection is exclusion from the online community (Pesce, 2017).
Additionally, many audiences realize that while algorithmic recommendation seemingly satisfies their expectations, it is incompatible with transparency and neutrality: what they see is restricted and filtered by platforms to benefit advertiser partners, who receive prominent positions in search rankings (Gillespie, 2018), again reinforcing the "filter bubble". Hence, the public techlash manifests when Facebook pursues maximized monetization from selling data while disregarding the public interest.
Cultural concerns: the spread of hate speech, terrorism, and other extremism
Platforms with little accountability become seedbeds of anti-social narratives such as abusive language and hate speech related to race and gender (Flew, 2018), reflecting what Massanari (2017) terms "toxic technocultures". A loose internet environment that embraces the principle of free speech fosters malicious speech from extremists, such as geek communities expressing hatred towards women (Massanari, 2017). Internet users are concerned that there are few barriers to entering cyberspace and that, in most cases, the production of abusive language is leaderless and amorphous, its sources hard to trace in an anonymous context (Massanari, 2017).
More horrifically, ISIS and other extremist organizations invade social platforms, posting gruesome beheading videos and recruitment propaganda (Gillespie, 2017). From March to May 2020, ISIS used Facebook to spread 50 pieces of content, some including beheading videos, which were watched over 34,000 times and reposted by ISIS-supporting accounts (WIRED, 2020). The rampage of terrorist organizations on Facebook exacerbates the techlash, highlighting public disappointment with the tech giant's anti-terrorist defenses, broader anxieties about the internet's prospects, and the question of whether regulatory practices can address these concerns. The next section therefore explores how different media stakeholders react to these problems and whether they produce desirable results or become entangled in operational dilemmas.
Self-regulation: commercial aims versus public interests
Platforms present themselves as communication intermediaries that primarily host and distribute user-generated content, rather than producing or commissioning media as traditional media companies do (Gillespie, 2018). Digital companies invoke this framing to shrug off responsibility when audiences raise concerns. However, the claim is equivocal and inadequate in light of substantial audience complaints, which compel digital companies to confront obligations to regulate content, not only to forestall government policy but also to pacify advertisers, retain users, and protect corporate images (Flew et al., 2019).
Content regulation, however, is a laborious process. One debate concerns whether it restrains the right to free speech; another holds that platforms fail to distinguish content that purely exhibits egregious atrocity from content that is sensitive yet carries cultural value and educational meaning worth preserving (Gillespie, 2018). Meanwhile, moderation is dominated by privileged white males who regulate content according to their own liberalism, overlooking minority perspectives and subtly imposing their values on international audiences (Gillespie, 2018).
Another challenge immediately follows: with what criteria should self-regulation be consonant? As commercial businesses, platforms maximize profits at any price, even against morality and the public interest. Google's laissez-faire approach to moderating hateful and extremist content indicates that one of the world's most influential companies earns from hatred (Flew et al., 2019). This irreconcilable contradiction between commercial aims and public interests implies that, at present, no criterion can generate mutual benefit. In fact, platform governance is currently shifting from self-regulation to government intervention.
Government regulation: enforcement of hard laws
To protect the public interest, government regulators represent a deterrent force that defines the coercive social responsibilities media companies must undertake, making them operate under an external supervisory system. For example, the European Union's General Data Protection Regulation (GDPR) introduces comprehensive data protection rules to prevent predatory commercial practices (Flew et al., 2021). By contrast, Section 230 of the US Communications Decency Act grants platforms broad immunity from regulation, not requiring media companies to meet policing standards through monitoring and deleting content (Flew et al., 2019). These conflicting approaches reveal the pressure on government regulation: media globalization means regulation has moved beyond the level of nation-states, and domestic regulations face difficulties in tackling contested global affairs (Flew et al., 2019).
Although some claim that a move from national towards international media regulation is desirable and feasible, Flew et al. (2019) argue it is a utopian dream: achieving regulatory harmonization by electing a representative nation to facilitate rules agreed upon by all sovereign states is impossible, since many nations aspire to impose their own standards on the entire world and so dominate the global media industry. Hence, audiences are concerned that internet fragmentation will produce a "splinternet", in which users have divergent internet experiences and lose opportunities for transnational communication (Flew et al., 2019).
Civil Society Organization (CSO) regulation: soft, non-mandatory regulation
On the recent media agenda, platforms consult non-government organizations (NGOs), also known as civil society organizations (CSOs), more frequently and broadly. These organizations are composed of voluntary citizen groups that address issues in the media field in support of the public interest. Gathering people to participate in media governance can enhance citizens' awareness of their right to personal data protection. Moreover, some participants may themselves be victims of online abuse, so their experiences can deepen the understanding of media scandals and help make regulation more effective and specific. Nevertheless, in practice, their voices are vulnerable, amounting only to individual advocacy without the enforcement power to implement change.
In 2016, Twitter brought together 40 NGOs to form the Trust and Safety Council, which proposes suggestions for addressing online harassment, bullying, and related mental health issues (Flew, 2018). While such work is no doubt significant and valuable, there is no indication of whether the council's activities, algorithms, and moderation principles will be made public; indeed, its operational mechanisms remain a black box. These challenges support Flew et al.'s (2021) position that "civil society input was relatively limited" (p. 128).
Conclusion
The eruption of the techlash has revolutionized media governance, moving it from platform-dominated regulation to a convergence of stakeholders including nations, activists, and organizations, whose numbers continue to grow. This article's intention is to make platforms realize that co-regulatory management is imperative, obligating them to meet public expectations of accountability in a transparent manner.
References
Bruns, A. (2019). Filter bubble. Internet Policy Review, 8(4). https://doi.org/10.14763/2019.4.1426
Flew, T. (2018). Platforms on trial. Intermedia, 46(2), 24–29. https://eprints.qut.edu.au/120461/
Flew, T., Gillett, R., Martin, F., & Sunman, L. (2021). Return of the regulatory state: A stakeholder analysis of Australia's Digital Platforms Inquiry and online news policy. The Information Society, 37(2), 128–145. https://doi.org/10.1080/01972243.2020.1870597
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2017). Governance by and through platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications. https://ebookcentral-proquest-com.ezproxy.library.sydney.edu.au
Gillespie, T. (2018). Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media. Yale University Press. https://doi.org/10.12987/9780300235029-001
Massanari, A. (2017). Gamergate and The Fappening: How Reddit's algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Pesce, M. (2017). The last days of reality. Meanjin Papers, 76(4), 66–81. https://search-informit-org.ezproxy.library.sydney.edu.au/doi/10.3316/informit.284425064291885
WIRED. (2020). Islamic State terrorist propaganda is going viral on Facebook. https://www.wired.co.uk/article/islamic-state-terrorism-facebook