Content moderation of digital platforms


According to the Australian Competition and Consumer Commission (2019), the reach and importance of digital platforms, especially Google and Facebook, have increased significantly: approximately 19.2 million Australians use Google Search every month and 17.3 million visit Facebook. These platforms have become woven into everyday life, delivering news, information, and new trends daily, and users largely trust what social media brings them. In this way, platform content subtly shapes people’s ideas and values. With this flood of information and content, numerous issues related to security and privacy have also emerged. Flew, Martin, and Suzor (2019) point out that platforms have been severely criticized for failing to curb online hate speech, abuse, and harassment, or to prevent terrorist propaganda. The ACCC (2019) notes that people rarely reflect on the global responsibilities of digital platforms. Because digital platforms have a profound impact on the media market and society, content moderation has become extremely important.

Challenges facing content moderation

Picard and Pickard (2017) point out that many existing policies cannot cope with rapid technological, economic, political, and social change because they are not clearly grounded in policy principles. As developments and trends shift, it is hard to update moderation standards at the same pace. Meanwhile, the impact-driven nature of platform decisions often leads to inconsistent and sometimes contradictory content standards. The European Union’s hate speech code of conduct establishes some common ground for drafting speech rules across platforms, but there is almost no consistency in implementation (Flew, Martin & Suzor, 2019). Platforms know how to define clearly illegal speech, such as terrorism-related content or child sexual abuse material; for speech outside these categories, however, they lack clear definitions and boundaries (Wardle, 2019).

For example, in 2019 Steam listed a game called “Rape Day” that outraged the public. The game has no moral bottom line in its theme: the player takes the role of a perverted serial killer who murders women and commits sexual violence during a zombie apocalypse. At first, Steam refused to remove the game, a decision closely tied to the content review policy Valve introduced in June 2018. Under that standard, a game is taken down only if it violates the law or maliciously provokes controversy. Because the platform never clearly defined what counts as maliciously provoking controversy, games on other sensitive themes have also made it onto the store. Steam also keeps its review rules opaque. Under these conditions, fair and impartial content review is almost impossible; companies rarely put ethical standards above profits, and moderation rules cannot satisfy everyone. This is why review standards and transparency matter so much.

However, because current content standards lack transparency, users are left vulnerable, especially marginalized groups. In its Community Standards Enforcement Report, Facebook disclosed that its algorithmic tools proactively detected 94.5% of hate speech before users reported it to the platform (Campbell & Singh, 2020). Nevertheless, Facebook did not disclose how accurate those detections were. The algorithm is trained on large amounts of data, and although Facebook operates hate speech classifiers in more than 40 languages worldwide, it cannot collect enough data in less widely used languages. Harmful content circulating in those languages therefore has to be flagged manually by users, who are usually the victims themselves. This places a disproportionate burden on groups that are already marginalized.
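The gap between a proactive-detection rate and a true success rate can be shown with a small worked example. The sketch below is purely illustrative: apart from the 94.5% proactive rate cited above, every figure is hypothetical, invented only to show that a system can find most of the hate speech it acts on before anyone reports it while still flagging many posts wrongly and missing others entirely.

```python
# Illustrative only: toy numbers showing why a proactive-detection rate
# (the share of actioned hate speech the system found before any user report)
# is not the same as an accuracy or success rate. All figures except the
# 94.5% proactive rate are hypothetical.

actioned_total = 10_000        # hate-speech posts the platform acted on
found_proactively = 9_450      # of those, found before any user report
proactive_rate = found_proactively / actioned_total
print(f"Proactive detection rate: {proactive_rate:.1%}")   # 94.5%

# What the proactive rate does not reveal:
flagged_by_classifier = 12_000   # everything the classifier flagged
correct_flags = 9_450            # flagged posts that really were hate speech
missed_entirely = 4_000          # hateful posts never flagged at all

precision = correct_flags / flagged_by_classifier
recall = correct_flags / (correct_flags + missed_entirely)
print(f"Precision: {precision:.1%}")   # share of flags that were correct (~78.8%)
print(f"Recall:    {recall:.1%}")      # share of all hate speech caught (~70.3%)
```

Under these toy numbers the headline figure looks impressive even though roughly one flag in five is wrong and nearly a third of the hate speech is never caught, which is exactly the information the report leaves out.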

Content review can be controversial

The traditional reliance on purely manual review can no longer meet the demands of the Internet age: human reviewers cannot accurately memorize and recognize the enormous volume of sensitive words, images, and other content. The Internet and technology are therefore needed to fill the gaps in review capacity and efficiency, which is why machine review was introduced. Machine review processes content through established rules or algorithms. However, machine review has its own problems: it cannot handle ideology, the direction of public opinion, and similar issues. Campbell and Singh (2020) cite the example of YouTube, which mistakenly deleted footage of violence in Syria published by human rights and monitoring organizations because the algorithm misclassified it as extremist propaganda. To a certain extent, these automated tools are limited in their ability to understand content. Content that requires contextual judgment and subjective interpretation is flagged and sent back into manual review, yet manual review demands an enormous amount of labor.
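To make the division of labor between machine and manual review concrete, here is a minimal sketch of a hybrid pipeline. It is a hypothetical illustration, not any platform’s actual system, and the word lists and thresholds are invented: a rule-based machine pass removes clearly violating content, allows clearly benign content, and routes ambiguous, context-dependent items into a human review queue.

```python
from dataclasses import dataclass

# Hypothetical word lists and thresholds, for illustration only.
BLOCKED_TERMS = {"terror_propaganda", "csam_marker"}   # stand-ins for clearly illegal content
SENSITIVE_TERMS = {"attack", "kill", "bomb"}           # context-dependent terms

AUTO_REMOVE_THRESHOLD = 0.9   # confident enough to act automatically
AUTO_ALLOW_THRESHOLD = 0.2    # confident enough to leave the post alone

@dataclass
class Decision:
    action: str    # "remove", "allow", or "manual_review"
    score: float
    reason: str

def machine_score(text: str) -> float:
    """Toy scoring rule standing in for a trained classifier."""
    words = set(text.lower().split())
    if words & BLOCKED_TERMS:
        return 1.0
    hits = len(words & SENSITIVE_TERMS)
    return min(1.0, hits * 0.4)   # more sensitive terms -> higher score

def moderate(text: str) -> Decision:
    score = machine_score(text)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", score, "clearly violating under the rules")
    if score <= AUTO_ALLOW_THRESHOLD:
        return Decision("allow", score, "no rule triggered")
    # Ambiguous, context-dependent content goes to humans -- this queue is
    # what makes manual review so labor-intensive at scale.
    return Decision("manual_review", score, "needs contextual judgment")

if __name__ == "__main__":
    posts = [
        "holiday photos from the weekend",
        "documentary footage of an attack on civilians",   # the hard, context-dependent case
        "terror_propaganda recruitment message",
    ]
    for post in posts:
        d = moderate(post)
        print(f"{d.action:<13} score={d.score:.1f}  {post!r}")
```

In this toy setup the footage of an attack ends up in the human queue rather than being deleted outright, which is the kind of contextual case the YouTube example above shows automated tools getting wrong when no such escalation happens.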

Furthermore, content moderation restricts users’ free speech, which raises human rights questions. According to Article 19 of the International Covenant on Civil and Political Rights, imparting information and ideas is a human right, and citizens and users should enjoy freedom of expression. Picard and Pickard argue that regulation should focus on overseeing the Internet and platform companies, rather than excessively restricting citizens’ ability to receive and impart information and ideas. As they step up moderation, digital platforms should weigh sensitive topics against free speech and public communication. In addition, the sensitive content and vocabulary restricted by algorithms may prevent users from calling for help in time. The scope and intensity of review should therefore be adjusted appropriately.

Government involvement and its intensity

As digital platforms wield greater power and influence than in the past, governments are actively challenging that power and looking for strategies to regulate platform content and operations (Flew, Martin & Suzor, 2019). Governments also need to define their own role in enforcing content review requirements on social media. Schlesinger (2020) argues that the public sphere is always constructed according to prevailing power relations: the dominant political order, economy, culture, and technology supply the definitions and systems within which content review takes place. In the current crisis of capitalist democracies, the politicization of communication systems is increasingly pervasive. Although governments are paying more attention to oversight, legislation cannot take effect immediately because national policies frequently conflict.

Moreover, excessive government involvement in content review may compromise the fairness of enforcement. Government participation could weaken the voice of the people and amplify the content the government wants people to hear. Especially since the 2016 U.S. election and the Cambridge Analytica scandal, concerns have grown about platforms’ role in spreading “fake news” and their suspected manipulation of electoral politics (Flew, Martin & Suzor, 2019). US President Donald Trump, who at one point had 72.6 million followers, posted content on Twitter ranging from political conspiracy theories to defamation of his critics. Although Twitter’s policy clearly prohibits threatening or inciting violence against any group of people, Trump’s tweets threatening North Korea with nuclear war were never deleted. In other words, if the government is too involved, content review may become biased by status and power.

Gorwa (2019) further argues that excessive intervention raises many potential challenges, including the relative novelty of platforms’ business models and concerns about stifling future innovation. Although government involvement in content review may improve its efficiency and quality, it would, to some extent, smother the creativity of platforms and development companies. Therefore, government regulators and platforms’ own regulatory bodies should both play a role in content review, and the government should not hold too much power.

 

References

Australian Competition and Consumer Commission. (2019). Digital Platforms Inquiry: Final Report (pp. 4–38).

Campbell, E., & Singh, S. (2020). The flaws in the content moderation system: The Middle East case study. Retrieved from https://www.mei.edu/publications/flaws-content-moderation-system-middle-east-case-study

Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. doi:10.1386/jdmp.10.1.33_1

Gorwa, R. (2019). The platform governance triangle: Conceptualising the informal regulation of online content. Internet Policy Review, 8(2). doi:10.14763/2019.2.1407

Picard, R., & Pickard, V. (2017). Essential Principles for Contemporary Media and Communications Policymaking. Retrieved from https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-11/Essential%20Principles%20for%20Contemporary%20Media%20and%20Communications%20Policymaking.pdf

Schlesinger, P. (2020). After the post-public sphere. Media, Culture & Society, 42, 1545–1563. doi:10.1177/0163443720948003

Wardle, C. (2019). Challenges of Content Moderation: Define “Harmful Content.” Retrieved from https://www.institutmontaigne.org/en/blog/challenges-content-moderation-define-harmful-content