The Government Needs to Balance Content Moderation on Digital Platforms

"Social Media Cloud by Techndu" by Mark Kens is licensed under CC BY 2.0

As technology and the economy develop, the internet and all kinds of electronic devices have become commonplace for the general public, and social media has penetrated every aspect of people’s lives. These platforms provide an arena for discussion and debate, giving the public space to express their views and opinions on events of all kinds. Some people, however, rely on the internet to say whatever they wish without revealing their true identity, and without any thought of taking responsibility for what they say. In reality, most social media platforms already impose certain restrictions on what the public can publish, namely content moderation.

 

The diversity of users makes moderation standards hard to set

“Automotive Social Media Marketing” by socialautomotive is licensed under CC BY 2.0

Social media platforms bring more people into direct contact with each other, providing them with new opportunities to talk and interact with a wider range of people and organising them into an online public (Gillespie, 2018). Users from all walks of life, of all ages and of different classes, races and nationalities communicate on the same platform, so it is natural for them to hold different opinions, and even argue, about the same thing. Some content on the internet is offensive to some people but perfectly acceptable to others. Even with clear criteria, it is difficult for platforms to sift through millions of posts each week, one by one.

 

A famous and typical example is “Napalm Girl,” the 1972 Pulitzer Prize–winning photo by Associated Press photographer Nick Ut. The photograph shows several children fleeing a napalm attack down a deserted road, their faces in agony, with soldiers following in the distance. Prominent among them is a naked girl with napalm burns to her back, neck and arms. Norwegian journalist Tom Egeland included the photograph as an iconic and significant image of the war in his article looking back at the photographs that changed the history of war. The combination of underage nudity and pain in the image, however, led Facebook to remove the post (Gillespie, 2018). The incident sparked a heated debate about the need for, and limits of, content moderation.

 

Self-regulation through content moderation is an important and necessary part of how digital platforms maintain a positive and healthy internet environment. Commercial content moderation is an organised practice of screening user-generated content posted to internet sites, social media and other online channels. The review of user-generated content may occur before material is submitted for inclusion or distribution on a website, or after it has already been uploaded (Roberts, 2019).

 

Moderation on digital platforms is also divided between automated review and human review. In automated review, an electronic process checks content against set rules; for example, content containing certain keywords cannot be posted, or is hidden after it has been posted. Because it relies on keywords without context, automated review can wrongly block harmless content while letting genuinely harmful content through. A number of questions follow. Given that such programs and keyword lists exist, how are the keywords or topics chosen? In other words, what are the criteria for content review? Who should set these criteria? Should different platforms adopt the same set of criteria? And on what basis will users accept and comply with them?
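To make the context problem concrete, here is a minimal sketch of keyword-based filtering. The blocklist, the is_blocked function and the example posts are all hypothetical, not any real platform’s system; the second post shows a false positive and the third a miss, both caused by matching words without context.

```python
# Purely illustrative keyword filter: the blocklist and posts are hypothetical
# and do not represent any real platform's moderation system.

BLOCKED_KEYWORDS = {"attack", "bomb"}  # hypothetical blocklist


def is_blocked(post: str) -> bool:
    """Flag a post if any word matches a blocked keyword (no context used)."""
    words = (w.strip(".,!?'\"").lower() for w in post.split())
    return any(w in BLOCKED_KEYWORDS for w in words)


posts = [
    "We will attack the town at dawn",   # flagged, plausibly the intended case
    "This new album is the bomb!",       # flagged, but it is harmless slang
    "Their op-ed attacks the proposal",  # missed: 'attacks' is not 'attack'
]

for post in posts:
    print(is_blocked(post), "-", post)
```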

 

Human review is more accurate and humane than rigid automated review. On many high-traffic platforms, users submit a staggering amount of content. Some content lends itself to batch processing or other types of automated machine filtering, especially if the material already exists in a database of known bad material. But given the complexity of these processes and the many issues that must be weighed and balanced, the vast majority of social media content uploaded by users requires human intervention to be properly filtered, especially where video or images are concerned (Roberts, 2019). On the video platform Bilibili, for example, uploads must pass a human review in the background; a video is published only after the review is approved, and is returned to the uploader if it does not comply with the rules. The problem is that manual review is costly in labour and time, and makes the user experience slower and less convenient.
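As a rough sketch of how matching against a database of known bad material can work, the example below checks an upload’s exact cryptographic hash against a hypothetical hash set; production systems generally use perceptual hashing (such as PhotoDNA) so that re-encoded copies still match, which this simplified version does not attempt.

```python
# Simplified exact-hash lookup against a database of known bad material.
# The hash set below is a hypothetical example, seeded with the SHA-256 of
# the bytes b"test" purely for demonstration.

import hashlib

KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def needs_human_review(upload: bytes) -> bool:
    """Auto-remove exact matches; everything else is routed to a human reviewer."""
    digest = hashlib.sha256(upload).hexdigest()
    return digest not in KNOWN_BAD_HASHES


print(needs_human_review(b"test"))           # False: matches the known-bad example
print(needs_human_review(b"new cat video"))  # True: unknown, needs a human look
```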

 

Another area of human review is handling user complaints. Staff look at the reported content and determine whether it breaks the rules, for example rumours, insults or personal attacks. Once a complaint is verified as a genuine violation, the content is immediately removed or blocked. These tasks oscillate between the numbingly repetitive and the mundane, and long exposure to thousands of potentially violent, disturbing and, worst of all, psychologically damaging images and materials makes the mental wellbeing of the staff responsible for reviewing complaints worthy of attention and consideration (Roberts, 2019).

 

‘It’s the worst job and no-one cares’ – BBC Stories. Source: YouTube, https://www.youtube.com/watch?v=yK86Rzo_iHA

 

Content moderation on sensitive topics

 

In addition, on certain controversial topics, a platform’s moderation decisions can effectively steer public opinion. On Weibo, a popular social media platform in China, there are heavy restrictions on many thorny social issues, gender issues and especially feminist topics. Many users who have expressed radical views risk having their Weibo accounts forcibly closed, and topics on serious social issues are banned from further discussion. At the same time, some design decisions may implicitly encourage these spaces to become hotbeds of radicalism that hates women or other groups (Massanari, 2017). Moreover, because of the public’s general herd mentality, bloggers with huge followings and reach can steer public opinion around a piece of news or an event. If such bloggers steer opinion maliciously, the consequences for online violence are even more unpredictable. Distinguishing free speech from malicious steering is complex, and mistakes are easy to make.

 

On the other hand, Twitter, Facebook and Instagram banned the accounts of then US President Donald Trump one after another. Zuckerberg said it was too risky to let the president continue to use Facebook’s services, and the move came as the social media giant also ended a policy that had shielded politicians from some content review rules (“Facebook suspends Trump accounts for two years”, 2021). By their own judgment, these internet companies could silence Trump on their platforms and deprive him of his freedom of speech simply because they considered that he had violated their community rules. It is clear that digital platforms can, if they wish, control the expression and direction of political opinion on the internet through content moderation. The key question remains: where does this standard come from?

“Donald Trump Sr. at #FITN in Nashua, NH” by Michael Vadon is licensed under CC BY-SA 2.0

 

Perhaps the government should do more

 

Unlike traditional media, the internet opens institutions up to greater social accountability across many arenas, from everyday life to science (Dutton, 2009). In setting the moderation standards discussed above, the government, as the party with the power to control and enforce them, needs to strengthen its supervision and management of internet companies. There is also a need for some control over competition and restrictions between internet companies. In the internet ecosystem, platforms, as merchants, provide products for users, and users, as consumers, expect a good experience. The government then acts as a third party that curbs unhealthy competition between businesses and balances the degree of freedom between businesses and consumers, leading to an ecological balance and a mutually beneficial situation of shared progress and development. At the same time, government control must be proportionate: too much banning and management may breed a rebellious mindset among a public discontented with heavy-handed government.

 

As technology and people’s ideas continue to develop around the world, better platforms will be essential to the society of the future. For a better internet environment and communication atmosphere, the government should take more initiative to regulate, and to regulate moderately; self-regulation by users and internal moderation by platforms are also necessary.

 

 

Reference list

 

Dutton, W. H. (2009). The Fifth Estate Emerging through the Network of Networks. Prometheus (Saint Lucia, Brisbane, Qld.), 27(1), 1–15. https://doi.org/10.1080/08109020802657453

 

Facebook suspends Trump accounts for two years. (2021). BBC News. Retrieved from https://www.bbc.com/news/world-us-canada-57365628

 

Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. Yale University Press.

 

Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807

 

Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the Screen (pp. 33–72). Yale University Press. https://doi.org/10.12987/9780300245318-003

 

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.