Content moderation: some attempts and disputes

Assignment 2

“…platforms find that they must serve as setters of norms, interpreters of laws…adjudicators of disputes, and enforcers of whatever rules they choose to establish…they now find themselves its custodians.” (Gillespie, 2018, p.5)

 

In recent years, digital platforms have grown rapidly, and their functions have become both more specialised and more wide-ranging. As their reach has broadened, growing user bases and the sheer volume of content have added to platforms’ responsibilities, since they host both the production and the spread of harmful information. At the same time, platforms can collect users’ data and activity patterns, which gives them an opportunity to deal with harmful posts. Platforms are therefore expected to manage online users and regulate bad content (Gillespie, 2017). To understand the current state of content moderation, this article first explores the concept, then examines the disputes that arise as platforms attempt to deal with harmful content, and finally discusses the question of government control.

Social Media Colours

“Social Media Colours” by TT Marketing is licensed under CC BY 2.0

 

Digital platforms and content moderation

Content moderation is a form of online regulation that determines, through verification and examination, which user content may be published on social media and networks. Moderation is not only a commodity that digital platforms provide; it also constitutes the platform as both a tool and a cultural phenomenon. In other words, moderation helps maintain a harmonious online environment: it protects the image and brand of platform companies and benefits the users who produce and consume content. This appears to be a service to users, but it is largely driven by commercial interests. The existence of moderation is necessary, defining, and constitutional (Gillespie, 2018). Therefore, although platform operators are reluctant to take on the complexities of content management, preferring to appeal to community and user autonomy, its importance forces them to become custodians (Gillespie, 2018). Moderation policies must honour commitments to users who hold diverse values while also protecting the platform’s interests. Platforms find this process complicated and difficult because they cannot serve both sides well: if moderation is too strict or too lax, it produces negative outcomes and public controversy.

 

Attempts and disputes in content moderation

As noted above, the process of content moderation is complex. Moderation policies must satisfy multiple requirements, so platforms need unified criteria for judging content. Human labour is also essential, working alongside algorithms to examine content. As these attempts are implemented, however, standards diverge and the human workforce is consumed and worn down. The online environment gives users an open space in which to talk, but it also produces chaotic debate among them.

Judgement criteria

The basic act of content moderation is constructing a platform’s own judgement criteria. These criteria aim not only to counter the worst kinds of content but also to avoid banning valuable content, so a complete moderation policy must establish a mechanism that can handle both extremes. The policy must also, of course, comply with national law and respect cultural context. Most social platforms have their own terms restricting what can be published, and to some degree this deters harmful speech from uninformed users. Nonetheless, problems arise. First, people break rules they know. Inevitably, some users are aware of the restrictions on, for example, pornographic and violent material, yet still post such content. If the policy alone does not work, the platform risks descending into chaos again, so further verification by human reviewers and algorithms is required.

Second is the problem of content classification. Intelligent algorithms can check content automatically and intercept harmful material, which saves a great deal of time and effort. This works well for obvious violations, but classification can also block content inadvertently. For example, some algorithms have filtered content about homosexuality by classing it as pornographic (Gillespie, 2017). A simple sketch of how such over-blocking can arise is given below.
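To illustrate, here is a minimal, hypothetical sketch of a crude keyword-based filter. It is not any platform’s actual system, and it is far simpler than the machine-learned classifiers Gillespie describes; the blocklist and the example posts are invented for illustration. It shows how a blunt rule that catches a genuine violation can also block a legitimate post that merely mentions a restricted word.

# Minimal sketch of a keyword-based content filter, illustrating how
# blunt automated classification can over-block. The blocklist and the
# example posts below are invented for illustration only.

BLOCKLIST = {"porn", "gore", "beheading"}  # hypothetical banned terms

def is_blocked(post: str) -> bool:
    """Flag a post if any blocklisted term appears anywhere in its text."""
    text = post.lower()
    return any(term in text for term in BLOCKLIST)

posts = [
    "Free porn download links here",             # clear violation: correctly blocked
    "Support group for LGBT youth mental health", # harmless: passes
    "Report on new anti-pornography legislation", # news item: wrongly blocked ("porn" appears as a substring)
]

for p in posts:
    print(is_blocked(p), "-", p)

The third post is flagged even though it discusses legislation rather than pornography, which mirrors, in miniature, the way context-blind classification can suppress legitimate speech.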

Finally, cultural difference is particularly important for transnational platforms, meaning that moderation criteria must suit multiple cultural backgrounds. Platforms need to block content that is offensive or illegal both at home and abroad, so as not to inflame domestic or foreign publics (Gillespie, 2017).

In short, digital platforms want to regulate users’ speech through their own policies. In the process, users who break the rules, the biases of algorithms, and the need to accommodate foreign cultures all make content moderation complex. Although platforms try to present a free and fair image to users, restrictive moderation sits uneasily with that promise. This tension between freedom and constraint, and the way the criteria are drawn up, is provoking dissatisfaction among some users.

Labor and injury

Using human labour in content moderation is a way of closing the algorithm’s loopholes. When users see posts that disturb them, they can flag the content, and content moderators then review and remove it in response to those complaints (Roberts, 2019). To protect the moderation process and keep internal information from benefiting competitors, moderators who work directly with the platforms are required to sign confidentiality agreements (Roberts, 2019). The work is not easy. As a BBC story shows, moderators are important to Facebook: they watch and screen horrific material such as child pornography and animal abuse (BBC News, 2018). They need strong cognitive ability and cultural literacy to judge complex, problematic content (Roberts, 2019). Yet these high demands are not matched by the rewards: moderators are poorly paid, and constant exposure to harmful content leaves them mentally fragile.

 

Should the government play a greater role in content control?

Because of differing national political systems, countries currently intervene to different degrees. Chinese platforms, for example, cooperate with the government: when illegal or harmful information appears, the government can compel its deletion. The Qinglang policy is a network clean-up campaign in which the Chinese government and internet institutions ban illegal accounts and tackle harmful cultural phenomena and violations, from rectifying whole sectors to prohibiting particular words. One target, for instance, was the obsessive celebrity fan culture that swept Chinese TikTok and Weibo. Such management is mandatory but effective; however, it largely empties free speech of meaning. When content threatens public order and produces the worst outcomes, the government should step in to help. But if the government manages too much, both digital platforms and content producers are restricted; in effect, this constrains the development of platforms and imprisons producers’ creativity.

Weibo.com Logo

“Weibo.com Logo” by bfishadow is licensed under CC BY 2.0

 

Conclusion

In general, digital platforms use content moderation to manage the online environment. They make policies to limit content and use algorithms and human labour to enforce regulation and head off more complex problems. Platforms strive to keep their promise of freedom to users, but trouble and disagreement are hard to avoid. Judging content involves many considerations, such as cultural background and the category of information. Moderation policy cannot restrain everyone, so moderators must review the remaining content, which takes a toll on their mental health. Government power can rein in stubborn users and terrible material, yet excessive interference is not necessarily a good thing. How to moderate the online environment while allowing it to develop is still being explored; perhaps one day people will find a more appropriate way to handle it.

 

Reference list

BBC News (2018). It’s the worst job and no-one cares [Video]. Retrieved from https://www.youtube.com/watch?v=yK86Rzo_iHA&list=WL&index=2&t=115s

CGTN (2021). Chinese social media platforms announce to rectify harmful financial content. Retrieved October 17, 2021, from https://news.cgtn.com/news/2021-08-28/Chinese-social-media-platforms-to-rectify-harmful-financial-content-13677ANg96o/index.html

Gillespie, T. (2017). Governance by and through Platforms. In J. Burgess, A. Marwick & T. Poell (Eds.), The SAGE Handbook of Social Media (pp. 254-278). London: SAGE. Retrieved from https://ebookcentral-proquest-com.ezproxy.library.sydney.edu.au

Gillespie, T. (2018). All platforms moderate. In Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media (pp. 1-23). Yale University Press. doi: 10.12987/9780300235029

Massanari, A. (2017). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. doi: 10.1177/1461444815608807

Roberts, S. T. (2019). Understanding Commercial Content Moderation. In Behind the Screen: Content Moderation in the Shadows of Social Media (pp. 33-72). New Haven: Yale University Press. doi: 10.12987/9780300245318

 

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.