Content Moderation in Digital Platforms

Content Moderation Strategies for Social Media by Jessica Rangel is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at https://mention.com/en/blog/content-moderation-strategies-for-social-media/.

 

Introduction

In today’s high-speed Internet era, users are no longer passive recipients of online content; with the maturity of the technology, they have gradually become its active creators. Content moderation has been proposed as a way to balance the relationship between the Internet and its users. As Papakyriakopoulos, Serrano and Hegelich (2020) comment, content moderation is the practice of monitoring user-generated submissions and applying a pre-determined set of rules and guidelines to them, in order to determine whether a given communication is permissible. Unmonitored two-way interactions often turn offensive, so brands that offer services or apps featuring two-way interactions must be vigilant, especially in preventing hate speech. This essay highlights the attempts that have been made to implement content moderation and the controversies resulting from them, and analyzes whether the government should play an integral role in enforcing content moderation restrictions on social media platforms.
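To make the definition above concrete, the sketch below routes a user submission through a small rule set before deciding whether the communication is permissible. It is a minimal illustration only: the rule names, terms and thresholds are invented for this example and are not drawn from any real platform’s policy.

```python
# Minimal sketch of rule-based content moderation. The rule set below is
# hypothetical; real platforms combine many more signals (classifiers,
# user reports, account history) before acting on a submission.

BANNED_TERMS = {"badword1", "badword2"}   # placeholder hate-speech terms
MAX_LINKS = 3                             # invented spam heuristic

def moderate(submission: str) -> str:
    """Return 'remove', 'review', or 'allow' for one user submission."""
    words = set(submission.lower().split())
    if words & BANNED_TERMS:
        return "remove"                   # clear breach of the guidelines
    if submission.count("http") > MAX_LINKS:
        return "review"                   # possible spam; escalate to a human
    return "allow"

print(moderate("hello world"))            # -> allow
```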

 

Issues Arising from Content Moderation on Digital Platforms

On social media platforms such as Facebook or Twitter, content moderation is both operationally and technically challenging. Madio and Quinn (2021) argue that no matter how many employees work on the problem, offenders will always outnumber moderators, leaving tech organizations struggling to contain offensive and harmful behavior on their platforms.

“Automotive Social Media Marketing” by socialautomotive is licensed under CC BY 2.0

This scale, combined with the platforms’ quest to maintain software margins, means that areas such as moderation are under-invested. Content is typically served to moderators in a disjointed manner: they work row by row, jumping among items of varying type and severity, which makes it highly difficult to identify emerging patterns. This workflow is ineffective even for companies such as Google or Facebook. Moreover, since content moderation means monitoring submissions and applying pre-determined rule sets, a disjointed queue also makes it harder to understand the various breaches taking place on a platform and to resolve them effectively (Haimson et al., 2021).
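As a loose illustration of the queueing problem just described, the snippet below groups a review queue by content type and orders items by severity, so a moderator sees related items together instead of jumping row by row. The field names and severity scale are hypothetical, not any platform’s real schema.

```python
# Toy illustration: instead of serving items strictly row by row, group
# them by content type and order by descending severity within each group.

from itertools import groupby

queue = [
    {"id": 1, "type": "spam", "severity": 2},
    {"id": 2, "type": "hate", "severity": 5},
    {"id": 3, "type": "spam", "severity": 1},
    {"id": 4, "type": "hate", "severity": 4},
]

# Sort by type, then by descending severity within each type.
queue.sort(key=lambda item: (item["type"], -item["severity"]))
for content_type, items in groupby(queue, key=lambda item: item["type"]):
    print(content_type, [item["id"] for item in items])
# hate [2, 4]
# spam [1, 3]
```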

Additionally, as Gorwa, Binns and Katzenbach (2020) discuss, content moderation often conflicts with freedom of speech. Social media platforms cannot afford to be seen as digital overlords censoring people or exhibiting bias toward particular political parties or viewpoints; with this in mind, moderators must adhere to a strict set of guidelines. The majority of content that people complain about on social media falls below the threshold that would trigger platform moderation such as account suspensions or bans (Gillespie, 2020). Most problematic content instead calls for user-level interventions such as unfollowing, muting or blocking; only extreme content requires platform-level moderation and account bans. Even so, proper content moderation can, to some extent, better regulate social platforms.
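The two-tier response described above can be sketched as a simple threshold check: content below an invented platform threshold is left to user-level tools, while only extreme content triggers platform enforcement. The severity scale and threshold value are assumptions made for this illustration.

```python
# Minimal sketch of a two-tier moderation response. The 0-10 severity
# scale and the threshold are hypothetical, chosen only to illustrate the
# split between user-level tools and platform enforcement.

PLATFORM_THRESHOLD = 8   # invented cut-off on a 0-10 scale

def recommend_action(severity: int) -> str:
    if severity >= PLATFORM_THRESHOLD:
        return "platform enforcement (removal, suspension, or ban)"
    if severity > 0:
        return "user-level tools (unfollow, mute, or block)"
    return "no action"

for s in (0, 4, 9):
    print(s, "->", recommend_action(s))
```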

 

Analysis of Attempts to Implement Content Moderation and the Resulting Controversies

There are situations in which unmoderated content published in real time exposes a brand to offensive material. Social media platforms are designed first for engagement and then for revenue. Unfortunately, the most engaging content tends to be controversial, which means the platforms’ algorithms often end up amplifying it (Gerrard & Thornham, 2020). One illustrative controversy concerns Facebook’s attempts to fix this problem.


As Gerrard (2020) discusses, in 2018 the company announced an overhaul of its algorithm to optimize for “meaningful social interactions”. However, algorithms, user culture and platform features are difficult to change overnight; these misaligned incentives limited the overhaul’s effectiveness and made content moderation hard to implement. The incident suggests that users need more control, since social media platforms were designed first to maximize user engagement (Ganesh & Bright, 2020).

An accidental byproduct of this design is a polarizing environment. Yet a vast proportion of controversial content is subjective and personal in nature, which raises the question of why users cannot play a more prominent role in moderating their own social media feeds. Platforms should continue to invest heavily in moderation to ensure that genuinely dangerous content is dealt with, but by also adopting user-led moderation they can deliver a healthier and happier social media environment (Banchik, 2021), one in which innocent users are not unexpectedly victimized and yet no one’s opinions are unduly suppressed.
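A toy sketch of what user-led moderation could look like follows: each user filters their own feed against a personal mute list, so subjective judgements stay with the individual while the platform itself removes nothing. The data shapes and function name are invented for illustration.

```python
# Toy sketch of user-led moderation: the user's own mute lists decide what
# they see; nothing is deleted platform-wide. Field names are hypothetical.

def filter_feed(posts, muted_authors, muted_words):
    """Drop posts the user has chosen not to see."""
    visible = []
    for post in posts:
        if post["author"] in muted_authors:
            continue
        if any(word in post["text"].lower() for word in muted_words):
            continue
        visible.append(post)
    return visible

feed = [
    {"author": "alice", "text": "Lovely weather today"},
    {"author": "bob", "text": "Hot take: pineapple pizza is elite"},
]
print(filter_feed(feed, muted_authors={"bob"}, muted_words={"politics"}))
# -> only alice's post remains
```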

 

Role of Government in Enforcing Content Moderation Restrictions on Social Media

Content moderation, it seems, is eating the world. Platform rule sets are exploding, services are peppered with labels, and millions of users are given the boot in regular fell swoops (Banchik, 2021). No platform is immune from demands that it step in and impose guardrails on user-generated content. Many conservatives argue that Facebook and Google are monopolies seeking to restrict conservative speech (Baker, Wade & Walsh, 2020). On the other side, some on the left complain that social media platforms fostered the election of Trump in 2016 and the violence in Charlottesville in 2017.

As opined by Baker, Wade and Walsh (2020), many on both sides agree that the government should actively regulate the moderation of social media platforms to attain fairness, balance and other values. The government, however, has long been restrained from forcing private property owners to abide by the First Amendment. To justify intervention, the government would have to achieve the values in question without significant cost to other essential values; given that the values at stake on social media are so fundamental, this second part of the public-interest argument faces a steep climb.

 

Conclusion

In conclusion, content moderation is responsible for the user-generated content submitted to online platforms. Content moderators’ job is to make sure that items are placed in the right category, are free from scams, do not include illegal material, and much more, though problems and confusion persist on some social media platforms. User privacy can also be compromised in the process, which undermines the effectiveness and simplicity of these platforms. As for the government’s role in restricting content moderation, government power is unlikely to attain the public interest that advocates of regulation cite. Finally, in some cases regulation may contain and attain only some of those values, and at an unacceptable price in other rights and values.

 

Reference list

 

Baker, S. A., Wade, M., & Walsh, M. J. (2020). The challenges of responding to misinformation during a pandemic: Content moderation and the limitations of the concept of harm. Media International Australia, 177(1), 103-107. https://doi.org/10.1177/1329878X20951301

Banchik, A. V. (2021). Disappearing acts: Content moderation and emergent practices to preserve at-risk human rights-related content. New Media & Society, 23(6), 1527-1544. https://doi.org/10.1177/1461444820912724

Ganesh, B., & Bright, J. (2020). Countering extremists on social media: Challenges for strategic communication and content moderation. Policy & Internet. https://doi.org/10.1002/poi3.236

Gerrard, Y. (2020). Social media content moderation: Six opportunities for feminist intervention. Feminist Media Studies, 20(5), 748-751. https://doi.org/10.1080/14680777.2020.1783807

Gerrard, Y., & Thornham, H. (2020). Content moderation: Social media’s sexist assemblages. New Media & Society, 22(7), 1266-1286. https://doi.org/10.1177/1461444820912540

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945. https://doi.org/10.1177/2053951719897945

Haimson, O. L., Delmonaco, D., Nie, P., & Wegner, A. (2021). Disproportionate removals and differing content moderation experiences for conservative, transgender, and Black social media users: Marginalization and moderation gray areas. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2). https://deepblue.lib.umich.edu/bitstream/handle/2027.42/169587/content_moderation_CSCW_2021_camera_ready.pdf?sequence=1

Madio, L., & Quinn, M. (2021). Content moderation and advertising in social media platforms. SSRN Working Paper No. 3551103. https://www.researchgate.net/publication/340050562_Content_moderation_and_advertising_in_social_media_platforms

Papakyriakopoulos, O., Serrano, J. C. M., & Hegelich, S. (2020). The spread of COVID-19 conspiracy theories on social media and the effect of content moderation. Harvard Kennedy School Misinformation Review. https://doi.org/10.37016/mr-2020-034

 

This work is licensed under a Creative Commons Attribution 4.0 International License.
