Content Moderation in Social Media Platforms

Issues, Implementation, Controversies and Government Involvement in Social Media Content Moderation

Introduction

Content moderation is the practice of monitoring user-generated submissions and applying a pre-determined set of guidelines and rules to them in order to decide whether a given piece of communication is permissible. Previous studies have noted that the work of content moderators is often portrayed negatively. More broadly, content moderation is the process by which an online platform screens and reviews user-generated media to decide whether it should be displayed on the platform, depending on platform-specific rules and guidelines. This essay discusses content moderation and the issues it raises for digital platforms. It then examines attempts to implement content moderation, along with the controversies these attempts have generated (Myers West, 2018). Finally, it considers whether the government should have a greater role in enforcing content moderation restrictions on social media.

 

Content moderation and issues

“#ISRU11 – Children WILL see things that unsettle them” by OllieBray is licensed under CC BY-NC-SA 2.0

Content moderation is a common practice across online platforms that rely heavily on user-generated content, such as social media platforms, online marketplaces, dating sites, forums and communities, and the sharing economy. There is an ongoing debate about content moderation, and various problems arise with it on social media. It has been argued that social media networks swiftly spread news to billions of people across the world (Gerrard, 2018). According to the Pew Research Center, 72 percent of U.S. adults used at least one social media site in 2019, with most visiting at least once a week. Some members of Congress are worried about the dissemination of disinformation (i.e., false or misleading content) on social networking sites and are examining how the corporations that run these platforms can address it. Others are worried that the content moderation techniques used by social media providers may stifle free expression. Both sides have focused on Section 230 of the Communications Act of 1934 (47 U.S.C. § 230), enacted as part of the Communications Decency Act of 1996, which shields providers and users of “interactive computer services” from liability for publishing, deleting, or limiting access to another’s material (Gorwa, Binns & Katzenbach, 2020).

On social networking platforms, users may establish individual accounts, form connections, generate content by publishing text, photos, or videos, and engage with material by reacting to it and sharing it with others. Operators of social media platforms may choose to filter the material submitted on their platforms, permitting some postings but not others (Ganesh & Bright, 2020). They restrict users from publishing content that violates copyright law or encourages criminal behaviour, and some have policies prohibiting undesirable content (for example, some violent or sexual content) or content that does not fit the community or service they intend to provide (Gillespie, 2020). Because they are private enterprises, social media businesses can decide what information is permitted on their platforms, and their content moderation choices may be protected by the First Amendment. However, some users are concerned that operators’ content moderation practices give them too much power over what information is allowed on their services, with some critics claiming that operators are infringing on users’ First Amendment rights by suppressing speech (Gerrard, 2018).

 

Implementation and Controversies of content moderation

“Social Media” by mgysler is licensed under CC BY-NC-SA 2.0

Moreover, there have been many attempts to implement content moderation over the years, drawing on several techniques, which are discussed below:

Firstly, policy. The establishment of written and informal policies is frequently the first step in content moderation. Policy documents commonly include broad value declarations (e.g., pledges to free expression) and bans on particular kinds of material and conduct (Fagan, 2020).

Secondly, automation. Moderating material on large internet networks is a challenge and requires cost-effective solutions. The biggest platforms invest heavily in automated systems that try to enforce content moderation standards at scale. More sophisticated approaches employ machine learning: a large corpus of previously moderated material is used to train a prediction model, which is then used to determine whether new content resembles previously identified prohibited content. The implication of the automation technique is that it proves accurate and reliable only insofar as it is trained on datasets reflective of the types of speech and content it is meant to evaluate (Fagan, 2020).
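To illustrate the machine-learning approach described above, the following is a minimal, hypothetical sketch in Python: it assumes a small corpus of posts already labelled by moderators and uses a TF-IDF text classifier from scikit-learn to score a new submission. The example posts, the 0.5 threshold, and the choice of model are illustrative assumptions, not a description of any platform’s actual system.

```python
# Sketch of machine-learning moderation: a classifier is trained on posts
# that human moderators have already labelled, then used to score new
# submissions. All data, labels and the threshold are illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Previously moderated material: 1 = removed by moderators, 0 = allowed.
past_posts = [
    "buy followers now, limited offer!!!",
    "great discussion, thanks for sharing",
    "click this link to claim your prize",
    "does anyone have sources for this claim?",
]
past_labels = [1, 0, 1, 0]

# TF-IDF features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_posts, past_labels)

# Score a new submission; posts above a policy-defined threshold are
# flagged for removal or routed to a human reviewer.
new_post = "claim your prize by clicking this link"
prob_prohibited = model.predict_proba([new_post])[0][1]
if prob_prohibited > 0.5:
    print("flag for review / removal:", prob_prohibited)
else:
    print("allow:", prob_prohibited)
```

In practice, platforms train such models on far larger corpora and combine them with human review; the sketch only shows the basic pipeline of learning from previously moderated content and scoring new content against it.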

“Internet! 243/365” by Skley is licensed under CC BY-ND 2.0

Thirdly, crowdsourcing. One way to moderate at scale and at minimal cost is to depend on a platform’s own users to regulate material. Reddit, for instance, has a hierarchical moderation system in which the platform establishes and enforces content standards across all discussion forums, while individual subreddits are governed by volunteer human moderators drawn from the audience who develop and enforce subreddit-specific standards. Networks like Twitter and Facebook similarly give users ways to “like” and “heart” material, as well as systems for reporting content they think is harmful or in breach of the platforms’ standards. The drawback of crowdsourcing is that it can be risky: the volunteers who filter damaging content for the companies are often not trained to perform the task in an objective way (Dias Oliva, 2020).
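The reporting mechanism described above can be illustrated with a small, hypothetical sketch: user reports on a post are counted, and once they cross a platform-defined threshold the post is escalated to human moderators. The threshold value, function names, and data structures are illustrative assumptions rather than any platform’s real implementation.

```python
# Sketch of crowdsourced flagging: each user report is recorded, and a post
# is queued for human review once enough distinct users have reported it.
from collections import defaultdict

REPORT_THRESHOLD = 5          # illustrative value, set by platform policy
reports = defaultdict(set)    # post_id -> set of user_ids who reported it
review_queue = []             # posts awaiting human moderator review


def report_post(post_id: str, user_id: str) -> None:
    """Record one user's report; duplicate reports from the same user are ignored."""
    reports[post_id].add(user_id)
    if len(reports[post_id]) >= REPORT_THRESHOLD and post_id not in review_queue:
        review_queue.append(post_id)  # escalate to human moderators


for u in ["u1", "u2", "u3", "u4", "u5"]:
    report_post("post-42", u)
print(review_queue)  # ['post-42']
```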

“internet” by bbtkull is licensed under CC BY-NC-SA 2.0

The final approach to content moderation is professional human review. This approach draws on the work of tens of thousands of professional moderators who evaluate content behind the scenes. Facebook, for example, announced plans to employ 15,000 human moderators by 2020 in the aftermath of content moderation controversies. Professional moderators, unlike crowdsourced ones, can be trained to improve the consistency of their decisions, but this work may be hampered by frequent changes to the platforms’ content moderation standards. The burden of this technique falls directly on the human workforce: because moderators are well trained, their decisions show uniformity and consistency even when the task is complicated (Hilscher et al., 2021).

 

Role of Government

“Social Media” by magicatwork is licensed under CC BY 2.0

In addition, many studies have debated the role of government in enforcing content moderation restrictions on social media, and it has been argued that the government should not regulate or control content moderation on social media. Web searches, according to Donald Trump, are biased against conservatives. Some conservatives believe that Google and Facebook are monopolies attempting to restrict conservative speech. Some on the left, on the other hand, claim that major social media platforms helped secure Trump’s 2016 victory and contributed to the violence in Charlottesville in 2017. Some groups on both sides believe that the government should actively regulate social media platform moderation to achieve fairness, balance, or other objectives (Samples, 2019). The debate concludes, however, that government authorities may try to persuade technology firms to suppress unpopular speech, directly or indirectly. Targets of such government censorship would have few options other than to engage in political action. The tech businesses, which are among the most innovative and lucrative in the United States, would then be pulled into the quagmire of politicized and divisive politics. To keep technology from being politicized, private content moderators must be able to disregard threats to their independence from government authorities, whether explicit or implicit (Langvardt, 2017).

 

Conclusion

From the above essay, it can be concluded that content moderation is the practice of monitoring user-generated submissions and applying a pre-determined set of rules and guidelines to them to decide whether the communication is permissible. There is an ongoing debate about content moderation, and various problems arise with it on social media. It has been argued that social media networks swiftly spread news to billions of people across the world, news that is sometimes untrue and harms the general public. It has also been argued that the government should not regulate and control content moderation on social media, since government authorities may try to persuade technology firms to suppress unpopular speech, directly or indirectly.

 

 

 

References

 

Dias Oliva, T. (2020). Content moderation technologies: Applying human rights standards to protect freedom of expression. Human Rights Law Review, 20(4), 607-640. https://doi.org/10.1093/hrlr/ngaa032

Fagan, F. (2020). Optimal social media content moderation and platform immunities. European Journal of Law and Economics, 50(3), 437-449. https://doi.org/10.1007/s10657-020-09653-7

Ganesh, B., & Bright, J. (2020). Countering extremists on social media: Challenges for strategic communication and content moderation. Policy & Internet. https://doi.org/10.1002/poi3.236

Gerrard, Y. (2018). Beyond the hashtag: Circumventing content moderation on social media. New Media & Society, 20(12), 4492-4511. https://doi.org/10.1177/1461444818776611

Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), 2053951720943234. https://doi.org/10.1177/2053951720943234

Gorwa, R., Binns, R., & Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1), 2053951719897945. https://doi.org/10.1177/2053951719897945

Langvardt, K. (2017). Regulating online content moderation. Georgetown Law Journal, 106, 1353. https://heinonline.org/HOL/LandingPage?handle=hein.journals/glj106&div=39&id=&page=

Myers West, S. (2018). Censored, suspended, shadowbanned: User interpretations of content moderation on social media platforms. New Media & Society, 20(11), 4366-4383. https://doi.org/10.1177/1461444818773059

Samples, J. (2019). Why the government should not regulate content moderation of social media. Cato Institute Policy Analysis, (865). https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3502843

 

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.