Should digital platforms strengthen content moderation?
Introduction
In recent years, the rapid development of digital technology has changed the way we interact, share information, and communicate. As a result, how digital platforms should be supervised and moderated has become a pressing concern. These platforms play a vital role in providing a space for the free exchange of ideas and fostering community interaction. However, they also raise concerns about user data collection and the control of user-generated content. Should these platforms strengthen content moderation? And how do we balance content moderation, freedom of speech, and user privacy?
Definition of content moderation
Content moderation is the screening of content published on a digital platform. When user-generated content violates platform rules, the platform typically reminds the user, flags the content, or removes it. Content moderation aims to keep the platform safe and to protect its users.
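To make the remind/flag/remove escalation concrete, here is a minimal sketch in Python. The severity scale, thresholds, and action names are all invented for illustration; no real platform works exactly this way:

```python
from enum import Enum

class Action(Enum):
    REMIND = "remind the user of the rules"
    FLAG = "mark the content with a warning label"
    REMOVE = "delete the content"

def moderate(violation_severity: int) -> Action:
    # Thresholds are invented for illustration; real platforms apply
    # far more nuanced, policy-specific rules.
    if violation_severity >= 8:
        return Action.REMOVE
    if violation_severity >= 4:
        return Action.FLAG
    return Action.REMIND

print(moderate(9))  # Action.REMOVE
print(moderate(5))  # Action.FLAG
print(moderate(1))  # Action.REMIND
```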
The necessity of strengthening content moderation
One view holds that it is necessary to strengthen the content moderation of digital platforms because it controls content quality, maintains community safety, and improves the user experience. Because digital platforms give people a place of direct contact where they can speak freely, hate speech, obscene and profane content, and other harmful material spreads more visibly and rapidly (Gillespie, 2018).
A striking example of this is the Gamergate controversy. This misogynistic online movement, sparked by a blog post from Eron Gjoni, unleashed a torrent of hate speech and cyberbullying that ultimately escalated into more severe forms of harassment, including doxxing and death threats. Research shows that social media platforms such as YouTube and Facebook failed to provide corresponding oversight. Rather than alerting users to the hatred and violence behind the incident, these platforms’ algorithms encouraged hateful and extreme content, because they push related content to users based on their preferences and browsing habits: a user who repeatedly views hateful content will inadvertently be served more of it (Romano, 2020).
“File:Gamergate sentiment (Royal Park).jpg” by Cafuego (https://www.flickr.com/photos/cafuego/) is licensed under CC BY-SA 2.5.
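A toy model can illustrate the feedback loop Romano describes. The sketch below assumes a simple recommender that weights topics by past clicks; the topic labels, weights, and simulated click behaviour are all invented for illustration:

```python
import random
from collections import Counter

def recommend(history: Counter, topics: list) -> str:
    # Weight each topic by 1 plus the user's past clicks on it, so
    # whatever the user already engaged with is shown more often.
    weights = [1 + history[t] for t in topics]
    return random.choices(topics, weights=weights)[0]

topics = ["sports", "cooking", "extreme content"]
history = Counter()

# Simulate a user who clicks on everything they are shown.
for _ in range(50):
    shown = recommend(history, topics)
    history[shown] += 1

# A few early clicks on any one topic snowball into dominance of the feed.
print(history)
```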
Therefore, digital platforms must improve their algorithms and bolster content moderation to reduce the prevalence of fraudulent content, threats, harassment, and cyberbullying. This not only makes users safer as they navigate the internet but also fosters healthier online interactions.
In addition, strengthening content moderation is conducive to reducing the spread of misinformation, enhancing user trust, and promoting social stability. Misinformation is false or inaccurate information; it can distort memory, encourage prejudice, and influence public opinion and decision-making (Kaufman, 2023).
For example, in India, misinformation linking the COVID-19 outbreak to a specific religious group led to physical violence and discrimination. In 2016, Twitter bots spreading fake news about the replacement of senators fueled more hateful activity and fake followers (Muhammed T & Mathew, 2022). The spread of false information has had a serious impact on citizens’ health, political elections, and social stability. Platforms must therefore strengthen content moderation to curb false information and maintain social stability.
Efforts made by digital platforms
In fact, Twitter has made some such efforts. It reviews and supervises content by labelling misleading information, warning users about harmful content, and proactively publishing accurate information about elections.
“How we address misinformation on Twitter”: https://help.twitter.com/en/resources/addressing-misleading-info
Reasons why users object to content moderation
Although Twitter’s practices aim to uphold community rules and protect users, they still raise users’ concerns about freedom of speech and the infringement of privacy by digital platforms.
Freedom of speech is a person’s right to express their ideas without interference or punishment (Cornell Law School, 2021). Many argue that digital platforms should not intensify content moderation because it poses a grave threat to users’ freedom of speech; they contend that platforms have no authority to monitor, delete, or block user-generated content. A prominent example of this opposition is the Nigerian government. After Twitter deleted a post by President Buhari during content review, the country’s telecommunications companies responded by blocking millions of citizens from accessing Twitter (United Nations, 2021).
In addition to freedom of speech, concern about the exposure of personal privacy is another reason users oppose strengthening content moderation. The worry is that platforms use content moderation as a pretext to collect user data, access private information, and profit by pushing advertisements to users based on what they have gathered. A real-world example is the FTC complaint alleging that Twitter collected users’ mobile phone numbers and email addresses on the grounds of protecting their accounts and then profited from them (Fair, 2022). In this case, content moderation becomes an excuse to collect data, and users’ privacy becomes a tool for platform profit. This not only causes public concern and anxiety about the disclosure of private conversations and personal information but also deepens distrust of digital platforms and aversion to content moderation. Balancing content moderation, freedom of speech, and personal privacy on digital platforms is therefore very important.
“data privacy” by stockcatalog is licensed under CC BY 2.0.
Challenges faced by platforms
Digital platforms are making active attempts at content moderation. For example, it has been suggested that platforms can filter large volumes of inappropriate content using automated moderation technology. But this raises a new question: what standard should automated moderation apply, and where does the boundary of content moderation lie? These boundaries have always been disputed. The famous photograph of a naked Kim Phuc fleeing a napalm attack is a good example: some believe the photo must be deleted because it is obscene, while others believe it must not be, because it helps people understand the cruelty of war (Gillespie, 2018).
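A deliberately naive sketch shows why this boundary question resists automation. Assume a keyword blocklist (the words and rules below are invented for illustration); the filter cannot distinguish the caption of a historically important war photograph from genuinely obscene content:

```python
BLOCKLIST = {"nude", "naked"}  # invented, deliberately crude rule set

def auto_review(text: str) -> str:
    # Flag any post containing a blocked word, regardless of context.
    words = set(text.lower().split())
    return "REMOVE" if words & BLOCKLIST else "ALLOW"

# Both posts trip the same rule, though only one is plausibly obscene.
print(auto_review("naked photos for sale"))                        # REMOVE
print(auto_review("a naked child fleeing a napalm strike, 1972"))  # REMOVE
```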
Responsibilities of platforms
Therefore, platforms must formulate clear content moderation policies and head off disputes by making moderation more transparent. They also need to organize content management and guard against algorithmic errors to ensure that hate speech and misinformation are removed accurately. At the same time, platforms have a responsibility to give user accounts proper privacy settings and data encryption, and to refrain from turning user privacy into commercial gain (Data Privacy Manager, 2020). Platforms can also enhance the fairness and transparency of content moderation through user feedback and the introduction of third-party review bodies.
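As one small piece of that responsibility, user data can be encrypted at rest. The sketch below uses the third-party Python cryptography package (an assumption; any comparable library would do), and its single in-memory key is a simplification that a real platform would replace with a key-management service:

```python
from cryptography.fernet import Fernet

# A real platform would keep this key in a key-management service,
# never next to the data it protects; one in-memory key is only a sketch.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a user's contact details before storing them.
stored = fernet.encrypt(b"user@example.com")

# Only code holding the key can recover the plaintext.
print(fernet.decrypt(stored).decode())  # user@example.com
```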
The future of digital platforms
Content moderation is part of a digital platform’s responsibility, and in the future platforms will still need to strengthen it to address hateful information, misinformation, and network security. Content moderation is conducive not only to maintaining community order and user safety but also to improving information quality and platform credibility. If digital platforms take on the responsibility of balancing freedom, privacy, and supervision, protecting users’ rights and interests while safeguarding their own, the positive impact of content moderation will outweigh the negative, building a healthy, open, and diversified online environment.
In conclusion, the responsibilities and challenges digital platforms face in content moderation are multifaceted and complex. Striking a balance between content moderation, freedom of speech, and user privacy is paramount. As digital technology develops, content moderation on digital platforms will play an ever more important role in shaping the online environment and promoting a more responsible and ethical digital world.
This work is licensed under CC BY-NC-ND 4.0
References
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1-23). Yale University Press. https://doi.org/10.12987/9780300235029
Romano, A. (2020, January 20). What we still haven’t learned from Gamergate. Vox; Vox Media. https://www.vox.com/culture/2020/1/20/20808875/gamergate-lessons-cultural-impact-changes-harassment-laws
Kaufman, A. (2023, June 7). What is disinformation? Misinformation? What to know about how “fake news” is spread. USA TODAY. https://www.usatoday.com/story/news/2023/06/07/what-is-misinformation/70199478007/
Muhammed T, S., & Mathew, S. K. (2022). The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics, 13(4). https://doi.org/10.1007/s41060-022-00311-6
Cornell Law School. (2021, June). Freedom of Speech. LII / Legal Information Institute. https://www.law.cornell.edu/wex/freedom_of_speech
United Nations. (2021, July 23). Moderating online content: fighting harm or silencing dissent? OHCHR. https://www.ohchr.org/en/stories/2021/07/moderating-online-content-fighting-harm-or-silencing-dissent
Fair, L. (2022, May 23). Twitter to pay $150 million penalty for allegedly breaking its privacy promises – again. Federal Trade Commission. https://www.ftc.gov/business-guidance/blog/2022/05/twitter-pay-150-million-penalty-allegedly-breaking-its-privacy-promises-again
Data Privacy Manager. (2020, July 14). How to Protect Your Privacy on Social Media? Data Privacy Manager. https://dataprivacymanager.net/how-to-protect-your-privacy-on-social-media/