“Professional Girl Gamer Plays in MMORPG/ Strategy Video Game on Her Computer. – Credit to https://www.lyncconf.com/” by nodstrum is licensed under CC BY 2.0.
Introduction
As the COVID-19 pandemic reshaped global social interaction and entertainment, social gaming platforms have grown immensely in significance. Platforms such as Discord and Roblox have become primary hubs for online socializing, entertainment, and communication. However, the high interactivity and complexity of user-generated content on these platforms pose significant challenges for AI moderation.
Although AI moderation tools can help platforms automatically detect and filter inappropriate content, a crucial question arises: does the application of AI moderation on social gaming platforms create a utopian online world or a dystopian one? In this article, we delve into this question and present our own perspective.
Section 1: Positive Impact of AI Moderation
Undeniably, on today’s social gaming platforms, AI moderation can monitor vast amounts of content in real time, identifying hate speech, pornographic content, and extremist material more rapidly and at greater scale than human moderators. Ideally, this allows such content to be removed before anyone sees it (Ozanne et al., 2022).
For example, Roblox recently acquired the speech technology startup Speechly and has used its technology to deploy a generative AI voice assistant (Schwartz, 2023). This advancement not only enhances digital experiences but also integrates speech moderation algorithms for real-time monitoring of voice conversations on the platform, enabling the detection of rule-violating behavior. Notably, the technology can rapidly recognize voice commands, analyze conversations, and respond promptly, meaning the system can act before users finish speaking and simultaneously alert administrators. This capability helps limit players’ exposure to online harms such as hate speech, discriminatory language, and harassment.
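To make the idea of mid-utterance moderation concrete, here is a minimal, hypothetical sketch of such a pipeline in Python. Every name in it (the transcribe_chunk and is_harmful stand-ins, the keyword blocklist) is an illustrative placeholder: Speechly’s real system relies on trained speech and language models whose internals are not public.

```python
# A simplified, hypothetical sketch of a streaming voice-moderation loop.
# The transcription and policy checks are toy stand-ins, not Speechly's
# actual models.

BLOCKLIST = {"slur_example", "harassment_example"}  # placeholder terms

def transcribe_chunk(audio_chunk: bytes) -> str:
    """Stand-in for a streaming speech-to-text model (toy: 'audio' is text)."""
    return audio_chunk.decode("utf-8", errors="ignore")

def is_harmful(partial_transcript: str) -> bool:
    """Stand-in for a policy classifier run on partial transcripts,
    so the system can react before the speaker finishes."""
    words = partial_transcript.lower().split()
    return any(word in BLOCKLIST for word in words)

def moderate_stream(chunks, alert_admin):
    transcript = ""
    for chunk in chunks:
        transcript += transcribe_chunk(chunk)
        if is_harmful(transcript):   # checked on every partial update
            alert_admin(transcript)  # flag mid-utterance
            return "muted"
    return "ok"

# Toy usage: each chunk arrives as the player speaks.
status = moderate_stream(
    [b"hey team ", b"push mid ", b"harassment_example now"],
    alert_admin=lambda t: print("ALERT:", t),
)
print(status)  # prints the alert, then "muted"
```

The key design point this sketch illustrates is that the classifier runs on every partial transcript rather than waiting for a complete utterance, which is what allows intervention before a user finishes speaking.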
Additionally, this real-time voice recognition technology enables Roblox to efficiently manage and monitor large-scale voice conversations within its virtual 3D environments. It brings voice chat capabilities to Roblox’s vast user base of up to 65.5 million daily active users and helps maintain order within the gaming environment.
Section 2: Negative Impact of AI Moderation
While AI moderation succeeds in protecting users from much harmful content and malicious behavior, it raises several questions: Can these technologies eliminate misjudgments without restricting freedom of speech? Can they comprehensively protect user data from leaks? Can they accurately detect underage users and harmful in-game chat? The following discussion explores these three aspects in turn.
Part 1: Unjustified Account Bans and Speech Restrictions by AI Moderation
Machine learning systems can produce false positives, in which harmless content is incorrectly classified as harmful. Such errors can result in unwarranted account bans or restrictions on users’ freedom of speech (Corea et al., 2022).
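The scale of these platforms is what makes false positives so consequential: even a tiny error rate translates into a large absolute number of wrongful actions. The quick calculation below uses the daily-active-user figure cited above but an entirely assumed message volume and error rate, purely for illustration.

```python
# Illustrative only: how a small false-positive rate scales.
daily_active_users = 65_500_000  # figure cited earlier in this article
messages_per_user = 20           # assumed average, for illustration
false_positive_rate = 0.001      # assumed: 0.1% of harmless messages misflagged

daily_messages = daily_active_users * messages_per_user
wrongful_flags = daily_messages * false_positive_rate
print(f"{wrongful_flags:,.0f} harmless messages misflagged per day")
# -> 1,310,000 harmless messages misflagged per day
```

Under these assumed rates, a system that is 99.9% correct on harmless content still wrongly flags over a million messages a day, which is why appeal mechanisms matter so much.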
“Robloxer” by Ahmedsaadf is licensed under CC BY-SA 4.0.
A representative example of significant false positives in AI content moderation occurred on Roblox this year. Roblox’s CEO, David Baszucki, announced on Twitter in June that Roblox would use artificial intelligence to help moderate in-game voice chat (Baszucki, 2023).

However, Roblox’s voice chat moderation system has drawn complaints from many users who say they never used inappropriate language, such as bullying or insulting others, yet had their accounts inexplicably banned. The moderation system appears not to have thoroughly reviewed the recordings in question, instead acting on reports alone, leaving users banned and their voice chat privileges restricted for up to a week. Users also note that anyone on Roblox can report another player without providing evidence, which can be enough to trigger a ban.
One user ran an experiment on Roblox’s voice chat moderation: without violating any terms of service, she let other players report her directly. Sure enough, Roblox’s AI moderation system automatically banned her account from using voice chat (rxne, 2023).
Part 2: Risk of Personal Data Leakage in AI Moderation
Nor can we ignore the risk of user data breaches that AI moderation brings. One fundamental factor behind this risk is technical vulnerability: although AI moderation tools are typically designed as highly secure systems, technical and security loopholes remain an ongoing concern (Comiter, 2019).
In July of this year, Roblox’s developer community suffered a data breach affecting nearly 4,000 members (Weatherbed, 2023). The compromised personal information included phone numbers, email addresses, and birthdates. Notably, the incident has already caused real harm: insiders disclosed that some users began receiving malicious phone calls, messages, and emails as a result of the leaked information.
The incident raised particular concern because Roblox, while not exclusively for children, has a substantial underage user base. According to a report from the first quarter of 2023, Roblox has 66.1 million daily active users, 43% of whom are aged 13 or younger. Users in this age group often lack the awareness and experience to protect their privacy, making them more susceptible to deception or manipulation by malicious actors and more likely to become victims of data breaches. Leaks of their data are therefore especially sensitive.
“Playing Roblox (48707377571)” by Henry Burrows from Winchester, United Kingdom is licensed under CC BY-SA 2.0.
In another case, in August of the same year, the third-party Discord application Discord.io was breached, leaking the data of 760,000 users (Glover, 2023). Notably, the Discord.io database was listed for sale by an individual named “Akhirah” on the hacker forum “Breached.” To prove the data’s authenticity and value, the seller shared four user records, including usernames, Discord IDs, email addresses, and passwords that had been salted and hashed.
“Homepage of DISCORD Website magnified on logo with magnifying glass (53147449398)” by Jernej Furman from Slovenia is licensed under CC BY 2.0.
Users affected by the breach expressed anger, asking whether the company should compensate those affected. This expectation stems from the company’s obligation to protect user data, especially given the evident sensitivity of the leaked information.
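The “salted and hashed” processing mentioned above is the one mitigating detail in this leak. A minimal sketch using Python’s standard library (the iteration count is an illustrative choice, not Discord.io’s actual configuration) shows the idea: each password is hashed with a unique random salt, so identical passwords yield different digests and every leaked record must be cracked separately.

```python
# Minimal sketch of salted password hashing: even if the database leaks,
# attackers cannot read passwords directly and must brute-force each
# salted hash individually.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Salting raises the cost of cracking a leaked database, but it does not make a leak harmless: weak passwords can still be guessed, and the exposed emails and IDs alone enable the kind of targeted harassment described above.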
Part 3: Limitations of AI Moderation in Detecting Underage Users and Harmful Game Content
While machine learning systems are commonly evaluated with accuracy metrics built from true positives (harmful content correctly identified) and true negatives (harmless content correctly identified), AI moderation still struggles to recognize harmful content fully (Corea et al., 2022).
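As a concrete illustration of these metrics, the small worked example below uses invented counts, not real platform data. It also shows how a moderation system can report impressive overall accuracy while a meaningful share of what it flags is actually harmless, the false-positive problem discussed in Part 1.

```python
# A small worked example of the evaluation metrics named above,
# using made-up counts for a batch of 10,000 moderated messages.
tp = 450    # harmful, correctly flagged (true positives)
fn = 50     # harmful, missed (false negatives)
fp = 100    # harmless, wrongly flagged (false positives)
tn = 9_400  # harmless, correctly passed (true negatives)

accuracy  = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)  # how many flags were justified
recall    = tp / (tp + fn)  # how much harm was actually caught

print(f"accuracy={accuracy:.3f} precision={precision:.3f} recall={recall:.3f}")
# -> accuracy=0.985 precision=0.818 recall=0.900
```

Here the system is 98.5% accurate overall, yet nearly one in five of the messages it flags is harmless, and one in ten harmful messages slips through: precisely the two failure modes this article describes.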
In Roblox’s case, complaints arose from users, parents, and privacy advocates after the platform rolled out a new voice chat feature to a large number of players. A Roblox player named “Kairoh” questioned whether children under 13 could bypass the moderation system, posting a video in which a young player in the role-playing shooting game “Da Hood” appeared to be using voice chat, and claiming he had encountered “more kids spewing profanity” (Kairoh, 2021). There have also been reports of inappropriate discussions, involving topics such as drug deals and sexual content, in Roblox Community Space, a game created by Roblox itself (Grayson, 2021). These incidents have sparked concern within the community about the effectiveness of AI moderation systems and the protection of underage users.
Conclusion and Suggestions
Returning to the question posed at the beginning of this article: does AI moderation bring about a utopian or dystopian effect? The answer, it seems, may be both. Some scholars argue that data-centered content moderation involves a fundamental contradiction: these tools often err in both directions, either being overly stringent and erroneously removing legitimate content, or failing to protect users from content that genuinely harms them (Gillespie, 2020).
Faced with this situation, AI moderation should meet the following goals in order to manage content more effectively, shielding users from online harms while improving their gaming experience:
1. When banning accounts or limiting speech, AI moderation should ensure transparency (Corea et al., 2022); a sketch of what a transparent moderation record might contain follows the list below:
- Disclose the number of posts that have been deleted and the accounts facing permanent or temporary suspensions due to violations of content policies.
- Ensure that every user whose content is deleted or account is suspended is informed of the specific reasons for this action.
- Establish a substantive process allowing users to promptly appeal any content removal or account suspension.
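As promised above, here is a minimal sketch of what a transparent moderation record might contain so that users can be told exactly why action was taken and how to appeal. All field names are illustrative assumptions, not any platform’s actual schema.

```python
# Hypothetical sketch of a transparent moderation action record.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModerationAction:
    user_id: str
    action: str           # e.g. "voice_chat_suspension"
    rule_violated: str    # the specific policy clause, shown to the user
    evidence_ref: str     # pointer to the reviewed content, not just a report
    expires_at: datetime  # temporary actions should state their end date
    appeal_url: str = "https://example.com/appeals"  # placeholder endpoint
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

action = ModerationAction(
    user_id="u12345",
    action="voice_chat_suspension",
    rule_violated="Community Standards 4.2 (harassment)",
    evidence_ref="recording/abc123",
    expires_at=datetime(2023, 10, 9, tzinfo=timezone.utc),
)
print(f"{action.action} until {action.expires_at:%Y-%m-%d}: {action.rule_violated}")
```

The evidence_ref field is the crucial design choice here: it forces the system to record what was actually reviewed, addressing the report-only bans described in Part 1.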
2. For user privacy, major platforms need to formulate customized security and privacy policies rather than simply applying generic security measures. This can involve (Dwivedi, 2022):
- Encrypt network connections with modern protocols such as TLS, and protect stored credentials with secure password-hashing algorithms, safeguarding users’ personal information and communications from unauthorized access (see the connection sketch after this list).
- Conduct regular security audits and vulnerability fixes, and establish dedicated security teams to address potential threats and risks.
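As a minimal sketch of the encryption suggestion, Python’s standard library can open a certificate-verified TLS connection; the hostname is a placeholder, and a real platform would apply the same principle to its chat and API traffic.

```python
# Minimal sketch: an encrypted, certificate-verified connection using
# Python's standard library. "example.com" is a placeholder host.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()  # verifies server certificates by default

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # Traffic on `tls` is now unreadable to network eavesdroppers.
        print("TLS version:", tls.version())
```

Certificate verification is the part generic setups most often skip; disabling it would leave connections encrypted but open to man-in-the-middle interception.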
We believe the original intention behind AI moderation tools is to create and maintain a friendlier, safer online community. If major social gaming platforms acknowledge the problems associated with AI moderation and take corrective measures, its application on social gaming platforms may come as close as possible to achieving a utopian state.
AI Moderation in Social Gaming Platforms: Utopia or Dystopia? © 2023.10.2 by JIAYI SHI is licensed under CC BY-NC-ND 4.0
References
Baszucki, D. (2023). [Tweet]. [online] X (formerly Twitter). Available at: https://twitter.com/DavidBaszucki/status/1668321613178355712 [Accessed 1 Oct. 2023].
Comiter, M. (2019). Attacking Artificial Intelligence: AI’s Security Vulnerability and What Policymakers Can Do About It. [online] Belfer Center for Science and International Affairs. Available at: https://www.belfercenter.org/publication/AttackingAI.
Corea, F., Fossa, F., Loreggia, A., Quintarelli, S. and Sapienza, S. (2022). A principle-based approach to AI: the case for European Union and Italy. AI & Society. doi:https://doi.org/10.1007/s00146-022-01453-8.
Dwivedi, Y.K. (2022). Metaverse beyond the hype: Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 66(66), p.102542. doi:https://doi.org/10.1016/j.ijinfomgt.2022.102542.
GDI (2023). [Tweet]. [online] X (formerly Twitter). Available at: https://twitter.com/M1A2_AbramsTank/status/1668210427983978503 [Accessed 1 Oct. 2023].
Gillespie, T. (2020). Content moderation, AI, and the question of scale. Big Data & Society, 7(2), p.205395172094323. doi:https://doi.org/10.1177/2053951720943234.
Glover, C. (2023). Discord.io data breach sees information on 760,000 users leaked. [online] Tech Monitor. Available at: https://techmonitor.ai/technology/cybersecurity/discord-data-leak [Accessed 1 Oct. 2023].
Grayson, N. (2021). Roblox voice chat checks ID to keep kids safe, but slurs and sex sounds slip through. Washington Post. [online] 16 Nov. Available at: https://www.washingtonpost.com/video-games/2021/11/16/roblox-voice-chat-id-requirement-slurs/.
Kairoh, M. (2021). [Tweet]. [online] X (formerly Twitter). Available at: https://twitter.com/Kairoh3D/status/1458213949602664449 [Accessed 1 Oct. 2023].
Liberto, M.V. (2023). [Tweet]. [online] X (formerly Twitter). Available at: https://twitter.com/The_Suntrip/status/1691780439197823397 [Accessed 1 Oct. 2023].
Max (2023). [Tweet]. [online] X (formerly Twitter). Available at: https://twitter.com/MaximumADHD/status/1640537042604945411 [Accessed 1 Oct. 2023].
Ozanne, M., Bhandari, A., Bazarova, N.N. and DiFranzo, D. (2022). Shall AI moderators be made visible? Perception of accountability and trust in moderation systems on social media platforms. Big Data & Society, 9(2), p.205395172211156. doi:https://doi.org/10.1177/20539517221115666.
Poireault, K. (2023). Old Roblox Data Leak Resurfaces, 4000 Users’ Personal Information Exposed. [online] Infosecurity Magazine. Available at: https://www.infosecurity-magazine.com/news/old-roblox-data-leak-resurfaces/ [Accessed 1 Oct. 2023].
rxne (2023). Is Roblox falsely banning voice chat users? [online] YouTube. Available at: https://www.youtube.com/watch?v=Q4sRGUf3nKo [Accessed 1 Oct. 2023].
Schwartz, E.H. (2023). Roblox Acquires Voice AI Moderation Startup Speechly. [online] Voicebot.ai. Available at: https://voicebot.ai/2023/09/19/roblox-acquires-voice-ai-moderation-startup-speechly/ [Accessed 1 Oct. 2023].
Weatherbed, J. (2023). Roblox data breach leaks almost 4,000 developer profiles. [online] The Verge. Available at: https://www.theverge.com/2023/7/21/23802742/roblox-data-breach-leak-developer-personal-information-exposed [Accessed 1 Oct. 2023].