Twitter, often described as a bustling digital city, serves as a dynamic platform where news breaks before it’s news, trends emerge and fade in the blink of an eye, and memes are born and die with the speed of a shooting star. It is considered one of the fundamental pillars of the internet, enabling people to connect beyond the confines of traditional “walled gardens” (Lemley, 2021). Under the banner of Twitter 2.0, the platform has boldly proclaimed, “We believe Twitter users have the right to express their opinions and ideas without fear of censorship.” This assertion grants everyone the power to create and share ideas and information instantly and without barriers – or does it?
In October 2022, when Elon Musk took control of Twitter, his stated goal was to promote a broader array of voices on the social media platform, all while positioning himself as a staunch advocate of unrestricted freedom of speech. However, the transition has brought a series of controversies and accusations regarding Twitter’s stance on online hate speech and its commitment to free expression.
Twitter’s Battle with Hate-Filled Tweets:
One of the most glaring instances of Twitter’s struggle to enforce its content policies came to light when the platform faced legal scrutiny for failing to remove hate-filled tweets, despite being alerted in January to six offensive posts by researchers from HateAid and the European Union of Jewish Students (EUJS). Twitter’s failure to act on these clear violations of its own policies, along with its determination that three of the tweets did not breach its guidelines, raised significant concerns among users and advocates for online safety. As antisemitic hate continues to spread, EUJS and HateAid have taken the matter to court.
A panel discussion held at the start of the trial is available on HateAid’s YouTube channel (HateAid, 2023).
Hate tweets also seemed to increase after Elon Musk’s takeover, with instances of slurs against Black Americans and gay men becoming more prevalent.
In addition, Elon Musk has suspended the accounts of several journalists without offering a proper explanation. In the months following his takeover, he also banned or suppressed links to Twitter’s competitors, including Instagram, Mastodon, and Substack. It has also been reported that Twitter reduced the visibility of posts related to Ukraine, and that messages containing specific terms such as “transgender,” “trans,” “gay,” and “bisexual” may have been concealed, even within private messages.
All these actions raise significant questions about Twitter’s commitment to its own content moderation policy. Moreover, the substantial increase in offensive content has the potential to discourage cautious advertisers, who may be hesitant to align their products and brand image with hate-filled rants (Gillespie, 2018). That hesitance could have far-reaching implications for Twitter’s financial sustainability as a platform catering to businesses and advertisers: a decline in ad revenue and partnerships would undermine its profitability and its ability to offer businesses an effective way to reach their target audiences. In the long term, such repercussions could prompt Twitter to re-evaluate its content moderation strategies and policies to strike a balance between promoting free expression and maintaining a brand-safe environment for advertisers.
It appears that Musk exercises near-total control over the platform: users are welcome only at his discretion and for as long as he desires. Unfortunately, these measures fall short of promoting a culture of unrestricted speech on the internet.
The internet, as a whole, should serve as a space for free speech, free association, and various other liberties designed for people to express themselves authentically and autonomously (O’Hara & Hall, 2018). It should be a place where anyone can enter without privilege or prejudice and express their beliefs without the fear of being silenced or marginalized (O’Hara & Hall, 2018).
While certain forms of content, such as child pornography or pirated intellectual property, are targeted for regulation by virtually all governments, topics like political discourse, Holocaust denial, or blasphemy are subject to varying degrees of censorship across different nations (O’Hara & Hall, 2018). This diversity in regulation may partly explain why some offensive tweets have not been taken down, as Twitter operates in a global context with varying legal standards.
Dilemma of Social Media Platforms:
Social media platforms find themselves in a unique position, acting as intermediaries between users, citizens, law enforcement, policymakers, and regulators in the digital age (Gillespie, 2018). They bear the responsibility of moderating user-generated content on their platforms, navigating a delicate balance between allowing freedom of expression and adhering to laws and regulations that restrict certain types of content, such as hate speech, harassment, or illegal activities. This role places them squarely at the heart of contentious debates surrounding online speech and censorship.
Policymakers and regulators frequently exert pressure on social media platforms to take specific actions, including the removal of content, suspension of accounts, or the sharing of user information. Platforms must carefully consider when and how to cooperate with these requests while weighing the impact on free expression and user trust.
Therefore, if hate-filled tweets on Twitter continue to soar, intervention by policymakers, regulators, or even governments may be needed, since external pressure could compel Twitter to remove them. According to Gillespie (2018), Twitter complies with government requests to take down tweets, but it restricts the removal to users within the requesting country. Moreover, Twitter provides transparency by disclosing which tweets have been removed and the authority behind each removal. As a result, the tweets still exist elsewhere, and removing them entirely requires further pressure, such as one government pressuring another.
If these issues are not resolved, they could also give rise to a phenomenon called “the Splinternet”, in which the internet fragments into several distinct “internets”, such as a feminist internet, an Islamic internet, or a caring internet (O’Hara & Hall, 2018).
Conclusion:
The controversy surrounding Twitter’s stance on free speech and content moderation is a complex issue that has come into sharp focus since Elon Musk took control of the platform. While Twitter has long been a bastion of free expression, the challenges it faces in balancing the preservation of digital freedom with the need to combat hate speech and other harmful content are more apparent than ever. The increase in offensive content on Twitter may deter cautious advertisers, potentially harming the platform’s financial sustainability and prompting a need for content moderation adjustments.
It is crucial to recognise that the internet should remain a space where individuals can express their ideas and beliefs without undue censorship. However, this freedom cannot be absolute, as certain forms of content must be regulated to protect individuals and society from harm. Striking the right balance between free speech and responsible content moderation remains an ongoing challenge for platforms like Twitter.
In the era of Elon Musk’s leadership, Twitter stands at a crossroads, and its decisions regarding content moderation will have far-reaching implications for the future of digital discourse. Ultimately, Twitter’s ability to navigate these complex issues will determine whether it can continue to be a vibrant hub for free expression while upholding the values of inclusivity and safety in the digital age. If not, it could just be Elon Musk’s little playground.
Reference list:
Burke, J. (2023, July 10). Twitter faces legal challenge after failing to remove reported hate tweets. The Guardian. https://www.theguardian.com/technology/2023/jul/10/twitter-faces-legal-challenge-after-failing-to-remove-reported-hate-tweets
Eduardo, A. (2023, April 12). Twitter is no free speech haven under Elon Musk. FIRE. https://www.thefire.org/news/twitter-no-free-speech-haven-under-elon-musk
Gillespie, T. (2018). Governance by and through Platforms. In J. Burgess, A. Marwick, & T. Poell (Eds.), The SAGE handbook of social media (pp. 254–278). SAGE Publications.
Iqbal, J. (2023, April 6). Is Elon Musk really a ‘free speech absolutist’?. The Spectator. https://www.spectator.co.uk/article/why-is-twitter-helping-modi-threaten-indian-democracy/
HateAid. (2023). Podiumsdiskussion zum #TwitterTrial von HateAid und EUJS [Panel discussion on the #TwitterTrial by HateAid and EUJS] [Video]. YouTube. https://www.youtube.com/watch?v=l-70bQfEY_Q
HateAid. (n.d.). Twitter landmark case against antisemitism. https://hateaid.org/en/twitter-landmark-case-antisemitism/
O’Hara, K., & Hall, W. (2018, December 7). Four Internets: The Geopolitics of Digital Governance. Centre for International Governance Innovation. https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance/
Lemley, M. A. (2021). The Splinternet. Duke Law Journal, 70(6), 1397–1428.
Roth, E. (2022, December 19). Twitter abruptly bans all links to Instagram, Mastodon, and other competitors. The Verge. https://www.theverge.com/2022/12/18/23515221/twitter-bans-links-instagram-mastodon-competitors
Frenkel, S., & Conger, K. (2022, December 2). Hate Speech’s Rise on Twitter Is Unprecedented, Researchers Find. The New York Times. https://www.nytimes.com/2022/12/02/technology/twitter-hate-speech.html
X Safety. (2023, April 17). Freedom of Speech, Not Reach: An update on our enforcement philosophy. X blog. https://blog.twitter.com/en_us/topics/product/2023/freedom-of-speech-not-reach-an-update-on-our-enforcement-philosophy