Cover image: “Applications – Credit to http://homedust.com/” by Homedust, licensed under CC BY 2.0.
The rising spread of political misinformation on social media platforms poses a significant risk to our political landscape, influencing election outcomes, policy formation, and more (Muhammed & Mathew, 2022). Although extensive research has been conducted on strategies to mitigate the dissemination of such misinformation, many of these strategies have substantial limitations and may be unrealistic to enforce (Muhammed & Mathew, 2022).
This essay will contend that structural changes and other existing strategies are insufficient and lack the practical applicability needed to eliminate political misinformation. Firstly, it will assert that previous strategies are inadequate because they fail to address the lack of expert consensus as a root cause of misinformation. Secondly, it will argue that efforts to structurally alter platform algorithms are unrealistic because social media companies are likely to leverage their lobbying power to safeguard their profits and maintain maximum user engagement. Finally, it will assert that user-centred mitigation strategies, such as educational interventions, are necessary to complement other approaches in order to combat the spread of political misinformation on social media more effectively.
The significance of this essay lies in its unique perspective and approach to the issue of political misinformation and its propagation on social media. It contributes to raising awareness of political misinformation by offering a critical analysis of the challenges and limitations of prior strategies.
Expert Consensus and Misinformation
Firstly, efforts to reduce political misinformation through structural changes are insufficient and greatly limited because they cannot address external causes of its spread, such as the lack of expert consensus in the field of politics.
Vraga & Bode (2020) have demonstrated that divergent opinions among experts in a given field can contribute to the dissemination of misinformation, as disseminators exploit these disagreements to discredit experts and their facts or to promote their own false claims as truth.
While there have been proposals to diminish misinformation by modifying the structural features of social media platforms (Jang et al., 2018; Shu et al., 2020, p. 23), there is also evidence that correction by authoritative bodies within a field can be effective. For instance, an analysis of the CDC’s mitigation efforts during the Zika epidemic conducted by Vraga & Bode (2017) revealed a reduction in the spread of misinformation. This highlights the effectiveness of correction by relevant authoritative bodies, which present consistent and universally agreed-upon information and thereby address the lack of expert consensus that underlies the spread of misinformation (Vraga & Bode, 2020).
However, the absence of external and unbiased authoritative bodies overseeing political information may exacerbate the spread of political misinformation, as it increases the potential for a lack of expert consensus (Vraga & Bode, 2020).
While there have been numerous recommendations to implement structural changes to platforms in order to reduce the spread of misinformation (Jang et al., 2018; Shu et al., 2020, p. 23), such changes alone are insufficient: they may increase the removal of false content, but they fail to address a root cause that compels users to disseminate misinformation in the first place (Vraga & Bode, 2017; Vraga & Bode, 2020).
The escalating spread of political misinformation concerning the upcoming Australian referendum (Zadvirna, 2023) underscores the necessity for impartial authoritative bodies that are not directly affiliated with government parties; such bodies are needed to curtail the dissemination of political misinformation.
Even setting aside the potential of such external correction, proposals to mitigate misinformation through structural changes to platforms (Jang et al., 2018; Shu et al., 2020, p. 23) face a further problem: they may simply not be realistic.
Big Tech Unlikely to Allow Changes to Platform Algorithms
Despite previous suggestions that political misinformation could be reduced by altering platforms’ algorithmic structures (Jang et al., 2018; Shu et al., 2020, p. 23), the prospect of the fundamental structural changes needed to significantly restrict or eliminate political misinformation appears highly improbable. User engagement on platforms like Facebook and Twitter is amplified by algorithms that personalise content to align with users’ existing biases and preferences (Muhammed & Mathew, 2022). This dramatically increases the consumption and acceptance of political misinformation through confirmation bias, whereby individuals come to believe potentially false information merely because they are repeatedly exposed to it (Muhammed & Mathew, 2022). Given that their profits, power, and influence depend on this model (Muhammed & Mathew, 2022), social media companies are likely to persist with it, despite its contribution to significant real-world harms through the continued circulation of political misinformation.
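To make the mechanism described above concrete, the following is a minimal, purely illustrative sketch in Python. It is not any platform’s actual ranking system; the data structures, topic labels, and scoring rule are assumptions introduced here only to show how ranking content by predicted engagement with a user’s existing preferences can keep reinforcing the same viewpoint.

from collections import Counter

# Toy illustration (assumed, simplified model): an engagement-driven feed ranker
# that scores posts higher when they match topics the user has engaged with before.

def rank_feed(candidate_posts, user_history):
    """Rank posts by a crude engagement proxy: topic overlap with the user's history."""
    # Count how often the user has previously engaged with each topic.
    topic_affinity = Counter(post["topic"] for post in user_history)

    def score(post):
        # Posts on already-favoured topics score higher, so the feed keeps
        # surfacing more of the same viewpoint (the confirmation-bias loop).
        return topic_affinity[post["topic"]] * post["base_engagement"]

    return sorted(candidate_posts, key=score, reverse=True)

# Hypothetical example: a user who has mostly engaged with one political narrative.
history = [{"topic": "narrative_A"}] * 8 + [{"topic": "narrative_B"}] * 2
candidates = [
    {"topic": "narrative_A", "base_engagement": 1.0},
    {"topic": "narrative_B", "base_engagement": 1.0},
    {"topic": "fact_check", "base_engagement": 0.8},
]
print(rank_feed(candidates, history))  # "narrative_A" content ranks first

Under this toy model, corrective or fact-checking content is ranked last simply because the user has never engaged with it, which is the engagement-driven dynamic the paragraph above argues platforms have little commercial incentive to change.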
Critics may contend that state intervention could counteract the echo chambers created by these algorithmic structures, proposing policies that reshape user experiences to increase exposure to diverse perspectives and thereby mitigate the echo-chamber effects and confirmation biases associated with current algorithms (Muhammed & Mathew, 2022). However, such intervention is unlikely to be effective, given the immense lobbying power and influence that social media companies wield to protect their own interests and financial gains (Popiel, 2018).
A recent case in point is Google’s lobbying against the passage of an anti-disinformation bill in Brazil (Harris, 2023), which illustrates the reluctance of major technology corporations to adopt protective measures against the spread of misinformation, even when prompted by government action and even though such measures could partially mitigate its real-world consequences. Given this clear reluctance to make the necessary structural changes, political misinformation may instead need to be combatted at the user level.
For a brief explainer on how lobbying works, see the One Minute Economics (2019) video.
User-Centred Strategies to Combat Political Misinformation
In the literature, certain proposed mitigation strategies that are not user-centred have been criticised. For example, approaches such as content flagging and the application of warning labels have been shown to be limited by their impracticality, since it is impossible to assess all of the content released on social media platforms (Garrett & Poulsen, 2019; Muhammed & Mathew, 2022; Pennycook et al., 2020). The vast scope of digital communication platforms makes control at a structural level exceedingly challenging, underscoring the inadequacy of such strategies in combating the spread of political misinformation.
Rather than relying exclusively on the aforementioned strategies, user-centred approaches should be employed in conjunction with them. One of the primary user-centred strategies that can be effectively implemented is user education on misinformation, which equips users with the skills to identify political misinformation (Ali & Qazi, 2021).
An illustrative case highlighting the significance of education in countering misinformation is proposed legislation to introduce digital literacy programs, which may encompass methods for identifying misinformation on social media, into the Californian public school curriculum (Moffatt, 2023). Recognition of the importance of misinformation education at the state level underscores the immense potential for empowering social media users to identify and combat political misinformation effectively.
In conclusion, the strategies currently proposed to mitigate political misinformation on social media are neither entirely adequate nor practical. Structural changes to platforms do little to eliminate a key driver of misinformation: the exploitation of the lack of expert consensus. It is also unrealistic to expect social media companies to implement changes that reduce the spread of misinformation when doing so could compromise their vested interests. User-centred mitigation strategies, such as educational interventions, are therefore necessary to combat the spread of political misinformation on social media more effectively.
Despite the insights and contributions offered by this essay, further research is required within the specific domain of political misinformation on social media.
This work is licensed under CC BY-SA 4.0
References
Ali, A. S., & Qazi, I. A. (2021). Countering misinformation on social media through educational interventions: Evidence from a randomized experiment in Pakistan. Journal of Development Economics, 163, 103108. https://doi.org/10.1016/j.jdeveco.2023.103108
Garrett, R. K., & Poulsen, S. (2019). Flagging Facebook falsehoods: Self-identified humor warnings outperform fact checker and peer warnings. Journal of Computer-Mediated Communication, 24(5), 240–258.
Harris, B. (2023, May 2). Google draws backlash from Brazil with lobbying against “fake news” bill. Financial Times. https://www.ft.com/content/8a4bb131-e792-491d-a348-9dee4de41ce2
Jang, S. M., Geng, T., Queenie Li, J.-Y., Xia, R., Huang, C.-T., Kim, H., & Tang, J. (2018). A computational approach for examining the roots and spreading patterns of fake news: Evolution tree analysis. Computers in Human Behavior, 84, 103–113. https://doi.org/10.1016/j.chb.2018.02.032
Moffatt, A. (2023, August 2). California teens use social media nonstop. Teaching media literacy in schools could protect them. CalMatters. https://calmatters.org/commentary/2023/08/california-teens-social-media-literacy/
Muhammed T, S., & Mathew, S. K. (2022). The disaster of misinformation: A review of research in social media. International Journal of Data Science and Analytics, 13(4), 271–285. https://doi.org/10.1007/s41060-022-00311-6
One Minute Economics. (2019). The Economics Behind Lobbying Explained in One Minute: From Meaning/Definition to Examples [Video]. YouTube. https://www.youtube.com/watch?v=k9n9O6prsx4
Pennycook, G., Bear, A., Collins, E. T., & Rand, D. G. (2020). The Implied Truth Effect: Attaching Warnings to a Subset of Fake News Headlines Increases Perceived Accuracy of Headlines Without Warnings. Management Science, 66(11). https://doi.org/10.1287/mnsc.2019.3478
Popiel, P. (2018). The Tech Lobby: Tracing the Contours of New Media Elite Lobbying Power. Communication, Culture and Critique, 11(4), 566–585. https://doi.org/10.1093/ccc/tcy027
Shu, K., Bhattacharjee, A., Alatawi, F., Nazer, T. H., Ding, K., Karami, M., & Liu, H. (2020). Combating disinformation in a social media age. WIREs Data Mining and Knowledge Discovery, 10(6), 23. https://doi.org/10.1002/widm.1385
Vraga, E. K., & Bode, L. (2017). Using Expert Sources to Correct Health Misinformation in Social Media. Science Communication, 39(5), 621–645. https://doi.org/10.1177/1075547017731776
Vraga, E. K., & Bode, L. (2020). Defining Misinformation and Understanding its Bounded Nature: Using Expertise and Evidence for Describing Misinformation. Political Communication, 37(1), 136–144. https://doi.org/10.1080/10584609.2020.1716500
Zadvirna, D. (2023, June 22). Polling staff branded “vote thieves” as AEC reveals rise in online abuse, misinformation around voting process. ABC News. https://www.abc.net.au/news/2023-06-22/australian-electoral-commissioner-tom-rogers-social-media-abuse/102507204