Topic: AI Intelligent Audit and Manual Audit
Overview of Automated Audit Technology:
With the rise of AI technology, digital platforms have become widely known. The digital platform is a concept used to describe contemporary social structure and economic activity: it refers to the increasingly important role that platforms play in shaping people's lives, interactions, and economic and social activities. These platforms take the form of online marketplaces, social media, the sharing economy, and service platforms (Dijck, 2018). As the Internet's popularity has grown, so has the test of the quality of its communities. The growth in users has been accompanied by problems such as declining content quality, abuse, and privacy violations, along with the spread of low-quality, false, or harmful information that can mislead and harm users. As a result, the supervision of Internet platforms has had to improve accordingly, leading to the emergence of AI audit. Automated auditing is a technology that uses computer programs and artificial intelligence algorithms to detect, identify, and process digital content. It can automatically screen, review, or classify large amounts of online content according to predetermined rules, models, or criteria, with the goal of ensuring the compliance, security, and quality of that content.
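As an illustration of screening by predetermined rules, the following is a minimal sketch in Python; the banned phrases and the link threshold are invented for illustration and do not reflect any platform's actual policy.

import re

# Hypothetical rule set; real platforms maintain far larger, curated lists.
BANNED_PATTERNS = [
    re.compile(r"\bfree money\b", re.IGNORECASE),     # spam-like phrase
    re.compile(r"\bclick here now\b", re.IGNORECASE),
]
MAX_LINK_COUNT = 3  # assumed threshold for link spam

def screen_post(text: str) -> dict:
    """Apply predetermined rules and return a decision with reasons."""
    reasons = [f"matched banned pattern: {p.pattern}"
               for p in BANNED_PATTERNS if p.search(text)]
    if text.lower().count("http") > MAX_LINK_COUNT:
        reasons.append("too many links")
    return {"compliant": not reasons, "reasons": reasons}

print(screen_post("Free money! Click here now http://a http://b http://c http://d"))

Model- and criteria-based screening, discussed below, extends this same pattern with learned classifiers.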
Efficiency and Misjudgment:
With the rapid growth of online content and of active users, platforms need ways to meet the challenges of scale, and AI auditing can scale easily to handle large volumes of data. AI audit analyzes multimedia content through intelligent detection: in the audit process, platform administrators classify and label platform content, identifying low-quality and non-compliant material by developing custom rules and models that conform to the values and policies of a particular community. AI moderation can monitor user-generated content in real time, classifying and flagging it so that problems are detected and dealt with promptly. This capability is especially important for chat apps and real-time interactions on social media. Most of today's AI audit systems employ machine learning algorithms that analyze large amounts of labeled data to learn how to classify content, which greatly improves the efficiency of content review for platform supervision. When AI moderation detects disinformation, it flags content as compliant or non-compliant and classifies it into different problem types, allowing the platform to take appropriate action, such as removing problematic content or issuing warnings. This approach improves the quality and security of content on the platform, effectively prevents malicious content and abuse, and improves the user experience.
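To make the learning step concrete, the sketch below trains a small text classifier with scikit-learn; the labeled examples and problem-type labels are invented for illustration, whereas production systems learn from millions of human-labeled items.

# A minimal sketch of learning a moderation classifier from labeled data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: each text carries a problem-type label.
train_texts = [
    "Great article, thanks for sharing",
    "You are an idiot and should leave",
    "Buy cheap followers at this site",
    "Interesting point about platform policy",
]
train_labels = ["compliant", "abuse", "spam", "compliant"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_texts, train_labels)

# Classify new user-generated content into problem types.
for post in ["Limited offer, buy followers now", "Thanks, very helpful"]:
    print(post, "->", model.predict([post])[0])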
Although automated auditing technology has many advantages, it is not perfect. Social media platforms bring more people into direct contact with each other, give them the opportunity to communicate with a wider range of people, and organize them into an online public. While this may seem utopian, the dangers are obvious and increasingly apparent; for some illegal content there has never been a clear red line (Gillespie, 2018). Because online platform content is complex, algorithms struggle to understand it accurately, and since platforms regulate content in an "ad hoc" manner, there is a certain arbitrariness in how rules are enforced (Gillespie, 2018). AI audits often produce false positives: the system incorrectly flags compliant content as a violation, treating harmless or legal content as problematic and restricting or removing it inappropriately, which harms the user experience and freedom of expression. AI audit systems also produce false negatives: they fail to detect offending content and let it pass, resulting in the spread of false information, malicious behavior, or security problems that threaten user safety and the platform's reputation. These two types of misjudgment seriously affect the environment of the online platform community, which is why the emergence of manual audit is particularly important. Moreover, algorithms in multilingual and multicultural online environments may not understand all the nuances of language and culture, so different ways of handling content may be required.
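The two kinds of misjudgment correspond to false positives and false negatives, which platforms can measure directly on a sample of human-checked decisions; the labels in the sketch below are invented for illustration.

# Counting the two misjudgment types on a hand-labeled sample.
# "violation" is the positive class; these labels are invented.
ground_truth = ["violation", "compliant", "violation", "compliant", "compliant"]
predicted    = ["violation", "violation", "compliant", "compliant", "compliant"]

false_positives = sum(1 for gt, p in zip(ground_truth, predicted)
                      if gt == "compliant" and p == "violation")  # over-removal
false_negatives = sum(1 for gt, p in zip(ground_truth, predicted)
                      if gt == "violation" and p == "compliant")  # missed harm

print(f"false positives (compliant content flagged): {false_positives}")
print(f"false negatives (violations passed): {false_negatives}")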
Ethics and Transparency:
Much Internet governance respects "openness": "the Internet should be free because it is designed for people to develop freely and autonomously" (O'Hara & Hall, 2018). Freedom of speech is taken for granted, yet online communities are prone to linguistic confusion, so manual review plays a key role. When the content under review is complex, subjective, or sensitive, human auditors can use judgment and understanding to handle it. Manual review is suited to complex, controversial, or culturally sensitive content because it takes context, emotion, and intent into account and can therefore assess compliance more accurately. When facing major events and business problems, AI audit needs the help of manual audit all the more, because it lacks a human touch. For emerging problems and threats, manual audit can adapt and respond more flexibly, without waiting for an algorithm model to be updated. Human customer service is a good illustration of the importance of manual audit: because AI audit cannot make accurate judgments about human needs, manual audit can deliver personalized service and meet human problems and needs more precisely.
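One common way to combine the two approaches is confidence-based escalation: the model decides only when it is confident, and defers everything else to a human queue. The threshold and queue below are illustrative assumptions, not a description of any real platform.

# A sketch of confidence-based escalation to human reviewers.
CONFIDENCE_THRESHOLD = 0.9  # assumed cut-off; platforms tune this value
human_review_queue = []

def route(post: str, label: str, confidence: float) -> str:
    """Auto-decide confident cases; defer ambiguous ones to people."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-decided: {label}"
    human_review_queue.append(post)  # context and intent judged by a person
    return "escalated to manual review"

print(route("obvious spam link farm", "spam", 0.98))
print(route("sarcastic joke about a sensitive topic", "abuse", 0.55))
print("manual queue:", human_review_queue)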
Despite these advantages, human audits face a number of challenges. Because individual reviewers can only handle a limited workload, manual audits require substantial human resources and time, and the compensation needed to hire and train auditors raises costs. Manual audit also requires staff from all over the world, and their different upbringings and ethnic cultures mean that stereotypes and racial discrimination can surface in the manual audit process. Just as geek culture values expertise, it often revolves around acquiring, analyzing, and sharing knowledge with others; it values tact, cleverness, and craft, negotiating between collectivism and individualism within its communities, and while geek culture may welcome and promote niche cultures, it often exhibits a fraught relationship with issues of gender and race (Massanari, 2017). And when it comes to large-scale content and real-time interactions, human moderators have clear drawbacks.
Despite these criticisms, we still believe that combining AI audit with manual audit can address the vulnerabilities of each. To handle these situations, online platforms that use content moderation technology should follow a set of ethical principles and transparency requirements. Platforms should clearly define their moderation policies, ensure those policies respect freedom of expression, and publicly disclose moderation standards and procedures. So that users can understand the audit process and assess its fairness, platforms should provide transparent information about their automated audit systems, such as the algorithms, models, and data sources used. On the ethical side, platforms should give human auditors ethics training to ensure that they face ethical principles squarely, resist subjective bias, and know how to deal with complex situations. Strengthening ethical principles and transparency requirements can make content review processes on digital platforms more transparent, legal, and ethical.
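Transparency of this kind is easier to deliver when every decision is recorded in a structured, auditable form; the record fields sketched below are one plausible layout, not an industry standard.

# A sketch of a structured moderation log supporting transparency and appeals.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModerationRecord:
    content_id: str
    decision: str       # e.g. "removed", "flagged", "allowed"
    decided_by: str     # "model" or "human"
    model_version: str  # which algorithm or model made the call
    policy_rule: str    # which published standard was applied
    timestamp: str

record = ModerationRecord(
    content_id="post-123",                # hypothetical identifier
    decision="removed",
    decided_by="model",
    model_version="toxicity-clf-v2",      # invented version tag
    policy_rule="harassment-policy-4.1",  # invented policy reference
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(record))  # records like this can feed a transparency report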
Technical Limitations:
The spate of controversies surrounding digital and social media, from fake news and its political influence to low-quality content, gendered online harassment, and hateful language, has led to a growing public expectation that digital platforms be accountable to the public interest, particularly because these companies are increasingly "monitoring, policing and removing content" and restricting and organizing some users (Flew, Martin, & Suzor, 2019). For example, multi-stakeholder governance institutions involving non-state actors are elevated to the same status as governments in policy formulation (Mueller, 2018). Matamoros-Fernández (2017) likewise evokes the platform as a tool for amplifying and manufacturing racist discourse, both through users' appropriation of its functions and through the shaping of sociality by its design and algorithms. Although automated audit has the advantages of efficiency and speed when dealing with large-scale content, misjudgment and algorithmic bias occur often. Therefore, combining AI and manual audit can better ensure accuracy and fairness.
In short, the rise of AI to free up some of the manpower consumed by manual audits is reasonable, but it remains controversial on ethical and political grounds. By upholding the relevant ethical principles and transparency, however, the combination of machine audit and manual audit can maximize the strengths of both and create a harmonious online platform community environment.
Bibliography:
EY Global. (2018). How artificial intelligence will transform the audit [Video]. YouTube. https://www.youtube.com/watch?v=58suyR5E6fI
Dijck, J. van. (2018). The platform society as a contested concept. In T. Poell & M. de Waal (Eds.), The platform society (pp. 5–32). Oxford University Press.
Flew, T., Martin, F., & Suzor, N. (2019). Internet regulation as media policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33–50. https://doi.org/10.1386/jdmp.10.1.33_1
Gillespie, T. (2018). All platforms moderate. In Custodians of the internet: Platforms, content moderation, and the hidden decisions that shape social media (pp. 1–23). Yale University Press. https://doi.org/10.12987/9780300235029
Massanari, A. (2017). Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society, 19(3), 329–346. https://doi.org/10.1177/1461444815608807
Matamoros-Fernández, A. (2017). Platformed racism: the mediation and circulation of an Australian race-based controversy on Twitter, Facebook and YouTube. Information, Communication & Society, 20(6), 930–946. https://doi.org/10.1080/1369118X.2017.1293130
Mueller, M. (2018). Confronting alignment. In Will the internet fragment? Sovereignty, globalization and cyberspace (pp. 50–56). Polity Press.
O’Hara, K., & Hall, W. (2018). Four Internets: The Geopolitics of Digital Governance (No. 206). Centre for International Governance Innovation. https://www.cigionline.org/publications/four-internets-geopolitics-digital-governance