An upcoming report by the Algorithmic Justice League (AJL), a nonprofit organization, calls for disclosure when AI is in use and for the creation of a public repository of incidents in which AI caused harm. Such a repository could help auditors spot potential problems with algorithms and help regulators investigate or penalize repeat offenders. AJL founder Joy Buolamwini coauthored an influential 2018 audit that found facial-recognition algorithms work best on white men and worst on Black women.
The report says it is important for auditors to be independent and for results to be publicly reviewable. Without those safeguards, "there's no accountability mechanism at all," said AJL head of research Sasha Costanza-Chock. "If they want to, they can just bury it; if a problem is found, there's no guarantee that it will be fixed. It's toothless, it's secretive, and the auditors have no leverage."
Deb Raji, a fellow at AJL who evaluates audits and who took part in the 2018 audit of facial-recognition algorithms, warns that Big Tech companies appear to be taking an adversarial approach to outside auditors, sometimes threatening lawsuits on privacy or anti-hacking grounds. In August, Facebook blocked NYU academics from examining how money is spent on political ads and stymied a German researcher's efforts to study the Instagram algorithm.
Raji wants to see an audit oversight board within a federal agency to do things like enforce standards or mediate disputes between auditors and companies. Such a board could be modeled on the Financial Accounting Standards Board or the Food and Drug Administration's standards for evaluating medical devices.
Standards for audits and auditors matter because growing calls to regulate AI have prompted the creation of a number of auditing startups, some founded by critics of AI and others that may be too favorable to the companies they review. In 2019, a coalition of AI researchers from 30 organizations recommended external audits and regulation that creates a market for auditors as part of building AI that people trust, with verifiable results.
Cathy O'Neil founded a company, O'Neil Risk Consulting & Algorithmic Auditing (Orcaa), in part to assess AI that is invisible or inaccessible to the public. For example, Orcaa works with the attorneys general of four US states to evaluate financial or consumer products. But O'Neil says she loses potential customers because companies want to maintain plausible deniability and would rather not know whether their AI is harming people.
Earlier this year, Orcaa audited an algorithm used by HireVue to analyze people's faces during job interviews. The company told reporters that the audit found no accuracy or bias problems, but the audit did not attempt to evaluate the system's code, training data, or performance for different demographic groups. Critics said HireVue's characterization of the audit was misleading and disingenuous. Shortly before the audit's release, HireVue said it would stop using AI in video job interviews.
O'Neil thinks audits can be useful, but says in some respects it is too early to take the AJL's approach, in part because there are no standards for audits and we do not yet fully understand how AI harms people. Instead, O'Neil favors another approach: algorithmic impact assessments.
While an audit may evaluate the output of an AI model to see whether, for example, it treats men differently from women, an impact assessment focuses more on how an algorithm was designed, who could be harmed, and who is responsible if things go wrong. In Canada, businesses must assess the risk an algorithm poses to individuals and communities; in the US, assessments are being developed to determine when an AI poses low or high risk and to account for how much people rely on it.
The idea of assessing potential harms in advance was introduced in the 1970s by the National Environmental Policy Act, which led to the creation of environmental impact statements. Those reports take into account everything from pollution to the potential discovery of fossils; similarly, impact assessments for algorithms could cover a broad range of factors.