- Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns (see the sketch after this list).
- High-fidelity explanations of most AI decisions are not currently possible. The challenges of explainable AI are formidable.
- Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations.
- Auditable AI is not a panacea, but it can increase transparency and combat bias.
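By way of illustration, here is a minimal sketch of what automated probing at scale might look like. The loan_model function, its thresholds, and the zip codes are invented for this example; a real audit would treat the production system as a black box and query it through whatever interface is available.

```python
# A minimal sketch of an automated bias audit, assuming we can query the
# decision system as a black box. loan_model is a hypothetical stand-in for
# the real system under audit, with a zip-code proxy effect deliberately
# baked in so the audit has something to find.
import random

def loan_model(applicant):
    # Hypothetical decision logic; in practice this would be an opaque API call.
    score = applicant["income"] / 1000 - applicant["debt"] / 2000
    if applicant["zip_code"] in {"10451", "60624"}:
        score -= 10  # the proxy effect the audit should surface
    return score > 30

def audit_approval_rates(model, applicants, group_key):
    """Probe the model over many synthetic applicants and report approval rates per group."""
    counts, approvals = {}, {}
    for a in applicants:
        g = a[group_key]
        counts[g] = counts.get(g, 0) + 1
        approvals[g] = approvals.get(g, 0) + int(model(a))
    return {g: round(approvals[g] / counts[g], 2) for g in counts}

random.seed(0)
zips = ["10451", "60624", "94105", "02139"]
applicants = [
    {"income": random.gauss(40000, 12000),
     "debt": random.gauss(15000, 6000),
     "zip_code": random.choice(zips)}
    for _ in range(10000)
]

# Large disparities in approval rates across groups are a signal for further investigation.
print(audit_approval_rates(loan_model, applicants, "zip_code"))
```

The point of the sketch is only that large-scale probing is mechanically straightforward; everything interesting lies in deciding which groups and which inputs to probe.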
Rumman Chowdhury points out some potential imperfections of a system that relies on automated auditing, and objects to the idea that automated auditing might be accepted as a substitute for other forms of governance. The article makes no such suggestion explicitly, and I haven't seen any evidence that this was the authors' intention. However, there is always a risk that people will latch onto a technical fix without understanding its limitations, and this risk is perhaps what underlies her critique.
In a recent paper, she calls for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But how can people verify that systems are not only ignoring these data, but also treating with caution other data that may serve as proxies for race and class, as discussed by Cathy O'Neil? How can they prove that a system is systematically unfair without holding some classification data of their own?
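To make the proxy problem concrete, here is a small illustration using invented figures: a model that never sees the protected attribute can still discriminate through a correlated feature such as zip code, and measuring this requires demographic data that only the auditor holds.

```python
# A sketch of the proxy problem, with invented figures. A model that never
# sees the protected attribute can still discriminate through a correlated
# feature; detecting this requires demographic data of the auditor's own.
from collections import Counter

# Hypothetical auditor-held records: (zip_code, protected group) pairs.
population = (
    [("10451", "group_A")] * 90 + [("10451", "group_B")] * 10 +
    [("94105", "group_A")] * 15 + [("94105", "group_B")] * 85
)

def proxy_strength(records):
    """How accurately does zip code alone predict group membership?"""
    by_zip = {}
    for zip_code, group in records:
        by_zip.setdefault(zip_code, Counter())[group] += 1
    correct = sum(counts.most_common(1)[0][1] for counts in by_zip.values())
    return correct / len(records)

# A model told to ignore group membership but allowed to use zip code can
# recover most of the "ignored" information. Without classification data of
# their own, an auditor cannot even measure how strong the proxy is.
print(f"zip code predicts group membership with {proxy_strength(population):.0%} accuracy")
```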
And yes, we know that all classification is problematic. But that doesn't mean we should be squeamish about classification; it just means being self-consciously critical about the tools we are using. Any given tool provides a particular lens or perspective, and it is important to remember that no tool can ever give you the whole picture. Donna Haraway calls this partial perspective.
With any tool, we need to be concerned about how the tool is used, by whom, and for whom. Chowdhury expects people to assume the tool will be in some sense "neutral", creating a "veneer of objectivity", and she sees the tool as a way of centralizing power. Clearly there are questions about the role of various stakeholders in promoting algorithmic fairness - the article mentions regulators as well as the ACLU - and there are some major concerns that the authors do not address.
Chowdhury's final criticism is that the article "fails to acknowledge historical inequities, institutional injustice, and socially ingrained harm". If we see algorithmic bias merely as a technical problem, we can evaluate the technical merits of auditable AI and acknowledge its potential use despite its clear limitations. And if we see algorithmic bias as an ethical problem, we can look for various ways to "solve" and "eliminate" bias - what @juliapowles calls a "captivating diversion". But clearly that's not the whole story.
Some stakeholders (including the ACLU) may be concerned about historical and social injustice. Others (including the tech firms) are primarily interested in making the algorithms more accurate and powerful. So obviously it matters who controls the auditing tools. (Whom shall the tools serve?)
What algorithms and audits have in common is that they deliver opinions. A second opinion (possibly produced by an auditing algorithm) may sometimes be useful - but only if it is reasonably independent of the first opinion, and doesn't entirely share the same assumptions or perspective. There are codes of ethics for human auditors, so we may want to ask whether automated auditing would be subject to some equivalent ethical code.
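As a toy illustration of why independence matters: if the auditing algorithm shares the auditee's assumptions (here, the same invented zip-code proxy), it will confirm every decision, including the discriminatory ones; an audit built on different evidence at least has a chance of disagreeing. The models and data below are, of course, made up for the example.

```python
# A toy illustration of why a second opinion must be independent. The decision
# model and the "dependent" audit lean on the same invented zip-code proxy,
# so they agree completely - even where both are wrong.
import random

random.seed(1)

def decision(applicant):           # hypothetical system under audit
    return applicant["zip_code"] != "10451"

def dependent_audit(applicant):    # audit that shares the same proxy assumption
    return applicant["zip_code"] != "10451"

def independent_audit(applicant):  # audit based on different evidence
    return applicant["income"] > 30000

applicants = [{"zip_code": random.choice(["10451", "94105"]),
               "income": random.gauss(40000, 15000)} for _ in range(1000)]

agree_dep = sum(decision(a) == dependent_audit(a) for a in applicants) / len(applicants)
agree_ind = sum(decision(a) == independent_audit(a) for a in applicants) / len(applicants)
print(f"agreement with dependent audit:   {agree_dep:.0%}")   # always 100% - adds no information
print(f"agreement with independent audit: {agree_ind:.0%}")
```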
Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)
Oren Etzioni and Michael Li, High-Stakes AI Decisions Need to Be Automatically Audited (Wired, 18 July 2019)
Donna Haraway, Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. In Simians, Cyborgs and Women (Free Association, 1991)
Cathy O'Neil, Weapons of Math Destruction (Crown, 2016)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (Medium, 7 December 2018)
Related posts: Whom Does the Technology Serve? (May 2019), Algorithms and Governmentality (July 2019)