@DietrichEpp disagreed completely.
If you want a machine to learn then you have to understand the difference between data and knowledge. Stats classes don’t normally cover this.
So there are at least two questions here. Firstly, how much do you really have to understand in order to build a machine? As I see it, getting a machine to do something (including learning) counts as engineering rather than science. Engineering requires two kinds of knowledge - practical knowledge (how to reliably, efficiently and safely produce a given outcome) and socio-ethical knowledge (whom shall the technology serve). Engineers are generally not expected to fully understand the scientific principles that underpin all the components, tools and design heuristics that they use, but they have a professional and ethical responsibility to have some awareness of the limitations of these tools and the potential consequences of their work.
In his book on Design Thinking, Peter Rowe links the concept of design heuristic to Gadamer's concept of enabling prejudice. Engineers would not be able to function without taking some things for granted.
So the second question is - which things can/should an engineer trust? Most computer engineers will be familiar with the phrase Garbage In Garbage Out, and this surely entails a professional scepticism about the quality of any input dataset. Meanwhile, statisticians are trained to recognize a variety of potential causes of bias. (Some of these are listed in the Wikipedia entry on statistical bias.) Most of the statistics courses I looked at on Coursera included material on inference. (Okay, I only looked at the first dozen or so - small sample.)
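To make the GIGO point concrete (this example is my own, not taken from any of the courses or posts mentioned), here is a minimal Python sketch of how a biased collection step distorts whatever is estimated downstream; the population figures and the sampling rule are entirely invented.

```python
# Hypothetical population: a low-income majority and a high-income minority.
population = [30_000] * 900 + [200_000] * 100

true_mean = sum(population) / len(population)

# A biased collection step: suppose the survey only reaches respondents
# below 100k (for example, via a channel that wealthier people ignore).
sample = [x for x in population if x < 100_000]
biased_mean = sum(sample) / len(sample)

print(f"true mean:   {true_mean:,.0f}")    # 47,000
print(f"biased mean: {biased_mean:,.0f}")  # 30,000

# Any model trained on `sample` inherits this distortion, no matter how
# sophisticated the downstream learning algorithm is: garbage in, garbage out.
```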
Looking for relevant material to support my position, I found some good comments by Ariel Guersenzvaig, reported by Derek du Preez.
Unbiased data is an oxymoron. Data is biased from the start. You have to choose categories in order to collect the data. Sometimes even if you don’t choose the categories, they are there ad hoc. Linguists, sociologists and historians of technology can teach us that categories reveal a lot about the mind, about how people think about stuff, about society.
And arriving too late for this Twitter discussion, two more stories of dataset bias were published in the last few days. Firstly, following an investigation by Vinay Prabhu and Abeba Birhane, MIT has withdrawn Tiny Images, a very large image dataset that has been widely used for machine learning, and asked researchers and developers to delete it. And secondly, FiveThirtyEight has published an excellent essay by Mimi Ọnụọha on the disconnect between data collection and meaningful change, arguing that it is impossible to collect enough data to convince people of structural racism.
Prabhu and Birhane detected significant quantities of obscene and offensively labelled material embedded in image datasets, which could easily teach a machine learning algorithm to deliver sexist or racist outcomes. They acknowledge the efforts made in the curation of image datasets, but insist that more could have been done, and will need to be done in future, to address some serious epistemological and ethical questions. With hindsight, it is possible to see the naive optimism of
mapping out the entire world of objects in a rather different light.
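The kind of curation that Prabhu and Birhane call for could include automated audits of dataset labels. As a hedged illustration only - the blocklist, the label format and the function below are hypothetical, not taken from their paper - here is a minimal Python sketch of such a check:

```python
from collections import Counter

# A deliberately tiny, hypothetical blocklist of offensive label terms.
# Real audits use much larger curated word lists plus image-level checks,
# not just string matching against class labels.
BLOCKLIST = {"slur_a", "slur_b", "offensive_term"}

def audit_labels(labels):
    """Count how often blocklisted terms appear among class labels.

    `labels` is assumed to be an iterable of label strings, e.g. the
    synset words used to name classes in a large image dataset.
    """
    hits = Counter()
    for label in labels:
        for token in label.lower().replace("-", " ").split():
            if token in BLOCKLIST:
                hits[token] += 1
    return hits

# Example with made-up labels: a careful curation pipeline would fail
# loudly here instead of silently training on whatever the crawl produced.
example_labels = ["tabby cat", "offensive_term person", "mountain bike"]
print(audit_labels(example_labels))  # Counter({'offensive_term': 1})
```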
Prabhu and Birhane mention Wittgenstein's remark in the Tractatus that ethics and aesthetics are one and the same. This thought brings me to the amazing work of Mimi Ọnụọha.
Classification.01 is a sculpture that consists of two neon brackets. When more than one viewer approaches to look at the piece, the brackets use a nearby camera to decide whether or not the two viewers have been classified as similar, according to a variety of algorithmic measures. The brackets only light up if the terms of classification have been met. The brackets do not share the code or the rationale behind the classification of the viewers. Just as with many of our technological systems, the viewers are left to determine on their own why they have been grouped, a lingering reminder that no matter how much our machines classify, ultimately classification is also a human process.
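The piece deliberately withholds its code, so any concrete example can only be a guess. Purely to illustrate what an "algorithmic measure" of similarity between two viewers might look like, here is a hypothetical Python sketch using cosine similarity between feature vectors and an arbitrary threshold; none of this is Ọnụọha's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def classified_as_similar(embedding_a, embedding_b, threshold=0.8):
    """Return True if two viewers count as 'similar' under this one measure.

    The embeddings would come from some feature extractor applied to the
    camera image; the threshold is arbitrary, which is rather the point.
    """
    return cosine_similarity(embedding_a, embedding_b) >= threshold

# Two made-up viewer embeddings.
viewer_1 = [0.9, 0.2, 0.1]
viewer_2 = [0.8, 0.3, 0.15]
print(classified_as_similar(viewer_1, viewer_2))  # True, for this threshold
```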
In summary, there are some critical questions about data and knowledge that affect the practice of machine learning, and some critical insights from artists and sociologists. As for philosophy, famous philosophers from Plato to Wittgenstein have spent 2500 years exploring a broad range of abstract ideas about the relationship between data and knowledge, so you can probably find a plausible argument to support any position you wish to adopt. This is hardly going to provide any consistent guidance for machine learning.
Update
Thanks to Jag Bhalla for drawing my attention to @BioengineerGM's article on accountability in models. So not just GIGO (Garbage-In-Garbage-Out) but also AIAO (Accountability-In-Accountability-Out).
Guru Madhavan, Do-It-Yourself Pandemic: It’s Time for Accountability in Models (Issues in Science and Technology, 1 July 2020)
Mimi Ọnụọha, When Proof Is Not Enough (FiveThirtyEight, 1 July 2020)
Vinay Uday Prabhu and Abeba Birhane, Large Image Datasets: A Pyrrhic Win for Computer Vision? (Preprint, 1 July 2020)
Derek du Preez, AI and ethics - ‘Unbiased data is an oxymoron’ (Diginomica, 31 October 2019)
Katyanna Quach, MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs - top uni takes action after El Reg highlights concerns by academics (The Register, 1 July 2020)
Peter Rowe, Design Thinking (MIT Press 1987)
Stanford Encyclopedia of Philosophy: Gadamer and the Positivity of Prejudice
Wikipedia: Algorithmic bias, All models are wrong, Bias (statistics), Garbage in garbage out
Further points and links in the following posts: Faithful Representation (August 2008), From Sedimented Principles to Enabling Prejudices (March 2013), Whom does the technology serve? (May 2019), Algorithms and Auditability (July 2019), Algorithms and Governmentality (July 2019), Naive Epistemology (July 2020), Mapping out the entire world of objects (July 2020)