There are two contrasting ways of characterizing these issues. One way is to focus on the use of big data to target individuals with increasingly personalized content, such as precision nudging. Thus mass surveillance provides commercial and governmental organizations with large quantities of personal data, allowing them to make precise calculations concerning individuals, and use these calculations for the purposes of influence and control.
Alternatively, we can look at how big data can be used to control large sets or populations - what Foucault calls governmentality. If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Note 1), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.
Deleuze, however, challenges this dichotomy:
We no longer find ourselves dealing with the mass/individual pair. Individuals have become 'dividuals' and masses, samples, data, markets, or 'banks'.
Foucault's version of Bentham's panopticon is often invoked in discussions of mass surveillance, but what was equally important for Foucault was what he called biopower -
a type of power that presupposed a closely meshed grid of material coercions rather than the physical existence of a sovereign. [Foucault 2003 via Adams]
People used to talk metaphorically about faceless bureaucracy being a machine, but now we have a real machine, performing the same function with much greater efficiency and effectiveness. And, of course, at much greater scale.
The machine tended increasingly to dictate the purpose to be served, and to exclude other more intimate human needs. [Lewis Mumford]
Bureaucracy is usually regarded as a Bad Thing, so it's worth remembering that it is a lot better than some of the alternatives. Bureaucracy should mean you are judged according to an agreed set of criteria, rather than whether someone likes your face or went to the same school as you. Bureaucracy may provide some protection against arbitrary action and certain forms of injustice. And the fact that bureaucracy has sometimes been used by evil regimes for evil purposes isn't sufficient grounds for rejecting all forms of bureaucracy everywhere.
What bureaucracy does do is codify and classify, and this has important implications for discrimination and diversity.
Sometimes discrimination is considered to be a good thing. For example, recruitment should discriminate between those who are qualified to do the job and those who are not, and this can be based either on a subjective judgement or an agreed standard. But even this can be controversial. For example, the College of Policing is implementing a policy that police recruits in England and Wales should be educated to degree level, despite strong objections from the Police Federation.
Other kinds of discrimination such as gender and race are widely disapproved of, and many organizations have an official policy disavowing such discrimination, or affirming a belief in diversity. Despite such policies, however, some unofficial or inadvertent discrimination may often occur, and this can only be discovered and remedied by some form of codification and classification. Thus if campaigners want to show that firms are systematically paying women less than men, they need payroll data classified by gender to prove the point.
Organizations often have a diversity survey as part of their recruitment procedure, so that they can monitor the numbers of recruits by gender, race, religion, sexuality, disability or whatever, thereby detecting any hidden and unintended bias, but of course this depends on people's willingness to place themselves in one of the defined categories. (If everyone ticks the "prefer not to say" box, then the diversity statistics are not going to be very helpful.)
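To make the point concrete, here is a minimal sketch (in Python) of how declared categories make such monitoring possible at all. The records and field names are purely hypothetical; the point is simply that a pay gap or a diversity statistic can only be computed over the categories people have actually declared.

```python
# Hypothetical payroll records - the figures and field names are invented for illustration.
from statistics import median

payroll = [
    {"gender": "female", "salary": 41000},
    {"gender": "male", "salary": 48000},
    {"gender": "female", "salary": 39000},
    {"gender": "male", "salary": 52000},
    {"gender": "prefer not to say", "salary": 45000},
]

def median_salary_by_group(records, attribute):
    """Group records by a declared attribute and return each group's median salary."""
    groups = {}
    for record in records:
        groups.setdefault(record[attribute], []).append(record["salary"])
    return {value: median(salaries) for value, salaries in groups.items()}

print(median_salary_by_group(payroll, "gender"))
# {'female': 40000, 'male': 50000, 'prefer not to say': 45000}
# Every "prefer not to say" response shrinks the evidence base, which is why
# the statistics stop being helpful if everyone ticks that box.
```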
Daugherty, Wilson and Chowdhury call for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren't relevant to the decisions at hand". But there are often other data (such as postcode/zipcode) that are correlated with the attributes you are not supposed to use, and these may serve as accidental proxies, reintroducing discrimination by the back door. The decision-making algorithm may be designed to ignore certain data, based on training data that has been carefully constructed to eliminate certain forms of bias, but perhaps you then need a separate governance algorithm to check for any other correlations.
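What might such a governance algorithm look like? Here is a hedged sketch, assuming a tabular dataset and using Cramér's V as a simple measure of association; the column names and threshold are my own invented examples, not anything from Daugherty, Wilson and Chowdhury.

```python
# A rough sketch of a governance check for accidental proxies: measure how
# strongly each feature the model is allowed to use is associated with a
# protected attribute that is held back for auditing purposes only.
import pandas as pd
from scipy.stats import chi2_contingency

def proxy_strength(df, feature, protected):
    """Cramer's V between a candidate feature and a protected attribute (0 = independent, 1 = perfect proxy)."""
    table = pd.crosstab(df[feature], df[protected])
    chi2, _, _, _ = chi2_contingency(table)
    n = table.to_numpy().sum()
    k = min(table.shape) - 1
    return (chi2 / (n * k)) ** 0.5 if k > 0 else 0.0

def audit_proxies(df, allowed_features, protected, threshold=0.3):
    """Flag allowed features whose association with the protected attribute exceeds the threshold."""
    strengths = {f: proxy_strength(df, f, protected) for f in allowed_features}
    return {f: round(v, 2) for f, v in strengths.items() if v > threshold}

# Hypothetical usage: postcode may turn out to be strongly associated with
# ethnicity even though ethnicity itself has been excluded from the model.
# flagged = audit_proxies(applications, ["postcode", "education", "tenure"], "ethnicity")
```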
Bureaucracy produces lists, and of course the lists can either be wrong or used wrongly. For example, King's College London recently apologized for denying access to selected students during a royal visit.
Big data also codifies and classifies, although much of this is done on inferred categories rather than declared ones. For example, some social media platforms infer gender from someone's speech acts (or what Judith Butler would call performativity). And political views can apparently be inferred from food choice. The fact that these inferences may be inaccurate doesn't stop them being used for targeting purposes, or population control.
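For illustration only, here is a toy sketch of the difference between declared and inferred categories. The inference rule is invented (a crude caricature of the food-choice finding); real platforms use opaque statistical models, but the structural point is the same: the inferred label gets attached to the profile and acted upon, whether or not the person ever declared it and whether or not it is accurate.

```python
def infer_political_leaning(food_choices):
    """Toy inference rule, invented for illustration; real systems use opaque models."""
    score = sum(1 if item in {"organic", "tofu", "quinoa"} else -1 for item in food_choices)
    return {
        "inferred_leaning": "liberal" if score > 0 else "conservative",
        "confidence": abs(score) / max(len(food_choices), 1),
        "declared_leaning": None,  # the person never said anything about politics
    }

profile = infer_political_leaning(["tofu", "quinoa", "steak"])
# The inferred label can now be used for targeting, accurate or not.
```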
Cathy O'Neil's statement that algorithms are "opinions embedded in code" is widely quoted. This may lead people to think that this is only a problem if you disagree with these opinions, and that the main problem with big data and algorithmic intelligence is a lack of perfection. For example, technologies such as affective computing (which claims to detect emotional state) might only be criticized when they fail to deal with ethnic diversity.
And of course technology companies encourage ethics professors to look at their products from this perspective, firstly because they welcome any ideas that would help them make their products more powerful, and secondly because it distracts the professors from the more fundamental question as to whether they should be doing things like facial recognition in the first place. @juliapowles calls this a "captivating diversion".
But a more fundamental question concerns the ethics of codification and classification. Following a detailed investigation of this topic, published under the title Sorting Things Out, Bowker and Star conclude that "all information systems are necessarily suffused with ethical and political values, modulated by local administrative procedures" (p321).
At the end of their book (pp 324-5), they identify three things they want designers and users of information systems to do. (Clearly these things apply just as much to algorithms and big data as to older forms of information system; a rough sketch of how they might be reflected in a data model follows the list.)

"Black boxes are necessary, and not necessarily evil. The moral questions arise when the categories of the powerful become the taken for granted; when policy decisions are layered into inaccessible technological structures; when one group's visibility comes at the expense of another's suffering." (p320)
- Firstly, allow for ambiguity and plurality, permitting multiple definitions across different domains. They call this recognizing the balancing act of classifying.
- Secondly, the sources of the classifications should remain transparent. If the categories are based on some professional opinion, these should be traceable to the profession or discourse or other authority that produced them. They call this rendering voice retrievable.
- And thirdly, awareness of the unclassified or unclassifiable "other". They call this being sensitive to exclusions, and note that "residual categories have their own texture that operates like the silences in a symphony to pattern the visible categories and their boundaries" (p325).
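Here is the promised sketch, in Python, of how these three principles might be reflected in a data model. The class and field names are my own invention, not Bowker and Star's: multiple classifications per record (the balancing act), a recorded source for each classification (voice retrievable), and explicit room for what no scheme captures (sensitivity to exclusions).

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Classification:
    category: str              # the assigned category
    scheme: str                # which classification scheme it comes from (plurality across domains)
    source: str                # the profession, authority or discourse that produced it (voice retrievable)
    declared_by_subject: bool  # declared by the person themselves, or inferred about them?

@dataclass
class Record:
    subject_id: str
    classifications: list[Classification] = field(default_factory=list)
    residual_note: Optional[str] = None  # the unclassified or unclassifiable "other"

# Hypothetical example: the same person classified differently by different schemes,
# with the residual left visible rather than silently dropped.
record = Record(
    subject_id="A123",
    classifications=[
        Classification("nurse", scheme="occupation/professional", source="professional register", declared_by_subject=True),
        Classification("carer", scheme="occupation/census", source="census office", declared_by_subject=False),
    ],
    residual_note="also does unpaid community work that no scheme captures",
)
```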
Note 1: This view is attributed to Bruno Latour by Bowker and Star (1999 p 137). However, although Latour talks about paper-shuffling bureaucrats (1987 pp 254-5), I have been unable to find this particular quote.
Rachel Adams, Michel Foucault: Biopolitics and Biopower (Critical Legal Thinking, 10 May 2017)
Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press 1999).
Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)
Gilles Deleuze, Postscript on the Societies of Control (October, Vol 59, Winter 1992), pp. 3-7
Michel Foucault, ‘Society Must be Defended’ Lecture Series at the Collège de France, 1975-76 (2003) (trans. D Macey)
Maša Galič, Tjerk Timan and Bert-Jaap Koops, Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation (Philos. Technol. 30:9–37, 2017)
Bruno Latour, Science in Action (Harvard University Press 1987)
Lewis Mumford, The Myth of the Machine (1967)
Samantha Murphy, Political Ideology Linked to Food Choices (LiveScience, 24 May 2011)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
Antoinette Rouvroy and Thomas Berns (translated by Elizabeth Libbrecht), Algorithmic governmentality and prospects of emancipation (Réseaux No 177, 2013)
BBC News, All officers 'should have degrees', says College of Policing (13 November 2015), King's College London sorry over royal visit student bans (4 July 2019)
Related posts
Quotes on Bureaucracy (June 2003), Crude Categories (August 2009), What is the Purpose of Diversity? (January 2010), Affective Computing (March 2019), The Game of Wits between Technologists and Ethics Professors (June 2019), Algorithms and Auditability (July 2019), On the Performativity of Data (August 2021), The Corporate Sorting Hat (September 2021)
Updated 16 July 2019