Thursday, August 8, 2019

Automation Ethics

Many people start their journey into the ethics of automation and robotics by looking at Asimov's Laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm (etc. etc.)
As I've said before, I believe Asimov's Laws are problematic as a basis for ethical principles. Given that Asimov's stories demonstrate numerous ways in which the Laws don't actually work as intended, I have always regarded his work as satirical rather than prescriptive.

While we usually don't want robots to harm people (although some people may argue for this principle to be partially suspended in the event of a "just war"), notions of harm are not straightforward. For example, a robot surgeon would have to cut the patient (minor harm) in order to perform an essential operation (major benefit). How essential or beneficial does the operation need to be, in order to justify it? Is the patient's consent sufficient?

Harm can be individual or collective. One potential harm from automation is that even if it creates wealth overall, it may shift wealth and employment opportunities away from some people, at least in the short term. But perhaps this can be justified in terms of the broader social benefit, or in terms of technological inevitability.

And besides the avoidance of (unnecessary) harm, there are some other principles to think about.
  • Human-centred work - Humans should be supported by robots, not the other way around. 
  • Whole system solutions - Design the whole system or process, don’t just optimize a robot as a single component.  
  • Self-correcting - Ensure that the system is capable of detecting and learning from errors. 
  • Open - Provide space for learning and future disruption. Don't just pave the cow-paths.
  • Transparent - The internal state and decision-making processes of a robot are accessible to (some) users.  

Let's look at each of these in more detail.

Human-Centred Work

Humans should be supported by robots, not the other way around. So we don't just leave humans to handle the bits and pieces that can't be automated, but try to design coherent and meaningful jobs for humans, with robots to make them more powerful, efficient, and effective.

Organization theorists have identified a number of job characteristics associated with job satisfaction, including skill variety, task identity, task significance, autonomy and feedback. So we should be able to consider how a given automation project affects these characteristics.
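
To make this more concrete, here is a minimal sketch (using made-up ratings rather than real survey data) of how one might compare a job before and after automation, using the five core characteristics from Hackman and Oldham's Job Characteristics Model and their motivating potential score.

```python
# A minimal sketch, assuming illustrative 1-7 ratings rather than real survey data:
# compare a job before and after automation using the five core job characteristics
# from Hackman and Oldham's Job Characteristics Model.

def motivating_potential_score(skill_variety, task_identity, task_significance,
                               autonomy, feedback):
    """Hackman-Oldham Motivating Potential Score."""
    core = (skill_variety + task_identity + task_significance) / 3
    return core * autonomy * feedback

before = motivating_potential_score(5, 5, 6, 5, 4)  # hypothetical job today
after = motivating_potential_score(2, 3, 6, 3, 6)   # hypothetical job after automation

print(f"MPS before automation: {before:.0f}")
print(f"MPS after automation:  {after:.0f}")
if after < before:
    print("Warning: the automated design narrows the human role - reconsider task allocation.")
```

A fall in the score doesn't settle anything by itself, but it gives the automation project a prompt to redesign the human role rather than simply accept whatever tasks are left over.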

Whole Systems

When we take an architectural approach to planning and designing new technology, we can look at the whole system rather than merely trying to optimize a single robotic component.
  • Look across the business and technology domains (e.g. POLDAT).
  • Look at the total impact of a collection of automated devices, not at each device separately.
  • Look at this as a sociotechnical system, involving humans and robots collaborating on the business process.


Self-Correction

Ensure that the (whole) system is capable of detecting and learning from errors (including near misses).

This typically requires a multi-loop learning process. The machines may handle the inner learning loops, but human intervention will be necessary for the outer loops.
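
As an illustration, here is a minimal sketch of such a two-loop structure. The names, thresholds and queue mechanism are my own assumptions, not a standard design: the machine corrects small deviations itself (inner loop), while errors and near misses are queued for human review so that people can change the process itself (outer loop).

```python
# A minimal sketch of a two-loop structure (names and thresholds are illustrative).
# Inner loop: the machine corrects small deviations itself.
# Outer loop: errors and near misses are queued for human review, so that
# people can change the process rather than just the parameters.

NEAR_MISS_FRACTION = 0.8  # deviations above 80% of tolerance count as near misses

def handle_measurement(measured, target, tolerance, review_queue):
    deviation = abs(measured - target)
    if deviation <= tolerance:
        if deviation >= NEAR_MISS_FRACTION * tolerance:
            review_queue.append(("near miss", measured))  # input to the outer loop
        return "auto-corrected"                            # inner loop carries on
    review_queue.append(("error", measured))
    return "escalated"                                     # human intervention needed

review_queue = []
for measured in [10.1, 10.9, 12.5]:
    print(measured, handle_measurement(measured, target=10.0, tolerance=1.0,
                                       review_queue=review_queue))
print("for human review:", review_queue)
```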


Openness

Okay, so do you improve the process first and then automate it, or do you automate first? If you search the Internet for "paving the cow-paths", you can find strong opinions on both sides of this argument. But the important point here is that automation shouldn't close down all possibility of future change. Paving the cow-paths may be okay, but not just paving the cow-paths and thinking that's the end of the matter.

In some contexts, this may mean leaving a small proportion of cases to be handled manually, so that human know-how is not completely lost. (Lewis Mumford argued that it is generally beneficial to retain some "craft" production alongside automated "factory" production, as a means to further insight, discovery and invention.)
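
A crude way to implement this is to route a small, random fraction of cases to manual handling. The sketch below assumes a hypothetical five per cent figure; in practice the proportion, and how the cases are selected, would need more careful thought.

```python
# A minimal sketch: keep a small, random fraction of cases manual so that human
# know-how continues to be exercised. The 5% figure is an illustrative assumption.
import random

MANUAL_FRACTION = 0.05

def route(case_id):
    if random.random() < MANUAL_FRACTION:
        return "manual"     # preserves craft knowledge, may surface improvement ideas
    return "automated"

routing = {case_id: route(case_id) for case_id in range(1000)}
print(sum(1 for r in routing.values() if r == "manual"), "of 1000 cases kept manual")
```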


Transparency

The internal state and decision-making processes of a robot should be accessible to (some) users. Provide ways to monitor and explain what the robots are up to, and an audit trail in the event of something going wrong.
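
As a minimal sketch of what an audit trail might look like, the code below appends one record per automated decision, with its inputs, outcome, model version and a short explanation. The field names and file format are illustrative assumptions, not any particular standard.

```python
# A minimal sketch of an audit trail, assuming illustrative field names and a
# simple append-only log file rather than any particular standard.
import json
import datetime

def log_decision(logfile, case_id, inputs, decision, model_version, explanation):
    record = {
        "timestamp": datetime.datetime.utcnow().isoformat(),
        "case_id": case_id,
        "inputs": inputs,
        "decision": decision,
        "model_version": model_version,
        "explanation": explanation,  # e.g. the rule that fired, or the top features
    }
    logfile.write(json.dumps(record) + "\n")

with open("decisions.log", "a") as logfile:
    log_decision(logfile, case_id="12345",
                 inputs={"amount": 250, "history": "good"},
                 decision="approve",
                 model_version="2019-08-01",
                 explanation="rule: low amount and good history")
```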

Related posts

How Soon Might Humans Be Replaced At Work? (November 2015), Could we switch the algorithms off? (July 2017), How many ethical principles? (April 2019), Responsible Transparency (April 2019), Process Automation and Intelligence (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)


Jim Highsmith, Paving Cow Paths (21 June 2005)


Job Characteristic Theory
Just War Theory

Monday, July 22, 2019

Algorithms and Auditability

@ruchowdh objects to an article by @etzioni and @tianhuil calling for algorithms to audit algorithms. The original article makes the following points.
  • Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns. 
  • High-fidelity explanations of most AI decisions are not currently possible. The challenges of explainable AI are formidable.  
  • Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations.
  • Auditable AI is not a panacea. But auditable AI can increase transparency and combat bias. 
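
To illustrate what automated probing for bias might look like in the simplest case, here is a toy sketch that compares selection rates across groups and flags a disparity using a crude four-fifths rule of thumb. This is my own illustration of the general idea, not the method proposed in the article.

```python
# A toy sketch of one kind of automated audit (my illustration, not the authors'
# method): probe a decision function with test cases, compare selection rates
# across groups, and flag a disparity using a crude four-fifths rule of thumb.

def audit_selection_rates(decide, cases):
    """cases: iterable of (features, group); decide: function returning True/False."""
    totals, selected = {}, {}
    for features, group in cases:
        totals[group] = totals.get(group, 0) + 1
        if decide(features):
            selected[group] = selected.get(group, 0) + 1
    rates = {g: selected.get(g, 0) / totals[g] for g in totals}
    flagged = max(rates.values()) > 0 and min(rates.values()) / max(rates.values()) < 0.8
    return rates, flagged

# Hypothetical usage with a toy decision function and made-up cases
decide = lambda f: f["score"] > 0.5
cases = [({"score": 0.7}, "A"), ({"score": 0.4}, "A"),
         ({"score": 0.6}, "B"), ({"score": 0.3}, "B"), ({"score": 0.2}, "B")]
rates, flagged = audit_selection_rates(decide, cases)
print(rates, "disparity flagged:", flagged)
```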

Rumman Chowdhury points out some of the potential imperfections of a system that relied on automated auditing, and does not like the idea that automated auditing might be an acceptable substitute for other forms of governance. Such a suggestion is not made explicitly in the article, and I haven't seen any evidence that this was the authors' intention. However, there is always a risk that people might latch onto a technical fix without understanding its limitations, and this risk is perhaps what underlies her critique.

In a recent paper, she calls for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But how can people verify that systems are not only ignoring these data, but also being cautious about other data that may serve as proxies for race and class, as discussed by Cathy O'Neil? How can they prove that a system is systematically unfair without having some classification data of their own?

And yes, we know that all classification is problematic. But that doesn't mean being squeamish about classification, it just means being self-consciously critical about the tools you are using. Any given tool provides a particular lens or perspective, and it is important to remember that no tool can ever give you the whole picture. Donna Haraway calls this partial perspective.

With any tool, we need to be concerned about how the tool is used, by whom, and for whom. Chowdhury expects people to assume the tool will be in some sense "neutral", creating a "veneer of objectivity"; and she sees the tool as a way of centralizing power. Clearly there are some questions about the role of various stakeholders in promoting algorithmic fairness - the article mentions regulators as well as the ACLU - and there are some major concerns that the authors don't address in the article.

Chowdhury's final criticism is that the article "fails to acknowledge historical inequities, institutional injustice, and socially ingrained harm". If we see algorithmic bias as merely a technical problem, then this leads us to evaluate the technical merits of auditable AI, and acknowledge its potential use despite its clear limitations. And if we see algorithmic bias as an ethical problem, then we can look for various ways to "solve" and "eliminate" bias. @juliapowles calls this a "captivating diversion". But clearly that's not the whole story.

Some stakeholders (including the ACLU) may be concerned about historical and social injustice. Others (including the tech firms) are primarily interested in making the algorithms more accurate and powerful. So obviously it matters who controls the auditing tools. (Whom shall the tools serve?)

What algorithms and audits have in common is that they deliver opinions. A second opinion (possibly based on the auditing algorithm) may sometimes be useful - but only if it is reasonably independent of the first opinion, and doesn't entirely share the same assumptions or perspective. There are codes of ethics for human auditors, so we may want to ask whether automated auditing would be subject to some ethical code.
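
One crude way to test that independence is simply to measure how often the two opinions agree on the same cases; near-total agreement suggests the second opinion shares the first one's assumptions and adds little. A toy sketch, with hypothetical decision functions, follows.

```python
# A toy sketch (with hypothetical decision functions): measure how often two
# "opinions" agree on the same cases. Near-total agreement suggests the second
# opinion adds little, because it shares the first one's assumptions.

def agreement_rate(decide_a, decide_b, cases):
    return sum(1 for c in cases if decide_a(c) == decide_b(c)) / len(cases)

decide_a = lambda c: c["score"] > 0.5
decide_b = lambda c: c["score"] + 0.001 * c["age"] > 0.5  # a slight variant, not independent
cases = [{"score": s / 10, "age": 30 + s} for s in range(10)]
print(f"The two opinions agree on {agreement_rate(decide_a, decide_b, cases):.0%} of cases")
```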

Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)

Oren Etzioni and Michael Li, High-Stakes AI Decisions Need to Be Automatically Audited (Wired, 18 July 2019)

Donna Haraway, Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. In Simians, Cyborgs and Women (Free Association, 1991)

Cathy O'Neil, Weapons of Math Destruction (Crown 2016)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Related posts: Whom Does the Technology Serve? (May 2019), Algorithms and Governmentality (July 2019)

Saturday, July 13, 2019

Algorithms and Governmentality

In the corner of the Internet where I hang out, it is reasonably well understood that big data raises a number of ethical issues, including data ownership and privacy.

There are two contrasting ways of characterizing these issues. One way is to focus on the use of big data to target individuals with increasingly personalized content, such as precision nudging. Thus mass surveillance provides commercial and governmental organizations with large quantities of personal data, allowing them to make precise calculations concerning individuals, and use these calculations for the purposes of influence and control.

Alternatively, we can look at how big data can be used to control large sets or populations - what Foucault calls governmentality. If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Note 1), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.

Deleuze, however, challenges this dichotomy.
"We no longer find ourselves dealing with the mass/individual pair. Individuals have become 'dividuals' and masses, samples, data, markets, or 'banks'."

Foucault's version of Bentham's panopticon is often invoked in discussions of mass surveillance, but what was equally important for Foucault was what he called biopower - "a type of power that presupposed a closely meshed grid of material coercions rather than the physical existence of a sovereign". [Foucault 2003 via Adams]

People used to talk metaphorically about faceless bureaucracy being a "machine", but now we have a real machine, performing the same function with much greater efficiency and effectiveness. And of course, scale.
"The machine tended increasingly to dictate the purpose to be served, and to exclude other more intimate human needs." (Lewis Mumford)

Bureaucracy is usually regarded as a Bad Thing, so it's worth remembering that it is a lot better than some of the alternatives. Bureaucracy should mean you are judged according to an agreed set of criteria, rather than whether someone likes your face or went to the same school as you. Bureaucracy may provide some protection against arbitrary action and certain forms of injustice. And the fact that bureaucracy has sometimes been used by evil regimes for evil purposes isn't sufficient grounds for rejecting all forms of bureaucracy everywhere.

What bureaucracy does do is codify and classify, and this has important implications for discrimination and diversity.

Sometimes discrimination is considered to be a good thing. For example, recruitment should discriminate between those who are qualified to do the job and those who are not, and this can be based either on a subjective judgement or an agreed standard. But even this can be controversial. For example, the College of Policing is implementing a policy that police recruits in England and Wales should be educated to degree level, despite strong objections from the Police Federation.

Other kinds of discrimination such as gender and race are widely disapproved of, and many organizations have an official policy disavowing such discrimination, or affirming a belief in diversity. Despite such policies, however, some unofficial or inadvertent discrimination may often occur, and this can only be discovered and remedied by some form of codification and classification. Thus if campaigners want to show that firms are systematically paying women less than men, they need payroll data classified by gender to prove the point.

Organizations often have a diversity survey as part of their recruitment procedure, so that they can monitor the numbers of recruits by gender, race, religion, sexuality, disability or whatever, thereby detecting any hidden and unintended bias, but of course this depends on people's willingness to place themselves in one of the defined categories. (If everyone ticks the "prefer not to say" box, then the diversity statistics are not going to be very helpful.)

Daugherty, Wilson and Chowdhury call for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But there are often other data (such as postcode/zipcode) that are correlated with the attributes you are not supposed to use, and these may serve as accidental proxies, reintroducing discrimination by the back door. The decision-making algorithm may be designed to ignore certain data, based on training data that has been carefully constructed to eliminate certain forms of bias, but perhaps you then need a separate governance algorithm to check for any other correlations.
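
Here is a toy sketch of what such a governance check might look like: for each candidate feature, measure how well it predicts the protected attribute on historical records. The field names and the crude association score are illustrative assumptions; a real check would use proper statistical measures.

```python
# A toy sketch of such a governance check (illustrative field names, crude
# statistics): see how well a candidate feature predicts a protected attribute.
# A high score suggests the feature may act as a proxy.
from collections import Counter

def proxy_score(records, feature, protected):
    """Fraction of records whose protected attribute matches the majority value
    for their feature value - a rough measure of association."""
    by_value = {}
    for r in records:
        by_value.setdefault(r[feature], []).append(r[protected])
    correct = sum(Counter(values).most_common(1)[0][1] for values in by_value.values())
    return correct / len(records)

records = [
    {"postcode": "AB1", "gender": "F"}, {"postcode": "AB1", "gender": "F"},
    {"postcode": "CD2", "gender": "M"}, {"postcode": "CD2", "gender": "F"},
]
print(f"postcode predicts gender for {proxy_score(records, 'postcode', 'gender'):.0%} of records")
```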

Bureaucracy produces lists, and of course the lists can either be wrong or used wrongly. For example, King's College London recently apologized for denying access to selected students during a royal visit.

Big data also codifies and classifies, although much of this is done on inferred categories rather than declared ones. For example, some social media platforms infer gender from someone's speech acts (or what Judith Butler would call performativity). And political views can apparently be inferred from food choice. The fact that these inferences may be inaccurate doesn't stop them being used for targeting purposes, or population control.

Cathy O'Neil's statement that algorithms are "opinions embedded in code" is widely quoted. This may lead people to think that this is only a problem if you disagree with these opinions, and that the main problem with big data and algorithmic intelligence is a lack of perfection. And of course technology companies encourage ethics professors to look at their products from this perspective, firstly because they welcome any ideas that would help them make their products more powerful, and secondly because it distracts the professors from the more fundamental question as to whether they should be doing things like facial recognition in the first place. @juliapowles calls this a "captivating diversion".

But a more fundamental question concerns the ethics of codification and classification. Following a detailed investigation of this topic, published under the title Sorting Things Out, Bowker and Star conclude that "all information systems are necessarily suffused with ethical and political values, modulated by local administrative procedures" (p321).
"Black boxes are necessary, and not necessarily evil. The moral questions arise when the categories of the powerful become the taken for granted; when policy decisions are layered into inaccessible technological structures; when one group's visibility comes at the expense of another's suffering."  (p320)
At the end of their book (pp324-5), they identify three things they want designers and users of information systems to do. (Clearly these things apply just as much to algorithms and big data as to older forms of information system.)
  • Firstly, allow for ambiguity and plurality, allowing for multiple definitions across different domains. They call this recognizing the balancing act of classifying.
  • Secondly, the sources of the classifications should remain transparent. If the categories are based on some professional opinion, these should be traceable to the profession or discourse or other authority that produced them. They call this rendering voice retrievable.
  • And thirdly, awareness of the unclassified or unclassifiable "other". They call this being sensitive to exclusions, and note that "residual categories have their own texture that operates like the silences in a symphony to pattern the visible categories and their boundaries" (p325).

Note 1: This view is attributed to Bruno Latour by Bowker and Star (1999 p 137). However, although Latour talks about paper-shuffling bureaucrats (1987 pp 254-5), I have been unable to find this particular quote.

Rachel Adams, Michel Foucault: Biopolitics and Biopower (Critical Legal Thinking, 10 May 2017)

Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press 1999).

Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)

Gilles Deleuze, Postscript on the Societies of Control (October, Vol 59, Winter 1992), pp. 3-7

Michel Foucault, ‘Society Must be Defended’ Lecture Series at the Collège de France, 1975-76 (2003) (trans. D Macey)

Maša Galič, Tjerk Timan and Bert-Jaap Koops, Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation (Philos. Technol. 30:9–37, 2017)

Bruno Latour, Science in Action (Harvard University Press 1987)

Lewis Mumford, The Myth of the Machine (1967)

Samantha Murphy, Political Ideology Linked to Food Choices (LiveScience, 24 May 2011)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Antoinette Rouvroy and Thomas Berns (translated by Elizabeth Libbrecht), Algorithmic governmentality and prospects of emancipation (Réseaux No 177, 2013)

BBC News, All officers 'should have degrees', says College of Policing (13 November 2015), King's College London sorry over royal visit student bans (4 July 2019)

Related posts

Quotes on Bureaucracy (June 2003), Crude Categories (August 2009), What is the Purpose of Diversity? (January 2010), The Game of Wits between Technologists and Ethics Professors (June 2019), Algorithms and Auditability (July 2019)

Updated 16 July 2019

Friday, June 21, 2019

With Strings Attached

@rachelcoldicutt notes that "Google Docs new grammar suggestion tool doesn’t like the word 'funding' and prefers 'investment' ".

Many business people have an accounting mindset, in which all expenditure must be justified in terms of benefit to the organization, measured in financial terms. When they hear the word "investment", they hold their breath until they hear the word "return".

So when Big Tech funds the debate on AI ethics (Oscar Williams, New Statesman, 6 June 2019), can we infer that Big Tech sees this as an "investment", to which it is entitled to a return or payback?

Related post: The Game of Wits Between Technologists and Ethics Professors (June 2019)

Saturday, June 8, 2019

The Game of Wits between Technologists and Ethics Professors

What does #TechnologyEthics look like from the viewpoint of your average ethics professor? 

Not surprisingly, many ethics professors believe strongly in the value of ethics education, and advocate ethics awareness training for business managers and engineers. Provided by people like themselves, obviously.

There is a common pattern among technologists and would-be entrepreneurs to first come up with a "solution", find areas where the solution might apply, and then produce self-interested arguments to explain why the solution matches the problem. Obviously there is a danger of confirmation bias here. Proposing ethics education as a solution for an ill-defined problem space looks suspiciously like the same pattern. Ethicists should understand why it is important to explain what this education achieves, and how exactly it solves the problem.

Please note that I am not arguing against the value of ethics education and training as such, merely complaining that some of the programmes seem to involve little more than meandering through a randomly chosen reading list. @ruchowdh recently posted a particularly egregious example - see below.

Ethics professors may also believe that people with strong ethical awareness, such as themselves, can play a useful role in technology governance - for example, participating in advisory councils.

Some technology companies may choose to humour these academics, engaging them as a PR exercise (ethics washing) and generously funding their research. Fortunately, many of them lack deep understanding of business organizations and of technology, so there is little risk of them causing any serious challenge or embarrassment to these companies.

Professors are always attracted to the kind of work that lends itself to peer-reviewed articles in leading journals. So it is fairly easy to keep their attention focused on theoretically fascinating questions with little or no practical relevance, such as the Trolley Problem.

Alternatively, they can be engaged to try and "fix" problems with real practical relevance, such as algorithmic bias. @juliapowles calls this a "captivating diversion", distracting academics from the more fundamental question, whether the algorithm should be built at all.

It might be useful for these ethics professors to have deeper knowledge of technology and business, in their social and historical context, enabling them to ask more searching and more relevant questions. (Although some ethics experts have computer science degrees or similar, computer science generally teaches people about specific technologies, not about Technology.) 

But if only a minority of ethics professors possess sufficient knowledge and experience, these will be overlooked for the plum advisory jobs. I therefore advocate compulsory technology awareness training for ethics professors, especially "prominent" ones. Provided by people like myself, obviously.

Stephanie Burns, Solution Looking For A Problem (Forbes, 28 May 2019)

Casey Fiesler, Tech Ethics Curricula: A Collection of Syllabi (5 July 2018), What Our Tech Ethics Crisis Says About the State of Computer Science Education (5 December 2018)

Mark Graban, Cases of Technology “Solutions” Looking for a Problem? (26 January 2011)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Oscar Williams, How Big Tech funds the debate on AI ethics (New Statesman, 6 June 2019)

Related posts:

Leadership and Governance (May 2019), Selected Reading List - Science and Technology Studies (June 2019), With Strings Attached (June 2019)

Updated 21 June 2019

Tuesday, May 28, 2019

Five Elements of Responsibility by Design

I have been developing an approach to #TechnologyEthics, which I call #ResponsibilityByDesign. It is based on the five elements of #VPECT. Let me start with a high-level summary before diving into some of the detail.

Values
  • Why does ethics matter?
  • What outcomes for whom?

Policies
  • Principles and practices of technology ethics
  • Formal codes of practice, regulation, etc.

Event-Driven (Activity Viewpoint)
  • Effective and appropriate action at different points: planning; risk assessment; design; verification, validation and test; deployment; operation; incident management; retirement. (Also known as the Process Viewpoint). 

Content (Knowledge Viewpoint)
  • What matters from an ethical point of view? What issues do we need to pay attention to?
  • Where is the body of knowledge and evidence that we can reference?

Trust (Responsibility Viewpoint)
  • Transparency and governance
  • Responsibility, Authority, Expertise, Work (RAEW)

Concerning technology ethics, there is a lot of recent published material on each of these elements separately, but I have not yet found much work that puts them together in a satisfactory way. Many working groups concentrate on a single element - for example, principles or transparency. And even when experts link multiple elements, the logical connections aren't always spelled out.

At the time of writing this post (May 2019), I haven't yet fully worked out how to join these elements either, and I shall welcome constructive feedback from readers and pointers to good work elsewhere. I am also keen to find opportunities to trial these ideas on real projects.

Related Posts

Responsibility by Design (June 2018) What is Responsibility by Design (October 2018) Why Responsibility by Design Now? (October 2018)

Sunday, May 19, 2019

The Nudge as a Speech Act

As I said in my previous post, I don't think we can start to think about the ethics of technology nudges without recognizing the complexity of real-world nudges. So in this post, I shall look at how nudges are communicated in the real world, before considering what their artificial analogues might look like.

Once upon a time, nudges were physical rather than verbal - a push on the shoulder perhaps, or a dig in the ribs with an elbow. The meaning was elliptical and depended almost entirely on context. "Nudge nudge, wink wink", as Monty Python used to say.

Even technologically mediated nudges can sometimes be physical, or what we should probably call haptic. For example, the fitness band that vibrates when it thinks you have been sitting for too long.

But many of the acts we now think of as nudges are delivered verbally, as some kind of speech act. But which kind?

The most obvious kind of nudge is a direct suggestion, which may take the form of a weak command. ("Try and eat a little now.") But nudges can also take other illocutionary forms, including questions ("Don't you think the sun is very hot here?") and statements / predictions ("You will find that new nose of yours very useful to spank people with.").

(Readers familiar with Kipling may recognize my examples as the nudges given by the Bi-Coloured-Python-Rock-Snake to the Elephant's Child.)

The force of a suggestion may depend on context and tone of voice. (A more systematic analysis of what philosophers call illocutionary force can be found in the Stanford Encyclopedia of Philosophy, based on Searle and Vanderveken 1985.)

@tonyjoyce raises a good point about tone of voice in electronic messages. Traditionally robots don't do tone of voice, and when a human being talks in a boring monotone we may describe their speech as robotic. But I can't see any reason why robots couldn't be programmed with more varied speech patterns, including tonality, if their designers saw the value of this.

Meanwhile, we already get some differentiation from electronic communications. For example, I should expect an electronic announcement to "LEAVE THE BUILDING IMMEDIATELY" to have a tone of voice that conveys urgency, and we might think it is inappropriate or even unethical to use the same tone of voice for selling candy. We might put this together with other attention-seeking devices, such as flashing red text. The people who design clickbait clearly understand illocutionary force (even if they aren't familiar with the term).

A speech act can also gain force by being associated with action. If I promise to donate money to a given charity, this may nudge other people to do the same; but if they see me actually putting the money in the tin, the nudge might be much stronger. But then the nudge might be just as strong if I just put the money in the tin without saying anything, as long as everyone sees me do it. The important point is that some communication takes place, whether verbal or non-verbal, and this returns us to something closer to the original concept of nudge.

From an ethical point of view, there are particular concerns about unobtrusive or subliminal nudges. Yeung has introduced the concept of the Hypernudge, which combines three qualities: nimble, unobtrusive and highly potent. I share her concerns about this combination, but I think it is helpful to deal with these three qualities separately, before looking at the additional problems that may arise when they are combined.

Proponents of the nudge sometimes try to distinguish between unobtrusive (acceptable) and subliminal (unacceptable), but this distinction may be hard to sustain, and many people quote Luc Bovens' observation that nudges "typically work better in the dark". See also Baldwin.

I'm sure there's more to say on this topic, so I may update this post later. Relevant comments always welcome.

Robert Baldwin, From regulation to behaviour change: giving nudge the third degree (The Modern Law Review 77/6, 2014) pp 831-857

Luc Bovens, The Ethics of Nudge. In Mats J. Hansson and Till Grüne-Yanoff (eds.), Preference Change: Approaches from Philosophy, Economics and Psychology. (Berlin: Springer, 2008) pp. 207-20

John Danaher, Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy (Institute for Ethics and Emerging Technologies, 17 January 2017)

J. Searle and D. Vanderveken, Foundations of Illocutionary Logic (Cambridge: Cambridge University Press, 1985)

Karen Yeung, ‘Hypernudge’: Big Data as a Mode of Regulation by Design (Information, Communication and Society (2016) 1,19; TLI Think! Paper 28/2016)

Stanford Encyclopedia of Philosophy: Speech Acts

Related posts: On the Ethics of Technologically Mediated Nudge (May 2019), Nudge Technology (July 2019)

Updated 28 May 2019. Many thanks to @tonyjoyce