Monday, July 22, 2019

Algorithms and Auditability

@ruchowdh objects to an article by @etzioni and @tianhuil calling for algorithms to audit algorithms. The original article makes the following points.
  • Automated auditing, at a massive scale, can systematically probe AI systems and uncover biases or other undesirable behavior patterns. 
  • High-fidelity explanations of most AI decisions are not currently possible. The challenges of explainable AI are formidable.  
  • Auditing is complementary to explanations. In fact, auditing can help to investigate and validate (or invalidate) AI explanations.
  • Auditable AI is not a panacea. But auditable AI can increase transparency and combat bias. 

Rumman Chowdhury points out some of the potential imperfections of a system that relied on automated auditing, and does not like the idea that automated auditing might be an acceptable substitute for other forms of governance. Such a suggestion is not made explicitly in the article, and I haven't seen any evidence that this was the authors' intention. However, there is always a risk that people might latch onto a technical fix without understanding its limitations, and this risk is perhaps what underlies her critique.

In a recent paper, she calls for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But how can people verify that systems are not only ignoring these data, but also being cautious about other data that may serve as proxies for race and class, as discussed by Cathy O'Neil? How can they prove that a system is systematically unfair without having some classification data of their own?
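
To make this concrete, here is a minimal sketch (in Python, with invented group labels and decisions - nothing here comes from the article) of the kind of group-level check an auditor might run. The point is not the arithmetic but the precondition: without some classification of the people affected, the disparity cannot even be computed.

from collections import defaultdict

def approval_rates(decisions):
    # decisions: iterable of (group_label, approved) pairs
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(decisions, reference_group):
    # Ratio of each group's approval rate to the reference group's rate.
    rates = approval_rates(decisions)
    return {g: rates[g] / rates[reference_group] for g in rates}

# Invented example: the audit is only possible because each decision
# carries a group label.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact(sample, reference_group="A"))   # {'A': 1.0, 'B': 0.5}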

And yes, we know that all classification is problematic. But that doesn't mean we should be squeamish about classification; it just means being self-consciously critical about the tools you are using. Any given tool provides a particular lens or perspective, and it is important to remember that no tool can ever give you the whole picture. Donna Haraway calls this partial perspective.

With any tool, we need to be concerned about how the tool is used, by whom, and for whom. Chowdhury expects people to assume the tool will be in some sense "neutral", creating a "veneer of objectivity"; and she sees the tool as a way of centralizing power. Clearly there are some questions about the role of various stakeholders in promoting algorithmic fairness - the article mentions regulators as well as the ACLU - and there are some major concerns that the authors don't address.

Chowdhury's final criticism is that the article "fails to acknowledge historical inequities, institutional injustice, and socially ingrained harm". If we see algorithmic bias as merely a technical problem, then this leads us to evaluate the technical merits of auditable AI, and acknowledge its potential use despite its clear limitations. And if we see algorithmic bias as an ethical problem, then we can look for various ways to "solve" and "eliminate" bias. @juliapowles calls this a "captivating diversion". But clearly that's not the whole story.

Some stakeholders (including the ACLU) may be concerned about historical and social injustice. Others (including the tech firms) are primarily interested in making the algorithms more accurate and powerful. So obviously it matters who controls the auditing tools. (Whom shall the tools serve?)

What algorithms and audits have in common is that they deliver opinions. A second opinion (possibly based on the auditing algorithm) may sometimes be useful - but only if it is reasonably independent of the first opinion, and doesn't entirely share the same assumptions or perspective. There are codes of ethics for human auditors, so we may want to ask whether automated auditing would be subject to some ethical code.
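
One crude way to make "reasonably independent" operational is to measure how often the auditing algorithm simply echoes the system it is auditing. A minimal sketch (Python, with made-up decisions) might look like this; an agreement rate close to 1.0 would suggest the second opinion adds little independent scrutiny.

def agreement_rate(primary_decisions, auditor_decisions):
    # Fraction of cases where the auditor reaches the same verdict as the
    # primary system it is supposed to be checking.
    matches = sum(p == a for p, a in zip(primary_decisions, auditor_decisions))
    return matches / len(primary_decisions)

# Made-up decisions: an auditor built on the same data and assumptions as
# the primary system will tend to agree with it almost everywhere.
primary = [1, 1, 0, 1, 0, 0, 1, 1]
auditor = [1, 1, 0, 1, 0, 1, 1, 1]
print(agreement_rate(primary, auditor))   # 0.875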




Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)

Oren Etzioni and Michael Li, High-Stakes AI Decisions Need to Be Automatically Audited (Wired, 18 July 2019)

Donna Haraway, Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective. In Simians, Cyborgs and Women (Free Association, 1991)

Cathy O'Neil, Weapons of Math Destruction (Crown, 2016)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Related posts: Whom Does the Technology Serve? (May 2019), Algorithms and Governmentality (July 2019)

Saturday, July 13, 2019

Algorithms and Governmentality

In the corner of the Internet where I hang out, it is reasonably well understood that big data raises a number of ethical issues, including data ownership and privacy.

There are two contrasting ways of characterizing these issues. One way is to focus on the use of big data to target individuals with increasingly personalized content, such as precision nudging. Thus mass surveillance provides commercial and governmental organizations with large quantities of personal data, allowing them to make precise calculations concerning individuals, and use these calculations for the purposes of influence and control.

Alternatively, we can look at how big data can be used to control large sets or populations - what Foucault calls governmentality. If the prime job of the bureaucrat is to compile lists that could be shuffled and compared (Note 1), then this function is increasingly being taken over by the technologies of data and intelligence - notably algorithms and so-called big data.

Deleuze, however, challenges this dichotomy:
"We no longer find ourselves dealing with the mass/individual pair. Individuals have become 'dividuals' and masses, samples, data, markets, or 'banks'."

Foucault's version of Bentham's panopticon is often invoked in discussions of mass surveillance, but what was equally important for Foucault was what he called biopower - "a type of power that presupposed a closely meshed grid of material coercions rather than the physical existence of a sovereign". [Foucault 2003 via Adams]

People used to talk metaphorically about faceless bureaucracy being a "machine", but now we have a real machine, performing the same function with much greater efficiency and effectiveness. And of course, scale.
"The machine tended increasingly to dictate the purpose to be served, and to exclude other more intimate human needs." (Lewis Mumford)

Bureaucracy is usually regarded as a Bad Thing, so it's worth remembering that it is a lot better than some of the alternatives. Bureaucracy should mean you are judged according to an agreed set of criteria, rather than whether someone likes your face or went to the same school as you. Bureaucracy may provide some protection against arbitrary action and certain forms of injustice. And the fact that bureaucracy has sometimes been used by evil regimes for evil purposes isn't sufficient grounds for rejecting all forms of bureaucracy everywhere.

What bureaucracy does do is codify and classify, and this has important implications for discrimination and diversity.

Sometimes discrimination is considered to be a good thing. For example, recruitment should discriminate between those who are qualified to do the job and those who are not, and this can be based either on a subjective judgement or an agreed standard. But even this can be controversial. For example, the College of Policing is implementing a policy that police recruits in England and Wales should be educated to degree level, despite strong objections from the Police Federation.

Other kinds of discrimination such as gender and race are widely disapproved of, and many organizations have an official policy disavowing such discrimination, or affirming a belief in diversity. Despite such policies, however, some unofficial or inadvertent discrimination may often occur, and this can only be discovered and remedied by some form of codification and classification. Thus if campaigners want to show that firms are systematically paying women less than men, they need payroll data classified by gender to prove the point.

Organizations often have a diversity survey as part of their recruitment procedure, so that they can monitor the numbers of recruits by gender, race, religion, sexuality, disability or whatever, thereby detecting any hidden and unintended bias, but of course this depends on people's willingness to place themselves in one of the defined categories. (If everyone ticks the "prefer not to say" box, then the diversity statistics are not going to be very helpful.)

Daugherty, Wilson and Chowdhury call for systems to be "taught to ignore data about race, gender, sexual orientation, and other characteristics that aren’t relevant to the decisions at hand". But there are often other data (such as postcode/zipcode) that are correlated with the attributes you are not supposed to use, and these may serve as accidental proxies, reintroducing discrimination by the back door. The decision-making algorithm may be designed to ignore certain data, based on training data that has been carefully constructed to eliminate certain forms of bias, but perhaps you then need a separate governance algorithm to check for any other correlations.
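
A governance check of this kind doesn't have to be sophisticated. Here is a minimal sketch (Python, with invented scores and group labels) of a simple proxy test: if the model's outputs still differ systematically by protected group, even though the protected attribute was never an input, then something else - postcode, say - is probably doing the work.

def mean_by_group(values, groups):
    # Average model output per group label.
    sums, counts = {}, {}
    for value, group in zip(values, groups):
        sums[group] = sums.get(group, 0.0) + value
        counts[group] = counts.get(group, 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

def proxy_gap(predictions, protected):
    # Largest difference in average prediction between any two protected
    # groups. A large gap, when the protected attribute was never used as
    # an input, suggests some other feature is acting as a proxy for it.
    rates = mean_by_group(predictions, protected)
    return max(rates.values()) - min(rates.values())

# Invented scores from a model that never saw the protected attribute:
scores    = [0.9, 0.8, 0.7, 0.3, 0.2, 0.4]
protected = ["A", "A", "A", "B", "B", "B"]
print(proxy_gap(scores, protected))   # roughly 0.5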

Bureaucracy produces lists, and of course the lists can either be wrong or used wrongly. For example, King's College London recently apologized for denying access to selected students during a royal visit.

Big data also codifies and classifies, although much of this is done on inferred categories rather than declared ones. For example, some social media platforms infer gender from someone's speech acts (or what Judith Butler would call performativity). And political views can apparently be inferred from food choice. The fact that these inferences may be inaccurate doesn't stop them being used for targeting purposes, or population control.

Cathy O'Neil's statement that algorithms are "opinions embedded in code" is widely quoted. This may lead people to think that this is only a problem if you disagree with these opinions, and that the main problem with big data and algorithmic intelligence is a lack of perfection. And of course technology companies encourage ethics professors to look at their products from this perspective, firstly because they welcome any ideas that would help them make their products more powerful, and secondly because it distracts the professors from the more fundamental question as to whether they should be doing things like facial recognition in the first place. @juliapowles calls this a "captivating diversion".

But a more fundamental question concerns the ethics of codification and classification. Following a detailed investigation of this topic, published under the title Sorting Things Out, Bowker and Star conclude that "all information systems are necessarily suffused with ethical and political values, modulated by local administrative procedures" (p321).
"Black boxes are necessary, and not necessarily evil. The moral questions arise when the categories of the powerful become the taken for granted; when policy decisions are layered into inaccessible technological structures; when one group's visibility comes at the expense of another's suffering."  (p320)
At the end of their book (pp324-5), they identify three things they want designers and users of information systems to do. (Clearly these things apply just as much to algorithms and big data as to older forms of information system.)
  • Firstly, allow for ambiguity and plurality, permitting multiple definitions across different domains. They call this recognizing the balancing act of classifying.
  • Secondly, the sources of the classifications should remain transparent. If the categories are based on some professional opinion, these should be traceable to the profession or discourse or other authority that produced them. They call this rendering voice retrievable.
  • And thirdly, awareness of the unclassified or unclassifiable "other". They call this being sensitive to exclusions, and note that "residual categories have their own texture that operates like the silences in a symphony to pattern the visible categories and their boundaries" (p325).




Note 1: This view is attributed to Bruno Latour by Bowker and Star (1999 p 137). However, although Latour talks about paper-shuffling bureaucrats (1987 pp 254-5), I have been unable to find this particular quote.

Rachel Adams, Michel Foucault: Biopolitics and Biopower (Critical Legal Thinking, 10 May 2017)

Geoffrey Bowker and Susan Leigh Star, Sorting Things Out (MIT Press 1999).

Paul R. Daugherty, H. James Wilson, and Rumman Chowdhury, Using Artificial Intelligence to Promote Diversity (Sloan Management Review, Winter 2019)

Gilles Deleuze, Postscript on the Societies of Control (October, Vol 59, Winter 1992), pp. 3-7

Michel Foucault, ‘Society Must be Defended’ Lecture Series at the Collège de France, 1975-76 (2003) (trans. D Macey)

Maša Galič, Tjerk Timan and Bert-Jaap Koops, Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation (Philos. Technol. 30:9–37, 2017)

Bruno Latour, Science in Action (Harvard University Press 1987)

Lewis Mumford, The Myth of the Machine (1967)

Samantha Murphy, Political Ideology Linked to Food Choices (LiveScience, 24 May 2011)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

BBC News, All officers 'should have degrees', says College of Policing (13 November 2015), King's College London sorry over royal visit student bans (4 July 2019)


Related posts

Quotes on Bureaucracy (June 2003), Crude Categories (August 2009), What is the Purpose of Diversity? (January 2010), The Game of Wits between Technologists and Ethics Professors (June 2019), Algorithms and Auditability (July 2019)


Updated 16 July 2019

Friday, June 21, 2019

With Strings Attached

@rachelcoldicutt notes that "Google Docs new grammar suggestion tool doesn’t like the word 'funding' and prefers 'investment' ".

Many business people have an accounting mindset, in which all expenditure must be justified in terms of benefit to the organization, measured in financial terms. When they hear the word "investment", they hold their breath until they hear the word "return".

So when Big Tech funds the debate on AI ethics (Oscar Williams, New Statesman, 6 June 2019), can we infer that Big Tech sees this as an "investment", to which it is entitled to a return or payback?



Related post: The Game of Wits Between Technologists and Ethics Professors (June 2019)

Saturday, June 8, 2019

The Game of Wits between Technologists and Ethics Professors

What does #TechnologyEthics look like from the viewpoint of your average ethics professor? 

Not surprisingly, many ethics professors believe strongly in the value of ethics education, and advocate ethics awareness training for business managers and engineers. Provided by people like themselves, obviously.

There is a common pattern among technologists and would-be entrepreneurs: first come up with a "solution", then find areas where the solution might apply, and then produce self-interested arguments to explain why the solution matches the problem. Obviously there is a danger of confirmation bias here. Proposing ethics education as a solution for an ill-defined problem space looks suspiciously like the same pattern. Ethicists should understand why it is important to explain what this education achieves, and how exactly it solves the problem.

Please note that I am not arguing against the value of ethics education and training as such, merely complaining that some of the programmes seem to involve little more than meandering through a randomly chosen reading list. @ruchowdh recently posted a particularly egregious example - see below.


Ethics professors may also believe that people with strong ethical awareness, such as themselves, can play a useful role in technology governance - for example, participating in advisory councils.

Some technology companies may choose to humour these academics, engaging them as a PR exercise (ethics washing) and generously funding their research. Fortunately, many of them lack deep understanding of business organizations and of technology, so there is little risk of them causing any serious challenge or embarrassment to these companies.

Professors are always attracted to the kind of work that lends itself to peer-reviewed articles in leading journals. So it is fairly easy to keep their attention focused on theoretically fascinating questions with little or no practical relevance, such as the Trolley Problem.

Alternatively, they can be engaged to try and "fix" problems with real practical relevance, such as algorithmic bias. @juliapowles calls this a "captivating diversion", distracting academics from the more fundamental question, whether the algorithm should be built at all.

It might be useful for these ethics professors to have deeper knowledge of technology and business, in their social and historical context, enabling them to ask more searching and more relevant questions. (Although some ethics experts have computer science degrees or similar, computer science generally teaches people about specific technologies, not about Technology.) 



But if only a minority of ethics professors possess sufficient knowledge and experience, these few will be overlooked for the plum advisory jobs. I therefore advocate compulsory technology awareness training for ethics professors, especially "prominent" ones. Provided by people like myself, obviously.




Stephanie Burns, Solution Looking For A Problem (Forbes, 28 May 2019)

Casey Fiesler, Tech Ethics Curricula: A Collection of Syllabi (5 July 2018), What Our Tech Ethics Crisis Says About the State of Computer Science Education (5 December 2018)

Mark Graban, Cases of Technology “Solutions” Looking for a Problem? (26 January 2011)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

Oscar Williams, How Big Tech funds the debate on AI ethics (New Statesman, 6 June 2019)

Related posts:

Leadership and Governance (May 2019), Selected Reading List - Science and Technology Studies (June 2019), With Strings Attached (June 2019)

Updated 21 June 2019

Tuesday, May 28, 2019

Five Elements of Responsibility by Design

I have been developing an approach to #TechnologyEthics, which I call #ResponsibilityByDesign. It is based on the five elements of #VPECT. Let me start with a high-level summary before diving into some of the detail.


Values
  • Why does ethics matter?
  • What outcomes for whom?

Policies
  • Principles and practices of technology ethics
  • Formal codes of practice, etc. Regulation.

Event-Driven (Activity Viewpoint)
  • Effective and appropriate action at different points: planning; risk assessment; design; verification, validation and test; deployment; operation; incident management; retirement. (Also known as the Process Viewpoint). 

Content (Knowledge Viewpoint)
  • What matters from an ethical point of view? What issues do we need to pay attention to?
  • Where is the body of knowledge and evidence that we can reference?

Trust (Responsibility Viewpoint)
  • Transparency and governance
  • Responsibility, Authority, Expertise, Work (RAEW)

Concerning technology ethics, there is a lot of recent published material on each of these elements separately, but I have not yet found much work that puts them together in a satisfactory way. Many working groups concentrate on a single element - for example, principles or transparency. And even when experts link multiple elements, the logical connections aren't always spelled out.

At the time of writing this post (May 2019), I haven't yet fully worked out how to join these elements either, and I shall welcome constructive feedback from readers and pointers to good work elsewhere. I am also keen to find opportunities to trial these ideas on real projects.


Related Posts

Responsibility by Design (June 2018), What is Responsibility by Design (October 2018), Why Responsibility by Design Now? (October 2018)


Sunday, May 19, 2019

The Nudge as a Speech Act

As I said in my previous post, I don't think we can start to think about the ethics of technology nudges without recognizing the complexity of real-world nudges. So in this post, I shall look at how nudges are communicated in the real world, before considering what their artificial analogues might look like.


Once upon a time, nudges were physical rather than verbal - a push on the shoulder perhaps, or a dig in the ribs with an elbow. The meaning was elliptical and depended almost entirely on context. "Nudge nudge, wink wink", as Monty Python used to say.

Even technologically mediated nudges can sometimes be physical, or what we should probably call haptic. For example, the fitness band that vibrates when it thinks you have been sitting for too long.

But many of the acts we now think of as nudges are delivered verbally, as some kind of speech act. But which kind?

The most obvious kind of nudge is a direct suggestion, which may take the form of a weak command. ("Try and eat a little now.") But nudges can also take other illocutionary forms, including questions ("Don't you think the sun is very hot here?") and statements / predictions ("You will find that new nose of yours very useful to spank people with.").

(Readers familiar with Kipling may recognize my examples as the nudges given by the Bi-Coloured-Python-Rock-Snake to the Elephant's Child.)

The force of a suggestion may depend on context and tone of voice. (A more systematic analysis of what philosophers call illocutionary force can be found in the Stanford Encyclopedia of Philosophy, based on Searle and Vanderveken 1985.)

@tonyjoyce raises a good point about tone of voice in electronic messages. Traditionally robots don't do tone of voice, and when a human being talks in a boring monotone we may describe their speech as robotic. But I can't see any reason why robots couldn't be programmed with more varied speech patterns, including tonality, if their designers saw the value of this.

Meanwhile, we already get some differentiation from electronic communications. For example I should expect an electronic announcement to "LEAVE THE BUILDING IMMEDIATELY" to have a tone of voice that conveys urgency, and we might think it is inappropriate or even unethical to use the same tone of voice for selling candy. We might put this together with other attention-seeking devices, such as flashing red text. The people who design clickbait clearly understand illocutionary force (even if they aren't familiar with the term).

A speech act can also gain force by being associated with action. If I promise to donate money to a given charity, this may nudge other people to do the same; but if they see me actually putting the money in the tin, the nudge might be much stronger. But then the nudge might be just as strong if I just put the money in the tin without saying anything, as long as everyone sees me do it. The important point is that some communication takes place, whether verbal or non-verbal, and this returns us to something closer to the original concept of nudge.

From an ethical point of view, there are particular concerns about unobtrusive or subliminal nudges. Yeung has introduced the concept of the Hypernudge, which combines three qualities: nimble, unobtrusive and highly potent. I share her concerns about this combination, but I think it is helpful to deal with these three qualities separately, before looking at the additional problems that may arise when they are combined.

Proponents of the nudge sometimes try to distinguish between unobtrusive (acceptable) and subliminal (unacceptable), but this distinction may be hard to sustain, and many people quote Luc Bovens' observation that nudges "typically work better in the dark". See also Baldwin.


I'm sure there's more to say on this topic, so I may update this post later. Relevant comments always welcome.




Robert Baldwin, From regulation to behaviour change: giving nudge the third degree (The Modern Law Review 77/6, 2014) pp 831-857

Luc Bovens, The Ethics of Nudge. In Mats J. Hansson and Till Grüne-Yanoff (eds.), Preference Change: Approaches from Philosophy, Economics and Psychology. (Berlin: Springer, 2008) pp. 207-20

John Danaher, Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy (Institute for Ethics and Emerging Technologies, 17 January 2017)

J. Searle and D. Vanderveken, Foundations of Illocutionary Logic (Cambridge: Cambridge University Press, 1985)

Karen Yeung, ‘Hypernudge’: Big Data as a Mode of Regulation by Design (Information, Communication and Society (2016) 1,19; TLI Think! Paper 28/2016)


Stanford Encyclopedia of Philosophy: Speech Acts

Related posts: On the Ethics of Technologically Mediated Nudge (May 2019), Nudge Technology (July 2019)


Updated 28 May 2019. Many thanks to @tonyjoyce

Friday, May 17, 2019

On the Ethics of Technologically Mediated Nudge

Before we can discuss the ethics of technologically mediated nudge, we need to recognize that many of the ethical issues are the same whether the nudge is delivered by a human or a robot. So let me start by trying to identify different categories of nudge.

In its simplest form, the nudge can involve gentle persuasions and hints between one human being and another. Parents trying to influence their children (and vice versa), teachers hoping to inspire their pupils, various forms of encouragement and consensus building and leadership. In fiction, such interventions often have evil intent and harmful consequences, but in real life let's hope that these interventions are mostly well-meaning and benign.

In contrast, there are larger-scale forms of nudge, where a team of social engineers (such as the notorious "Nudge Unit") design ways of influencing the behaviour of lots of people, but don't have any direct contact with the people whose behaviour is to be influenced. A new discipline has grown up around this, known as Behavioural Economics.

I shall call these two types unmediated and mediated respectively.

Mediated nudges may be delivered in various ways. For example, someone in Central Government may design a nudge to encourage job-seekers to find work. Meanwhile, YouTube can nudge us to watch a TED talk about nudging. Some nudges can be distributed via the Internet, or even the Internet of Things. In general, this involves both people and technology - in other words, a sociotechnical system.

To assess the outcome of the nudge, we can look at the personal effect on the nudgee or at the wider socio-economic impact, either short-term or longer-term. In terms of outcome, it may not make much difference whether the nudge is delivered by a human being or by a machine - the human being delivering it may in any case be following a standard script or procedure - except in so far as the nudgee may feel differently about it, and may therefore respond differently. It is an empirical question whether a given person would respond more positively to a given nudge from a human bureaucrat or from a smartphone app, and the ethical difference between the two will be largely driven by this.

The second distinction involves the beneficiary of the nudge. Some nudges are designed to benefit the nudgee (Cass Sunstein calls these "paternalistic"), while others are designed to benefit the community as a whole (for example, correcting some market failure such as the Tragedy of the Commons). On the one hand, nudges that encourage people to exercise more; on the other hand, nudges that remind people to take their litter home. And of course there are also nudges whose intended beneficiary is the person or organization doing the nudging. We might think here of dark patterns, shades of manipulation, various ways for commercial organizations to get the individual to spend more time or money. Clearly there are some ethical issues here.

A slightly more complicated case from an ethical perspective is where the intended outcome of the nudge is to get the nudgee to behave more ethically or responsibly towards someone else.

Sunstein sees the "paternalistic" nudges as more controversial than nudges to address potential market failures, and states two further preferences. Firstly, he prefers nudges that educate people, that serve over time to increase rather than decrease their powers of agency. And secondly, he prefers nudges that operate at a slow deliberative tempo ("System 2") rather than at a fast intuitive tempo ("System 1"), since the latter can seem more manipulative.

Meanwhile, there is a significant category of self-nudging. There are now countless apps and other devices that will nudge you according to a set of rules or parameters that you provide yourself, implementing the kind of self-binding or precommitment that Jon Elster described in Ulysses and the Sirens (1979). Examples include the Pomodoro ("tomato") technique for time management, fitness trackers that will count your steps and vibrate when you have been sitting for too long, and money management apps that allocate your spare change to your chosen charity. Several years ago, Microsoft developed an experimental Smart Bra that would detect changes in the skin to predict when a woman was about to reach for the cookie jar, and give her a friendly warning. Even if there is no problem with the nudge itself (because you have consented/chosen to be nudged) there may be some ethical issues with the surveillance and machine learning systems that enable the nudge. Especially when the nudging device is kindly made available to you by your employer or insurance company.
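
The logic of such a self-nudge is trivially simple, which is partly why the interesting ethical questions lie in the surrounding surveillance rather than in the nudge itself. Here is a minimal sketch in Python; vibrate() is a hypothetical stand-in for whatever alert mechanism the device exposes, and the crude elapsed-time counter stands in for real activity detection.

import time

def vibrate():
    # Hypothetical stand-in for the device's real alert mechanism.
    print("bzzt - you have been sitting too long")

def sedentary_nudger(max_sitting_minutes, check_every_seconds=60):
    # The rule ("nudge me after N minutes of sitting") is supplied by the
    # user; the device merely enforces it. A real device would detect
    # sitting from sensor data rather than just counting elapsed time.
    sitting_minutes = 0.0
    while True:
        time.sleep(check_every_seconds)
        sitting_minutes += check_every_seconds / 60
        if sitting_minutes >= max_sitting_minutes:
            vibrate()              # the nudge itself
            sitting_minutes = 0.0  # assume the user stands up afterwards

# sedentary_nudger(max_sitting_minutes=45)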

And even if the immediate outcome of the nudge is beneficial to the nudgee, in some situations there may be concerns that the nudgee becomes over-dependent on being nudged, and thereby loses some element of self-control or delayed gratification.


The final distinction I want to introduce here concerns the direction of the nudge. The most straightforward nudges are those that push an individual in the desired direction. Suggestions to eat more healthy food, suggestions to direct spare cash to charity or savings. But some forms of therapy are based on paradoxical interventions, where the individual is pushed in the opposite direction, and they react by moving in the direction you want them to go. For example, if you want someone to give up some activity that is harming them, you might suggest they carry out this activity more systematically or energetically. This is sometimes known as reverse psychology or prescribing the symptom. For example, faced with a girl who was biting her nails, the therapist Milton Erickson advised her how she could get more enjoyment from biting her nails. Astonished by this advice, which was of course in direct opposition to all the persuasion and coercion she had received from other people up to that point, she found she was now able to give up biting her nails altogether.

(Richard Bordenave attributes paradoxical intervention to Paul Watzlawick, who worked with Gregory Bateson. It can also be found in some versions of Neuro-Linguistic Programming (NLP), which was strongly influenced by both Bateson and Erickson.)

Of course, this technique can also be practised in an ethically unacceptable direction. Imagine a gambling company whose official message to gamblers is that they should invest their money in a sensible savings account instead of gambling it away. This might seem like an ethically noble gesture, until we discover that the actual effect on people with a serious gambling problem is that it causes them to gamble even more. (In the same way that smoking warnings can cause some people to smoke more. Possibly cigarette companies are aware of this.)

Reverse psychology may also explain why nudge programmes can have the opposite effect to the intended one. Shortly before the 2016 Brexit Referendum, an English journalist writing for RT (formerly known as Russia Today) noted a proliferation of nudges trying to persuade people to vote Remain, which he labelled propaganda. While the result was undoubtedly affected by covert nudges in all directions, it is also easy to believe that the pro-establishment style of the Remain nudges could have been counterproductive.

Paradoxical interventions make perfect sense in terms of systems theory, which teaches us that the links from cause to effect are often complex and non-linear. Sometimes an accumulation of positive nudges can tip a system into chaos or catastrophe, as Donella Meadows notes in her classic essay on Leverage Points.

The Leverage Point framework may also be useful in comparing the effects of nudging at different points in a system. Robert Steele notes the use of a nudge based on restructuring information flows; in contrast, a nudge that was designed to alter the nudgee's preferences or goals or political opinions could be much more dangerously powerful, as @zeynep has demonstrated in relation to YouTube.

One of the things that complicates the ethics of Nudge is that the alternative to nudging may either be greater forms of coercion or worse outcomes for the individual. In his article on the Ethics of Nudging, Cass Sunstein argues that all human interaction and activity takes place inside some kind of Choice Architecture, thus some form of nudging is probably inevitable, whether deliberate or inadvertent. He also argues that nudges may be required on ethical grounds to the extent that they promote our core human values. (This might imply that it is sometimes irresponsible to miss an opportunity to provide a helpful nudge.) So the ethical question is not whether to nudge or not, but how to design nudges in such a way as to maximize these core human values, which he identifies as welfare, autonomy and human dignity.

While we can argue with some of the detail of Sunstein's position, I think his two main conclusions make reasonable sense. Firstly, that we are always surrounded by what Sunstein calls Choice Architectures, so we can't get away from the nudge. And secondly, that many nudges are not just preferable to whatever the alternative might be but may also be valuable in their own right.

So what happens when we introduce advanced technology into the mix? For example, what if we have a robot that is programmed to nudge people, perhaps using some kind of artificial intelligence or machine learning to adapt the nudge to each individual in a specific context at a specific point in time?

Within technology ethics, transparency is a major topic. If the robot is programmed to include a predictive model of human psychology that enables it to anticipate the human response in certain situations, this model should be open to scrutiny. Although such models can easily be wrong or misguided, especially if the training data set reflects an existing bias, with reasonable levels of transparency (at least for the appropriate stakeholders) it will usually be easier to detect and correct these errors than to fix human misconceptions and prejudices.

In science fiction, robots have sufficient intelligence and understanding of human psychology to invent appropriate nudges for a given situation. If we start to see more of this in real life, we could start to think of these as unmediated robotic nudges, instead of the robot merely being the delivery mechanism for a mediated nudge. But does this introduce any additional ethical issues, or merely amplify the importance of the ethical issues we are already looking at?

Some people think that the ethical rules should be more stringent for robotic nudges than for other kinds of nudges. For example, I've heard people talking about parental consent before permitting children to be nudged by a robot. But other people might think it was safer for a child to be nudged (for whatever purpose) by a robot than by an adult human. And if you think it is a good thing for a child to work hard at school, eat her broccoli, and be kind to those less fortunate than herself, and if robotic persuasion turns out to be the most effective and child-friendly way of achieving these goals, do we really want heavier regulation on robotic child-minders than human ones?

Finally, it's worth noting that since nudges exploit bounded rationality, any entity that displays bounded rationality is capable of being nudged. As well as humans, this includes animals and algorithmic machines, and also larger social systems (including markets and elections).




Richard Bordenave, Comment les paradoxes permettent de réinventer les nudges (Harvard Business Review France, 30 January 2019). Adapted English version: When paradoxes inspire Nudges (6 April 2019)

Jon Elster, Ulysses and the Sirens (1979)

Sam Gerrans, Propaganda techniques nudging UK to remain in Europe (RT, 22 May 2016)

Jochim Hansen, Susanne Winzeler and Sascha Topolinski, When the Death Makes You Smoke: A Terror Management Perspective on the Effectiveness of Cigarette On-Pack Warnings (Journal of Experimental Social Psychology 46(1):226-228, January 2010) HT @ABMarkman

Donella Meadows, Leverage Points: Places to Intervene in a System (Whole Earth Review, Winter 1997)

Robert Steele, Implementing an integrated and transformative agenda at the regional and national levels (AtKisson, 2014)

Cass Sunstein, The Ethics of Nudging (Yale J. on Reg, 32, 2015)

Iain Thomson, Microsoft researchers build 'smart bra' to stop women's stress eating (The Register, 6 Dec 2013)

Zeynep Tufecki, YouTube, the Great Radicalizer (New York Times, 10 March 2018)

Wikipedia: Behavioural Insights Team ("Nudge Unit"), Bounded Rationality, Reverse Psychology,

Stanford Encyclopedia of Philosophy: The Ethics of Manipulation

Related posts: Have you got big data in your underwear? (December 2014), Ethical Communication in a Digital Age (November 2018), The Nudge as a Speech Act (May 2019), Nudge Technology (July 2019)


Updated 18 July 2019