Sunday, October 20, 2019
On the Scope of Ethics
On both sides of the debate, there were people who strongly disapproved of weapons systems, but this disapproval led them to two opposite positions. One side felt that applying any ethical principles and standards to such systems would imply a level of ethical approval or endorsement, which they would prefer to withhold. The other side felt that weapons systems called for at least as much ethical scrutiny as anything else, if not more, and thought that exempting weapons systems implied a free pass.
It goes without saying that people disapprove of weapons systems to different degrees. Some people think they are unacceptable in all circumstances, while others see them as a regrettable necessity, while welcoming the economic activity and technological spin-offs that they produce. It's also worth noting that there are other sectors that attract strong disapproval from many people, including gambling, hydrocarbons, nuclear energy and tobacco, especially where these appear to rely on disinformation campaigns such as climate science denial.
It's also worth noting that there isn't always a clear dividing line between those products and technologies that can be used for military purposes and those that cannot. For example, although the dividing line between peaceful nuclear power and nuclear weapons may be framed as a purely technical question, this has major implications for international relations, and technical experts may be subject to significant political pressure.
While there may be disagreements about the acceptability of a given technology, and legitimate suspicion about potential use, these should be capable of being addressed as part of ethical governance. So I don't think this is a good reason for limiting the scope.
However, a better reason for limiting the scope may be to simplify the task. Given finite time and resources, it may be better to establish effective governance for a limited scope than to take forever producing something that works properly for everything. This leads to the position that although some ethical governance may apply to weapons systems, this doesn't mean that every ethical governance exercise must address such systems. It may therefore be reasonable to exclude such systems from a specific exercise for a specific time period, provided that this doesn't rule out the possibility of extending the scope at a later date.
Update. The US Department of Defense has published a high-level set of ethical principles for the military use of AI. Following the difference of opinion outlined above, some people will think it matters how these principles are interpreted and applied in specific cases (since like many similar sets of principles, they are highly generic), while other people will think any such discussion completely misses the point.
David Vergun, Defense Innovation Board Recommends AI Ethical Guidelines (US Dept of Defense, 1 November 2019)
Tuesday, October 8, 2019
Ethics of Transparency and Concealment
In 1986 I wrote an article for Human Systems Management promoting the importance of visibility. There were two reasons I preferred this word. Firstly, "transparency" is a contronym - it has two opposite senses. When something is transparent, this either means you don't see it, you just see through it, or it means you can really see it. And secondly, transparency appears to be merely a property of an object, whereas visibility is about the relationship between the object and the viewer - visibility to whom?
(IEEE P7001 addresses this by defining transparency requirements in relation to different stakeholder groups.)
Although I wasn't aware of this when I wrote the original article, my concept of visibility shares something with Heidegger's concept of Unconcealment (Unverborgenheit). Heidegger's work seems a good starting point for thinking about the ethics of transparency.
Technology generally makes certain things available while concealing other things. (This is related to what Albert Borgmann, a student of Heidegger, calls the Device Paradigm.)
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. (Lovitt)
Goods that are available to us enrich our lives and, if they are technologically available, they do so without imposing burdens on us. Something is available in this sense if it has been rendered instantaneous, ubiquitous, safe, and easy. (Borgmann)

I referred above to the two opposite meanings of the word "transparent". For Heidegger and his followers, the word "transparent" often refers to tools that can be used without conscious thought, or what Heidegger called ready-to-hand (zuhanden). In technology ethics, on the other hand, the word "transparent" generally refers to something (product, process or organization) being open to scrutiny, and I shall stick to this meaning for the remainder of this blogpost.
We are surrounded by technology, we rarely have much idea how most of it works, and usually cannot be bothered to find out. Thus when technological devices are designed to conceal their inner workings, this is often exactly what the users want. How then can we object to concealment?
The ethical problems of concealment depend on what is concealed by whom and from whom, why it is concealed, and whether, when and how it can be unconcealed.
Let's start with the why. Sometimes people deliberately hide things from us, for dishonest or devious reasons. This category includes so-called defeat devices that are intended to cheat regulations. Less clear-cut is when people hide things to avoid the trouble of explaining or justifying them.
(If something is not visible, then we may not be aware that there is something that needs to be explained. So even if we want to maintain a distinction between transparency and explainability, the two concepts are interdependent.)
People may also hide things for aesthetic reasons. The Italian civil engineer Riccardo Morandi designed bridges with the steel cables concealed, which made them difficult to inspect and maintain. The Morandi Bridge in Genoa collapsed in August 2018, killing 43 people.
And sometimes things are just hidden, not as a deliberate act but because nobody has thought it necessary to make them visible. (This is one of the reasons why a standard could be useful.)
We also need to consider the who. For whose benefit are things being hidden? In particular, who is pulling the strings, where is the funding coming from, and where are the profits going - follow the money. In technology ethics, the key question is Whom Does The Technology Serve?
In many contexts, therefore, the main focus of unconcealment is not understanding exactly how something works but being aware of the things that people might be trying to hide from you, for whatever reason. This might include being selective about the available evidence, or presenting the most common or convenient examples and ignoring the outliers. It might also include failing to declare potential conflicts of interest.
For example, the #AllTrials campaign for clinical trial transparency demands that drug companies declare all clinical trials in advance, rather than waiting until the trials are complete and then deciding which ones to publish.
Now let's look at the possibility of unconcealment. Concealment doesn't always mean making inconvenient facts impossible to discover, but may mean making them so obscure and inaccessible that most people don't bother, or creating distractions that divert people's attention elsewhere. So transparency doesn't just entail possibility, it requires a reasonable level of accessibility.
Sometimes too much information can also serve to conceal the truth. Onora O'Neill talks about the "cult of transparency" that fails to produce real trust.
Transparency can produce a flood of unsorted information and misinformation that provides little but confusion unless it can be sorted and assessed. It may add to uncertainty rather than to trust. Transparency can even encourage people to be less honest, so increasing deception and reducing reasons for trust. (O'Neill)

Sometimes this can be inadvertent. However, as Chesterton pointed out in one of his stories, this can be a useful tactic for those who have something to hide.
Where would a wise man hide a leaf? In the forest. If there were no forest, he would make a forest. And if he wished to hide a dead leaf, he would make a dead forest. And if a man had to hide a dead body, he would make a field of dead bodies to hide it in. (Chesterton)

Stohl et al call this strategic opacity (via Ananny and Crawford).
Another philosopher who talks about the "cult of transparency" is Shannon Vallor. However, what she calls the "Technological Transparency Paradox" seems to be merely a form of asymmetry: we are open and transparent to the social media giants, but they are not open and transparent to us.
In the absence of transparency, we are forced to trust people and organizations - not only for their honesty but also their competence and diligence. Under certain conditions, we may trust independent regulators, certification agencies and other institutions to verify these attributes on our behalf, but this in turn depends on our confidence in their ability to detect malfeasance and enforce compliance, as well as believing them to be truly independent. (So how transparent are these institutions themselves?) And trusting products and services typically means trusting the organizations and supply chains that produce them, in addition to any inspection, certification and official monitoring that these products and services have undergone.
Instead of seeing transparency as a simple binary (either something is visible or it isn't), it makes sense to discuss degrees of transparency, depending on stakeholder and context. For example, regulators, certification bodies and accident investigators may need higher levels of transparency than regular users. And regular users may be allowed to choose whether to make things visible or invisible. (Thomas Wendt discusses how Heideggerian thinking affects UX design.)
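To make the idea of graded, stakeholder-relative transparency a little more concrete, here is a minimal Python sketch. The stakeholder categories, levels and policy are invented for illustration and are not taken from P7001 or any other standard.

    from enum import IntEnum

    class TransparencyLevel(IntEnum):
        # Hypothetical ordered levels of disclosure (not taken from any standard)
        NONE = 0             # nothing disclosed
        SUMMARY = 1          # plain-language description of what the system does
        DECISION_TRACE = 2   # per-decision explanations and logs
        FULL_INTERNALS = 3   # design documents, models, source code

    # Illustrative policy: different stakeholders get different degrees of visibility
    TRANSPARENCY_POLICY = {
        "regular_user": TransparencyLevel.SUMMARY,
        "regulator": TransparencyLevel.DECISION_TRACE,
        "accident_investigator": TransparencyLevel.FULL_INTERNALS,
    }

    def visible_to(stakeholder, required):
        # True if the stakeholder's entitlement meets or exceeds the required level
        return TRANSPARENCY_POLICY.get(stakeholder, TransparencyLevel.NONE) >= required

    print(visible_to("regular_user", TransparencyLevel.DECISION_TRACE))           # False
    print(visible_to("accident_investigator", TransparencyLevel.DECISION_TRACE))  # True

The point of the ordered levels is simply that "transparent to whom, and how much" becomes an explicit design decision rather than an afterthought.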
Finally, it's worth noting that people don't only conceal things from others, they also conceal things from themselves, which leads us to the notion of self-transparency. In the personal world this can be seen as a form of authenticity; in the corporate world, it translates into ideas of responsibility, due diligence, and a constant effort to overcome wilful blindness.
If transparency and openness are promoted as a virtue, then people and organizations can make their virtue apparent by being transparent and open, and this may make us more inclined to trust them. We should perhaps be wary of organizations that demand or assume that we trust them, without providing good evidence of their trustworthiness. (The original confidence trickster asked strangers to trust him with their valuables.) The relationship between trust and trustworthiness is complicated.
UK Department of Health and Social Care, Response to the House of Commons Science and Technology Committee report on research integrity: clinical trials transparency (UK Government Policy Paper, 22 February 2019) via AllTrials
Mike Ananny and Kate Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability (new media and society 2016) pp 1–17
Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)
G.K. Chesterton, The Sign of the Broken Sword (The Saturday Evening Post, 7 January 1911)
Martin Heidegger, The Question Concerning Technology (Harper 1977) translated and with an introduction by William Lovitt
Onora O'Neill, Trust is the first casualty of the cult of transparency (Telegraph, 24 April 2002)
Cynthia Stohl, Michael Stohl and P.M. Leonardi, Managing opacity: Information visibility and the paradox of transparency in the digital age (International Journal of Communication 10, January 2016) pp 123–137
Richard Veryard, The Role of Visibility in Systems (Human Systems Management 6, 1986) pp 167-175 (this version includes some further notes dated 1999)
Thomas Wendt, Designing for Transparency and the Myth of the Modern Interface (UX Magazine, 26 August 2013)
Stanford Encyclopedia of Philosophy: Heidegger, Technological Transparency Paradox
Wikipedia: Confidence Trick, Follow The Money, Ponte Morandi, Regulatory Capture, Willful Blindness
Related posts: Defeating the Device Paradigm (October 2015), Transparency of Algorithms (October 2016), Pax Technica (November 2017), Responsible Transparency (April 2019), Whom Does The Technology Serve (May 2019)
Thursday, August 8, 2019
Automation Ethics
A robot may not injure a human being or, through inaction, allow a human being to come to harm (etc. etc.)

As I've said before, I believe Asimov's Laws are problematic as a basis for ethical principles. Given that Asimov's stories demonstrate numerous ways in which the Laws don't actually work as intended, I have always regarded Asimov's work as satirical rather than prescriptive.
While we usually don't want robots to harm people (although some people may argue for this principle to be partially suspended in the event of a "just war"), notions of harm are not straightforward. For example, a robot surgeon would have to cut the patient (minor harm) in order to perform an essential operation (major benefit). How essential or beneficial does the operation need to be, in order to justify it? Is the patient's consent sufficient?
Harm can be individual or collective. One potential harm from automation is that even if it creates wealth overall, it may shift wealth and employment opportunities away from some people, at least in the short term. But perhaps this can be justified in terms of the broader social benefit, or in terms of technological inevitability.
And besides the avoidance of (unnecessary) harm, there are some other principles to think about.
- Human-centred work - Humans should be supported by robots, not the other way around.
- Whole system solutions - Design the whole system or process, don’t just optimize a robot as a single component.
- Self-correcting - Ensure that the system is capable of detecting and learning from errors.
- Open - Provide space for learning and future disruption. Don't just pave the cow-paths.
- Transparent - The internal state and decision-making processes of a robot are accessible to (some) users.
Let's look at each of these in more detail.
Human-Centred Work
Humans should be supported by robots, not the other way around. So we don't just leave humans to handle the bits and pieces that can't be automated, but try to design coherent and meaningful jobs for humans, with robots to make them more powerful, efficient, and effective.
Organization theorists have identified a number of job characteristics associated with job satisfaction, including skill variety, task identity, task significance, autonomy and feedback. So we should be able to consider how a given automation project affects these characteristics.
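As a rough illustration of that kind of assessment, here is a small Python sketch that scores a job before and after an automation project against the five characteristics; the scoring scale and the numbers are invented.

    # Hypothetical before/after scores (1 = very low, 5 = very high) for one job
    # affected by an automation project, using the Job Characteristic Theory dimensions
    JOB_CHARACTERISTICS = ["skill_variety", "task_identity", "task_significance",
                           "autonomy", "feedback"]

    before = {"skill_variety": 4, "task_identity": 3, "task_significance": 4,
              "autonomy": 3, "feedback": 2}
    after = {"skill_variety": 2, "task_identity": 3, "task_significance": 4,
             "autonomy": 2, "feedback": 4}

    def assess(before, after):
        # Return the change in each characteristic; negative values mean the job got worse
        return {c: after[c] - before[c] for c in JOB_CHARACTERISTICS}

    for characteristic, delta in assess(before, after).items():
        flag = "worse" if delta < 0 else "better or unchanged"
        print(f"{characteristic}: {delta:+d} ({flag})")

Even a crude comparison like this forces the design conversation to address the human job, not just the robot.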
Whole Systems
When we take an architectural approach to planning and designing new technology, we can look at the whole system rather than merely trying to optimize a single robotic component.
- Look across the business and technology domains (e.g. POLDAT).
- Look at the total impact of a collection of automated devices, not at each device separately.
- Look at this as a sociotechnical system, involving humans and robots collaborating on the business process.
Self-Correcting
Ensure that the (whole) system is capable of detecting and learning from errors (including near misses).
This typically requires a multi-loop learning process. The machines may handle the inner learning loops, but human intervention will be necessary for the outer loops.
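Here is a minimal sketch of that split, with invented error categories and severity thresholds: routine, well-understood errors are absorbed automatically (inner loop), while anything novel or severe is escalated for human judgement (outer loop).

    class LearningComponent:
        # Toy stand-in for the automated part of the system
        known_error_kinds = {"misread_label", "late_delivery"}   # invented categories

        def update_from(self, error):
            print(f"inner loop: adjusting for {error['kind']}")

    def handle_error(error, component, review_queue):
        # Route an error to the inner (machine) loop or the outer (human) loop.
        # The severity scale and error kinds are illustrative only.
        if error.get("severity", 0) < 3 and error.get("kind") in component.known_error_kinds:
            component.update_from(error)      # inner loop: automatic correction
        else:
            review_queue.append(error)        # outer loop: human judgement needed

    queue = []
    component = LearningComponent()
    handle_error({"kind": "misread_label", "severity": 1}, component, queue)
    handle_error({"kind": "near_miss_collision", "severity": 5}, component, queue)
    print("escalated for human review:", queue)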
Open
Okay, so do you improve the process first and then automate it, or do you automate first? If you search the Internet for "paving the cow-paths", you can find strong opinions on both sides of this argument. But the important point here is that automation shouldn't close down all possibility of future change. Paving the cow-paths may be okay, but not just paving the cow-paths and thinking that's the end of the matter.
In some contexts, this may mean leaving a small proportion of cases to be handled manually, so that human know-how is not completely lost. (Lewis Mumford argued that it is generally beneficial to retain some "craft" production alongside automated "factory" production, as a means to further insight, discovery and invention.)
Transparency
The internal state and decision-making processes of a robot are accessible to (some) users. Provide ways to monitor and explain what the robots are up to, or to provide an audit trail in the event of something going wrong.
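As a sketch of what an audit trail might look like in practice, the following fragment appends one record per automated decision; the field names and example values are invented rather than taken from any standard.

    import json, time

    def record_decision(log_path, inputs, decision, reason):
        # Append one audit record per automated decision, so that (some) users,
        # auditors or investigators can later reconstruct what the robot did and why
        record = {
            "timestamp": time.time(),
            "inputs": inputs,        # what the robot saw
            "decision": decision,    # what it did
            "reason": reason,        # human-readable justification
        }
        with open(log_path, "a") as f:
            f.write(json.dumps(record) + "\n")

    record_decision("audit.log",
                    inputs={"obstacle_distance_m": 0.4},
                    decision="stop",
                    reason="obstacle closer than 0.5m safety threshold")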
Related posts
How Soon Might Humans Be Replaced At Work? (November 2015), Could we switch the algorithms off? (July 2017), How many ethical principles? (April 2019), Responsible Transparency (April 2019), Process Automation and Intelligence (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)
Links
Jim Highsmith, Paving Cow Paths (21 June 2005)
Wikipedia: Job Characteristic Theory, Just War Theory
Saturday, June 8, 2019
The Game of Wits between Technologists and Ethics Professors
Not surprisingly, many ethics professors believe strongly in the value of ethics education, and advocate ethics awareness training for business managers and engineers. Provided by people like themselves, obviously.
There is a common pattern among technologists and would-be entrepreneurs: first come up with a "solution", then find areas where the solution might apply, and then produce self-interested arguments to explain why the solution matches the problem. Obviously there is a danger of confirmation bias here. Proposing ethics education as a solution for an ill-defined problem space looks suspiciously like the same pattern. Ethicists should understand why it is important to explain what this education achieves, and how exactly it solves the problem.
Please note that I am not arguing against the value of ethics education and training as such, merely complaining that some of the programmes seem to involve little more than meandering through a randomly chosen reading list. @ruchowdh recently posted a particularly egregious example - see below.
Ethics professors may also believe that people with strong ethical awareness, such as themselves, can play a useful role in technology governance - for example, participating in advisory councils.
Some technology companies may choose to humour these academics, engaging them as a PR exercise (ethics washing) and generously funding their research. Fortunately, many of them lack deep understanding of business organizations and of technology, so there is little risk of them causing any serious challenge or embarrassment to these companies.
Professors are always attracted to the kind of work that lends itself to peer-reviewed articles in leading Journals. So it is fairly easy to keep their attention focused on theoretically fascinating questions with little or no practical relevance, such as the Trolley Problem.
Alternatively, they can be engaged to try and "fix" problems with real practical relevance, such as algorithmic bias. @juliapowles calls this a "captivating diversion", distracting academics from the more fundamental question, whether the algorithm should be built at all.
It might be useful for these ethics professors to have deeper knowledge of technology and business, in their social and historical context, enabling them to ask more searching and more relevant questions. (Although some ethics experts have computer science degrees or similar, computer science generally teaches people about specific technologies, not about Technology.)
The problem isn’t that we are lacking a conceptual tool kit. The problem is people emerging from the technology fields are *aggressively* ignoring the existing vast body of knowledge and existing ethnical frameworks that we can and should apply to the current situation/technology.— zeynep tufekci (@zeynep) June 11, 2019
But if only a minority of ethics professors possess sufficient knowledge and experience, these will be overlooked for the plum advisory jobs. I therefore advocate compulsory technology awareness training for ethics professors, especially "prominent" ones. Provided by people like myself, obviously.
Simon Beard, The Problem with the Trolley Problem (27 September 2019)
Stephanie Burns, Solution Looking For A Problem (Forbes, 28 May 2019)
Casey Fiesler, Tech Ethics Curricula: A Collection of Syllabi (5 July 2018), What Our Tech Ethics Crisis Says About the State of Computer Science Education (5 December 2018)
Mark Graban, Cases of Technology “Solutions” Looking for a Problem? (26 January 2011)
Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)
Oscar Williams, How Big Tech funds the debate on AI ethics (New Statesman, 6 June 2019)
Related posts:
Leadership and Governance (May 2019), Selected Reading List - Science and Technology Studies (June 2019), With Strings Attached (June 2019)
Updated 27 September 2019
Tuesday, May 28, 2019
Five Elements of Responsibility by Design
Values
- Why does ethics matter?
- What outcomes for whom?
Policies
- Principles and practices of technology ethics
- Formal codes of practice, etc. Regulation.
Event-Driven (Activity Viewpoint)
- Effective and appropriate action at different points: planning; risk assessment; design; verification, validation and test; deployment; operation; incident management; retirement. (Also known as the Process Viewpoint).
Content (Knowledge Viewpoint)
- What matters from an ethical point of view? What issues do we need to pay attention to?
- Where is the body of knowledge and evidence that we can reference?
Trust (Responsibility Viewpoint)
- Transparency and governance
- Responsibility, Authority, Expertise, Work (RAEW)
Concerning technology ethics, there is a lot of recent published material on each of these elements separately, but I have not yet found much work that puts them together in a satisfactory way. Many working groups concentrate on a single element - for example, principles or transparency. And even when experts link multiple elements, the logical connections aren't always spelled out.
At the time of writing this post (May 2019), I haven't yet fully worked out how to join these elements either, and I shall welcome constructive feedback from readers and pointers to good work elsewhere. I am also keen to find opportunities to trial these ideas on real projects.
Related Posts
Responsibility by Design (June 2018), What is Responsibility by Design (October 2018), Why Responsibility by Design Now? (October 2018)
Sunday, May 19, 2019
The Nudge as a Speech Act
Once upon a time, nudges were physical rather than verbal - a push on the shoulder perhaps, or a dig in the ribs with an elbow. The meaning was elliptical and depended almost entirely on context. "Nudge nudge, wink wink", as Monty Python used to say.
Even technologically mediated nudges can sometimes be physical, or what we should probably call haptic. For example, the fitness band that vibrates when it thinks you have been sitting for too long.
But many of the acts we now think of as nudges are delivered verbally, as some kind of speech act. But which kind?
The most obvious kind of nudge is a direct suggestion, which may take the form of a weak command. ("Try and eat a little now.") But nudges can also take other illocutionary forms, including questions ("Don't you think the sun is very hot here?") and statements / predictions ("You will find that new nose of yours very useful to spank people with.").
(Readers familiar with Kipling may recognize my examples as the nudges given by the Bi-Coloured-Python-Rock-Snake to the Elephant's Child.)
The force of a suggestion may depend on context and tone of voice. (A more systematic analysis of what philosophers call illocutionary force can be found in the Stanford Encyclopedia of Philosophy, based on Searle and Vanderveken 1985.)
@tonyjoyce raises a good point about tone of voice in electronic messages. Traditionally robots don't do tone of voice, and when a human being talks in a boring monotone we may describe their speech as robotic. But I can't see any reason why robots couldn't be programmed with more varied speech patterns, including tonality, if their designers saw the value of this.
Meanwhile, we already get some differentiation from electronic communications. For example I should expect an electronic announcement to "LEAVE THE BUILDING IMMEDIATELY" to have a tone of voice that conveys urgency, and we might think it is inappropriate or even unethical to use the same tone of voice for selling candy. We might put this together with other attention-seeking devices, such as flashing red text. The people who design clickbait clearly understand illocutionary force (even if they aren't familiar with the term).
A speech act can also gain force by being associated with action. If I promise to donate money to a given charity, this may nudge other people to do the same; but if they see me actually putting the money in the tin, the nudge might be much stronger. But then the nudge might be just as strong if I just put the money in the tin without saying anything, as long as everyone sees me do it. The important point is that some communication takes place, whether verbal or non-verbal, and this returns us to something closer to the original concept of nudge.
From an ethical point of view, there are particular concerns about unobtrusive or subliminal nudges. Yeung has introduced the concept of the Hypernudge, which combines three qualities: nimble, unobtrusive and highly potent. I share her concerns about this combination, but I think it is helpful to deal with these three qualities separately, before looking at the additional problems that may arise when they are combined.
Proponents of the nudge sometimes try to distinguish between unobtrusive (acceptable) and subliminal (unacceptable), but this distinction may be hard to sustain, and many people quote Luc Bovens' observation that nudges "typically work better in the dark". See also Baldwin.
I'm sure there's more to say on this topic, so I may update this post later. Relevant comments always welcome.
Robert Baldwin, From regulation to behaviour change: giving nudge the third degree (The Modern Law Review 77/6, 2014) pp 831-857
Luc Bovens, The Ethics of Nudge. In Mats J. Hansson and Till Grüne-Yanoff (eds.), Preference Change: Approaches from Philosophy, Economics and Psychology. (Berlin: Springer, 2008) pp. 207-20
John Danaher, Algocracy as Hypernudging: A New Way to Understand the Threat of Algocracy (Institute for Ethics and Emerging Technologies, 17 January 2017)
J. Searle and D. Vanderveken, Foundations of Illocutionary Logic (Cambridge: Cambridge University Press, 1985)
Karen Yeung, ‘Hypernudge’: Big Data as a Mode of Regulation by Design (Information, Communication and Society (2016) 1,19; TLI Think! Paper 28/2016)
Stanford Encyclopedia of Philosophy: Speech Acts
Related posts: On the Ethics of Technologically Mediated Nudge (May 2019), Nudge Technology (July 2019)
Updated 28 May 2019. Many thanks to @tonyjoyce
Friday, May 17, 2019
On the Ethics of Technologically Mediated Nudge
In its simplest form, the nudge can involve gentle persuasions and hints between one human being and another. Parents trying to influence their children (and vice versa), teachers hoping to inspire their pupils, various forms of encouragement and consensus building and leadership. In fiction, such interventions often have evil intent and harmful consequences, but in real life let's hope that these interventions are mostly well-meaning and benign.
In contrast, there are more large-scale forms of nudge, where a team of social engineers (such as the notorious Nudge Unit) design ways of influencing the behaviour of lots of people, but don't have any direct contact with the people whose behaviour is to be influenced. A new discipline has grown up, known as Behavioural Economics.
I shall call these two types unmediated and mediated respectively.
Mediated nudges may be delivered in various ways. For example, someone in Central Government may design a nudge to encourage job-seekers to find work. Meanwhile, YouTube can nudge us to watch a TED talk about nudging. Some nudges can be distributed via the Internet, or even the Internet of Things. In general, this involves both people and technology - in other words, a sociotechnical system.
To assess the outcome of the nudge, we can look at the personal effect on the nudgee or at the wider socio-economic impact, either short-term or longer-term. In terms of outcome, it may not make much difference whether the nudge is delivered by a human being or by a machine, given that human beings delivering the nudge might be given a standard script or procedure to follow - except in so far as the nudgee may feel differently about it, and may therefore respond differently. It is an empirical question whether a given person would respond more positively to a given nudge from a human bureaucrat or from a smartphone app, and the ethical difference between the two will be largely driven by this.
The second distinction involves the beneficiary of the nudge. Some nudges are designed to benefit the nudgee (Cass Sunstein calls these paternalistic), while others are designed to benefit the community as a whole (for example, correcting some market failure such as the Tragedy of the Commons). On the one hand, nudges that encourage people to exercise more; on the other hand, nudges that remind people to take their litter home. And of course there are also nudges whose intended beneficiary is the person or organization doing the nudging. We might think here of dark patterns, shades of manipulation, various ways for commercial organizations to get the individual to spend more time or money. Clearly there are some ethical issues here.
A slightly more complicated case from an ethical perspective is where the intended outcome of the nudge is to get the nudgee to behave more ethically or responsibly towards someone else.
Sunstein sees the paternalistic nudges as more controversial than nudges to address potential market failures, and states two further preferences. Firstly, he prefers nudges that educate people, that serve over time to increase rather than decrease their powers of agency. And secondly, he prefers nudges that operate at a slow deliberative tempo (System 2) rather than at a fast intuitive tempo (System 1), since the latter can seem more manipulative.
Meanwhile, there is a significant category of self-nudging. There are now countless apps and other devices that will nudge you according to a set of rules or parameters that you provide yourself, implementing the kind of self-binding or precommitment that Jon Elster described in Ulysses and the Sirens (1979). Examples include the Tomato system for time management, fitness trackers that will count your steps and vibrate when you have been sitting for too long, money management apps that allocate your spare change to your chosen charity. Several years ago, Microsoft developed an experimental Smart Bra that would detect changes in the skin to predict when a woman was about to reach for the cookie jar, and give her a friendly warning. Even if there is no problem with the nudge itself (because you have consented/chosen to be nudged) there may be some ethical issues with the surveillance and machine learning systems that enable the nudge. Especially when the nudging device is kindly made available to you by your employer or insurance company.
And even if the immediate outcome of the nudge is beneficial to the nudgee, in some situations there may be concerns that the nudgee becomes over-dependent on being nudged, and thereby loses some element of self-control or delayed gratification.
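Before turning to the direction of the nudge, here is a minimal sketch of the kind of user-defined precommitment rule such devices implement; the parameter name and threshold are invented.

    from dataclasses import dataclass

    @dataclass
    class SittingNudgeRule:
        # A self-imposed precommitment rule of the kind a fitness band enforces.
        # The user supplies the threshold; the device merely applies it.
        max_sitting_minutes: int = 50   # user-chosen parameter, not a recommendation

        def should_nudge(self, minutes_since_last_movement):
            return minutes_since_last_movement >= self.max_sitting_minutes

    rule = SittingNudgeRule(max_sitting_minutes=45)
    if rule.should_nudge(minutes_since_last_movement=47):
        print("vibrate: time to stand up")   # haptic nudge, consented to in advance

The ethical questions raised above are mostly not about this rule itself, but about who sees the data it generates and what else is inferred from it.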
The final distinction I want to introduce here concerns the direction of the nudge. The most straightforward nudges are those that push an individual in the desired direction. Suggestions to eat more healthy food, suggestions to direct spare cash to charity or savings. But some forms of therapy are based on paradoxical interventions, where the individual is pushed in the opposite direction, and they react by moving in the direction you want them to go. For example, if you want someone to give up some activity that is harming them, you might suggest they carry out this activity more systematically or energetically. This is sometimes known as reverse psychology or prescribing the symptom. For example, faced with a girl who was biting her nails, the therapist Milton Erickson advised her how she could get more enjoyment from biting her nails. Astonished by this advice, which was of course in direct opposition to all the persuasion and coercion she had received from other people up to that point, she found she was now able to give up biting her nails altogether.
(Richard Bordenave attributes paradoxical intervention to Paul Watzlawick, who worked with Gregory Bateson. It can also be found in some versions of Neuro-Linguistic Programming (NLP), which was strongly influenced by both Bateson and Erickson.)
Of course, this technique can also be practised in an ethically unacceptable direction as well. Imagine a gambling company whose official message to gamblers is that they should invest their money in a sensible savings account instead of gambling it away. This might seem like an ethically noble gesture, until we discover that the actual effect on people with a serious gambling problem is that this causes them to gamble even more. (In the same way that smoking warnings can cause some people to smoke more. Possibly cigarette companies are aware of this.)
Update: new study on warning messages to gamblers indicates a possible (but not statistically significant) counterproductive effect. See link below.
Reverse psychology may also explain why nudge programmes may have the opposite effect to the intended one. Shortly before the 2016 Brexit Referendum, an English journalist writing for RT (formerly known as Russia Today) noted a proliferation of nudges trying to persuade people to vote remain, which he labelled propaganda. While the result was undoubtedly affected by covert nudges in all directions, it is also easy to believe that the pro-establishment style of the Remain nudges could have been counterproductive.
Paradoxical interventions make perfect sense in terms of systems theory, which teaches us that the links from cause to effect are often complex and non-linear. Sometimes an accumulation of positive nudges can tip a system into chaos or catastrophe, as Donella Meadows notes in her classic essay on Leverage Points.
The Leverage Point framework may also be useful in comparing the effects of nudging at different points in a system. Robert Steele notes the use of a nudge based on restructuring information flows; in contrast, a nudge that was designed to alter the nudgee's preferences or goals or political opinions could be much more dangerously powerful, as @zeynep has demonstrated in relation to YouTube.
One of the things that complicates the ethics of Nudge is that the alternative to nudging may either be greater forms of coercion or worse outcomes for the individual. In his article on the Ethics of Nudging, Cass Sunstein argues that all human interaction and activity takes place inside some kind of Choice Architecture, thus some form of nudging is probably inevitable, whether deliberate or inadvertent. He also argues that nudges may be required on ethical grounds to the extent that they promote our core human values. (This might imply that it is sometimes irresponsible to miss an opportunity to provide a helpful nudge.) So the ethical question is not whether to nudge or not, but how to design nudges in such a way as to maximize these core human values, which he identifies as welfare, autonomy and human dignity.
While we can argue with some of the detail of Sunstein's position, I think his two main conclusions make reasonable sense. Firstly, that we are always surrounded by what Sunstein calls Choice Architectures, so we can't get away from the nudge. And secondly, that many nudges are not just preferable to whatever the alternative might be but may also be valuable in their own right.
So what happens when we introduce advanced technology into the mix? For example, what if we have a robot that is programmed to nudge people, perhaps using some kind of artificial intelligence or machine learning to adapt the nudge to each individual in a specific context at a specific point in time?
Within technology ethics, transparency is a major topic. If the robot is programmed to include a predictive model of human psychology that enables it to anticipate the human response in certain situations, this model should be open to scrutiny. Although such models can easily be wrong or misguided, especially if the training data set reflects an existing bias, with reasonable levels of transparency (at least for the appropriate stakeholders) it will usually be easier to detect and correct these errors than to fix human misconceptions and prejudices.
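One simple form that such scrutiny can take is comparing the model's outputs across groups. The following sketch computes nudge rates per group from a hypothetical decision log and flags large disparities; the data and the threshold are purely illustrative.

    from collections import defaultdict

    def nudge_rates_by_group(decisions):
        # decisions: list of {"group": ..., "nudged": True/False} records (hypothetical log)
        # Returns the proportion of each group that the model chose to nudge
        totals, nudged = defaultdict(int), defaultdict(int)
        for d in decisions:
            totals[d["group"]] += 1
            nudged[d["group"]] += int(d["nudged"])
        return {g: nudged[g] / totals[g] for g in totals}

    decisions = [{"group": "A", "nudged": True}, {"group": "A", "nudged": False},
                 {"group": "B", "nudged": True}, {"group": "B", "nudged": True}]
    rates = nudge_rates_by_group(decisions)
    print(rates)
    # Crude check: flag if one group is nudged much more often than another
    if max(rates.values()) > 1.25 * min(rates.values()):   # illustrative threshold
        print("warning: nudge rates differ markedly between groups - investigate")

This kind of check is only possible if the appropriate stakeholders can see the decision log in the first place, which is the point about transparency.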
In science fiction, robots have sufficient intelligence and understanding of human psychology to invent appropriate nudges for a given situation. If we start to see more of this in real life, we could start to think of these as unmediated robotic nudges, instead of the robot merely being the delivery mechanism for a mediated nudge. But does this introduce any additional ethical issues, or merely amplify the importance of the ethical issues we are already looking at?
Some people think that the ethical rules should be more stringent for robotic nudges than for other kinds of nudges. For example, I've heard people talking about parental consent before permitting children to be nudged by a robot. But other people might think it was safer for a child to be nudged (for whatever purpose) by a robot than by an adult human. And if you think it is a good thing for a child to work hard at school, eat her broccoli, and be kind to those less fortunate than herself, and if robotic persuasion turns out to be the most effective and child-friendly way of achieving these goals, do we really want heavier regulation on robotic child-minders than human ones?
Finally, it's worth noting that as nudges exploit bounded rationality, any entity that displays bounded rationality is capable of being nudged. Besides humans, this includes animals and algorithmic machines, as well as larger social systems (including markets and elections).
Richard Bordenave, Comment les paradoxes permettent de réinventer les nudges (Harvard Business Review France, 30 January 2019). Adapted English version: When paradoxes inspire Nudges (6 April 2019)
Rob Davies, Warning message on gambling ads does little to stop betting – study (The Guardian, 4 August 2019)
Jon Elster, Ulysses and the Sirens (1979)
Sam Gerrans, Propaganda techniques nudging UK to remain in Europe (RT, 22 May 2016)
Jochim Hansen, Susanne Winzeler and Sascha Topolinski, When the Death Makes You Smoke: A Terror Management Perspective on the Effectiveness of Cigarette On-Pack Warnings (Journal of Experimental Social Psychology 46(1):226-228, January 2010) HT @ABMarkman
Donella Meadows, Leverage Points: Places to Intervene in a System (Whole Earth Review, Winter 1997)
Robert Steele, Implementing an integrated and transformative agenda at the regional and national levels (AtKisson, 2014)
Cass Sunstein, The Ethics of Nudging (Yale J. on Reg, 32, 2015)
Iain Thomson, Microsoft researchers build 'smart bra' to stop women's stress eating (The Register, 6 Dec 2013)
Zeynep Tufekci, YouTube, the Great Radicalizer (New York Times, 10 March 2018)
Wikipedia: Behavioural Insights Team ("Nudge Unit"), Bounded Rationality, Reverse Psychology
Stanford Encyclopedia of Philosophy: The Ethics of Manipulation
Related posts: Good Ideas from Flaky Sources (December 2009), Have you got big data in your underwear? (December 2014), Ethical Communication in a Digital Age (November 2018), The Nudge as a Speech Act (May 2019), Nudge Technology (July 2019)
Updated 4 August 2019
Tuesday, May 14, 2019
Leadership versus Governance
Joanna Bryson argues that the people who were appointed to Google's Advanced Technology External Advisory Council (ATEAC) were selected because they were "prominent" in the field. She notes that "although being prominent doesn't mean you're the best, it probably does mean you're at least pretty good, at least at something".
Ignoring the complexities of university politics, academics generally achieve prominence because they are pretty good at having interesting and original ideas, publishing papers and books, coordinating research, and supervising postgraduate work, as well as representing the field in wider social and intellectual forums (e.g. TED talks). Clearly that can be regarded as an important type of leadership.
Bryson argues that leading is about problem-solving. And clearly there are some aspects of problem-solving in what has brought her to prominence, although that's certainly not the whole story.
But that argument completely misses the point. The purpose of the ATEAC was not problem-solving. Google does not need help with problem-solving, it employs thousands of extremely clever people who spend all day solving problems (although it may sometimes need a bit of help in the diversity stakes).
The stated purpose of the ATEAC was to help Google implement its AI principles. In other words, governance.
When Google published its AI principles last year, the question everyone was asking was about governance:
- @mer__edith (Twitter 8 June 2018, tweet no longer available) called for "strong governance, independent external oversight and clarity"
- @katecrawford (Twitter 8 June 2018) asked "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?"
- and @EricNewcomer (Bloomberg 8 June 2018) asked "who decides if Google has fulfilled its commitments".
Google's appointment of an "advisory" council was clearly a half-hearted attempt to answer this question.
Bryson points out that Kay Coles James (the most controversial appointee) had some experience writing technology policy. But what a truly independent governance body needs is experience monitoring and enforcing policy, which is not the same thing at all.
People talk a lot about transparency in relation to technology ethics. Typically this refers to being able to "look inside" an advanced technological product, such as an algorithm or robot. But transparency is also about process and organization - ability to scrutinize the risk assessment and the design and the potential conflicts of interest. There are many people performing this kind of scrutiny on a full-time basis within large organizations or ecosystems, with far more experience of extremely large and complex development programmes than your average professor.
Had Google really wanted a genuinely independent governance body to scrutinize them properly, could they have appointed a different set of experts? Can people appointed and paid by Google ever be regarded as genuinely independent? And doesn't the word "advisory" give the game away? As Brustein and Bergen point out, the actual decisions are made by an internal body, the Advanced Technology Review Council, and external critics doubt that this body will ever seriously challenge Google's commercial or strategic interests.
Veena Dubal suggests that the most effective governance over Google is currently coming from Google's own workforce. It seems that their protests were significant in getting Google to disband the ATEAC, while earlier protests (re Project Maven) had led to the production of the AI principles in the first place. Clearly the kind of courageous leadership demonstrated by people like Meredith Whittaker isn't just about problem-solving.
Joshua Brustein and Mark Bergen, The Google AI Ethics Board With Actual Power Is Still Around (Bloomberg, 6 April 2019)
Joanna Bryson, What we lost when we lost Google ATEAC (7 April 2019), What leaders are actually for (13 May 2019)
Veena Dubal, Who stands between you and AI dystopia? These Google activists (The Guardian, 3 May 2019)
Bobbie Johnson and Gideon Lichfield, Hey Google, sorry you lost your ethics council, so we made one for you (MIT Technology Review, 6 April 2019)
Abner Li, Google details formal review process for enforcing AI Principles, plans external advisory group (9to5 Google, 18 December 2018)
Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)
Kent Walker, An external advisory council to help advance the responsible development of AI (Google, 26 March 2019, updated 4 April 2019)
Related post: Data and Intelligence Principles From Major Players (June 2018)
Updated 15 May 2019
Sunday, April 28, 2019
Responsible Transparency
An important area where demands for transparency conflict with demands for confidentiality is with embedded software that serves the interests of the manufacturer rather than the consumer or the public. For example, a few years ago we learned about a "defeat device" that VW had built in order to cheat the emissions regulations; similar devices have been discovered in televisions to falsify energy consumption ratings.
Even when the manufacturers aren't actually breaking the law, they have a strong commercial interest in concealing the purpose and design of these systems, and they use Digital Rights Management (DRM) and the US Digital Millennium Copyright Act (DMCA) to prevent independent scrutiny. In what appears to be an example of regulatory capture, car manufacturers were abetted by the US EPA, which was persuaded to inhibit transparency of engine software, on the grounds that this would enable drivers to cheat the emissions regulations.
Defending the EPA, David Golumbia sees a choice between two trust models, which he calls democratic and cyberlibertarian. For him, the democratic model "puts trust in bodies specifically chartered and licensed to enforce regulations and laws", such as the EPA, whereas in the cyberlibertarian model, it is the users themselves who get the transparency and can scrutinize how something works. In other words, trusting the wisdom of crowds, or what he patronizingly calls "ordinary citizen security researchers".
(In their book on Trust and Mistrust, John Smith and Aidan Ward describe four types of trust. Golumbia's democratic model involves top-down trust, based on the central authority of the regulator, while the cyberlibertarian model involves decentralized network trust.)
Golumbia argues that the cyberlibertarian position is incoherent.
"It says, on the one hand, we should not trust manufacturers like Volkswagen to follow the law. We shouldn’t trust them because people, when they have self-interest at heart, will pursue that self-interest even when the rules tell them not to. But then it says we should trust an even larger group of people, among whom many are no less self-interested, and who have fewer formal accountability obligations, to follow the law."One problem with this argument is that it appears to confuse scrutiny with compliance. Cyberlibertarians may be strongly in favour of deregulation, but increasing transparency isn't only advocated by cyberlibertarians and doesn't necessarily imply deregulation. It could be based on a recognition that regulatory scrutiny and citizen scrutiny are complementary, given two important facts. Firstly, however powerful the tools at their disposal the regulators don't always spot everything; and secondly, regulators are sometimes subject to improper influence from the companies they are supposed to be regulating (so-called regulatory capture). Therefore having independent scrutiny as well as central regulation increases the likelihood that hazards will be discovered and dealt with. This could include the detection of algorithmic bias or previously unidentified hazards/vulnerabilities/malpractice.
Another small problem with his argument is that the defeat device had already hoodwinked the EPA and other regulators for many years.
Golumbia claims that "what the cyberlibertarians want, even demand, is for everyone to have the power to read and modify the emissions software in their cars" and complains that "the more we put law into the hands of those not specifically entrusted to follow it, the more unethical behavior we will have". It is certainly true that some of the advocates of open source are also advocating "right to repair" and customization rights. But there were two separate requests for exemptions to DMCA - one for testing and one for modification. And the researchers quoted by Kyle Wiens, who were disadvantaged by the failure of the EPA to mandate a specific exemption to DMCA to allow safety and security tests, were not casual libertarians or "ordinary citizens" but researchers at the International Council of Clean Transportation and West Virginia University.
It ought to be possible for regulators and academic researchers to collaborate productively in scrutinizing an industry, provided that clear rules, protocols and working practices are established for responsible scrutiny. Perhaps researchers might gain some protection from regulatory action or litigation by notifying a regulator in advance, or by prompt notification of any discovered issues. For example, the UK Data Protection Act 2018 (section 172) defines what it calls "effectiveness testing conditions", under which researchers can legitimately attempt to crack the anonymity of deidentified personal data. Among other things, a successful attempt must be notified to the Information Commissioner within 72 hours.
Meanwhile, in the cybersecurity world there are fairly well-established protocols for responsible disclosure of vulnerabilities, and in some cases rewards are paid to the researchers who find them, provided they are disclosed responsibly. Although not all of us have the expertise to understand the technical detail, the existence of this kind of independent scrutiny should make us all feel more confident about the safety, reliability and general trustworthiness of the products in question.
David Golumbia, The Volkswagen Scandal: The DMCA Is Not the Problem and Open Source Is Not the Solution (6 October 2015)
Brent Mittelstadt et al, The ethics of algorithms: Mapping the debate (Big Data and Society July–December 2016)
Jonathan Trull, Responsible Disclosure: Cyber Security Ethics (CSO Cyber Security Pulse, 26 February 2015)
Aidan Ward and John Smith, Trust and Mistrust (Wiley 2003)
Kyle Wiens, Opinion: The EPA shot itself in the foot by opposing rules that could've exposed VW (The Verge, 25 September 2015)
Related posts: Four Types of Trust (July 2004), Defeating the Device Paradigm (October 2015)
Tuesday, April 23, 2019
Decentred Regulation and Responsible Technology
Julia Black identifies a number of potential failures in regulation, which are commonly attributed to command and control (CAC) regulation - regulation by the state through the use of legal rules backed by (often criminal) sanctions.
- instrument failure - the instruments used (laws backed by sanctions) are inappropriate and unsophisticated
- information and knowledge failure - governments or other authorities have insufficient knowledge to be able to identify the causes of problems, to design solutions that are appropriate, and to identify non-compliance
- implementation failure - implementation of the regulation is inadequate
- motivation failure and capture theory - those being regulated are insufficiently inclined to comply, and those doing the regulating are insufficiently motivated to regulate in the public interest
For Black, decentred regulation represents an alternative to CAC regulation, based on five key challenges. These challenges echo the ideas of Michel Foucault around governmentality, which Isabell Lorey (2005, p23) defines as "the structural entanglement between the government of a state and the techniques of self-government in modern Western societies".
- complexity - emphasising both causal complexity and the complexity of interactions between actors in society (or systems), which are imperfectly understood and change over time
- fragmentation - of knowledge, and of power and control. This is not just a question of information asymmetry; no single actor has sufficient knowledge, or sufficient control of the instruments of regulation.
- interdependencies - including the co-production of problems and solutions by multiple actors across multiple jurisdictions (and amplified by globalization)
- ungovernability - Black explains this in terms of autopoiesis, the self-regulation, self-production and self-organisation of systems. As a consequence of these (non-linear) system properties, it may be difficult or impossible to control things directly
- the rejection of a clear distinction between public and private - leading to rethinking the role of formal authority in governance and regulation
In response to these challenges, Black describes a form of regulation with the following characteristics:
- hybrid - combining governmental and non-governmental actors
- multifaceted - using a number of different strategies simultaneously or sequentially
- indirect - this appears to link to what (following Teubner) she calls reflexive regulation - for example setting the decision-making procedures within organizations in such a way that the goals of public policy are achieved
And she asks if it counts as regulation at all, if we strip away much of what people commonly associate with regulation, and if it lacks some key characteristics, such as intentionality or effectiveness. Does regulation have to be what she calls "cybernetic", which she defines in terms of three functions: standard-setting, information gathering and behaviour modification? (Other definitions of "cybernetic" are available, such as Stafford Beer's Viable Systems Model.)
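Read as a control loop, Black's three "cybernetic" functions could be sketched as follows; the example values and function names are invented, and this is only one possible reading of her definition.

    def regulatory_cycle(standard, observe, modify_behaviour):
        # One pass of the three "cybernetic" functions: standard-setting,
        # information gathering and behaviour modification
        observed = observe()                        # information gathering
        if observed > standard:                     # compare against the standard that was set
            modify_behaviour(observed - standard)   # behaviour modification

    # Illustrative use: an emissions limit enforced against a measured value
    regulatory_cycle(
        standard=0.08,                              # standard-setting (invented limit)
        observe=lambda: 0.11,                       # e.g. a measured emissions value
        modify_behaviour=lambda excess: print(f"require reduction of {excess:.2f}"),
    )

In decentred regulation, of course, the three functions may be distributed across different governmental and non-governmental actors rather than performed by a single authority.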
Meanwhile, how does any of this apply to responsible technology? Apart from the slogan, what I'm about to say would be true of any large technology company, but I'm going to talk about Google, for no other reason than its former use of the slogan "Don't Be Evil". (This is sometimes quoted as "Do No Evil", but for now I shall ignore the difference between being evil and doing evil.) What holds Google to this slogan is not primarily government regulation (mainly US and EU) but mostly an interconnected set of other forces, including investors, customers (much of its revenue coming from advertising), public opinion and its own workforce. Clearly these stakeholders don't all have the same view on what counts as Evil, or what would be an appropriate response to any specific ethical concern.
If we regard each of these stakeholder domains as a large-scale system, each displaying complex and sometimes apparently purposive behaviour, then the combination of all of them can be described as a system of systems. Mark Maier distinguished between three types of System of System (SoS), which he called Directed, Collaborative and Virtual; Philip Boxer identifies a fourth type, which he calls Acknowledged.
- Directed - under the control of a single authority
- Acknowledged - some aspects of regulation are delegated to semi-autonomous authorities, within a centrally planned regime
- Collaborative - under the control of multiple autonomous authorities, collaborating voluntarily to achieve an agreed purpose
- Virtual - multiple authorities with no common purpose
Black's notion of "hybrid" clearly moves from the Directed type to one of the other types of SoS. But which one? Where technology companies are required to interpret and enforce some rules, under the oversight of a government regulator, this might belong to the Acknowledged type. For example, social media platforms being required to enforce some rules about copyright and intellectual property, or content providers being required to limit access to those users who can prove they are over 18. (Small organizations sometimes complain that this kind of regime tends to favour larger organizations, which can more easily absorb the cost of building and implementing the necessary mechanisms.)
However, one consequence of globalization is that there is no single regulatory authority. In Data Protection, for example, the tech giants are faced with different regulations in different jurisdictions, and can choose whether to adopt a single approach worldwide, or to apply the stricter rules only where necessary. (So for example, Microsoft has announced it will apply GDPR rules worldwide, while other technology companies have apparently migrated personal data of non-EU citizens from Ireland to the US in order to avoid the need to apply GDPR rules to these data subjects.)
But although the detailed rules on privacy and other ethical issues vary significantly between countries and jurisdictions, there is a reasonably broad acceptance of the principle that some privacy is probably a Good Thing. Similarly, although dozens of organizations have published rival sets of ethical principles for AI or robotics or whatever, there appears to be a fair amount of common purpose between them, indicating that all these organizations are travelling (or pretending to travel) in more or less the same direction. Therefore it seems reasonable to regard this as the Collaborative type.
Decentred regulation raises important questions of agency and purpose. And if it is to maintain relevance and effectiveness in a rapidly changing technological world, there needs to be some kind of emergent / collective intelligence conferring the ability to solve not only downstream problems (making judgements on particular cases) but also upstream problems (evolving governance principles and practices).
Julia Black, Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World (Current Legal Problems, Volume 54, Issue 1, 2001) pp 103–146
Julia Black, Decentred Regulation (LSE Centre for Analysis of Risk and Regulation, 2002)
Philip Boxer, Architectures that integrate differentiated behaviours (Asymmetric Leadership, August 2011)
Martin Innes, Bethan Davies and Morag McDermont, How Co-Production Regulates (Social and Legal Studies, 2008)
Mark W. Maier, Architecting Principles for Systems-of-Systems (Systems Engineering, Vol 1 No 4, 1998)
Isabell Lorey, State of Insecurity (Verso 2015)
Gunther Teubner, Substantive and Reflexive Elements in Modern Law (Law and Society Review, Vol. 17, 1983) pp 239-285
Wikipedia: Don't Be Evil
Related posts: How Many Ethical Principles (April 2019), Algorithms and Governmentality (July 2019)
Saturday, April 20, 2019
Ethics committee raises alarm
"You can do something as part of a treatment program, entirely on a whim, and nobody will interfere, as long as it’s not potty (and even then you’ll probably be alright). But the moment you do the exact same thing as part of a research program, trying to see if it actually works or not, adding to the sum total of human knowledge, and helping to save the lives of people you’ll never meet, suddenly a whole bunch of people want to stuck their beaks in."
Within IT, there is considerable controversy about the role of the ethics committee, especially after Google appointed and then disbanded its Ethics Board. In a recent article for Slate, @internetdaniel complains about company ethics boards offering "advice" rather than meaningful oversight, and calls this ethics theatre. @ruchowdh prefers to call it ethics washing.
So I was particularly interested to find a practical example of an ethics committee in action in this morning's Guardian. While the outcome of this case is not yet clear, there seem to be some positive indicators in @sloumarsh's report.
Firstly, the topic (predictive policing) is clearly an important and difficult one. It is not just about applying a simplistic set of ethics principles, but balancing a conflicting set of interests and concerns. (As @oscwilliams reports, this topic has already got the attention of the Information Commissioner's Office.)
Secondly, the discussion is in the open, and the organization is making the right noises. “This is an important area of work, that is why it is right that it is properly scrutinised and those details are made public.” (This contrasts with some of the bad examples of medical ethics cited by Goldacre.)
Thirdly, the ethics committee is (informally) supported by a respected external body (Liberty), which adds weight to its concerns, and has helped bring the case to public attention. (Credit @Hannah_Couchman)
Fourthly, although the ethics committee mandate only applies to a single police force (West Midlands), its findings are likely to be relevant to other police forces across the UK. For those forces that do not have a properly established governance process of their own, the default path may be to follow the West Midlands example.
So it is possible (although not guaranteed) that this particular case may produce a reasonable outcome, with a valuable contribution from the ethics committee and its external supporters. But it is worrying if this is what it takes for governance to work, because this happy combination of positive indicators will not be present in most other cases.
Ben Goldacre, Where’s your ethics committee now, science boy? (Bad Science Blog, 23 February 2008), When Ethics Committees Kill (Bad Science Blog, 26 March 2011), Taking transparency beyond results: ethics committees must work in the open (Bad Science Blog, 23 September 2016)
Sarah Marsh, Ethics committee raises alarm over 'predictive policing' tool (The Guardian, 20 April 2019)
Daniel Susser, Ethics Alone Can’t Fix Big Tech (Slate, 17 April 2019)
Jane Wakefield, Google's ethics board shut down (BBC News, 5 April 2019)
Oscar Williams, Some of the UK’s biggest police forces are using algorithms to predict crime (New Statesman, 4 February 2019)
Saturday, March 9, 2019
Upstream Ethics
I use the term upstream ethics to refer to
- Establishing priorities and goals - for example, emphasising precaution and prevention
- Establishing general principles, processes and practices
- Embedding these in standards, policies and codes of practice
- Enacting laws and regulations
- Establishing governance - monitoring and enforcement
- Training and awareness - enabling, encouraging and empowering people to pay due attention to ethical concerns
- Approving and certifying technologies, products, services and supply chains.
I use the term downstream ethics to refer to
- Making judgements about a specific instance
- Eliciting values and concerns in a specific context as part of the requirements elicitation process
- Detecting ethical warning signals
- Applying, interpreting and extending upstream ethics to a specific case or challenge
- Auditing compliance with upstream ethics
There is also a feedback and learning loop, where downstream issues and experiences are used to evaluate and improve the efficacy of upstream ethics.
Downstream ethics does not take place at a single point in time. I use the term early downstream to mean paying attention to ethical questions at an early stage of an initiative. Among other things, this may involve picking up early warning signals of potential ethical issues affecting a particular case. Early downstream means being ethically proactive - introducing responsibility by design - while late downstream means reacting to ethical issues only after they have been forced upon you by other stakeholders.
However, some writers regard what I'm calling early downstream as another type of upstream. Thus Ozdemir and Knoppers talk about Type 1 and Type 2 upstream. And John Paul Slosar writes
"Early identification of the ethical dimensions of person-centered care before the point at which one might recognize the presence of a more traditionally understood “ethics case” is vital for Proactive Ethics Integration or any effort to move ethics upstream. Ideally, there would be a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent."
For his part, as a lawyer specializing in medical technology, Christopher White describes upstream ethics as a question of confidence and supply - in other words, having some level of assurance about responsible sourcing and supply of component technologies and materials. He mentions a range of sourcing issues, including conflict minerals, human slavery, and environmentally sustainable extraction.
Extending this point, advanced technology raises sourcing issues not only for physical resources and components, but also for intangible inputs like data and knowledge. For example, medical innovation may be dependent upon clinical trials, while machine learning may be dependent on large quantities of training data. So there are important questions of upstream ethics as to whether these data were collected properly and responsibly, which may affect the extent to which these data can be used responsibly, or at all. As Rumman Chowdhury asks, "How do we institute methods of ethical provenance?"
There is a trade-off between upstream effort and downstream effort. If you take more care upstream, you should hope to experience fewer difficulties downstream. Conversely, some people may wish to invest little or no time upstream, and face the consequences downstream. One way of thinking about responsibility is shifting the balance of effort and attention upstream. But obviously you can't work everything out upstream, so you will always have further stuff to do downstream.
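As a toy illustration of this trade-off, the following sketch assumes (purely for the sake of argument) that downstream effort falls off exponentially as upstream effort increases, and finds the point where the combined effort is lowest. The numbers mean nothing in themselves; the point is that neither zero upstream effort nor unlimited upstream effort minimizes the total.

```python
# A toy cost model of the upstream/downstream trade-off.
# The functional form (downstream effort halving with each unit of upstream
# effort) is an assumption for illustration, not an empirical claim.

def total_effort(upstream: float, base_downstream: float = 100.0, decay: float = 0.5) -> float:
    """Upstream effort plus the downstream effort it leaves behind."""
    downstream = base_downstream * (decay ** upstream)
    return upstream + downstream


# Sweep upstream effort to see where the combined effort is lowest.
candidates = [(u, total_effort(u)) for u in range(0, 15)]
best = min(candidates, key=lambda pair: pair[1])
print(f"lowest total effort at upstream={best[0]} (total={best[1]:.1f})")
```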
So it's about getting the balance right, and joining the dots. Wherever we choose to draw the line between "upstream" and "downstream", with different institutional arrangements and mobilizing different modes of argumentation and evidence at different stages, "upstream" and "downstream" still need to be properly connected, as part of a single ethical system.
(In a separate post, Ethics - Soft and Hard, I discuss Luciano Floridi's use of the terms hard and soft ethics, which covers some of the same distinctions I'm making here but in a way I find more confusing.)
Os Keyes, Nikki Stevens, and Jacqueline Wernimont, The Government Is Using the Most Vulnerable People to Test Facial Recognition Software (Slate 17 March 2019) HT @ruchowdh
Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639
John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)
Christopher White, Looking the Other Way: What About Upstream Corporate Considerations? (MedTech, 29 Mar 2017)
Updated 18 March 2019
Sunday, March 3, 2019
Ethics and Uncertainty
Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of inductive reasoning (what has happened when people have done this kind of thing in the past) and predictive reasoning (what is likely to happen when I do this in the future).
There are several difficulties here. The first is the problem of induction - to what extent can we expect the past to be a guide to the future, and how relevant is the available evidence to the current problem. The evidence doesn't speak for itself, it has to be interpreted.
For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.
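Gould's statistical point can be illustrated with a small simulation. The log-normal shape and spread below are assumptions chosen purely for illustration, not Gould's actual data; the point is simply that a median of eight months is compatible with a real chance of surviving for many years.

```python
# A small simulation of a right-skewed survival distribution with a median
# of eight months. The log-normal shape and its spread are assumptions for
# illustration only.

import math
import random

random.seed(42)
median_months = 8.0
sigma = 1.2  # spread of the assumed log-normal distribution

samples = [random.lognormvariate(math.log(median_months), sigma) for _ in range(100_000)]

samples.sort()
median = samples[len(samples) // 2]
five_year_share = sum(1 for s in samples if s > 60) / len(samples)

print(f"median survival: {median:.1f} months")
print(f"share surviving beyond five years: {five_year_share:.1%}")
```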
The second difficulty is that we don't know enough. We are innovating faster than we can research the effects. And longer term consequences are harder to predict than short-term consequences: even if we assume an unchanging environment, we usually don't have as much hard data about longer-term consequences.
For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives. Especially when taken alongside other drugs.
This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of Climate Sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that's not the point. Gould's abdominal cancer didn't kill him - but only because he took action to improve his prognosis. Alexandria Ocasio-Cortez has recently started using the term Climate Delayers for those who find excuses for delaying action on climate change.
The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.
The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty.
The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether the ship was sea-worthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn't actually know that the ship would sink. However, most people would see the ship owner having a moral duty of diligence, and would regard him as accountable for neglecting this duty.
But how can we know if we have enough knowledge? This raises the question of the "known unknowns" and "unknown unknowns", which is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.
(And who is we? J. Nathan Matias argues that the obligation to experiment is not limited to the creators of an artefact, but may extend to other interested parties.)
The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the instant of seeing (recognizing that some situation exists that calls for a decision), the time for understanding (assembling and analysing the options), and the moment to conclude (the final choice).
The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful consequences. The result of applying Responsibility by Design should not be reduced innovation, but better and more responsible innovation. The time for understanding should not drag on forever; there should always be a moment to conclude.
Matthew Cantor, Could 'climate delayer' become the political epithet of our times? (The Guardian, 1 March 2019)
Will Crouch, Practical Ethics Given Moral Uncertainty (Oxford University, 30 January 2012)
Stephen Jay Gould, The Median Isn't the Message (Discover 6, June 1985) pp 40–42
J. Nathan Matias, The Obligation To Experiment (Medium, 12 December 2016)
Alex Matthews-King, Humanity producing potentially harmful chemicals faster than they can test their effects, experts warn (Independent, 27 February 2019)
Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty (EMBO Rep. 8(10) October 2007) pp 892–896
Stanford Encyclopedia of Philosophy: Consequentialism, The Problem of Induction
Wikipedia: There are known knowns
The ship-owner example can be found in an essay called "The Ethics of Belief" (1877) by W.K. Clifford, in which he states that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence".
I describe Lacan's model of time in my book on Organizational Intelligence (Leanpub 2012)
Related posts: Ethics and Intelligence (April 2010), Practical Ethics (June 2018), Big Data and Organizational Intelligence (November 2018)
Updated 11 March 2019
Sunday, November 18, 2018
Ethics in Technology - FinTech
Since many of the technologies under discussion are designed to support the financial services industry, the core ethical debate is strongly correlated to the business ethics of the finance sector and is not solely a matter of technology ethics. But like most other sectors, the finance sector is being disrupted by the opportunities and challenges posed by technological innovation, and this entails a professional and moral responsibility on technologists to engage with a range of ethical issues.
(Clearly there are many ethical issues in the financial services industry besides technology. For example, my friends in the @LongFinance initiative are tackling the question of sustainability.)
The Financial Services industry has traditionally been highly regulated, although some FinTech innovations may be less well regulated for now. So people working in this sector may expect regulation - specifically principles-based regulation - to play a leading role in ethical governance. (Note: the UK Financial Services Authority has been pursuing a principles-based regulation strategy for over ten years.)
Whether ethical questions can be reduced to a set of principles or rules is a moot point. In medical ethics, principles are generally held to be useful but not sufficient for resolving difficult ethical problems. (See McCormick for a good summary. See also my post on Practical Ethics.)
Nevertheless, there are undoubtedly some useful principles for technology ethics. For example, the principle that you can never foresee all the consequences of your actions, so you should avoid making irreversible technological decisions. In science fiction, this issue can be illustrated by a robot that goes rogue and cannot be switched off. @moniquebachner made the point that with a technology like Blockchain, you were permanently stuck, for good or ill, with your original design choices.
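The irreversibility point can be illustrated with a minimal hash-chain sketch - a toy model, not any real blockchain protocol. Once entries are chained together, an early design choice cannot be quietly revised, because every later link would break.

```python
# A minimal hash-chain sketch of irreversibility: altering an early entry
# invalidates every subsequent hash. A toy model, not a real blockchain.

import hashlib


def link(previous_hash: str, payload: str) -> str:
    return hashlib.sha256((previous_hash + payload).encode()).hexdigest()


def build_chain(payloads: list[str]) -> list[str]:
    hashes, prev = [], "genesis"
    for payload in payloads:
        prev = link(prev, payload)
        hashes.append(prev)
    return hashes


original = build_chain(["design choice A", "record 1", "record 2"])
revised = build_chain(["design choice B", "record 1", "record 2"])

# Every hash from the altered entry onwards differs, so the revision is
# visible everywhere downstream.
print([o == r for o, r in zip(original, revised)])  # [False, False, False]
```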
Several of the large tech companies have declared principles for data and intelligence. (My summary here.) But declaring principles is the easy bit; these companies taking them seriously (or us trusting them to take them seriously) may be harder.
One of the challenges discussed by the panel was how to negotiate the asymmetry of power. If your boss or your client wants to do something that you are uncomfortable with, you can't just assert some ethical principles and expect her to change her mind. So rather than walk away from an interesting technical challenge, you give yourself an additional organizational challenge - how to influence the project in the right way, without sacrificing your own position.
Obviously that's an ethical dilemma in its own right. Should you compromise your principles in the hope of retaining some influence over the outcome, or could you persuade yourself that the project isn't so bad after all? There is an interesting play-off between individual responsibility and collective responsibility, which we are also seeing in politics (Brexit passim).
Sheryl Sandberg appears to offer a high-profile example of this ethical dilemma. She had been praised by feminists for being "the one reforming corporate boy’s club culture from the inside ... the civilizing force barely keeping the organization from tipping into the abyss of greed and toxic masculinity." Crispin now disagrees with this view. "It seems clear what Sandberg truly is instead: a team player. And her team is not the working women of the world. It is the corporate culture that has groomed, rewarded, and protected her throughout her career." "This is the end of corporate feminism", comments @B_Ehrenreich.
And talking of Facebook ...
The title of Cathy O'Neil's book Weapons of Math Destruction invites a comparison between the powerful technological instruments now in the hands of big business, and the arsenal of nuclear and chemical weapons that have been a major concern of international relations since the Second World War. During the so-called Cold War, these weapons were largely controlled by the two major superpowers, and it was these superpowers that dominated the debate. As these weapons technologies have proliferated however, attention has shifted to the possible deployment of these weapons by smaller countries, and it seems that the world has become much more uncertain and dangerous.
In the domain of data ethics, it is the data superpowers (Facebook, Google) that command the most attention. But while there are undoubtedly major concerns about the way these companies use their powers, we may at least hope that a combination of forces may help to moderate the worst excesses. Besides regulatory action, these forces might include public opinion, corporate risk aversion from the large advertisers that provide the bulk of the income, as well as pressure from their own employees.
And in FinTech as with Data Protection, it will always be easier for regulators to deal with a small number of large players than with a very large number of small players. The large players will of course try to lobby for regulations that suit them, and may shift some operations into less strongly regulated jurisdictions, but in the end they will be forced to comply, more or less. Except that the ethically dubious stuff will always turn out to be led by a small company you've never heard of, and the large players will deny that they knew anything about it.
As I pointed out in my previous post on The Future of Political Campaigning, the regulators only have limited tools at their disposal, and this skews how they are able to deal with the ethical ecosystem as a whole. If I had a hammer ...
Financial Services Authority, Principles-Based Regulation - Focusing on the Outcomes that Matter (FSA, April 2007)
Jessa Crispin, Feminists gave Sheryl Sandberg a free pass. Now they must call her out (Guardian, 17 November 2018)
Ian Harris, Commercial Ethics: Process or Outcome (Z/Yen, 2008)
Thomas R. McCormick, Principles of Bioethics (University of Washington, 2013)
Chris Yapp, Where does the buck stop now? (Long Finance, 28 October 2018)
Related posts: Practical Ethics (June 2018), Data and Intelligence Principles from Major Players (June 2018), The Future of Political Campaigning (November 2018)