Tuesday, October 8, 2019

Ethics of Transparency and Concealment

Last week I was in Berlin at the invitation of the IEEE to help develop standards for responsible technology (P7000). One of the working groups (P7001) is looking at transparency, especially in relation to autonomous and semi-autonomous systems. In this blogpost, I want to discuss some more general ideas about transparency.

In 1986 I wrote an article for Human Systems Management promoting the importance of visibility. There were two reasons I preferred this word. Firstly, "transparency" is a contronym - it has two opposite senses. When something is transparent, this can mean either that you don't see it at all, you simply see through it, or that you can see it clearly. And secondly, transparency appears to be merely a property of an object, whereas visibility is about the relationship between the object and the viewer - visibility to whom?

(P7001 addresses this by defining transparency requirements in relation to different stakeholder groups.)

Although I wasn't aware of this when I wrote the original article, my concept of visibility shares something with Heidegger's concept of Unconcealment (Unverborgenheit). Heidegger's work seems a good starting point for thinking about the ethics of transparency.

Technology generally makes certain things available while concealing other things. (This is related to what Albert Borgmann, a student of Heidegger, calls the Device Paradigm.)
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. Lovitt
Goods that are available to us enrich our lives and, if they are technologically available, they do so without imposing burdens on us. Something is available in this sense if it has been rendered instantaneous, ubiquitous, safe, and easy. Borgmann
I referred above to the two opposite meanings of the word "transparent". For Heidegger and his followers, the word "transparent" often refers to tools that can be used without conscious thought, or what Heidegger called ready-to-hand (zuhanden). In technology ethics, on the other hand, the word "transparent" generally refers to something (product, process or organization) being open to scrutiny, and I shall stick to this meaning for the remainder of this blogpost.

We are surrounded by technology, we rarely have much idea how most of it works, and usually cannot be bothered to find out. Thus when technological devices are designed to conceal their inner workings, this is often exactly what the users want. How then can we object to concealment?

The ethical problems of concealment depend on what is concealed by whom and from whom, why it is concealed, and whether, when and how it can be unconcealed.

Let's start with the why. Sometimes people deliberately hide things from us, for dishonest or devious reasons. This category includes so-called defeat devices that are intended to cheat regulations. Less clear-cut is when people hide things to avoid the trouble of explaining or justifying them.

(If something is not visible, then we may not be aware that there is something that needs to be explained. So even if we want to maintain a distinction between transparency and explainability, the two concepts are interdependent.)

People may also hide things for aesthetic reasons. The Italian civil engineer Riccardo Morandi designed bridges with the steel cables concealed, which made them difficult to inspect and maintain. The Morandi Bridge in Genoa collapsed in August 2018, killing 43 people.

And sometimes things are just hidden, not as a deliberate act but because nobody has thought it necessary to make them visible. (This is one of the reasons why a standard could be useful.)

We also need to consider the who. For whose benefit are things being hidden? In particular, who is pulling the strings, where is the funding coming from, and where are the profits going - follow the money. In technology ethics, the key question is Whom Does The Technology Serve?

In many contexts, therefore, the main focus of unconcealment is not understanding exactly how something works but being aware of the things that people might be trying to hide from you, for whatever reason. This might include being selective about the available evidence, or presenting the most common or convenient examples and ignoring the outliers. It might also include failing to declare potential conflicts of interest.

For example, the #AllTrials campaign for clinical trial transparency demands that drug companies declare all clinical trials in advance, rather than waiting until the trials are complete and then deciding which ones to publish.

Now let's look at the possibility of unconcealment. Concealment doesn't always mean making inconvenient facts impossible to discover, but may mean making them so obscure and inaccessible that most people don't bother, or creating distractions that divert people's attention elsewhere. So transparency doesn't just entail possibility, it requires a reasonable level of accessibility.

Sometimes too much information can also serve to conceal the truth. Onora O'Neill talks about the "cult of transparency" that fails to produce real trust.
Transparency can produce a flood of unsorted information and misinformation that provides little but confusion unless it can be sorted and assessed. It may add to uncertainty rather than to trust. Transparency can even encourage people to be less honest, so increasing deception and reducing reasons for trust. O'Neill
Sometimes this can be inadvertent. However, as Chesterton pointed out in one of his stories, this can be a useful tactic for those who have something to hide.
Where would a wise man hide a leaf? In the forest. If there were no forest, he would make a forest. And if he wished to hide a dead leaf, he would make a dead forest. And if a man had to hide a dead body, he would make a field of dead bodies to hide it in. Chesterton
Stohl et al call this strategic opacity (via Ananny and Crawford).

Another philosopher who talks about the "cult of transparency" is Shannon Vallor. However, what she calls the "Technological Transparency Paradox" seems to be merely a form of asymmetry: we are open and transparent to the social media giants, but they are not open and transparent to us.

In the absence of transparency, we are forced to trust people and organizations - not only for their honesty but also their competence and diligence. Under certain conditions, we may trust independent regulators, certification agencies and other institutions to verify these attributes on our behalf, but this in turn depends on our confidence in their ability to detect malfeasance and enforce compliance, as well as believing them to be truly independent. (So how transparent are these institutions themselves?) And trusting products and services typically means trusting the organizations and supply chains that produce them, in addition to any inspection, certification and official monitoring that these products and services have undergone.

Instead of seeing transparency as a simple binary (either something is visible or it isn't), it makes sense to discuss degrees of transparency, depending on stakeholder and context. For example, regulators, certification bodies and accident investigators may need higher levels of transparency than regular users. And regular users may be allowed to choose whether to make things visible or invisible. (Thomas Wendt discusses how Heideggerian thinking affects UX design.)
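
To make the idea of graduated transparency a little more concrete, here is a minimal sketch of how transparency requirements might be recorded against stakeholder groups. The stakeholder names and levels are my own illustrative assumptions, not definitions taken from P7001 or any other standard.

```python
# A minimal sketch of graduated transparency requirements by stakeholder group.
# The stakeholder names and levels below are illustrative assumptions,
# not definitions taken from IEEE P7001 or any other standard.

from enum import IntEnum

class TransparencyLevel(IntEnum):
    NONE = 0          # inner workings fully concealed
    SUMMARY = 1       # plain-language account of what the system does
    DECISION_LOG = 2  # access to records of individual decisions
    FULL_TRACE = 3    # access to internal state, models and data

REQUIRED_LEVEL = {
    "regular_user": TransparencyLevel.SUMMARY,
    "certification_body": TransparencyLevel.DECISION_LOG,
    "accident_investigator": TransparencyLevel.FULL_TRACE,
    "regulator": TransparencyLevel.FULL_TRACE,
}

def is_sufficient(stakeholder: str, offered: TransparencyLevel) -> bool:
    """Does the visibility offered meet this stakeholder's requirement?"""
    return offered >= REQUIRED_LEVEL[stakeholder]

print(is_sufficient("regular_user", TransparencyLevel.DECISION_LOG))      # True
print(is_sufficient("accident_investigator", TransparencyLevel.SUMMARY))  # False
```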

Finally, it's worth noting that people don't only conceal things from others, they also conceal things from themselves, which leads us to the notion of self-transparency. In the personal world this can be seen as a form of authenticity; in the corporate world, it translates into ideas of responsibility, due diligence, and a constant effort to overcome wilful blindness.

If transparency and openness are promoted as virtues, then people and organizations can make their virtue apparent by being transparent and open, and this may make us more inclined to trust them. We should perhaps be wary of organizations that demand or assume that we trust them, without providing good evidence of their trustworthiness. (The original confidence trickster asked strangers to trust him with their valuables.) The relationship between trust and trustworthiness is complicated.



UK Department of Health and Social Care, Response to the House of Commons Science and Technology Committee report on research integrity: clinical trials transparency (UK Government Policy Paper, 22 February 2019) via AllTrials

Mike Ananny and Kate Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability (New Media & Society, 2016) pp 1–17

Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)

G.K. Chesterton, The Sign of the Broken Sword (The Saturday Evening Post, 7 January 1911)

Martin Heidegger, The Question Concerning Technology (Harper 1977) translated and with an introduction by William Lovitt

Onora O'Neill, Trust is the first casualty of the cult of transparency (Telegraph, 24 April 2002)

Cynthia Stohl, Michael Stohl and P.M. Leonardi, Managing opacity: Information visibility and the paradox of transparency in the digital age (International Journal of Communication 10, January 2016) pp 123–137

Richard Veryard, The Role of Visibility in Systems (Human Systems Management 6, 1986) pp 167-175 (this version includes some further notes dated 1999)

Thomas Wendt, Designing for Transparency and the Myth of the Modern Interface (UX Magazine, 26 August 2013)

Stanford Encyclopedia of Philosophy: Heidegger, Technological Transparency Paradox

Wikipedia: Confidence Trick, Follow The Money, Ponte Morandi, Regulatory Capture, Willful Blindness


Related posts: Defeating the Device Paradigm (October 2015), Transparency of Algorithms (October 2016), Pax Technica (November 2017), Responsible Transparency (April 2019), Whom Does The Technology Serve (May 2019)

Monday, September 23, 2019

Technology and The Discreet Cough

In fiction, servants cough discreetly to make people aware of their presence. (I'm thinking of P.G. Wodehouse, but there must be other examples.)

Technological devices sometimes call our attention to themselves for various reasons. John Ehrenfeld calls this presencing. The device goes from available (ready-to-hand) to conspicuous (visible).

In many cases this is seen as a malfunction, when the device fails to provide the expected commodity (obstinate) and thereby interrupts our intended action (obstructive).

However, in some cases the presencing is part of the design - the device nudging us into some kind of conscious engagement (or even what Borgmann calls focal practice).

Ehrenfeld's example is the two-button toilet flush, which allows the user to select more or less water. He sees this as "lending an ethical context to the task at hand" (p155) - thus the user is not only choosing the quantity of water but also being mindful of the environmental impact of this choice. Even if this mindfulness may diminish with familiarity, "the ethical nature of the task has become completely intertwined with the more practical aspects of the process". In other words, the environmentally friendly path has become routine (normalized).

Of course, people who are really mindful of the environmental or financial impact of wasting water may sometimes choose not to flush at all (following the slogan “If it’s yellow, let it mellow; if it’s brown, flush it down”) or perhaps to wee behind a tree in the garden rather than use the toilet. It is quite possible that the two button flush might nudge a few more people to think this way. 

So sometimes a little gentle obstinacy on the part of our technological devices may be a good thing.





Albert Borgmann, Technology and the Character of Contemporary Life (Chicago, 1984)

John Ehrenfeld, Sustainability by Design (Yale, 2008)

Wednesday, September 18, 2019

What Does Diversion Mean?

Diversion has various different meanings in the world of ethics.

Distraction. An idea or activity serves as a distraction from what's important. For example, @juliapowles uses the term "captivating diversion" to refer to ethicists becoming preoccupied with narrow computational puzzles that distract them from far more important issues.

Substitution. People are redirected from something harmful to something supposedly less harmful. For example, switching from smoking to vaping. See my post on the Ethics of Diversion - Tobacco Example (September 2019). And in the 1840s, a Baptist preacher and temperance activist organized excursions to divert people from drinking. His name: Thomas Cook.

Unauthorized Utilization. Using products for a purpose other than the one approved or prescribed in a given market. There are various forms of this, some of which are both illegal and unethical, while others may be ethically justifiable.
  • Drug diversion, the transfer of any legally prescribed controlled substance from the individual for whom it was prescribed to another person for any illicit use.
  • Grey imports. Drug companies try to control shipments of drugs between markets, especially when this is done to undercut the official drug prices. However, some people regard the tactics of the drug companies as unethical. Médecins Sans Frontières, the medical charity, has accused one pharma giant of promoting overly-intrusive patient surveillance to stop a generic drug being diverted to patients in developed countries.
  • Off-label use. Doctors may prescribe drugs for a purpose or patient group outside the official approval, with various degrees of justification. For more discussion, see my post Off-Label (March 2005)
Exploiting Regulatory Divergence. Carrying out activities (for example, conducting clinical trials) in countries with underdeveloped ethical review and weak regulatory oversight. See the debate between Wertheimer and Resnik.





Amy Kazmin, Pharma combats diversion of cheap drugs (FT 12 April 2015)

Julia Powles, The Seductive Diversion of ‘Solving’ Bias in Artificial Intelligence (7 December 2018)

David B. Resnik, Addressing diversion effects (Journal of Law and the Biosciences, 2015) 428–430

Alan Wertheimer, The ethics of promulgating principles of research ethics: the problem of diversion effects (J Law Biosci. 2(1) Feb 2015) 2-32

Wikipedia: Drug Diversion, Thomas Cook

Monday, September 16, 2019

The Ethics of Diversion - Tobacco Example

What are the ethics of diverting people from smoking to vaping?

On the one hand, we have the following argument.
  • E-cigarettes ("vaping") offer a plausible substitute for smoking cigarettes.
  • Smoking is dangerous, and vaping is probably much less dangerous.
  • Many smokers find it difficult to give up, even if they are motivated to do so. So vaping provides a plausible exit route.
  • Observed reductions in the level of smoking can be partially attributed to the availability of alternatives such as vaping. (This is known as the diversion hypothesis.)
  • It is therefore justifiable to encourage smokers to switch from cigarettes to e-cigarettes.

Critics of this argument make the following points.
  • While the dangers of smoking are now well-known, some evidence is now emerging to suggest that vaping may also be dangerous. In the USA, a handful of people have died and hundreds have been hospitalized.
  • While some smokers may be diverted to vaping, there are also concerns that vaping may provide an entry path to smoking, especially for young people. This is known as the gateway or catalyst hypothesis.

Some defenders of vaping blame the potential health risks and the gateway effect not on vaping itself but on the wide range of flavours that are available. While these may increase the attraction of vaping to children, the flavour ingredients are chemically unstable and may produce toxic compounds. For this reason, President Trump has recently proposed a ban on flavoured e-cigarettes.

Juul, which dominates the e-cigarette market in the US, is currently being investigated by the FDA and federal prosecutors for its marketing, and the inappropriately named Mr Burns has just stepped down as CEO.

And elsewhere in the world, significant differences in regulation are emerging between countries. While some countries are looking to ban e-cigarettes altogether, the UK position (as presented by Public Health England and the MHRA) is to encourage e-cigarettes as a safer alternative to smoking. Presumably, at some point in the future, UK data can be compared with data from other countries to provide evidence for or against the UK position. Professor Simon Capewell of Liverpool University (quoted in the Observer) calls this a "bizarre national experiment".

While we await convincing data about outcomes, ethical reasoning may appeal to several different principles.

Firstly, the minimum interference principle. In this case, this means not restricting people's informed choice without good reason.

Secondly, the utilitarian principle. The benefit of helping a large number of people to reduce a known harm outweighs the possibility of causing a lesser but unknown harm to a smaller number of people.
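
As a back-of-envelope illustration of this utilitarian calculus, consider the following sketch; all of the numbers are invented purely to show the shape of the reasoning, not to estimate real risks.

```python
# A back-of-envelope illustration of the utilitarian argument.
# All numbers are invented purely to show the shape of the reasoning,
# not to estimate real risks.

smokers_who_switch = 1_000_000
assumed_risk_reduction_per_switcher = 0.05   # assumed reduction in probability of serious harm

new_vapers_otherwise_unexposed = 100_000
assumed_risk_per_new_vaper = 0.01            # assumed probability of serious harm from vaping alone

expected_harm_avoided = smokers_who_switch * assumed_risk_reduction_per_switcher    # 50,000
expected_harm_caused = new_vapers_otherwise_unexposed * assumed_risk_per_new_vaper  # 1,000

# On these invented assumptions the expected harm avoided far exceeds the harm caused;
# the objection below is precisely that the second set of numbers is unknown.
print(expected_harm_avoided, expected_harm_caused)
```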

Thirdly, the precautionary principle. Even if vaping appears to be safer than traditional smoking, Professor Capewell reminds us of other things that were assumed to be safe - until we discovered that they weren't safe at all.

And finally, the conflict of interest principle. Elliott Reichardt, a researcher at the University of Calgary and a campaigner against vaping, argues that any study, report or campaign funded by the tobacco industry should be regarded with some suspicion.



Meanwhile, the traditional tobacco industry is hedging its bets - investing in e-cigarettes but doing well when vaping falters.



US Food and Drug Administration, Warning Letter to Juul Labs (FDA, 9 September 2019) via BBC News

Allan M. Brandt, Inventing Conflicts of Interest: A History of Tobacco Industry Tactics (Am J Public Health 102(1) January 2012) 63–71

Tom Chivers, Stop Hating on Vaping (Unherd, 13 September 2019) via @IanDunt

Jamie Doward, After six deaths in the US and bans around the world – is vaping safe? (Observer, 15 September 2019)

David Heath, Contesting the Science of Smoking (Atlantic, 4 May 2016)

Angelica Lavito, Juul built an e-cigarette empire. Its popularity with teens threatens its future (CNBC 4 August 2018)

Levy DT, Warner KE, Cummings KM, et al, Examining the relationship of vaping to smoking initiation among US youth and young adults: a reality check (Tobacco Control 20 November 2018)

Jennifer Maloney, Federal Prosecutors Conducting Criminal Probe of Juul (Wall Street Journal, 23 September 2019)

Elliott Reichardt and Juliet Guichon, Vaping is an urgent threat to public health (The Conversation, 13 March 2019)

Saturday, August 31, 2019

The Ethics of Disruption

In a recent commentary on #Brexit, Simon Jenkins notes that
"disruption theory is much in vogue in management schools, so long as someone else suffers".

Here is Bruno Latour making the same point.
"Don't be fooled for a second by those who preach the call of wide-open spaces, of  'risk-taking', those who abandon all protection and continue to point at the infinite horizon of modernization for all. Those good apostles take risks only if their own comfort is guaranteed. Instead of listening to what they are saying about what lies ahead, look instead at what lies behind them: you'll see the gleam of the carefully folded golden parachutes, of everything that ensures them against the random hazards of existence." (Down to Earth, p 11)

Anyone who advocates "moving fast and breaking things" is taking an ethical position: namely that anything fragile enough to break deserves to be broken. This position is similar to the economic view that companies and industries that can't compete should be allowed to fail.

This position may be based on a combination of specific perceptions and general observations. The specific perception is that when something is weak or fragile, protecting and preserving it consumes effort and resources that could otherwise be devoted to more worthwhile purposes, and makes other things less efficient and effective. The general observation is that when something is failing, efforts to protect and preserve it may merely delay the inevitable collapse.

These perceptions and observations rely on a particular worldview or lens, in which things can be perceived as successful or otherwise, independent of other things. As Gregory Bateson once remarked (via Tim Parks),
"There are times when I catch myself believing there is something which is separate from something else."
Perceptions of success and failure are also dependent on timescale and time horizon. The dinosaurs ruled the Earth for 140 million years.

There may also be strong opinions about which things get protection and which don't. For example, some people may think it is more important to support agriculture or to rescue failing banks than to protect manufacturers. On the other hand, there will always be people who disagree with the choices made by governments on such matters, and who will conclude that the whole project of protecting some industry sectors (and not others) is morally compromised.

Furthermore, the idea that some things are "too big to fail" may also be problematic, because it implies that small things don't matter so much.

A common agenda of the disruptors is to tear down perceived barriers, such as regulations. This runs into the fallacy known as Chesterton's Fence: assuming that anything whose purpose is not immediately obvious must be redundant.




Simon Jenkins, Boris Johnson and Jeremy Hunt will have to ditch no deal – or face an election (Guardian, 28 June 2019)

Bruno Latour, Down to Earth: Politics in the New Climatic Regime (Polity Press, 2018)

Tim Parks, Impossible Choices (Aeon, 15 July 2019)

Rory Sutherland, Chesterton’s fence – and the idiots who rip it out (Spectator, 10 September 2016)


Related posts: Shifting Paradigms and Disruptive Technology (September 2008), Arguments from Nature (December 2010), Low-Hanging Fruit (August 2019)

Thursday, August 22, 2019

Low-Hanging Fruit

August comes around again, and there are ripe blackberries in the hedgerows. One of the things I was taught at an early age was to avoid picking berries that were low enough to be urinated on by animals. (Or humans for that matter.) So I have always regarded the "low hanging fruit" metaphor with some distaste.

In business, "low hanging fruit" sometimes refers to an easy and quick improvement that nobody has previously spotted.

Which is of course perfectly possible. A new perspective can often reveal new opportunities.

But often the so-called low hanging fruit were already obvious, so pointing them out just makes you sound as if you think you are smarter than everyone else. And if they haven't already been harvested, there may be something you don't know about. (The fallacy of eliminating things whose purpose you don't understand is known as Chesterton's Fence.)

And another thing about picking soft fruit: fruit is not placed at random; each plant has a characteristic pattern. Many plants place their leaves above the fruit, so you can often see more fruit when you look upwards from below. If you get into the habit of looking downwards for the low-hanging stuff, you will simply not see how much more bounty the plant has to offer.

A lot of best practices and checklists are based on the simple and obvious. This is fine as far as it goes, but it is not very innovative, and it won't take you from Best Practice to Next Practice.

So as I pointed out in my post on the Wisdom of the Iron Age, nobody should ever be satisfied with the low hanging fruit. Even if the low-hanging fruit hasn't already been pissed upon, its only value should be to get us started, to feed us and motivate us as we build ladders, so we can reach the high-hanging fruit.





Rory Sutherland, Chesterton’s fence – and the idiots who rip it out (Spectator, 10 September 2016)

Wikipedia: Chesterton's Fence

Updated 1 September 2019

Thursday, August 8, 2019

Automation Ethics

Many people start their journey into the ethics of automation and robotics by looking at Asimov's Laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm (etc. etc.)
As I've said before, I believe Asimov's Laws are problematic as a basis for ethical principles. Given that Asimov's stories demonstrate numerous ways in which the Laws don't actually work as intended, I have always regarded his work as satirical rather than prescriptive.

While we usually don't want robots to harm people (although some people may argue for this principle to be partially suspended in the event of a "just war"), notions of harm are not straightforward. For example, a robot surgeon would have to cut the patient (minor harm) in order to perform an essential operation (major benefit). How essential or beneficial does the operation need to be, in order to justify it? Is the patient's consent sufficient?

Harm can be individual or collective. One potential harm from automation is that even if it creates wealth overall, it may shift wealth and employment opportunities away from some people, at least in the short term. But perhaps this can be justified in terms of the broader social benefit, or in terms of technological inevitability.

And besides the avoidance of (unnecessary) harm, there are some other principles to think about.
  • Human-centred work - Humans should be supported by robots, not the other way around.
  • Whole system solutions - Design the whole system or process, don't just optimize a robot as a single component.
  • Self-correcting - Ensure that the system is capable of detecting and learning from errors.
  • Open - Provide space for learning and future disruption. Don't just pave the cow-paths.
  • Transparent - Make the internal state and decision-making processes of a robot accessible to (some) users.

Let's look at each of these in more detail.


Human-Centred Work

Humans should be supported by robots, not the other way around. So we don't just leave humans to handle the bits and pieces that can't be automated, but try to design coherent and meaningful jobs for humans, with robots to make them more powerful, efficient, and effective.

Organization theorists have identified a number of job characteristics associated with job satisfaction, including skill variety, task identity, task significance, autonomy and feedback. So we should be able to consider how a given automation project affects these characteristics.
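
One way of making such an assessment concrete is Hackman and Oldham's Job Characteristics Model (see the Wikipedia link below), which combines these five characteristics into a Motivating Potential Score. The before-and-after scores in the following sketch are invented purely for illustration.

```python
# Hackman and Oldham's Job Characteristics Model combines the five characteristics
# into a Motivating Potential Score (MPS):
#   MPS = ((skill variety + task identity + task significance) / 3) * autonomy * feedback
# The before/after scores below are invented purely for illustration.

def motivating_potential_score(variety, identity, significance, autonomy, feedback):
    return ((variety + identity + significance) / 3) * autonomy * feedback

# Hypothetical job profile before automation (each characteristic scored 1-7)
before = motivating_potential_score(variety=5, identity=4, significance=5, autonomy=4, feedback=3)

# Hypothetical profile after a poorly designed automation project, where the human
# is left with fragmented residual tasks and less autonomy, but better feedback.
after = motivating_potential_score(variety=2, identity=2, significance=4, autonomy=2, feedback=5)

print(before, after)  # a fall in the score flags a job-design problem, not just a technical one
```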


Whole Systems

When we take an architectural approach to planning and designing new technology, we can look at the whole system rather than merely trying to optimize a single robotic component.
  • Look across the business and technology domains (e.g. POLDAT).
  • Look at the total impact of a collection of automated devices, not at each device separately.
  • Look at this as a sociotechnical system, involving humans and robots collaborating on the business process.

Self-Correcting

Ensure that the (whole) system is capable of detecting and learning from errors (including near misses).

This typically requires a multi-loop learning process. The machines may handle the inner learning loops, but human intervention will be necessary for the outer loops.
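
As a minimal sketch of what this division of labour might look like, the following code handles routine corrections automatically (the inner loop) while queueing anomalies and near misses for human review (the outer loop); all the names, scores and thresholds are hypothetical.

```python
# A minimal sketch of inner and outer learning loops. All names, scores and
# thresholds are hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class ReviewQueue:
    """Outer loop: cases set aside for periodic human review, which may change
    the model, the process, or even the goals."""
    items: list = field(default_factory=list)

    def escalate(self, case, reason):
        self.items.append((case, reason))

def handle(case: dict, review: ReviewQueue) -> str:
    decision = "approve" if case.get("score", 0.0) >= 0.5 else "refer"

    # Inner loop: the machine detects its own marginal decisions and near misses,
    # falls back to a safe default, and records them for the outer loop.
    if abs(case.get("score", 0.0) - 0.5) < 0.1:
        decision = "refer"
        review.escalate(case, "near miss: decision close to threshold")

    return decision

review = ReviewQueue()
print(handle({"score": 0.55}, review), len(review.items))  # refer 1
```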
 

Open

Okay, so do you improve the process first and then automate it, or do you automate first? If you search the Internet for "paving the cow-paths", you can find strong opinions on both sides of this argument. But the important point here is that automation shouldn't close down all possibility of future change. Paving the cow-paths may be okay, but not just paving the cow-paths and thinking that's the end of the matter.

In some contexts, this may mean leaving a small proportion of cases to be handled manually, so that human know-how is not completely lost. (Lewis Mumford argued that it is generally beneficial to retain some "craft" production alongside automated "factory" production, as a means to further insight, discovery and invention.)


Transparency

The internal state and decision-making processes of a robot should be accessible to (some) users. Provide ways to monitor and explain what the robots are up to, and an audit trail in the event of something going wrong.
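
As a minimal sketch of what such an audit trail might involve, the following code records each decision together with its inputs, rationale and software version; the record fields are illustrative assumptions, not a standard schema.

```python
# A minimal sketch of a decision audit trail, so that a robot's behaviour can be
# monitored, explained and investigated after the event. The record fields are
# illustrative assumptions, not a standard schema.

import json
from datetime import datetime, timezone

def record_decision(log_path, inputs, decision, rationale, software_version):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                      # what the robot sensed or was given
        "decision": decision,                  # what it did
        "rationale": rationale,                # why: rules fired, confidence scores, etc.
        "software_version": software_version,  # which version of the software/model was running
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")

record_decision("audit.log", {"obstacle_detected": True}, "stop",
                "obstacle detected with confidence 0.97", "v1.2.0")
```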




Related posts

How Soon Might Humans Be Replaced At Work? (November 2015), Could we switch the algorithms off? (July 2017), How many ethical principles? (April 2019), Responsible Transparency (April 2019), Process Automation and Intelligence (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)

Links

Jim Highsmith, Paving Cow Paths (21 June 2005)

Wikipedia

Job Characteristic Theory
Just War Theory