
Wednesday, June 13, 2018

Practical Ethics

A lot of ethical judgements appear to be binary ones. Good versus bad. Acceptable versus unacceptable. Angels versus Devils.

Where questions of ethics reach the public sphere, it is common for people to take strong positions for or against. For example, there have been some high-profile cases involving seriously sick children: whether they should be provided with some experimental treatment, or even whether they should be kept alive at all. These are incredibly difficult decisions for those closely involved, and yet the experts making them are then subjected to vitriolic attack from armchair critics (often from the other side of the world) who think they know better.

Practical ethics are mostly about trade-offs: interpreting the evidence, predicting the consequences, estimating and balancing the benefits and risks. There isn't a simple formula that can be applied; each case must be carefully considered to determine where it sits on a spectrum.

The same is true of business and technology ethics. There isn't a blanket rule that says these forms of persuasion are good and those forms are bad; there are just different degrees of nudge. We might want to regard all nudges with some suspicion, but retailers have always nudged people to purchase things. The question is whether this particular form of nudge is acceptable in this context, or whether it crosses some fuzzy line into manipulation or worse. Where does this particular project sit on the spectrum?

Technologists sometimes abdicate responsibility for such questions: whatever the client wants, or whatever the technology enables, is assumed to be okay. Responsibility means owning that judgement.

When Google published its AI principles recently, Eric Newcomer complained that balancing the benefits and risks sounded like the utilitarianism he had learned about in high school. But he also complained that Google's approach lacks impartiality and agent-neutrality. Since those are usually taken to be defining features of utilitarianism, it would be more accurate to describe Google's approach as consequentialism.

In the real world, even the question of agent-neutrality is complicated. Sometimes this is interpreted as a call to disregard any judgement made by a stakeholder, on the grounds that they must be biased. For example, ignoring professional opinions (doctors, teachers) because they might be trying to protect their own professional status. But taking important decisions about healthcare or education away from the professionals doesn't solve the problem of bias; it merely replaces professional bias with some other form of bias.

In Google's case, people are entitled to question how exactly Google will make these difficult judgements, and the extent to which these judgements may be subject to some conflict of interest. But if there is no other credible body that can make these judgements, perhaps the best we can ask for (at least for now) is some kind of transparency or scrutiny.

As I said above, practical ethics are mostly about consequences - which philosophers call consequentialism. But not entirely. Ethical arguments about the human subject aren't always framed in terms of observable effects, but may be framed in terms of human values. For example, the idea that people should be given control over something or other, not because it makes them happier, but just because, you know, they should. Or the idea that certain things (truth, human life, etc.) are sacrosanct.

In his book The Human Use of Human Beings, first published in 1950, Norbert Wiener based his computer ethics on what he called four great principles of justice. So this is not just about balancing outcomes.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”


Of course, a complex issue may require more than a single dimension. It may be useful to draw spider diagrams or radar charts to help visualize the relevant factors. Alternatively, Cathy O'Neil recommends the Ethical or Stakeholder Matrix technique, originally invented by Professor Ben Mepham.

"A construction from the world of bio-ethics, the ethical or “stakeholder” matrix is a way of determining the answer to the question, does this algorithm work? It does so by considering all the stakeholders, and all of their concerns, be them positive (accuracy, profitability) or negative (false negatives, bad data), and in particular allows the deployer to think about and gauge all types of best case and worst case scenarios before they happen. The matrix is color coded with red, yellow, or green boxes to alert people to problem areas." [Source: ORCAA]
"The Ethical Matrix is a versatile tool for analysing ethical issues. It is intended to help people make ethical decisions, particularly about new technologies. It is an aid to rational thought and democratic deliberation, not a substitute for them. ... The Ethical Matrix sets out a framework to help individuals and groups to work through these debates in relation to a particular issue. It is designed so that a broader than usual range of ethical concerns is aired, differences of perspective become openly discussed, and the weighting of each concern against the others is made explicit. The matrix is based in established ethical theory but, as far as possible, employs user-friendly language." [Source: Food Ethics Council]




Jessi Hempel, Want to prove your business is fair? Audit your algorithm (Wired 9 May 2018)

Ben Mepham, Ethical Principles and the Ethical Matrix. Chapter 3 in J. Peter Clark and Christopher Ritson (eds), Practical Ethics for Food Professionals: Ethics in Research, Education and the Workplace (Wiley 2013)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Tom Upchurch, To work for society, data scientists need a hippocratic oath with teeth (Wired, 8 April 2018)



Stanford Encyclopedia of Philosophy: Computer and Information Ethics, Consequentialism, Utilitarianism

Related posts: Conflict of Interest (March 2018), Data and Intelligence Principles From Major Players (June 2018)

Sunday, March 25, 2018

Ethics as a Service

In the real world, ethics is rarely if ever the primary focus. People engaging with practical issues may need guidance or prompts to engage with ethical questions, as well as appropriate levels of governance.


@JPSlosar calls for
"a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent".

Slosar's particular interest is in healthcare. He wants to proactively integrate ethics in person-centered care, as a key enabler of the multiple (and sometimes conflicting) objectives of healthcare: improved outcomes, reduced costs, and the best possible experience for both patients and providers. Together, these four objectives are known as the Quadruple Aim.

According to Slosar, ethics can be understood as a service aimed at reducing, minimizing or avoiding harm. Harm can sometimes be caused deliberately, or blamed on human inattentiveness, but it is more commonly caused by system and process errors.

A team of researchers at Carnegie Mellon, Berkeley and Microsoft Research has proposed an approach to ethics-as-a-service involving crowd-sourcing ethical decisions. This was presented at an Ethics-By-Design workshop in 2013.


Meanwhile, Ozdemir and Knoppers distinguish between two types of Upstream Ethics: Type 1 refers to early ethical engagement, while Type 2 refers to the choice of ethical principles, which they call "prenormative", part of the process by which "normativity" is achieved. Given that most of the discussion of EthicsByDesign assumes early ethical engagement in a project (Type 1), their Type 2 might be better called EthicsByFiat.





Cristian Bravo-Lillo, Serge Egelman, Cormac Herley, Stuart Schechter and Janice Tsai, Reusable Ethics‐Compliance Infrastructure for Human Subjects Research (CREDS 2013)

Derek Feeley, The Triple Aim or the Quadruple Aim? Four Points to Help Set Your Strategy (IHI, 28 November 2017)

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)

Conflict of Interest

@riptari (Natasha Lomas) has a few questions for DeepMind's AI ethics research unit. She suggests that
"it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts"

and points out that
"there’s a reason no one trusts the survey touting the amazing health benefits of a particular foodstuff carried out by the makers of said foodstuff".

As @marionnestle remarks in relation to the health claims of chocolate,
"industry-funded research tends to set up questions that will give them desirable results, and tends to be interpreted in ways that are beneficial to their interests". (via Nik Fleming)





Nic Fleming, The dark truth about chocolate (Observer, 25 March 2018)

Natasha Lomas, DeepMind now has an AI ethics research unit. We have a few questions for it… (TechCrunch, 4 Oct 2017)

Sunday, March 18, 2018

Security is downstream from strategy

Following @carolecadwalla's latest revelations about the misuse of personal data involving Facebook, she gets a response from Alex Stamos, Facebook's Chief Security Officer.

So let's take a look at some of his hand-wringing Tweets.

I'm sure many security professionals would sympathize with this. Nobody listens to me. Strategy and innovation surge ahead, and security is always an afterthought.

According to his LinkedIn entry, Stamos joined Facebook in June 2015. Before that he had been Chief Security Officer at Yahoo!, which suffered a major breach on his watch in late 2014, affecting over 500 million user accounts. So perhaps a mere 50 million Facebook users having their data used for nefarious purposes doesn't really count as much of a breach in his book.

In a series of tweets he later deleted, Stamos argued that the whole problem was caused by the use of an API that everyone should have known about, because it was well-documented. As if his job was only to control the undocumented stuff.
Or as Andrew Keane Woods glosses the matter, "Don’t worry everyone, Cambridge Analytica didn’t steal the data; we were giving it out". By Monday night, Stamos had resigned.

In one of her articles, Carole Cadwalladr quotes the Breitbart doctrine
"politics is downstream from culture, so to change politics you need to change culture"
And culture eats strategy. And security is downstream from everything else. So much then for "by design and by default".







Carole Cadwalladr ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower (Observer, 18 Mar 2018) via @BiellaColeman

Carole Cadwalladr and Emma Graham-Harrison, How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool (Guardian, 17 Mar 2018)

Jessica Elgot and Alex Hern, No 10 'very concerned' over Facebook data breach by Cambridge Analytica (Guardian, 19 Mar 2018)

Hannes Grassegger and Mikael Krogerus, The Data That Turned the World Upside Down (Motherboard, 28 Jan 2017) via @BiellaColeman

Justin Hendrix, Follow-Up Questions For Facebook, Cambridge Analytica and Trump Campaign on Massive Breach (Just Security, 17 March 2018)

Casey Johnston, Cambridge Analytica's leak shouldn't surprise you, but it should scare you (The Outline, 19 March 2018)

Nicole Perlroth, Sheera Frenkel and Scott Shane, Facebook Exit Hints at Dissent on Handling of Russian Trolls (New York Times, 19 March 2018)

Mattathias Schwartz, Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate (The Intercept, 30 March 2017)

Andrew Keane Woods, The Cambridge Analytica-Facebook Debacle: A Legal Primer (Lawfare, 20 March 2018) via BoingBoing


Wikipedia: Yahoo data breaches

Related post: Making the World more Open and Connected (March 2018)


Updated 20 March 2018 with new developments and additional commentary

Friday, March 9, 2018

Fail Fast - Burger Robotics

As @jjvincent observes, integrating robots into human jobs is tougher than it looks. Four days after it was installed in a Pasadena, CA burger joint, Flippy the robot was taken out of service for an upgrade. It turns out it wasn't fast enough to handle the demand. Does this count as Fail Fast?

Flippy's human minders have put a positive spin on the failure, crediting the presence of the robot for an unexpected increase in demand. As Vincent wryly suggests, Flippy is primarily earning its keep as a visitor attraction.

If this is a failure at all, what kind of failure is it? Drawing on earlier work by James Reason, Phil Boxer distinguishes between errors of intention, planning and execution.

If the intention for the robot is to improve productivity and throughput at peak periods, then the designers have got more work to do. And the productivity-throughput problem may be broader than just burger flipping: making Flippy faster may simply expose a bottleneck somewhere else in the system. But if the intention for the robot is to attract customers, this is of greatest value at off-peak periods. In which case, perhaps the robot already works perfectly.
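As a back-of-the-envelope illustration of the bottleneck point, here is a small Python sketch with made-up station rates (none of these numbers come from CaliBurger or Miso Robotics): a serial line runs at the pace of its slowest station, so speeding up Flippy only lifts throughput as far as the next constraint.

```python
# Toy throughput model for a serial production line.
# The rates (burgers per hour) are invented purely for illustration.

def line_throughput(station_rates):
    """A serial line can go no faster than its slowest station."""
    return min(station_rates.values())

before = {"grill (Flippy)": 60, "assembly": 120, "till": 200}
after = {"grill (Flippy)": 300, "assembly": 120, "till": 200}  # upgraded robot

print("Before upgrade:", line_throughput(before), "burgers/hour")  # 60
print("After upgrade: ", line_throughput(after), "burgers/hour")   # 120, not 300
# The upgrade helps, but only until assembly becomes the new bottleneck.
```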



Philip Boxer, ‘Unintentional’ errors and unconscious valencies (Asymmetric Leadership, 1 May 2008)

John Donohue, Fail Fast, Fail Often, Fail Everywhere (New Yorker, 31 May 2015)

Lora Kolodny, Meet Flippy, a burger-grilling robot from Miso Robotics and CaliBurger (TechCrunch 7 Mar 2017)

Brian Heater, Flippy, the robot hamburger chef, goes to work (TechCrunch, 5 March 2018)

James Vincent, Burger-flipping robot takes four-day break immediately after landing new job (Verge, 8 March 2018)





Related post: Fail Fast - Why did the chicken cross the road? (March 2018)

Monday, January 15, 2018

Carillion Struck By Lightning

@NilsPratley blames delusion in the boardroom (on a grand scale, he says) for Carillion's collapse. "In the end, it comes down to judgments made in the boardroom."

A letter to the editor of the Financial Times agrees.
"This situation has been caused, in part, by the unprofessional, fatalistic and blasé attitude to contract risk management of some senior executives in the UK construction industry."


Carillion is by no means the first company brought low by delusion (I've written about Enron on this blog, as well as in my book on organizational intelligence), and it probably won't be the last.

And given that Carillion was the beneficiary of some very large public sector contracts, we could also talk about delusion and poor risk management in government circles. As @econtratacion points out, "the public sector had had information pointing towards Carillion's increasingly dire financial situation for a while".



As it happens, the Home Secretary was at the London Stock Exchange today, talking to female executives about gender diversity at board level. So I thought I'd just check the gender make-up of the Carillion board. According to the Carillion website, there were two female executives and two female non-executive directors in a board of twelve.

In the future, Amber Rudd would like half of all directors to be female. An earlier Government-backed review had recommended that at least a third should be female by 2020.

But compared to other large UK companies, the Carillion gender ratio wasn't too bad. "On paper, the directors looked well qualified", writes Kate Burgess in the Financial Times, noting that "the board ticked all the boxes in terms of good governance". But now even the Institute of Directors has expressed belated concerns about the effectiveness of governance at Carillion, and Burgess says the board fell into what she calls "a series of textbook traps".

So what kind of traps were these? The board paid large dividends to the shareholders and awarded large bonuses to themselves and other top executives, despite the fact that key performance targets were not met, and there was a massive hole in the pension fund. In other words, they looked after themselves first and the shareholders second, and to hell with pensioners and other stakeholders. Meanwhile, Larry Elliott notes that the directors of the company took steps to shield themselves from financial risk. These are not textbook traps, they are not errors of judgement, they are moral failings.

Of course we shouldn't rely solely on the moral integrity of company executives. If there is no regulation or regulator able to prevent a board behaving in this way, this points to a fundamental weakness in the financial system as a whole. As @RSAMatthew writes,
"There are many culprits in this tale. Lazy or ideologically blinkered ministers, incompetent public sector commissioners, cynical private sector providers signing 'suicide bids' on the assumption that they can renegotiate when things go wrong and, as always, a financial sector willing to arbitrage any profit regardless of consequences or ethics."

There is a strong case that diversity mitigates groupthink - but as I've argued in my earlier posts, this needs to be real diversity, not just symbolic or imaginary diversity (ticking boxes). And even if having more women or ethnic minorities on the board might possibly reduce errors of judgement, women as well as men can have moral failings. It's as if we imagined that Ivanka Trump was going to be a wise and restraining influence on her father, simply because of her gender.

As it happens, the remuneration director at Carillion was a woman. We may never know whether she was coerced or misled by her fellow directors, or whether she participated enthusiastically in the gravy train. But we cannot say that having a woman in that position is automatically going to be better than having a man. Putting women on boards may be a necessary step, but it is not a sufficient one.





Martin Bentham, Amber Rudd: 'It makes no sense to have more men than women in the boardroom' (Evening Standard, 15 January 2018)

Mark Bull, A lesson on risk from Carillion’s collapse (FT Letters to the Editor, 16 January 2018)

Kate Burgess, Carillion’s board: misguided or incompetent? (FT, 17 January 2018) HT @AidanWard3

Larry Elliott, Four lessons the Carillion crisis can teach business, government and us (Guardian, 17 January 2018)

Vanessa Fuhrmans, Companies With Diverse Executive Teams Posted Bigger Profit Margins, Study Shows (WSJ, 18 January 2018)

Simon Goodley, Carillion's 'highly inappropriate' pay packets criticised (Guardian, 15 January 2018)

Nils Pratley, Blame the deluded board members for Carillion's collapse (Guardian, 15 January 2018)

Albert Sánchez-Graells, Some thoughts on Carillion's liquidation and systemic risk management in public procurement (15 January 2018)

Rebecca Smith, Women should hold one third of senior executive jobs at FTSE 100 firms by 2020, says Sir Philip Hampton's review (City Am, 6 November 2016)

Matthew Taylor, Is Carillion the end for Public Private Partnerships? (RSA, 16 January 2018)


Related posts

Explaining Enron (January 2010)
The Purpose of Diversity (January 2010)
Organizational Intelligence and Gender (October 2010)
Delusion and Diversity (October 2012)
Intelligence and Governance (February 2013)
More on the Purpose of Diversity (December 2014)


Updated 25 January 2018

Friday, November 24, 2017

Pax Technica - The Conference

#paxtechnica Today I was at the @CRASSHlive conference in Cambridge to hear a series of talks and panel discussions on The Implications of the Internet of Things. For a comprehensive account, see @LaurieJ's livenotes.

When I read Philip Howard's book last week, I wondered why he had devoted so much of his book to such internet phenomena as social media and junk news, when the notional topic of the book was the Internet of Things. His keynote address today made the connection much clearer. While social media provides data about attitudes and aspirations, the internet of things provides data about behaviour. When these different types of data are combined, this produces a much richer web of information.

For example, Howard mentioned a certain coffee company that wanted to use IoT sensors to track the entire coffee journey from farm to disposed cup. (Although another speaker expressed scepticism about the value of this data, arguing that most of the added value of IoT came from actuators rather than sensors.)

To the extent that the data involves personal information, this raises political concerns. Some of the speakers today spoke of surveillance capitalism, and there were useful talks on security and privacy. (See separate post on Risk and Security)

In his 2014 essay on the Internet of Things, Bruce Sterling characterizes the Internet of Things as "an epic transformation: all-purpose electronic automation through digital surveillance by wireless broadband". According to Sterling, powerful stakeholders like the slogan 'Internet of Things' "because it sounds peaceable and progressive".

Peaceable? Howard uses the term Pax. This refers to a period in which the centre is stable and relatively peaceful, although the periphery may be marked by local skirmishes and violence (p7). His historical examples are the Pax Romana, the Pax Britannica and the Pax Americana. He argues that we are currently living in a similar period, which he calls Pax Technica.

For Howard, "a pax indicates a moment of agreement between government and the technology industry about a shared project and way of seeing the world" (p6). This seems akin to Gramsci's notion of cultural hegemony, "the idea that the ruling class can manipulate the value system and mores of a society, so that their view becomes the world view or Weltanschauung" (Wikipedia).

But whose tech? Howard has documented significant threats to democracy from foreign governments using social media bots to propagate junk news. There are widespread fears that this propaganda has had a significant effect on several recent elections. And if the Russians are often mentioned in the context of social media bots and junk news, the Chinese are often mentioned in the context of dodgy Internet of Things devices. While some political factions in the West are accused of collaborating with the Russians, and some commercial interests (notably pharma) may be using similar propaganda techniques, it seems odd to frame this as part of a shared project between government and the technology industry. Howard's research indicates a new technological cold war, in which techniques originally developed by authoritarian regimes to control their own citizens are repurposed to undermine and destabilize democratic regimes.

David Runciman talked provocatively about government of the things, by the things, for the things. (Someone from the audience linked this, perhaps optimistically, to Bruno Latour's Parliament of Things.) But Runciman's formulation foregrounds the devices (the "things") and overlooks the relationships behind the devices (the "internet of"). (This is related to Albert Borgmann's notion of the Device Paradigm.)

As consumers we may spend good money on products with embedded internet-enabled devices, only to discover that these devices don't truly belong to us but remain loyal to their manufacturers. They monitor our behaviour, they may refuse to work with non-branded spare parts, or they may terminate service altogether. As Ian Steadman reports, it's becoming more and more common for everyday appliances to have features we don't expect. (Steadman's article is worth reading in full; he also quotes some prescient science fiction from Philip K Dick's 1969 novel Ubik.) "Very soon your house will betray you", warns architect Rem Koolhaas (Guardian, 12 March 2014).

There are important ethical questions here, relating to non-human agency and the Principal-Agent problem.

But the invasion of IoT into our lives doesn't stop there. Justin McGuirk worries that "our countless daily actions and choices around the house become what define us", and quotes a line from Dave Eggers' 2013 novel The Circle:

"Having a matrix of preferences presented as your essence, as the whole you? … It was some kind of mirror, but it was incomplete, distorted."
So personal identity and socioeconomic status may become precarious. This needs more thinking about. In the meantime, here is a quote from Christa Teston:

"Wearable technologies ... are non-human actors that interact with other structural conditions to determine whose bodies count."


Related Posts

Witnessing Machines Built in Secret (November 2017)
Pax Technica - The Book (November 2017)
Pax Technica - On Risk and Security (November 2017)


References

Dan Herman, Dave Eggers' "The Circle" — on tech, big data and the human component (Metaweird, Oct 2013)

Philip Howard, Pax Technica: How The Internet of Things May Set Us Free or Lock Us Up (Yale 2015)

Laura James, Pax Technica Notes (Session 1, Session 2, Session 3, Session 4)

Justin McGuirk, Honeywell, I’m Home! The Internet of Things and the New Domestic Landscape (e-flux #64 April 2015)

John Naughton, 95 Theses about Technology (31 October 2017)

Ian Steadman, Before we give doors and toasters sentience, we should decide what we're comfortable with first (New Statesman, 10 February 2015)

Bruce Sterling, The Epic Struggle of the Internet of Things (2014). Extract via BoingBoing (13 Sept 2014)

Christa Teston, Rhetoric, Precarity, and mHealth Technologies (Rhetoric Society Quarterly, 46:3, 2016) pp 251-268 

Wikipedia: Cultural Hegemony, Device Paradigm, Hegemony, Principal-Agent problem