Wednesday, June 13, 2018

Practical Ethics

A lot of ethical judgements appear to be binary ones. Good versus bad. Acceptable versus unacceptable. Angels versus devils.

When questions of ethics reach the public sphere, it is common for people to take strong positions for or against. For example, there have been some high-profile cases involving seriously sick children: whether they should be given some experimental treatment, or even whether they should be kept alive at all. These are incredibly difficult decisions for those closely involved, yet the experts are then subjected to vitriolic attack from armchair critics (often from the other side of the world) who think they know better.

Practical ethics are mostly about trade-offs: interpreting the evidence, predicting the consequences, estimating and balancing the benefits and risks. There isn't a simple formula that can be applied; each case must be carefully considered to determine where it sits on a spectrum.

The same is true of business and technology ethics. There isn't a blanket rule that says these forms of persuasion are good and those forms are bad; there are just different degrees of nudge. We might want to regard all nudges with some suspicion, but retailers have always nudged people to purchase things. The question is whether this particular form of nudge is acceptable in this context, or whether it crosses some fuzzy line into manipulation or worse. Where does this particular project sit on the spectrum?

Technologists sometimes abdicate responsibility for such questions, as if whatever the client wants, or whatever the technology enables, must be okay. Responsibility means owning that judgement.

When Google published its AI principles recently, Eric Newcomer complained that balancing the benefits and risks sounded like the utilitarianism he learned about at high school. But he also complained that Google's approach lacks impartiality and agent-neutrality. Since utilitarianism is usually understood as an impartial, agent-neutral form of consequentialism, it would be more accurate to describe Google's approach as consequentialism more broadly.

In the real world, even the question of agent-neutrality is complicated. Sometimes it is interpreted as a call to disregard any judgement made by a stakeholder, on the grounds that they must be biased - for example, ignoring the opinions of professionals (doctors, teachers) because they might be trying to protect their own professional status. But taking important decisions about healthcare or education away from the professionals doesn't solve the problem of bias; it merely replaces professional bias with some other form of bias.

In Google's case, people are entitled to question how exactly Google will make these difficult judgements, and the extent to which these judgements may be subject to some conflict of interest. But if there is no other credible body that can make these judgements, perhaps the best we can ask for (at least for now) is some kind of transparency or scrutiny.

As I said above, practical ethics are mostly about consequences - which philosophers call consequentialism. But not entirely. Ethical arguments about the human subject aren't always framed in terms of observable effects; they may be framed in terms of human values. For example, the idea that people should be given control over something or other, not because it makes them happier, but just because, you know, they should. Or the idea that certain things (truth, human life, etc.) are sacrosanct.

In his book The Human Use of Human Beings, first published in 1950, Norbert Wiener based his computer ethics on what he called four great principles of justice. So this is not just about balancing outcomes.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.”
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.”


Of course, a complex issue may require more than a single dimension. It may be useful to draw spider diagrams or radar charts to help visualize the relevant factors. Alternatively, Cathy O'Neil recommends the Ethical or Stakeholder Matrix technique, originally invented by Professor Ben Mepham.

"A construction from the world of bio-ethics, the ethical or “stakeholder” matrix is a way of determining the answer to the question, does this algorithm work? It does so by considering all the stakeholders, and all of their concerns, be them positive (accuracy, profitability) or negative (false negatives, bad data), and in particular allows the deployer to think about and gauge all types of best case and worst case scenarios before they happen. The matrix is color coded with red, yellow, or green boxes to alert people to problem areas." [Source: ORCAA]
"The Ethical Matrix is a versatile tool for analysing ethical issues. It is intended to help people make ethical decisions, particularly about new technologies. It is an aid to rational thought and democratic deliberation, not a substitute for them. ... The Ethical Matrix sets out a framework to help individuals and groups to work through these debates in relation to a particular issue. It is designed so that a broader than usual range of ethical concerns is aired, differences of perspective become openly discussed, and the weighting of each concern against the others is made explicit. The matrix is based in established ethical theory but, as far as possible, employs user-friendly language." [Source: Food Ethics Council]




Jessi Hempel, Want to prove your business is fair? Audit your algorithm (Wired 9 May 2018)

Ben Mepham, Ethical Principles and the Ethical Matrix. Chapter 3 in J. Peter Clark and Christopher Ritson (eds), Practical Ethics for Food Professionals: Ethics in Research, Education and the Workplace (Wiley 2013)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Tom Upchurch, To work for society, data scientists need a Hippocratic oath with teeth (Wired 8 April 2018)



Stanford Encyclopedia of Philosophy: Computer and Information Ethics, Consequentialism, Utilitarianism

Related posts: Conflict of Interest (March 2018), Data and Intelligence Principles From Major Players (June 2018)
