Tuesday, May 28, 2019

Five Elements of Responsibility by Design

I have been developing an approach to #TechnologyEthics, which I call #ResponsibilityByDesign. It is based on the five elements of #VPECT. Let me start with a high-level summary before diving into some of the detail.


Values
  • Why does ethics matter?
  • What outcomes for whom?

Policies
  • Principles and practices of technology ethics
  • Formal codes of practice, regulation, etc.

Event-Driven (Activity Viewpoint)
  • Effective and appropriate action at different points: planning; risk assessment; design; verification, validation and test; deployment; operation; incident management; retirement. (Also known as the Process Viewpoint). 

Content (Knowledge Viewpoint)
  • What matters from an ethical point of view? What issues do we need to pay attention to?
  • Where is the body of knowledge and evidence that we can reference?

Trust (Responsibility Viewpoint)
  • Transparency and governance
  • Responsibility, Authority, Expertise, Work (RAEW)

Concerning technology ethics, there is a lot of recently published material on each of these elements separately, but I have not yet found much work that puts them together in a satisfactory way. Many working groups concentrate on a single element - for example, principles or transparency. And even when experts link multiple elements, the logical connections aren't always spelled out.

At the time of writing this post (May 2019), I haven't yet fully worked out how to join these elements either, and I shall welcome constructive feedback from readers and pointers to good work elsewhere. I am also keen to find opportunities to trial these ideas on real projects.


Related Posts

Responsibility by Design (June 2018)
What is Responsibility by Design (October 2018)
Why Responsibility by Design Now? (October 2018)




Further Discussion and Links


Values

This links to what I've sometimes called the Motivation Viewpoint. A lot of the rhetoric is driven by recognition of various forms of harm and injustice, which can generally be linked back to some breach of fundamental human values. 


Policies and Principles

As Luciano Floridi wrote recently (May 2019):
"There are ... currently more than 70 frameworks and lists of principles about the ethics of AI. This mushrooming of declarations is generating inconsistency and confusion."

As well as inconsistency and confusion, the proliferation of these documents is generating cynicism and suggestions of ethics washing, especially in relation to the principles published by the technology giants themselves. In my opinion, the critical problem with these documents is that they say little or nothing about how these principles are actually going to work in practice. See my post How Many Ethical Principles (April 2019).


Events

This links to what I've elsewhere called the Activity Viewpoint. Others may call it the Process Viewpoint. What are the implications for action: who needs to do what, and when?

The critical insight behind security by design and privacy by design is that the best way of delivering security and privacy is to consider these requirements from the outset, rather than expecting to bolt on security and privacy features as an afterthought.

So we can identify a series of activities that are appropriate in any technology initiative, and show how these can be embedded into the entire lifecycle of the product.
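As a purely illustrative sketch - the ethics activities below are my own hypothetical examples, not an agreed standard, though the stage names follow the activity list in the summary above - such a checklist might be represented as data, so that each lifecycle stage carries its own ethics activities:

# Illustrative sketch only: stage names follow the lifecycle listed
# earlier in this post; the ethics activities attached to each stage
# are hypothetical examples, not a standard.

LIFECYCLE_ETHICS_ACTIVITIES = {
    "planning": ["identify stakeholders", "state intended outcomes and for whom"],
    "risk assessment": ["enumerate potential harms", "assess likelihood and severity"],
    "design": ["select mitigations", "document trade-offs"],
    "verification, validation and test": ["test against identified harm scenarios"],
    "deployment": ["confirm governance sign-off", "publish transparency notes"],
    "operation": ["monitor for adverse signals", "collect outcome data"],
    "incident management": ["investigate reported harms", "feed lessons back into design"],
    "retirement": ["decommission responsibly", "retain records for accountability"],
}

def ethics_activities(stage):
    """Return the ethics activities expected at a given lifecycle stage."""
    return LIFECYCLE_ETHICS_ACTIVITIES.get(stage, [])

if __name__ == "__main__":
    for stage, activities in LIFECYCLE_ETHICS_ACTIVITIES.items():
        print(stage + ": " + "; ".join(activities))

The exact content of such a checklist matters less than the fact that it exists at every stage, so that ethics cannot silently drop out of the process once the planning phase is over.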


Content

Ethical arguments need to be based not only on abstract notions of harm, or theoretical debates about the Trolley Problem, but on empirical evidence of what actually causes harm. For example, much of the ethical concern around artificial intelligence and machine learning is motivated by concrete examples of algorithmic bias and injustice. Many people have made valuable contributions to this debate, but I hope none of them will mind if I single out Cathy O'Neil's book Weapons of Math Destruction as a key resource.

In my opinion, technology ethics can usefully learn from medical ethics. Difficult decisions are not just about choosing between doing something and not doing something by following a simple set of principles, but about balancing probable outcomes based on the best available evidence. See my post Practical Ethics (June 2018).

In the regulation of medicines and medical devices, there are established protocols for conducting and recording trials, publishing and interpreting the data, and responding appropriately to the emergence of adverse signals. It may not be reasonable to expect a similar level of rigour across all types of technology; nevertheless, responsible technology requires a collective ability to learn from experience, with effective mechanisms for collecting, sharing and interpreting relevant data, and then responding appropriately. See my post Ethics and Uncertainty (March 2019).


Trust

Finally, we come to the big question: what does it take to make technology trustworthy? There are several parts to this question, including governance and transparency, as well as the nature of regulation. I have already blogged extensively on these topics - see for example my recent posts on Leadership versus Governance and Responsible Transparency - but there is much more to say here.
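To make the RAEW idea from the summary above slightly more concrete, here is a minimal sketch of a responsibility matrix, using hypothetical activities and roles of my own invention rather than anything drawn from a real project:

# Minimal sketch of a RAEW matrix (Responsibility, Authority, Expertise,
# Work). Activities and roles are hypothetical illustrations only.

RAEW_MATRIX = {
    # activity: which role holds each of the four RAEW positions
    "risk assessment":   {"R": "product owner", "A": "ethics board", "E": "domain experts", "W": "project team"},
    "algorithm audit":   {"R": "data science lead", "A": "ethics board", "E": "external auditors", "W": "data scientists"},
    "incident response": {"R": "operations lead", "A": "ethics board", "E": "engineering", "W": "support team"},
}

def unassigned(matrix):
    """Flag activities where any of the four RAEW positions is empty."""
    return [activity for activity, roles in matrix.items()
            if not all(roles.get(letter) for letter in "RAEW")]

if __name__ == "__main__":
    for activity, roles in RAEW_MATRIX.items():
        print(activity, roles)
    print("gaps:", unassigned(RAEW_MATRIX))

The value of such a matrix lies in the discipline rather than the code: if an activity turns out to carry responsibility without authority, or work without expertise, that is a signal of a governance problem worth investigating.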


I plan to add more links to this post as I go along.



References


Luciano Floridi, Establishing the Rules for Building Trustworthy AI (Nature Machine Intelligence, May 2019)

Cathy O'Neil, Weapons of Math Destruction (2016)

Ben Wagner, Ethics as an Escape from Regulation: From ethics-washing to ethics-shopping? in M. Hildebrandt (ed), Being Profiled: Cogitas Ergo Sum (Amsterdam University Press, 2018)

Wikipedia: VPEC-T
