Thursday, August 8, 2019

Automation Ethics

Many people start their journey into the ethics of automation and robotics by looking at Asimov's Laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm (etc. etc.)
As I've said before, I believe Asimov's Laws are problematic as a basis for ethical principles, given that Asimov's stories demonstrate numerous ways in which the Laws don't actually work as intended. I have always regarded Asimov's work as satirical rather than prescriptive.

While we usually don't want robots to harm people (although some people may argue for this principle to be partially suspended in the event of a "just war"), notions of harm are not straightforward. For example, a robot surgeon would have to cut the patient (minor harm) in order to perform an essential operation (major benefit). How essential or beneficial does the operation need to be, in order to justify it? Is the patient's consent sufficient?

Harm can be individual or collective. One potential harm from automation is that even if it creates wealth overall, it may shift wealth and employment opportunities away from some people, at least in the short term. But perhaps this can be justified in terms of the broader social benefit, or in terms of technological inevitability.

And besides the avoidance of (unnecessary) harm, there are some other principles to think about.
  • Human-centred work - Humans should be supported by robots, not the other way around. 
  • Whole system solutions - Design the whole system or process, don’t just optimize a robot as a single component.  
  • Self-correcting - Ensure that the system is capable of detecting and learning from errors. 
  • Open - Provide space for learning and future disruption. Don't just pave the cow-paths.
  • Transparent - The internal state and decision-making processes of a robot are accessible to (some) users.  

Let's look at each of these in more detail.


Human-Centred Work

Humans should be supported by robots, not the other way around. So we don't just leave humans to handle the bits and pieces that can't be automated, but try to design coherent and meaningful jobs for humans, with robots to make them more powerful, efficient, and effective.

Organization theorists have identified a number of job characteristics associated with job satisfaction, including skill variety, task identity, task significance, autonomy and feedback. So we should be able to consider how a given automation project affects these characteristics.
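
For instance, Hackman and Oldham's Job Characteristics Model combines these five characteristics into a single Motivating Potential Score. Here is a minimal sketch (in Python) of how the same job before and after an automation project might be compared on that score; the ratings shown are entirely hypothetical.

    from dataclasses import dataclass

    @dataclass
    class JobProfile:
        """Ratings on a 1-7 scale, as in Hackman and Oldham's survey instrument."""
        skill_variety: float
        task_identity: float
        task_significance: float
        autonomy: float
        feedback: float

        def motivating_potential(self) -> float:
            # MPS = mean(skill variety, task identity, task significance) x autonomy x feedback
            meaningfulness = (self.skill_variety + self.task_identity + self.task_significance) / 3
            return meaningfulness * self.autonomy * self.feedback

    # Hypothetical ratings for a job before and after an automation project
    before = JobProfile(skill_variety=5, task_identity=4, task_significance=5, autonomy=5, feedback=3)
    after = JobProfile(skill_variety=3, task_identity=2, task_significance=5, autonomy=2, feedback=6)

    print(before.motivating_potential())  # 70.0
    print(after.motivating_potential())   # 40.0

A project that leaves humans with only the unautomatable fragments tends to depress variety, identity and autonomy, and hence the overall score, even if it improves feedback.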


Whole Systems

When we take an architectural approach to planning and designing new technology, we can look at the whole system rather than merely trying to optimize a single robotic component.
  • Look across the business and technology domains (e.g. POLDAT).
  • Look at the total impact of a collection of automated devices, not at each device separately.
  • Look at this as a sociotechnical system, involving humans and robots collaborating on the business process.
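
As a rough sketch of what that might look like in practice, an architectural review could walk each automation proposal through the POLDAT domains (Process, Organization, Location, Data, Application, Technology) and record the expected impact in each; the proposal and impact notes below are purely hypothetical.

    # Illustrative only: review an automation proposal across the POLDAT domains,
    # rather than scoring the robotic component in isolation.
    POLDAT_DOMAINS = ["Process", "Organization", "Location", "Data", "Application", "Technology"]

    def unassessed_domains(proposal: dict) -> list:
        """Return the domains the proposal has not yet considered."""
        return [d for d in POLDAT_DOMAINS if not proposal.get(d)]

    invoice_bot = {
        "Process": "end-to-end invoice handling redesigned, not just the matching step",
        "Organization": "accounts payable team refocused on exceptions and supplier queries",
        "Data": "invoice data standardized before automation, not afterwards",
        "Technology": "RPA tool plus workflow engine",
    }

    print(unassessed_domains(invoice_bot))  # ['Location', 'Application']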

Self-Correcting

Ensure that the (whole) system is capable of detecting and learning from errors (including near misses).

This typically requires a multi-loop learning process. The machines may handle the inner learning loops, but human intervention will be necessary for the outer loops.
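
As a minimal sketch (with entirely illustrative names), the inner loop might be automatic retry and adjustment by the machine, while unresolved cases and accumulated near misses are queued for human review in the outer loop, where the process itself can be changed.

    from collections import deque

    class SelfCorrectingProcess:
        """Illustrative two-loop error handling: machine inner loop, human outer loop."""

        def __init__(self, max_retries: int = 3):
            self.max_retries = max_retries
            self.review_queue = deque()   # outer loop: cases awaiting human attention
            self.incident_log = []        # errors and near misses, for later learning

        def handle(self, case, attempt_fn):
            # Inner loop: the machine retries and records what happened.
            for attempt in range(1, self.max_retries + 1):
                try:
                    result = attempt_fn(case)
                    if attempt > 1:
                        self.incident_log.append(("near miss", case, attempt))
                    return result
                except Exception as error:
                    self.incident_log.append(("error", case, attempt, str(error)))
            # Outer loop: escalate to humans, who may change the process itself.
            self.review_queue.append(case)
            return None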
 

Open

Okay, so do you improve the process first and then automate it, or do you automate first? If you search the Internet for "paving the cow-paths", you can find strong opinions on both sides of this argument. But the important point here is that automation shouldn't close down all possibility of future change. Paving the cow-paths may be okay, but not just paving the cow-paths and thinking that's the end of the matter.

In some contexts, this may mean leaving a small proportion of cases to be handled manually, so that human know-how is not completely lost. (Lewis Mumford argued that it is generally beneficial to retain some "craft" production alongside automated "factory" production, as a means to further insight, discovery and invention.)
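
One simple way to realise Mumford's suggestion, sketched below with a hypothetical 5% figure, is to route a small random fraction of cases to human handling even when the robot could process them.

    import random

    MANUAL_FRACTION = 0.05  # hypothetical: keep 5% of cases in human hands

    def route(case, automated_handler, human_handler):
        """Send most cases to the robot, but retain some craft handling."""
        if random.random() < MANUAL_FRACTION:
            return human_handler(case)    # preserves human know-how, invites discovery
        return automated_handler(case)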


Transparency

The internal state and decision-making processes of a robot are accessible to (some) users. Provide ways to monitor and explain what the robots are up to, and an audit trail in the event of something going wrong.
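
As a sketch of the audit-trail side of this, each robot decision could be recorded with its inputs, the rule or model version that produced it, and a human-readable reason. The record structure below is an assumption for illustration, not a standard.

    import json
    from dataclasses import dataclass, asdict, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One entry in the robot's audit trail (illustrative structure)."""
        robot_id: str
        inputs: dict
        decision: str
        reason: str           # human-readable explanation
        model_version: str
        timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def log_decision(record: DecisionRecord, path: str = "audit_trail.jsonl") -> None:
        # Append-only log, so the trail can be inspected if something goes wrong.
        with open(path, "a") as f:
            f.write(json.dumps(asdict(record)) + "\n")

    log_decision(DecisionRecord(
        robot_id="invoice-bot-01",
        inputs={"invoice": "INV-1234", "amount": 950.00},
        decision="approve",
        reason="amount below auto-approval threshold; supplier on approved list",
        model_version="rules-v2.3",
    ))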




Related posts

How Soon Might Humans Be Replaced At Work? (November 2015), Could we switch the algorithms off? (July 2017), How many ethical principles? (April 2019), Responsible Transparency (April 2019), Process Automation and Intelligence (August 2019), RPA - Real Value or Painful Experimentation? (August 2019)

Links

Jim Highsmith, Paving Cow Paths (21 June 2005)

Wikipedia

Job Characteristic Theory
Just War Theory
