Sunday, March 25, 2018

Ethics as a Service

In the real world, ethics is rarely if ever the primary focus. People dealing with practical issues may need guidance or prompts to engage with ethical questions, as well as appropriate levels of governance.


@JPSlosar calls for
"a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent".

Slosar's particular interest is healthcare. He wants to integrate ethics proactively into person-centered care, as a key enabler of the multiple (and sometimes conflicting) objectives of healthcare: improved outcomes, reduced costs, and the best possible experience for patients and providers alike. These four objectives are known as the Quadruple Aim.

According to Slosar, ethics can be understood as a service aimed at reducing, minimizing or avoiding harm. Harm can sometimes be caused deliberately, or blamed on human inattentiveness, but it is more commonly caused by system and process errors.

A team of researchers at Carnegie Mellon, Berkeley and Microsoft Research has proposed an approach to ethics-as-a-service involving crowdsourcing ethical decisions. This was presented at an Ethics-By-Design workshop in 2013.


Meanwhile, Ozdemir and Knoppers distinguish between two types of Upstream Ethics: Type 1 refers to early ethical engagement, while Type 2 refers to the choice of ethical principles, which they call "prenormative", part of the process by which "normativity" is achieved. Given that most of the discussion of EthicsByDesign assumes early ethical engagement in a project (Type 1), their Type 2 might be better called EthicsByFiat.





Cristian Bravo-Lillo, Serge Egelman, Cormac Herley, Stuart Schechter and Janice Tsai, Reusable Ethics-Compliance Infrastructure for Human Subjects Research (CREDS 2013)

Derek Feeley, The Triple Aim or the Quadruple Aim? Four Points to Help Set Your Strategy (IHI, 28 November 2017)

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)

Conflict of Interest

@riptari (Natasha Lomas) has a few questions for DeepMind's AI ethics research unit. She suggests that
"it really shouldn’t need a roster of learned academics and institutions to point out the gigantic conflict of interest in a commercial AI giant researching the ethics of its own technology’s societal impacts"

and points out that
"there’s a reason no one trusts the survey touting the amazing health benefits of a particular foodstuff carried out by the makers of said foodstuff".

As @marionnestle remarks in relation to the health claims of chocolate,
"industry-funded research tends to set up questions that will give them desirable results, and tends to be interpreted in ways that are beneficial to their interests". (via Nik Fleming)





Nic Fleming, The dark truth about chocolate (Observer, 25 March 2018)

Natasha Lomas, DeepMind now has an AI ethics research unit. We have a few questions for it… (TechCrunch, 4 Oct 2017)

Sunday, March 18, 2018

Security is downstream from strategy

Following @carolecadwalla's latest revelations about the misuse of personal data involving Facebook, she gets a response from Alex Stamos, Facebook's Chief Security Officer.

So let's take a look at some of his hand-wringing Tweets.

I'm sure many security professionals would sympathize with this. Nobody listens to me. Strategy and innovation surge ahead, and security is always an afterthought.

According to his LinkedIn entry, Stamos joined Facebook in June 2015. Before that he had been Chief Security Officer at Yahoo!, which suffered a major breach on his watch in late 2014, affecting over 500 million user accounts. So perhaps a mere 50 million Facebook users having their data used for nefarious purposes doesn't really count as much of a breach in his book.

In a series of tweets he later deleted, Stamos argued that the whole problem was caused by the use of an API that everyone should have known about, because it was well documented. As if his job were only to control the undocumented stuff.
Or as Andrew Keane Woods glosses the matter, "Don’t worry everyone, Cambridge Analytica didn’t steal the data; we were giving it out". By Monday night, it was being reported that Stamos was on his way out.

In one of her articles, Carole Cadwalladr quotes the Breitbart doctrine:
"politics is downstream from culture, so to change politics you need to change culture".
And culture eats strategy. And security, it seems, is downstream from everything else. So much, then, for "by design and by default".







Carole Cadwalladr, ‘I made Steve Bannon’s psychological warfare tool’: meet the data war whistleblower (Observer, 18 Mar 2018) via @BiellaColeman

Carole Cadwalladr and Emma Graham-Harrison, How Cambridge Analytica turned Facebook ‘likes’ into a lucrative political tool (Guardian, 17 Mar 2018)

Jessica Elgot and Alex Hern, No 10 'very concerned' over Facebook data breach by Cambridge Analytica (Guardian, 19 Mar 2018)

Hannes Grassegger and Mikael Krogerus, The Data That Turned the World Upside Down (Motherboard, 28 Jan 2017) via @BiellaColeman

Justin Hendrix, Follow-Up Questions For Facebook, Cambridge Analytica and Trump Campaign on Massive Breach (Just Security, 17 March 2018)

Casey Johnston, Cambridge Analytica's leak shouldn't surprise you, but it should scare you (The Outline, 19 March 2018)

Nicole Perlroth, Sheera Frenkel and Scott Shane, Facebook Exit Hints at Dissent on Handling of Russian Trolls (New York Times, 19 March 2018)

Mattathias Schwartz, Facebook failed to protect 30 million users from having their data harvested by Trump campaign affiliate (The Intercept, 30 March 2017)

Andrew Keane Woods, The Cambridge Analytica-Facebook Debacle: A Legal Primer (Lawfare, 20 March 2018) via BoingBoing


Wikipedia: Yahoo data breaches

Related posts: Making the World more Open and Connected (March 2018), Ethical Communication in a Digital Age (November 2018), The Future of Political Campaigning (November 2018)


Updated 20 March 2018 with new developments and additional commentary

Friday, March 9, 2018

Fail Fast - Burger Robotics

As @jjvincent observes, integrating robots into human jobs is tougher than it looks. Four days after it was installed in a Pasadena, California burger joint, Flippy the robot has been taken out of service for an upgrade. It turns out it wasn't fast enough to handle the demand. Does this count as Fail Fast?

Flippy's human minders have put a positive spin on the failure, crediting the presence of the robot for an unexpected increase in demand. As Vincent wryly suggests, Flippy is primarily earning its keep as a visitor attraction.

If this is a failure at all, what kind of failure is it? Drawing on earlier work by James Reason, Phil Boxer distinguishes between errors of intention, planning and execution.

If the intention for the robot is to improve productivity and throughput at peak periods, then the designers have got more work to do. And the productivity-throughput problem may be broader than just burger flipping: making Flippy faster may simply expose a bottleneck somewhere else in the system. But if the intention for the robot is to attract customers, this is of greatest value at off-peak periods. In which case, perhaps the robot already works perfectly.
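
To see why a faster Flippy might not raise overall output, here is a minimal sketch in Python that treats the kitchen as a serial pipeline whose capacity is set by its slowest stage. The stage names and rates are invented for illustration, not taken from Miso Robotics or CaliBurger.

# Minimal sketch of the bottleneck argument, using made-up stage rates
# (burgers per hour), not actual CaliBurger or Miso Robotics figures.

def kitchen_throughput(stage_rates):
    # A serial pipeline can only move as fast as its slowest stage.
    return min(stage_rates.values())

before = {"flipping (human)": 60, "dressing": 90, "serving": 120}
after = {"flipping (Flippy)": 150, "dressing": 90, "serving": 120}

print(kitchen_throughput(before))  # 60: flipping is the constraint
print(kitchen_throughput(after))   # 90: a faster Flippy just moves the bottleneck to dressing

On these assumed numbers, speeding up the flipping stage raises throughput only until the next stage becomes the constraint, which is the sense in which the productivity problem may be broader than burger flipping.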



Philip Boxer, ‘Unintentional’ errors and unconscious valencies (Asymmetric Leadership, 1 May 2008)

John Donohue, Fail Fast, Fail Often, Fail Everywhere (New Yorker, 31 May 2015)

Lora Kolodny, Meet Flippy, a burger-grilling robot from Miso Robotics and CaliBurger (TechCrunch, 7 March 2017)

Brian Heater, Flippy, the robot hamburger chef, goes to work (TechCrunch, 5 March 2018)

James Vincent, Burger-flipping robot takes four-day break immediately after landing new job (Verge, 8 March 2018)





Related post: Fail Fast - Why did the chicken cross the road? (March 2018)