She argues that the people who were appointed to the ATEAC were selected because they were "prominent" in the field. She notes that "although being prominent doesn't mean you're the best, it probably does mean you're at least pretty good, at least at something".
Ignoring the complexities of university politics, academics generally achieve prominence because they are pretty good at having interesting and original ideas, publishing papers and books, coordinating research, and supervising postgraduate work, as well as representing the field in wider social and intellectual forums (e.g. TED talks). Clearly that can be regarded as an important type of leadership.
Bryson argues that leading is about problem-solving. And clearly there are some aspects of problem-solving in what has brought her to prominence, although that's certainly not the whole story.
But that argument completely misses the point. The purpose of the ATEAC was not problem-solving. Google does not need help with problem-solving; it employs thousands of extremely clever people who spend all day solving problems (although it may sometimes need a bit of help in the diversity stakes).
The stated purpose of the ATEAC was to help Google implement its AI principles. In other words, governance.
When Google published its AI principles last year, the question everyone was asking was about governance:
- @mer__edith (Twitter 8 June 2018, tweet no longer available) called for "strong governance, independent external oversight and clarity"
- @katecrawford (Twitter 8 June 2018) asked "How are they implemented? Who decides? There's no mention of process, or people, or how they'll evaluate if a tool is 'beneficial'. Are they... autonomous ethics?"
- and @EricNewcomer (Bloomberg 8 June 2018) asked "who decides if Google has fulfilled its commitments".
Google's appointment of an "advisory" council was clearly a half-hearted attempt to answer this question.
Bryson points out that Kay Coles James (the most controversial appointee) had some experience writing technology policy. But what a truly independent governance body needs is experience monitoring and enforcing policy, which is not the same thing at all.
People talk a lot about transparency in relation to technology ethics. Typically this refers to being able to "look inside" an advanced technological product, such as an algorithm or robot. But transparency is also about process and organization - the ability to scrutinize the risk assessment, the design, and the potential conflicts of interest. Many people perform this kind of scrutiny on a full-time basis within large organizations or ecosystems, with far more experience of extremely large and complex development programmes than the average professor.
Had Google really wanted a genuinely independent governance body to scrutinize them properly, would they have appointed a different set of experts? Can people appointed and paid by Google ever be regarded as genuinely independent? And doesn't the word "advisory" give the game away? As Brustein and Bergen point out, the actual decisions are made by an internal body, the Advanced Technology Review Council, and external critics doubt that this body will ever seriously challenge Google's commercial or strategic interests.
Veena Dubal suggests that the most effective governance over Google is currently coming from Google's own workforce. It seems that their protests were significant in getting Google to disband the ATEAC, while earlier protests (re Project Maven) had led to the production of the AI principles in the first place. Clearly the kind of courageous leadership demonstrated by people like Meredith Whittaker isn't just about problem-solving.
Joshua Brustein and Mark Bergen, The Google AI Ethics Board With Actual Power Is Still Around (Bloomberg, 6 April 2019)
Joanna Bryson, What we lost when we lost Google ATEAC (7 April 2019), What leaders are actually for (13 May 2019)
Veena Dubal, Who stands between you and AI dystopia? These Google activists (The Guardian, 3 May 2019)
Bobbie Johnson and Gideon Lichfield, Hey Google, sorry you lost your ethics council, so we made one for you (MIT Technology Review, 6 April 2019)
Abner Li, Google details formal review process for enforcing AI Principles, plans external advisory group (9to5 Google, 18 December 2018)
Eric Newcomer, What Google's AI Principles Left Out (Bloomberg, 8 June 2018)
Kent Walker, An external advisory council to help advance the responsible development of AI (Google, 26 March 2019, updated 4 April 2019)
Related post: Data and Intelligence Principles From Major Players (June 2018)
Updated 15 May 2019