There has been much criticism of the bias and distortion embedded in many of our modern digital tools and platforms, including search. Google recently released an AI image generation model that over-compensated for this, producing racially diverse images even for situations where such diversity would be historically inaccurate. With well-chosen prompts, this feature was made to look either ridiculous or politically dangerous (aka "woke"), and the model has been withdrawn for further refinement and testing.
I've just been reading an extended thread from Yishan Wong, who argues:
Google’s Gemini issue is not really about woke/DEI, and everyone who is obsessing over it has failed to notice the much, MUCH bigger problem that it represents.
— Yishan (@yishan) February 23, 2024
The bigger problem he identifies is the inability of engineers to anticipate and constrain the behaviour of a complex intelligent system, much as in many of Asimov's stories, where the robots often behave in dangerous ways.
Some writers on technology ethics have called for ethical principles to be embedded in technology, along the lines of Asimov's Laws. I have challenged this idea in previous posts, because as I see it the whole point of the Three Laws is that they don't work properly. Thus my reading of Asimov's stories is similar to Yishan's.
And the lesson was that even if we had the Three Laws of Robotics, supposedly very comprehensive, that robots were still going to do crazy things, sometimes harmful things, because we couldn’t anticipate how they’d follow our instructions?
— Yishan (@yishan) February 23, 2024
If this had been a truly existential situation where “we only get one chance to get it right,” we’d be dead.
— Yishan (@yishan) February 23, 2024
Because I’m sure Google tested it internally before releasing it and it was fine per their original intentions. They probably didn’t think to ask for Vikings or Nazis.
It looks as if their testing didn't take the context of use into account.
Update: Or as Dame Wendy Hall noted later, this is not just safety testing, this is does-it-make-any-sense training.
Dan Milmo, Google pauses AI-generated images of people after ethnicity criticism (Guardian, 22 February 2024)
Dan Milmo and Alex Hern, ‘We definitely messed up’: why did Google AI tool make offensive historical images? (Guardian, 8 March 2024)
Related posts: Reinforcing Stereotypes (May 2007), Purpose of Diversity (January 2010) (December 2014), Automation Ethics (August 2019), Algorithmic Bias (March 2021)