Saturday, February 24, 2024

Anticipating Effects

There has been much criticism of the bias and distortion embedded in many of our modern digital tools and platforms, including search. Google recently released an AI image generation model that over-compensated for this, producing racially diverse images even for situations where such diversity would be historically inaccurate. With well-chosen prompts, this feature was made to look either ridiculous or politically dangerous (aka "woke"), and Google has paused the feature for further refinement and testing.

I've just been reading an extended thread from Yishan Wong, who argues that the controversy is not really about wokeness at all. The bigger problem he identifies is the inability of the engineers to anticipate and constrain the behaviour of complex intelligent systems, much as the robots in many of Asimov's stories behave in unexpected and dangerous ways.

Some writers on technology ethics have called for ethical principles to be embedded in technology, along the lines of Asimov's Laws. I have challenged this idea in previous posts, because as I see it the whole point of the Three Laws is that they don't work properly. Thus my reading of Asimov's stories is similar to Yishan's.

It looks as though Google's testing didn't take the context of use into account.

Update: Or as Dame Wendy Hall noted later, "This is not just safety testing, this is does-it-make-any-sense training."



Dan Milmo, Google pauses AI-generated images of people after ethnicity criticism (Guardian, 22 February 2024) 

Dan Milmo and Alex Hern, ‘We definitely messed up’: why did Google AI tool make offensive historical images? (Guardian, 8 March 2024)

Related posts: Reinforcing Stereotypes (May 2007), Purpose of Diversity (January 2010) (December 2014), Automation Ethics (August 2019), Algorithmic Bias (March 2021)
