Saturday, February 16, 2019

Shoshana Zuboff on Surveillance Capitalism

@shoshanazuboff's latest book was published at the end of January. 700 pages of detailed research and analysis, and I've been trying to read as much as possible before everyone else. Meanwhile I have seen display copies everywhere - not just at the ICA bookshop which always has more interesting books than I shall ever have time to read, but also in my (excellent) local bookshop. (Fiona tells me she has already sold several copies.)

Although Zuboff spent much of her life at Harvard Business School, and has previously expressed optimism about technology (Distributed Capitalism, the Support Economy), she has form in criticizing the unacceptable face of capitalism (e.g. her 2009 critique of Wall Street). She now regards surveillance capitalism as "a profoundly undemocratic social force" (p 513), and in the passionate conclusion to her book I can hear echoes of Robert Burns's poem "Parcel of Rogues in a Nation".
"Our lives are scraped and sold to fund their freedom and our subjugation, their knowledge and our ignorance about what they know." (p 498)

One of the key words in the book is "power", especially what she calls instrumentarian power. She describes the emergence of this kind of power as a bloodless coup, and makes a point that will be extremely familiar to readers of Foucault.
"Instead of violence directed at our bodies, the instrumentarian third modernity operates more like a taming. Its solution to the increasingly clamorous demands for effective life pivots on the gradual elimination of chaos, uncertainty, conflict, abnormality, and discord in favor of predictability, automatic regularity, transparency, confluence, persuasion and pacification." (p 515)
In Foucauldian language, this would be described as a shift from sovereign power to disciplinary power, which he describes in terms of the Panopticon. Although Zuboff discusses Foucault and the Information Panopticon at some length in her book on the Smart Machine, I couldn't find a reference to Foucault in her latest book, merely a very brief mention of the panopticon (pp 470-1). So for a fuller explanation of this concept, I turned to the Stanford Encyclopedia of Philosophy.
"Bentham’s Panopticon is, for Foucault, a paradigmatic architectural model of modern disciplinary power. It is a design for a prison, built so that each inmate is separated from and invisible to all the others (in separate “cells”) and each inmate is always visible to a monitor situated in a central tower. Monitors do not in fact always see each inmate; the point is that they could at any time. Since inmates never know whether they are being observed, they must behave as if they are always seen and observed. As a result, control is achieved more by the possibility of internal monitoring of those controlled than by actual supervision or heavy physical constraints." (SEP: Michel Foucault)
I didn't read the Smart Machine when it first came out, so the first time I saw the term "panopticon" applied to information technology was in Mark Poster's brilliant book The Mode of Information, which came out a couple of years later. Poster introduced the term Superpanopticon to describe the databases of his time, and his analysis seems uncannily accurate as a description of the present day.
"Foucault taught us to read a new form of power by deciphering discourse/practice formations instead of intentions of a subject or instrumental actions. Such a discourse analysis when applied to the mode of information yields the uncomfortable discovery that the population participates in its own self-constitution as subjects of the normalizing gaze of the Superpanopticon. We see databases not as an invasion of privacy, as a threat to a centred individual, but as the multiplication of the individual, the constitution of an additional self, one that may be acted upon to the detriment of the 'real' self without that 'real' self ever being aware of what is happening." (pp 97-8)

But the problem with invoking Foucault is that it appears to take agency away from the "parcel of rogues" - Zuckerberg, Bosworth, Nadella and the rest - who are the apparent villains of Zuboff's book. As I've pointed out elsewhere, the panopticon holds watcher and watched alike in its disciplinary power.

In his long and detailed review, Evgeny Morozov thinks the book owes more to Alfred Chandler, advocate of managerial capitalism, than to Foucault. (Even though Zuboff seems no longer to believe in the possibility of a return to traditional managerial capitalism, and the book ends by taking sides with George Orwell in his strong critique of James Burnham, an earlier advocate of managerial capitalism.)


Meanwhile, there is another French thinker who may be haunting Zuboff's book, thanks to her adoption of the term Big Other, usually associated with Jacques Lacan. Jörg Metelmann suggests that Zuboff's use of the term "Big Other" corresponds (to a great extent, he says) to Lacan and Slavoj Žižek's psychoanalysis, but I'm not convinced. I suspect she may have selected the term "Big Other" (associated with Disciplinary Power) not as a conscious reference to Lacanian theory but because it rhymed with the more familiar "Big Brother" (associated, at least in Orwell's novel, with Sovereign Power).

Talking of "otherness", Peter Benson explains how the Amazon Alexa starts to be perceived, not as a mere object but as an Other.
"(We) know perfectly well that she is an electronic device without consciousness, intentions, or needs of her own. But behaving towards Alexa as a person becomes inevitable, because she is programmed to respond as a person might, and our brains have evolved to categorize such a being as an Other, so we respond to her as a person. We can resist this categorization, but, as with an optical illusion, our perception remains unchanged even after it has been explained. The stick in water still looks bent, even though we know it isn’t. Alexa’s personhood is exactly such a psychological illusion."
But much as we may desire to possess this mysterious black tube, regarding Alexa as an equal partner in dialogue, almost a mirror of ourselves, the reality is that this black tube is just one of many endpoints in an Internet of Things consisting of millions of similar black tubes and other devices, part of an all-pervasive Big Other. Zuboff sees the Big Other as an apparatus, an instrument of power, "the sensate computational, connected puppet that renders, monitors, computes and modifies human behavior" (p 376).

The Big Other possesses what Zuboff calls radical indifference - it monitors and controls human behaviour while remaining steadfastly indifferent to the meaning of that experience (pp 376-7). She quotes an internal Facebook memo by Andrew "Boz" Bosworth advocating moral neutrality (pp 505-6). (For what it's worth, radical indifference is also celebrated by Baudrillard.)

She also refers to this as observation without witness. This can be linked to Henry Giroux's notion of disimagination, the internalization of surveillance.
"I argue that the politics of disimagination refers to images, and I would argue institutions, discourses, and other modes of representation, that undermine the capacity of individuals to bear witness to a different and critical sense of remembering, agency, ethics and collective resistance. The 'disimagination machine' is both a set of cultural apparatuses extending from schools and mainstream media to the new sites of screen culture, and a public pedagogy that functions primarily to undermine the ability of individuals to think critically, imagine the unimaginable, and engage in thoughtful and critical dialogue: put simply, to become critically informed citizens of the world."
(Just over a year ago, I managed to catch a rare performance of A Machine They're Secretly Building, which explores some of these ideas in a really interesting way. Strongly recommended. Check the proto_type website for UK tour dates.)

Zuboff's book concentrates on the corporate side of surveillance, although she does mention the common interest (elective affinity p 115) between the surveillance capitalists and the public security forces around the war on terrorism. She also mentions the increased ability of political actors to use the corporate instruments for political ends. So a more comprehensive genealogy of surveillance would have to trace the shifting power relations between corporate power, government power, media power and algorithmic power.

A good example of this kind of exploration took place at the PowerSwitch conference in March 2017, where I heard Ariel Ezrachi (co-author of a recent book on the Algorithm-Driven Economy) talking about "the end of competition as we know it" (see links below to video and liveblog).

But obviously there is much more on this topic than can be covered in one book. Although some reviewers (@bhaggart as well as @evgenymorozov) have noted a lack of intellectual depth and rigour, Shoshana Zuboff has nonetheless made a valuable contribution - both in terms of the weight of evidence she has assembled and also in terms of bringing these issues to a wider audience.




Peter Benson, The Concept of the Other from Kant to Lacan (Philosophy Now 127, August/September 2018)

Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016) - more links via publisher's page 

Henry A. Giroux, The Politics of Disimagination and the Pathologies of Power (Truth Out, 27 February 2013)

Blayne Haggart, Evaluating scholarship, or why I won’t be teaching Shoshana Zuboff’s The Age of Surveillance Capitalism (15 February 2019)

Jörg Metelmann, Screening Surveillance Capitalism, in Daniel Cuonz, Scott Loren, Jörg Metelmann (eds) Screening Economies: Money Matters and the Ethics of Representation (transcript Verlag, 2018)

Evgeny Morozov, Capitalism’s New Clothes (The Baffler, 4 February 2019)

Mark Poster, The Mode of Information (Polity Press, 1990). See also notes in Donko Jeliazkov and Roger Blumberg, Virtualities (PDF, undated)

Shoshana Zuboff, In The Age of the Smart Machine (1988)

Shoshana Zuboff, Wall Street's Economic Crimes Against Humanity (Bloomberg, 20 March 2009)

Shoshana Zuboff, A Digital Declaration (Frankfurter Allgemeine Zeitung, 15 September 2014)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Panopticism

Stanford Encyclopedia of Philosophy: Jacques Lacan, Michel Foucault

CRASSH PowerSwitch Conference (Cambridge, 31 March 2017) Panel 4: Algorithmic Power (via YouTube). See also liveblog by Laura James Power Switch - Conference Report.


Related posts: Power Switch (March 2017), The Price of Everything (May 2017), Witnessing Machines Built in Secret (November 2017), Pax Technica (November 2017), Big Data and Organizational Intelligence (November 2018), Insurance and the Veil of Ignorance (February 2019)

Sunday, November 18, 2018

Ethics in Technology - FinTech

Last Thursday, @ThinkRiseLDN (Rise London, a FinTech hub) hosted a discussion on Ethics in Technology (15 November 2018).

Since many of the technologies under discussion are designed to support the financial services industry, the core ethical debate is closely bound up with the business ethics of the finance sector and is not solely a matter of technology ethics. But like most other sectors, the finance sector is being disrupted by the opportunities and challenges posed by technological innovation, and this places a professional and moral responsibility on technologists to engage with a range of ethical issues.

(Clearly there are many ethical issues in the financial services industry besides technology. For example, my friends in the @LongFinance initiative are tackling the question of sustainability.)

The Financial Services industry has traditionally been highly regulated, although some FinTech innovations may be less well regulated for now. So people working in this sector may expect regulation - specifically principles-based regulation - to play a leading role in ethical governance. (Note: the UK Financial Services Authority adopted a principles-based regulation strategy more than ten years ago.)

Whether ethical questions can be reduced to a set of principles or rules is a moot point. In medical ethics, principles are generally held to be useful but not sufficient for resolving difficult ethical problems. (See McCormick for a good summary. See also my post on Practical Ethics.)

Nevertheless, there are undoubtedly some useful principles for technology ethics. For example, the principle that you can never foresee all the consequences of your actions, so you should avoid making irreversible technological decisions. In science fiction, this issue can be illustrated by a robot that goes rogue and cannot be switched off. @moniquebachner made the point that with a technology like Blockchain, you were permanently stuck, for good or ill, with your original design choices.

Several of the large tech companies have declared principles for data and intelligence. (My summary here.) But declaring principles is the easy bit; getting these companies to take their principles seriously (or trusting them to do so) is harder.

One of the challenges discussed by the panel was how to negotiate the asymmetry of power. If your boss or your client wants to do something that you are uncomfortable with, you can't just assert some ethical principles and expect her to change her mind. So rather than walk away from an interesting technical challenge, you give yourself an additional organizational challenge - how to influence the project in the right way, without sacrificing your own position.

Obviously that's an ethical dilemma in its own right. Should you compromise your principles in the hope of retaining some influence over the outcome, or could you persuade yourself that the project isn't so bad after all? There is an interesting play-off between individual responsibility and collective responsibility, which we are also seeing in politics (Brexit passim).

Sheryl Sandberg appears to offer a high-profile example of this ethical dilemma. She had been praised by feminists for being "the one reforming corporate boy’s club culture from the inside ... the civilizing force barely keeping the organization from tipping into the abyss of greed and toxic masculinity." Jessa Crispin now disagrees with this view. "It seems clear what Sandberg truly is instead: a team player. And her team is not the working women of the world. It is the corporate culture that has groomed, rewarded, and protected her throughout her career." "This is the end of corporate feminism", comments @B_Ehrenreich.

And talking of Facebook ...

The title of Cathy O'Neil's book Weapons of Math Destruction invites a comparison between the powerful technological instruments now in the hands of big business, and the arsenal of nuclear and chemical weapons that have been a major concern of international relations since the Second World War. During the so-called Cold War, these weapons were largely controlled by the two major superpowers, and it was these superpowers that dominated the debate. As these weapons technologies have proliferated however, attention has shifted to the possible deployment of these weapons by smaller countries, and it seems that the world has become much more uncertain and dangerous.

In the domain of data ethics, it is the data superpowers (Facebook, Google) that command the most attention. But while there are undoubtedly major concerns about the way these companies use their powers, we may at least hope that a combination of forces will help to moderate the worst excesses. Besides regulatory action, these forces might include public opinion, corporate risk aversion from the large advertisers that provide the bulk of their income, as well as pressure from their own employees.

And in FinTech as with Data Protection, it will always be easier for regulators to deal with a small number of large players than with a very large number of small players. The large players will of course try to lobby for regulations that suit them, and may shift some operations into less strongly regulated jurisdictions, but in the end they will be forced to comply, more or less. Except that the ethically dubious stuff will always turn out to be led by a small company you've never heard of, and the large players will deny that they knew anything about it.

As I pointed out in my previous post on The Future of Political Campaigning, the regulators have only limited tools at their disposal, and this skews their ability to deal with the ethical ecosystem as a whole. If I had a hammer ...




Financial Services Authority, Principles-Based Regulation - Focusing on the Outcomes that Matter (FSA, April 2007)

Jessa Crispin, Feminists gave Sheryl Sandberg a free pass. Now they must call her out (Guardian, 17 November 2018)

Ian Harris, Commercial Ethics: Process or Outcome (Z/Yen, 2008)

Thomas R. McCormick, Principles of Bioethics (University of Washington, 2013)

Chris Yapp, Where does the buck stop now? (Long Finance, 28 October 2018)


Related posts: Practical Ethics (June 2018), Data and Intelligence Principles from Major Players (June 2018), The Future of Political Campaigning (November 2018)

Saturday, November 17, 2018

The Future of Political Campaigning

#democracydisrupted Last Tuesday, @Demos organized a discussion on The Future of Political Campaigning (13 November 2018). The panelists included the Information Commissioner (@ElizabethDenham) and the CEO of the Electoral Commission (@ClaireERBassett).

The presenting problem is a set of social and technological changes that disrupt the democratic process and some of the established mechanisms and assumptions that are supposed to protect it. Recent elections (including the Brexit referendum) have featured new methods of campaigning and new modes of propaganda. Voters are presented with a wealth of misinformation and disinformation on the Internet, while campaigners have new tools for targeting and influencing voters.

The regulators have some (limited) tools for dealing with these changes. The ICO can deal with organizations that misuse personal data, while the Electoral Commission can deal with campaigns that are improperly funded. But while the ICO in particular is demonstrating toughness and ingenuity in using the available regulatory instruments to maximum effect, these instruments are only indirectly linked to the problem of political misinformation. Bad actors in future will surely find new ways to achieve unethical political ends, out of the reach of these regulatory instruments.

@Jphsmith compared selling opposition to the "Chequers" Brexit deal with selling waterproof trousers. But if the trousers turn out not to be waterproof, there is legal recourse for the purchaser. Whereas there appears to be no direct accountability for political misinformation and disinformation. The ICO can deal with organizations that misuse personal data: that’s the main tool they’ve been provided with. What tool do they have for dealing with propaganda and false promises? Where is the small claims court I can go to when I discover my shiny new Brexit doesn’t hold water? (Twitter thread)

As I commented in my question from the floor, for the woman with a hammer, everything looks like a nail. Clearly misuse of data and illegitimate sources of campaign finance are problems, but they are not necessarily the main problem. And if the government and significant portions of the mass media (including the BBC) don't give these problems much airtime, downplay their impact on the democratic process, and (disgracefully) disparage and discredit those journalists who investigate them, notably @carolecadwalla, there may be insufficient public recognition of the need for reform, let alone enhanced and updated regulation. If antidemocratic forces are capable of influencing elections, they are surely also capable of persuading the man in the street that there is nothing to worry about.



Jamie Bartlett, Josh Smith, Rose Acton, The Future of Political Campaigning (Demos, 11 July 2018)

Carole Cadwalladr, Why Britain Needs Its Own Mueller (NYR Daily, 16 November 2018)

Nick Raikes, Online security and privacy: What an email address reveals (BBC News, 13 November 2018)

Josh Smith, A nation of persuadables: politics and campaigning in the age of data (Demos, 13 November 2018)

Jim Waterson, BBC women complain after Andrew Neil tweet about Observer journalist (Guardian, 16 November 2018)


Related posts: Security is downstream from strategy (March 2018), Ethical Communication in a Digital Age (November 2018)

Wednesday, November 7, 2018

YouTube Growth Hacking

Fascinating talk by @sophiehbishop at @DigiCultureKCL about YouTube growth hacks, and the "algorithm bros" that promote them (and themselves).



The subject of her talk was a small cadre of young men who have managed to persuade millions of followers (as well as some large corporations) that they can reveal the "secrets" of the YouTube algorithm.


I want to comment on two aspects of her work in particular. Firstly there is the question, addressed in her previous paper, of "anxiety, panic and self-optimization". When people create content on YouTube or similar platforms, they have an interest in getting their content viewed as widely as possible - indeed wide viewership ("going viral") is generally regarded as the measure of success. But these platforms are capricious, in the sense that they (deliberately) don't make it easy to manipulate this measure, and this generates a sense of precarity - not only among individual content providers but also among political and commercial organizations.

So when someone offers to tell you the secrets of success on YouTube, someone who is himself already successful on YouTube, it would be hard to resist the desire to learn these secrets. Or at least to listen to what they have to say. And risk-averse corporations may be willing to bung some consultancy money in their direction.

YouTube's own engineers describe the algorithm as "one of the largest scale and most sophisticated industrial recommendation systems in existence". Their models learn approximately one billion parameters and are trained on hundreds of billions of examples. The idea that a couple of amateurs without significant experience or funding can "reverse engineer" this algorithm strains credulity, and Bishop points out several serious methodological flaws in their approach, while speculating that perhaps what really matters to the growth hacking community is not what the YouTube algorithm actually does but what the user thinks it does. She notes that the results of this "reverse engineering" experiment have been widely disseminated, and presented at an event sponsored by YouTube itself.
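For a sense of scale and structure, the Covington et al. paper (cited below) describes a two-stage architecture: a candidate-generation network that narrows millions of videos down to a few hundred, followed by a ranking network that orders them. The toy Python sketch below is purely illustrative - every name, dimension and score in it is invented, and it bears no relation to YouTube's actual code - but it shows how little of such a system is visible from its output.

import numpy as np

# Toy illustration of a two-stage recommender (candidate generation + ranking),
# loosely following the architecture described in Covington et al. (see below).
# All sizes, embeddings and scores here are random and purely illustrative;
# the production system has roughly a billion learned parameters.
rng = np.random.default_rng(0)

NUM_VIDEOS, EMBED_DIM = 10_000, 64            # real corpus: millions of videos

# Stage 1: candidate generation - nearest-neighbour search in a learned
# embedding space reduces the corpus to a few hundred candidates per user.
video_embeddings = rng.normal(size=(NUM_VIDEOS, EMBED_DIM))
user_embedding = rng.normal(size=EMBED_DIM)   # derived from watch history etc.

scores = video_embeddings @ user_embedding
candidates = np.argsort(scores)[-200:]        # keep the top few hundred

# Stage 2: ranking - a second, feature-rich model scores each candidate to
# produce the final ordering (here a trivial stand-in for that model).
def ranking_score(video_id: int) -> float:
    return float(video_embeddings[video_id] @ user_embedding)

recommended = sorted(candidates, key=ranking_score, reverse=True)[:10]
print("recommended video ids:", recommended)

Even in this drastically simplified form, what a creator observes is the joint output of two learned models plus whatever features feed them, none of which can be inspected from outside the platform.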

What is the effect of disseminating this kind of material? I don't know if it helps to make YouTubers less anxious, or conversely makes them more anxious than they were already. No doubt YouTube is happy about anything that encourages people to devote even more time to creating sticky content for YouTube. A dashboard (in this case, YouTube's Creator Studio) provides a framing device, focusing people's attention on certain metrics (financial gains and social capital), and fostering the illusion that the metrics on the dashboard are really the only ones that matter.


The other aspect of Bishop's work I wanted to discuss is the apparent gender polarization on YouTube - not only polarization of content and who gets to see which content, but also a significantly different operating style for male and female content providers. The traditional feminist view (McRobbie, Meehan) is that this polarization is a response to the commercial demands of the advertisers. But other dimensions of polarization have become apparent more recently, including political extremism, and Zeynep Tufekci argues that YouTube may be one of the most powerful radicalizing instruments of the 21st century. This hints at a more deeply rooted schismogenesis.

Meanwhile, how much of this was intended or foreseen by YouTube is almost beside the point. Individuals and organizations may be held responsible for the consequences of their actions, including unforeseen consequences.



Sophie Bishop, Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm (Convergence, 24(1), 2018 pp 69–84)

Paul Covington, Jay Adams, Emre Sargin, Deep Neural Networks for YouTube Recommendations (Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pages 191-198)

Paul Lewis, 'Fiction is outperforming reality': how YouTube's algorithm distorts truth (Guardian, 2 Feb 2018)

Zeynep Tufekci, Opinion: YouTube, the Great Radicalizer (New York Times, 10 March 2018)

Wikipedia: Schismogenesis

Related posts: Ethical communication in a digital age (November 2018), Polarization (November 2018)

Thursday, November 1, 2018

Ethical communication in a digital age

At the @BritishAcademy_ yesterday evening for a lecture by Onora O'Neill on Ethical Communication in a Digital Age, supported by two more philosophy professors, Rowan Cruft and Rae Langton.

Much of the discussion was about the threats posed to public reason by electronically mediated speech acts, and the challenges of regulating social media. However, although the tech giants and regulators have an important role, the primary question in the event billing was not about Them but about Us - how do *we* communicate ethically in an increasingly digital age?

I don't claim to know as much about ethics as the three professors, but I do know a bit about communication and digital technology, so here is my take on the subject from that perspective.

The kind of communication we are talking about involves at least four different players - the speaker, the spoken-to, the spoken-about, and the medium / mediator. Communication can be punctuated into a series of atomic speech acts, but it is often the cumulative effects (on public reason or public decency) that worry us.

So let me look at each of the elements of this communication in turn.



First the speech act itself. O'Neill quoted Plato, who complained that the technology of writing served to decouple the writer from the text. On social media, the authorship of speech acts becomes more problematic still. This is not just because many of the speakers are anonymous, and we may not know whether they are bots or people. It is also because the dissemination mechanisms offered by the social media platforms allow people to dissociate themselves from the contents that they may "like" or "retweet". Thus people may disseminate nasty material while perceiving themselves not as the authors of this material but as merely mediators of it, and therefore not holding themselves personally responsible for the truth or decency of the material.

Indeed, some people act online as if they believed that the online world was entirely disconnected from the real physical world, as if online banter could never have real-world consequences, and the online alter ego was an entirely different person.

Did I say truth? At the event, the three philosophers devoted a lot of time to the relationship between ethics and epistemology (questions of truth and verifiability on the Internet). But even propositional speech acts are not always easily sorted into truth and lies, while many of the speech acts that pollute the internet are not propositions but other rhetorical gestures. For example, endless repetition of "what about her emails?" and "lock her up", which are designed to frame public discourse to accord with the rhetorical goals of the speaker. (I'll come back to the question of framing later.)

The popular social media platforms offer to punctuate our speech into discrete units - the tweet, the post, the YouTube video, or whatever. Each unit is then measured separately, and the speaker may be rewarded (financially or psychologically) when a unit becomes popular (or "goes viral"). We tend to take this punctuation at face value, but systems thinkers including Bateson and Maturana have drawn attention to the relationship between punctuation and epistemology.

(Note to self - add something here about metacommunication, which is a concept Bateson took from Benjamin Lee Whorf.)



Full communication requires a listener (the spoken-to) as well as a speaker. Much of the digital literacy agenda is about coaching people to interpret and evaluate material found on the internet, enabling them to work out who is actually speaking, and whether there is a hidden commercial or political agenda.

One of the challenges of the digital age is that I don't know who else is being spoken to. Am I part of an undifferentiated crowd (unlikely) or a filter bubble (probably)? The digital platforms have developed sophisticated mechanisms for targeting people who may be particularly receptive to particular messages or content. So why have I been selected for this message, why exactly does Twitter or Facebook think this would be of interest to me? This is a fundamental divergence from older forms of mass communication - the public meeting, the newspaper, the broadcast.

And sometimes a person can be targeted with violent threats and other unpleasantries. Harassment and trolling techniques developed as part of the #GamerGate campaign are now widely used across the internet, and may often be successful in intimidating and silencing the recipients.




The third (and often unwilling) party to communication is the person or community spoken about. Where this is an individual, there may be issues around privacy as well as avoidance of libel or slander. It is sometimes thought that people in the public eye (such as Hillary Clinton or George Soros) are somehow "fair game" for any criticism or disparagement that is thrown in their direction, whereas other people (especially children) deserve some protection. The gutter press has always pushed the boundaries of this, and the Internet undoubtedly amplifies this phenomenon.

What I find even more interesting here is the way recent political debate has focused on scapegoating certain groups. Geoff Shullenberger attributes some of this to Peter Thiel.

"Peter Thiel, whose support for Trump earned him a place on the transition team, is a former student of the most significant theorist of scapegoating, the late literary scholar and anthropologist of religion René Girard. Girard built an ambitious theory around the claim that scapegoating pervades social life in an occluded form and plays a foundational role in religion and politics. For Girard, the task of modern thought is to reveal and overcome the scapegoat mechanism–to defuse its immense potency by explaining its operation. Conversely, Thiel’s political agenda and successful career setting up the new pillars of our social world bear the unmistakable traces of someone who believes in the salvationary power of scapegoating as a positive project."

Clearly there are some ethical issues here to be addressed.



Fourthly we come onto the role of the medium / mediator. O'Neill talked about disintermediation, as if the Internet allowed people to talk directly to people without having to pass through gatekeepers such as newspaper editors and government censors. But as Rae Langton pointed out, this is not true disintermediation, as these mediators are merely being replaced by others - often amateur curators. Furthermore, the new mediators can't be expected to have the same establishment standards as the old mediators. (This may or may not be a good thing.)

Even the old mediators can't be relied upon to maintain the old standards. The BBC is often accused of bias, and its response to these accusations appears to be to hide behind a perverse notion of "balance" and "objectivity" that requires it to provide a platform for climate change denial and other farragoes.

Obviously the tech giants have a commercial agenda, linked to the Attention Economy. As Zeynep Tufekci and others have pointed out, people can be presented with increasingly extreme content in order to keep them on the platform, and this appears to be a significant force behind the emergence of radical groups, as well as a substantial shift in the Overton window. There appears to be some correlation between Facebook usage and attacks on migrants, although it may be difficult to establish the direction of causality.

But the platforms themselves are also subject to political influence - not only the weaponization of social media described by John Naughton but also old-fashioned coercion. Around Easter 2016, people were wondering whether Facebook would swing the American election against Trump. A posse of right-wing politicians met Zuckerberg in May 2016, and he then bent over backwards to avoid anyone thinking that Facebook would give Clinton an unfair advantage. (Spoiler: it didn't.)

So if there is a role for regulation here, it is not only to protect consumers from the commercial interests of the tech giants, but also to protect the tech giants themselves from improper influence.



Finally, I want to emphasize Framing, which is one of the most important ways people can influence public reason. For example, hashtags provide a simple and powerful framing mechanism, which can work to positive effect (#MeToo) or negative (#GamerGate).

President Trump is of course a master of framing - constantly moving the terms of the debate, so his opponents are always forced to debate on these terms. His frequent invocation of #FakeNews enables him to preempt and negate inconvenient facts, and his rhetorical playbook also includes antisemitic tropes (Hadley Freeman) and kettle logic (Jeet Heer). (But there are many examples of framing devices used by earlier presidents, and it is hard to delineate precisely what is new or objectionable about Trump's performance.)

In other words Rhetoric eats Epistemology for breakfast. (Perhaps that will give my philosopher friends something to chew on?)




J.L. Austin, How to do things with words (Oxford University Press, 1962)

Anthony Cuthbertson, Facebook use linked to attacks on refugees, says study (Independent, 22 August 2018)

Paul F. Dell, Understanding Bateson and Maturana: Toward a Biological Foundation for the Social Sciences (Journal of Marital and Family Therapy, 1985, Vol. 11, No. 1, 1-20). (Note: even though I have both Bateson and Maturana on my bookshelf, the lazy way to get a reference is to use Google, which points me towards secondary sources like this. When I have time, I'll put the original references in.)

Alex Johnson and Matthew DeLuca, Facebook's Mark Zuckerberg Meets Conservatives Amid 'Trending' Furor (NBC News, 19 May 2016)

Robinson Meyer, How Facebook Could Tilt the 2016 Election (Atlantic, 18 April 2016)

Paul Lewis, 'Fiction is outperforming reality': how YouTube's algorithm distorts truth (Guardian, 2 Feb 2018)

John Naughton, Mark Zuckerberg’s dilemma - what to do with the monster he has created? (Open Democracy, 29 October 2018)

Geoff Shullenberger, The Scapegoating Machine (The New Inquiry, 30 November 2016)

Zeynep Tufekci, YouTube, the Great Radicalizer (New York Times, 10 March 2018)



Wikipedia: Attention Economy, Disintermediation, Framing, Gamergate Controversy, Metacommunication, Overton Window

Related posts: Security is downstream from strategy (March 2018), YouTube Growth Hacking (November 2018), Polarization (November 2018), The Future of Political Campaigning (November 2018)


Updated 11 November 2018

Wednesday, June 13, 2018

Practical Ethics

A lot of ethical judgements appear to be binary ones. Good versus bad. Acceptable versus unacceptable. Angels versus Devils.

Where questions of ethics reach the public sphere, it is common for people to take strong positions for or against. For example, there have been some high-profile cases involving seriously sick children, whether they should be provided with some experimental treatment, or even whether they should be kept alive at all. These are incredibly difficult decisions for those closely involved, but the experts are then subjected to vitriolic attack from armchair critics (often from the other side of the world) who think they know better.

Practical ethics are mostly about trade-offs, interpreting the evidence, predicting the consequences, estimating and balancing the benefits and risks. There isn't a simple formula that can be applied; each case must be carefully considered to determine where it sits on a spectrum.

The same is true of business and technology ethics. There isn't a blanket rule that says that these forms of persuasion are good and those forms are bad; there are just different degrees of nudge. We might want to regard all nudges with some suspicion, but retailers have always nudged people to purchase things. The question is whether this particular form of nudge is acceptable in this context, or whether it crosses some fuzzy line into manipulation or worse. Where does this particular project sit on the spectrum?

Technologists sometimes abdicate responsibility for such questions, as if whatever the client wants, or whatever the technology enables, must be okay. Taking responsibility means owning that judgement.

When Google published its AI principles recently, Eric Newcomer complained that balancing the benefits and risks sounded like the utilitarianism he learned about at high school. But he also complained that Google's approach lacks impartiality and agent-neutrality. It would therefore be more accurate to describe Google's approach as consequentialism.

In the real world, even the question of agent-neutrality is complicated. Sometimes this is interpreted as a call to disregard any judgement made by a stakeholder, on the grounds that they must be biased. For example, ignoring professional opinions (doctors, teachers) because they might be trying to protect their own professional status. But taking important decisions about healthcare or education away from the professionals doesn't solve the problem of bias, it merely replaces professional bias with some other form of bias.

In Google's case, people are entitled to question how exactly Google will make these difficult judgements, and the extent to which these judgements may be subject to some conflict of interest. But if there is no other credible body that can make these judgements, perhaps the best we can ask for (at least for now) is some kind of transparency or scrutiny.

As I said above, practical ethics are mostly about consequences - which philosophers call consequentialism. But not entirely. Ethical arguments about the human subject aren't always framed in terms of observable effects, but may be framed in terms of human values. For example, the idea that people should be given control over something or other, not because it makes them happier, but just because, you know, they should. Or the idea that certain things (truth, human life, etc.) are sacrosanct.

In his book The Human Use of Human Beings, first published in 1950, Norbert Wiener based his computer ethics on what he called four great principles of justice. So this is not just about balancing outcomes.
Freedom. Justice requires “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him.”  
Equality. Justice requires “the equality by which what is just for A and B remains just when the positions of A and B are interchanged.” 
Benevolence. Justice requires “a good will between man and man that knows no limits short of those of humanity itself.”  
Minimum Infringement of Freedom. “What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom”


Of course, a complex issue may require more than a single dimension. It may be useful to draw spider diagrams or radar charts, to help to visualize the relevant factors. Alternatively, Cathy O'Neil recommends the Ethical or Stakeholder Matrix technique, originally invented by Professor Ben Mepham.

"A construction from the world of bio-ethics, the ethical or “stakeholder” matrix is a way of determining the answer to the question, does this algorithm work? It does so by considering all the stakeholders, and all of their concerns, be them positive (accuracy, profitability) or negative (false negatives, bad data), and in particular allows the deployer to think about and gauge all types of best case and worst case scenarios before they happen. The matrix is color coded with red, yellow, or green boxes to alert people to problem areas." [Source: ORCAA]
"The Ethical Matrix is a versatile tool for analysing ethical issues. It is intended to help people make ethical decisions, particularly about new technologies. It is an aid to rational thought and democratic deliberation, not a substitute for them. ... The Ethical Matrix sets out a framework to help individuals and groups to work through these debates in relation to a particular issue. It is designed so that a broader than usual range of ethical concerns is aired, differences of perspective become openly discussed, and the weighting of each concern against the others is made explicit. The matrix is based in established ethical theory but, as far as possible, employs user-friendly language." [Source: Food Ethics Council]




Jessi Hempel, Want to prove your business is fair? Audit your algorithm (Wired 9 May 2018)

Ben Mepham, Ethical Principles and the Ethical Matrix. Chapter 3 in J. Peter Clark and Christopher Ritson (eds), Practical Ethics for Food Professionals: Ethics in Research, Education and the Workplace (Wiley 2013)

Eric Newcomer, What Google's AI Principles Left Out (Bloomberg 8 June 2018)

Tom Upchurch, To work for society, data scientists need a hippocratic oath with teeth (Wired, 8 April 2018)



Stanford Encyclopedia of Philosophy: Computer and Information Ethics, Consequentialism, Utilitarianism

Related posts: Conflict of Interest (March 2018), Data and Intelligence Principles From Major Players (June 2018)

Sunday, March 25, 2018

Ethics as a Service

In the real world, ethics is rarely if ever the primary focus. People engaging with practical issues may need guidance or prompts to engage with ethical questions, as well as appropriate levels of governance.


@JPSlosar calls for
"a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent".

Slosar's particular interest is in healthcare. He wants to proactively integrate ethics in person-centered care, as a key enabler of the multiple (and sometimes conflicting) objectives of healthcare: improved outcomes, reduced costs and the best possible patient and provider experience. These four objectives are known as the Quadruple Aim.

According to Slosar, ethics can be understood as a service aimed at reducing, minimizing or avoiding harm. Harm can sometimes be caused deliberately, or blamed on human inattentiveness, but it is more commonly caused by system and process errors.
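As a rough illustration of what Slosar's "easily recognizable ethics indicators" might look like when embedded in a care workflow, here is a small Python sketch. The indicators and thresholds are hypothetical, invented for this example rather than taken from Slosar's paper; the point is only that the checks are simple, legible, and fire before a problem becomes entrenched.

from dataclasses import dataclass

# Hypothetical ethics indicators for a person-centred care pathway.
# Each check is deliberately simple; the aim is early signalling, not diagnosis.
@dataclass
class CareSituation:
    patient_goals_documented: bool
    family_and_clinicians_agree: bool
    days_since_care_plan_review: int
    escalations_in_last_week: int

def ethics_indicators(s: CareSituation) -> list:
    """Return human-readable flags suggesting an ethics consult may be needed."""
    flags = []
    if not s.patient_goals_documented:
        flags.append("patient's own goals of care are not documented")
    if not s.family_and_clinicians_agree:
        flags.append("unresolved disagreement between family and care team")
    if s.days_since_care_plan_review > 14:
        flags.append("care plan not reviewed for more than two weeks")
    if s.escalations_in_last_week >= 3:
        flags.append("repeated escalations suggest the conflict is becoming entrenched")
    return flags

# Example: a case drifting quietly towards an entrenched problem.
case = CareSituation(patient_goals_documented=False,
                     family_and_clinicians_agree=False,
                     days_since_care_plan_review=21,
                     escalations_in_last_week=1)
for flag in ethics_indicators(case):
    print("INDICATOR:", flag)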

A team of researchers at Carnegie Mellon, Berkeley and Microsoft Research has proposed an approach to ethics-as-a-service involving crowd-sourcing ethical decisions. This was presented at an Ethics-By-Design workshop in 2013.


Meanwhile, Ozdemir and Knoppers distinguish between two types of Upstream Ethics: Type 1 refers to early ethical engagement, while Type 2 refers to the choice of ethical principles, which they call "prenormative", part of the process by which "normativity" is achieved. Given that most of the discussion of EthicsByDesign assumes early ethical engagement in a project (Type 1), their Type 2 might be better called EthicsByFiat.





Cristian Bravo-Lillo, Serge Egelman, Cormac Herley, Stuart Schechter and Janice Tsai, Reusable Ethics‐Compliance Infrastructure for Human Subjects Research (CREDS 2013)

Derek Feeley, The Triple Aim or the Quadruple Aim? Four Points to Help Set Your Strategy (IHI, 28 November 2017)

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)