Saturday, March 9, 2019

Upstream Ethics

We can roughly characterize two places where ethical judgements are called for, which I shall call upstream and downstream. There is some inconsistency about how these terms are used in the literature; here are my definitions.

I use the term upstream ethics to refer to
  • Establishing priorities and goals - for example, emphasising precaution and prevention
  • Establishing general principles, processes and practices
  • Embedding these in standards, policies and codes of practice
  • Enacting laws and regulations
  • Establishing governance - monitoring and enforcement
  • Training and awareness - enabling, encouraging and empowering people to pay due attention to ethical concerns
  • Approving and certifying technologies, products, services and supply chains. 
Some people call these (or some of them) "pre-normative" ethics.


I use the term downstream ethics to refer to
  • Making judgements about a specific instance
  • Eliciting values and concerns in a specific context as part of the requirements elicitation process
  • Detecting ethical warning signals
  • Applying, interpreting and extending upstream ethics to a specific case or challenge
  • Auditing compliance with upstream ethics

There is also a feedback and learning loop, where downstream issues and experiences are used to evaluate and improve the efficacy of upstream ethics.


Downstream ethics does not take place at a single point in time. I use the term early downstream to mean paying attention to ethical questions at an early stage of an initiative. Among other things, this may involve picking up early warning signals of potential ethical issues affecting a particular case. Early downstream means being ethically proactive - introducing responsibility by design - while late downstream means reacting to ethical issues only after they have been forced upon you by other stakeholders.

However, some writers regard what I'm calling early downstream as another type of upstream. Thus Ozdemir and Knoppers talk about Type 1 and Type 2 upstream. And John Paul Slosar writes

"Early identification of the ethical dimensions of person-centered care before the point at which one might recognize the presence of a more traditionally understood “ethics case” is vital for Proactive Ethics Integration or any effort to move ethics upstream. Ideally, there would be a set of easily recognizable ethics indicators that would signal the presence of an ethics issue before it becomes entrenched, irresolvable or even just obviously apparent."

For his part, as a lawyer specializing in medical technology, Christopher White describes upstream ethics as a question of confidence and supply - in other words, having some level of assurance about responsible sourcing and supply of component technologies and materials. He mentions a range of sourcing issues, including conflict minerals, human slavery, and environmentally sustainable extraction.

Extending this point, advanced technology raises sourcing issues not only for physical resources and components, but also for intangible inputs like data and knowledge. For example, medical innovation may be dependent upon clinical trials, while machine learning may be dependent on large quantities of training data. So there are important questions of upstream ethics as to whether these data were collected properly and responsibly, which may affect the extent to which these data can be used responsibly, or at all. As Rumman Chowdhury asks, "How do we institute methods of ethical provenance?"
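To make Chowdhury's question a little more concrete, here is a minimal sketch (my own illustration, not any published standard - all the field names are hypothetical) of the kind of metadata a provenance record for a training dataset might need to carry before anyone downstream could judge whether the data can be used responsibly.

from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetProvenance:
    # Illustrative provenance record for a training dataset.
    # Field names are hypothetical, not drawn from any published standard.
    source: str                    # where the data came from
    collected_on: str              # when it was collected (ISO date)
    consent_basis: str             # e.g. informed consent, public record
    licence: str                   # terms under which reuse is permitted
    known_gaps: List[str] = field(default_factory=list)  # known biases or omissions

trial_data = DatasetProvenance(
    source="Phase III clinical trial, single hospital site",
    collected_on="2017-06-30",
    consent_basis="informed consent, research use only",
    licence="restricted to approved medical research",
    known_gaps=["under-representation of patients over 75"],
)
print(trial_data)

Even a simple record like this would give downstream users something concrete to audit against, which is exactly the kind of connection between upstream and downstream ethics this post is concerned with.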

There is a trade-off between upstream effort and downstream effort. If you take more care upstream, you should hope to experience fewer difficulties downstream. Conversely, some people may prefer to invest little or no effort upstream, and take their chances downstream. One way of thinking about responsibility is as shifting the balance of effort and attention upstream. But obviously you can't work everything out upstream, so there will always be further work to do downstream.

So it's about getting the balance right, and joining the dots. Wherever we choose to draw the line between "upstream" and "downstream", with different institutional arrangements and mobilizing different modes of argumentation and evidence at different stages, "upstream" and "downstream" still need to be properly connected, as part of a single ethical system.




(In a separate post, Ethics - Soft and Hard, I discuss Luciano Floridi's use of the terms hard and soft ethics, which covers some of the same distinctions I'm making here but in a way I find more confusing.)

Os Keyes, Nikki Stevens, and Jacqueline Wernimont, The Government Is Using the Most Vulnerable People to Test Facial Recognition Software (Slate 17 March 2019) HT @ruchowdh

Vural Ozdemir and Bartha Maria Knoppers, One Size Does Not Fit All: Toward “Upstream Ethics”? (The American Journal of Bioethics, Volume 10 Issue 6, 2010) https://doi.org/10.1080/15265161.2010.482639

John Paul Slosar, Embedding Clinical Ethics Upstream: What Non-Ethicists Need to Know (Health Care Ethics, Vol 24 No 3, Summer 2016)

Christopher White, Looking the Other Way: What About Upstream Corporate Considerations? (MedTech, 29 Mar 2017)


Updated 18 March 2019

Sunday, March 3, 2019

Ethics and Uncertainty

How much knowledge is required, in order to make a proper ethical judgement?

Assuming that consequences matter, it would obviously be useful to be able to reason about the consequences. This is typically a combination of inductive reasoning (what has happened when people have done this kind of thing in the past) and predictive reasoning (what is likely to happen when I do this in the future).

There are several difficulties here. The first is the problem of induction - to what extent can we expect the past to be a guide to the future, and how relevant is the available evidence to the current problem? The evidence doesn't speak for itself; it has to be interpreted.

For example, when Stephen Jay Gould was informed that he had a rare cancer of the abdomen, the medical literature indicated that the median survival for this type of cancer was only eight months. However, his statistical analysis of the range of possible outcomes led him to the conclusion that he had a good chance of finding himself at the favourable end of the range, and in fact he lived for another twenty years until an unrelated cancer got him.
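Gould's point was that a median, on its own, tells you very little about the shape of the distribution. Here is a small worked illustration (with made-up numbers, not Gould's actual data): if survival times follow a right-skewed distribution such as a log-normal with a median of eight months, there can still be a substantial probability of surviving for years.

import math

def lognormal_survival(x, median, sigma):
    # P(X > x) for a log-normal distribution with the given median
    # and log-scale spread sigma
    z = (math.log(x) - math.log(median)) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

median_months = 8    # the published median survival
sigma = 1.5          # assumed spread, chosen purely for illustration

for months in (8, 24, 60, 240):
    p = lognormal_survival(months, median_months, sigma)
    print(f"P(survival > {months} months) = {p:.2f}")

With these illustrative parameters, half the patients survive beyond eight months, roughly one in four beyond two years, and about one in a hundred beyond twenty years. The median isn't the message.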

The second difficulty is that we don't know enough. We are innovating faster than we can research the effects. And longer-term consequences are harder to predict than short-term ones: even if we assume an unchanging environment, we usually don't have as much hard data about longer-term consequences.

For example, a clinical trial of a drug may tell us what happens when people take the drug for six months. But it will take a lot longer before we have a clear picture of what happens when people continue to take the drug for the rest of their lives. Especially when taken alongside other drugs.

This might suggest that we should be more cautious about actions with long-term consequences. But that is certainly not an excuse for inaction or procrastination. One tactic of Climate Sceptics is to argue that the smallest inaccuracy in any scientific projection of climate change invalidates both the truth of climate science and the need for action. But that's not the point. Gould's abdominal cancer didn't kill him - but only because he took action to improve his prognosis. Alexandria Ocasio-Cortez (@AOC) has recently started using the term Climate Delayers for those who find excuses for delaying action on climate change.

The third difficulty is that knowledge itself comes packaged in various disciplines or discourses. Medical ethics is dependent upon specialist medical knowledge, and technology ethics is dependent upon specialist technical knowledge. However, it would be wrong to judge ethical issues exclusively on the basis of this technical knowledge, and other kinds of knowledge (social, cultural or whatever) must also be given a voice. This probably entails some degree of cognitive diversity. Will Crouch also points out the uncertainty of predicting the values and preferences of future stakeholders.

The fourth difficulty is that there could always be more knowledge. This raises the question as to whether it is responsible to go ahead on the basis of our current knowledge, and how we can build in mechanisms to make future changes when more knowledge becomes available. Research may sometimes be a moral duty, as Tannert et al argue, but it cannot be an infinite duty.

The question of adequacy of knowledge is itself an ethical question. One of the classic examples in Moral Philosophy concerns a ship owner who sends a ship to sea without bothering to check whether it is seaworthy. Some might argue that the ship owner cannot be held responsible for the deaths of the sailors, because he didn't actually know that the ship would sink. However, most people would see the ship owner as having a moral duty of diligence, and would regard him as accountable for neglecting this duty.

But how can we know if we have enough knowledge? This raises the question of the "known unknowns" and "unknown unknowns", which is sometimes used with a shrug to imply that no one can be held responsible for the unknown unknowns.

(And who is we? J. Nathan Matias argues that the obligation to experiment is not limited to the creators of an artefact, but may extend to other interested parties.)

The French psychoanalyst Jacques Lacan was interested in the opposition between impulsiveness and procrastination, and talks about three phases of decision-making: the instant of seeing (recognizing that some situation exists that calls for a decision), the time for understanding (assembling and analysing the options), and the moment to conclude (the final choice).

The purpose of Responsibility by Design is not just to prevent bad or dangerous consequences, but to promote good and socially useful consequences. The result of applying Responsibility by Design should not be reduced innovation, but better and more responsible innovation. The time for understanding should not drag on forever; there should always be a moment to conclude.




Matthew Cantor, Could 'climate delayer' become the political epithet of our times? (The Guardian, 1 March 2019)

Will Crouch, Practical Ethics Given Moral Uncertainty (Oxford University, 30 January 2012)

Stephen Jay Gould, The Median Isn't the Message (Discover 6, June 1985) pp 40–42.

J. Nathan Matias, The Obligation To Experiment (Medium, 12 December 2016)

Alex Matthews-King, Humanity producing potentially harmful chemicals faster than they can test their effects, experts warn (Independent, 27 February 2019)

Christof Tannert, Horst-Dietrich Elvers and Burkhard Jandrig, The ethics of uncertainty. In the light of possible dangers, research becomes a moral duty (EMBO Rep. 8(10) October 2007) pp 892–896

Stanford Encyclopedia of Philosophy: Consequentialism, The Problem of Induction

Wikipedia: There are known knowns 

The ship-owner example can be found in an essay called "The Ethics of Belief" (1877) by W.K. Clifford, in which he states that "it is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence".

I describe Lacan's model of time in my book on Organizational Intelligence (Leanpub 2012)

Related posts: Ethics and Intelligence (April 2010), Practical Ethics (June 2018), Big Data and Organizational Intelligence (November 2018)

Updated 11 March 2019

Saturday, February 23, 2019

Ethics - Soft and Hard

Professor Luciano @Floridi has recently introduced the concept of Soft Ethics. He wants to make a distinction between ethical judgements that go into laws, regulations and other norms, which he calls Hard Ethics, and ethical judgements that apply and extend these codes of practice in practical situations, which he calls Soft Ethics. The latter he also calls Post-Compliance Ethics - in other words, what you should do over and above complying with all applicable laws and regulations.

The labels "hard" and "soft" are used in many different domains, and carry diverse connotations. Some readers may interpret Floridi's use of these labels as implying that the "hard" ethics are clear-cut while the "soft" ethics are more fuzzy. Others may think that "hard" means difficult or tough, and "soft" means easy or lax. But even if the laws themselves were unambiguous and definitive (which obviously they aren't, otherwise lawyers would be redundant), the thinking that goes into the law-making process is complex and polyvocal, and the laws themselves usually represent a compromise between the interests of different stakeholder groups, as well as practical considerations of enforcement. (Floridi refers to "lobbying in favour of some good legislation or to improve that which already exists" as an example of hard ethics.) For this reason, regulations such as GDPR tend to fall short of the grand ethical vision that motivated the initiative in the first place.

In some quarters, the term "pre-normative" is used for the conceptual (and sometimes empirical) work that goes into the formulation of law and regulations. However, this might confuse those philosophers familiar with Peirce's use of the term. So my own preference is for the term "upstream". See my post on Upstream Ethics (March 2019).

Floridi suggests that soft ethics are most relevant "in places of the world where digital regulation is already on the good side of the moral vs. immoral divide", and seems to think it would be a mistake to apply a soft ethics approach in places like the USA, where the current regulation is (in his opinion) not fit for purpose. But then what is the correct ethical framework for global players?

For example, in May 2018, Microsoft announced that it would extend the rights at the heart of GDPR to all of its consumer customers worldwide. In most of the countries in which Microsoft operates, including the USA, this is clearly over and above the demands of local law, and therefore counts as "Soft Ethics" under Floridi's schema. Unless we regard this announcement as a further move in Microsoft's ongoing campaign for national privacy legislation in the United States, in which case it counts as "Hard Ethics". At this point, we start to wonder how useful Floridi's distinction is going to be in practice.

At some point during 2018, Floridi was alerted to the work of Ronald Dworkin by an anonymous reviewer. He therefore inserted a somewhat puzzling paragraph into his second paper, attributing to Dworkin the notion that "legal judgment is and should be guided by principles of soft ethics" which are "implicitly incorporated in the law", while attributing to H.L.A. Hart the position that soft ethics are "external to the legal system and used just for guidance". But if soft ethics is defined as being "over and above" the existing regulation, Floridi appears to be closer to the position he attributes to Hart.

Of course, the more fundamental debate between Dworkin and Hart was about the nature of authority in legal matters. Hart took a position known as Legal Positivism, strongly rejected by Dworkin, in which the validity of law depended on social conventions and customs.
"The legal system is norms all the way down, but at its root is a social norm that has the kind of normative force that customs have. It is a regularity of behavior towards which officials take 'the internal point of view': they use it as a standard for guiding and evaluating their own and others' behavior, and this use is displayed in their conduct and speech, including the resort to various forms of social pressure to support the rule and the ready application of normative terms such as 'duty' and 'obligation' when invoking it." (SEP: Legal Positivism)

For Floridi, the authority of the law appears to rely on what he calls a normative cascade. This is a closed loop in which Ethics constrains Law, Law constrains Business/Government, Business/Government constrains People (as consumers or citizens), and the People (by deciding in what society they wish to live) can somehow bring about changes in Ethics. Perhaps Professor Floridi can explain which portions of this loop are Hard and which are Soft?




Julie Brill, Microsoft’s commitment to GDPR, privacy and putting customers in control of their own data (Microsoft, 21 May 2018)

James Feibleman, A Systematic Presentation of Peirce's Ethics (Ethics, Vol. 53, No. 2, January 1943) pp. 98-109

Luciano Floridi, Soft Ethics and the Governance of the Digital (Philos. Technol. 31:1–8, 17 February 2018)

Luciano Floridi, Soft ethics, the governance of the digital and the General Data Protection Regulation (Philosophical Transactions of the Royal Society, Volume 376 Issue 2133, 15 November 2018)

Jack Hirshleifer, Capitalist Ethics--Tough or Soft? (The Journal of Law and Economics, Vol 2, October 1959), pp. 114-119

Scott J. Shapiro, The Hart-Dworkin Debate: A Short Guide for the Perplexed (5 March 2007)

Bucks County Courier Times, Getting Tough on Soft Ethics (10 February 2015)

Stanford Encyclopedia of Philosophy: Legal Positivism


Related posts: Ethics as a Service (March 2018), Upstream Ethics (March 2019)

Saturday, February 16, 2019

Shoshana Zuboff on Surveillance Capitalism

@shoshanazuboff's latest book was published at the end of January. 700 pages of detailed research and analysis, and I've been trying to read as much as possible before everyone else. Meanwhile I have seen display copies everywhere - not just at the ICA bookshop which always has more interesting books than I shall ever have time to read, but also in my (excellent) local bookshop. (Fiona tells me she has already sold several copies.)

Although Zuboff spent much of her life at Harvard Business School, and has previously expressed optimism about technology (Distributed Capitalism, the Support Economy), she has form in criticizing the unacceptable face of capitalism (e.g. her 2009 critique of Wall Street). She now regards surveillance capitalism as "a profoundly undemocratic social force" (p 513), and in the passionate conclusion to her book I can hear echoes of Robert Burns's poem "Parcel of Rogues in a Nation".
"Our lives are scraped and sold to fund their freedom and our subjugation, their knowledge and our ignorance about what they know." (p 498)

One of the key words in the book is "power", especially what she calls instrumentarian power. She describes the emergence of this kind of power as a bloodless coup, and makes a point that will be extremely familiar to readers of Foucault.
"Instead of violence directed at our bodies, the instrumentarian third modernity operates more like a taming. Its solution to the increasingly clamorous demands for effective life pivots on the gradual elimination of chaos, uncertainty, conflict, abnormality, and discord in favor of predictability, automatic regularity, transparency, confluence, persuasion and pacification." (p 515)
In Foucauldian language, this would be described as a shift from sovereign power to disciplinary power, which he describes in terms of the Panopticon. Although Zuboff discusses Foucault and the Information Panopticon at some length in her book on the Smart Machine, I couldn't find a reference to Foucault in her latest book, merely a very brief mention of the panopticon (pp 470-1). So for a fuller explanation of this concept, I turned to the Stanford Encyclopedia of Philosophy.
"Bentham’s Panopticon is, for Foucault, a paradigmatic architectural model of modern disciplinary power. It is a design for a prison, built so that each inmate is separated from and invisible to all the others (in separate “cells”) and each inmate is always visible to a monitor situated in a central tower. Monitors do not in fact always see each inmate; the point is that they could at any time. Since inmates never know whether they are being observed, they must behave as if they are always seen and observed. As a result, control is achieved more by the possibility of internal monitoring of those controlled than by actual supervision or heavy physical constraints." (SEP: Michel Foucault)
I didn't read the Smart Machine when it first came out, so the first time I saw the term "panopticon" applied to information technology was in Mark Poster's brilliant book The Mode of Information, which came out a couple of years later. Poster introduced the term Superpanopticon to describe the databases of his time, and his analysis seems uncannily accurate as a description of the present day.
"Foucault taught us to read a new form of power by deciphering discourse/practice formations instead of intentions of a subject or instrumental actions. Such a discourse analysis when applied to the mode of information yields the uncomfortable discovery that the population participates in its own self-constitution as subjects of the normalizing gaze of the Superpanopticon. We see databases not as an invasion of privacy, as a threat to a centred individual, but as the multiplication of the individual, the constitution of an additional self, one that may be acted upon to the detriment of the 'real' self without that 'real' self ever being aware of what is happening." (pp 97-8)

But the problem with invoking Foucault is that it appears to take agency away from the "parcel of rogues" - Zuckerberg, Bosworth, Nadella and the rest - who are the apparent villains of Zuboff's book. As I've pointed out elsewhere, the panopticon holds both the watcher and watched alike in its disciplinary power.

In his long and detailed review, Evgeny Morozov thinks the book owes more to Alfred Chandler, advocate of managerial capitalism, than to Foucault. (Even though Zuboff seems no longer to believe in the possibility of return to traditional managerial capitalism, and the book ends by taking sides with George Orwell in his strong critique of James Burnham, an earlier advocate of managerial capitalism.)


Meanwhile, there is another French thinker who may be haunting Zuboff's book, thanks to her adoption of the term Big Other, usually associated with Jacques Lacan. Jörg Metelmann suggests that Zuboff's use of the term "Big Other" corresponds (to a great extent, he says) to Lacan and Slavoj Žižek's psychoanalysis, but I'm not convinced. I suspect she may have selected the term "Big Other" (associated with Disciplinary Power) not as a conscious reference to Lacanian theory but because it rhymed with the more familiar "Big Brother" (associated, at least in Orwell's novel, with Sovereign Power).

Talking of "otherness", Peter Benson explains how the Amazon Alexa starts to be perceived, not as a mere object but as an Other.
"(We) know perfectly well that she is an electronic device without consciousness, intentions, or needs of her own. But behaving towards Alexa as a person becomes inevitable, because she is programmed to respond as a person might, and our brains have evolved to categorize such a being as an Other, so we respond to her as a person. We can resist this categorization, but, as with an optical illusion, our perception remains unchanged even after it has been explained. The stick in water still looks bent, even though we know it isn’t. Alexa’s personhood is exactly such a psychological illusion."
But much as we may desire to possess this mysterious black tube, regarding Alexa as an equal partner in dialogue, almost a mirror of ourselves, the reality is that this black tube is just one of many endpoints in an Internet of Things consisting of millions of similar black tubes and other devices, part of an all-pervasive Big Other. Zuboff sees the Big Other as an apparatus, an instrument of power, "the sensate computational, connected puppet that renders, monitors, computes and modifies human behavior" (p 376).

The Big Other possesses what Zuboff calls radical indifference - it monitors and controls human behaviour while remaining steadfastly indifferent to the meaning of that experience (pp 376-7). She quotes an internal Facebook memo by Andrew "Boz" Bosworth advocating moral neutrality (pp 505-6). (For what it's worth, radical indifference is also celebrated by Baudrillard.)

She also refers to this as observation without witness. This can be linked to Henry Giroux's notion of disimagination, the internalization of surveillance.
"I argue that the politics of disimagination refers to images, and I would argue institutions, discourses, and other modes of representation, that undermine the capacity of individuals to bear witness to a different and critical sense of remembering, agency, ethics and collective resistance. The 'disimagination machine' is both a set of cultural apparatuses extending from schools and mainstream media to the new sites of screen culture, and a public pedagogy that functions primarily to undermine the ability of individuals to think critically, imagine the unimaginable, and engage in thoughtful and critical dialogue: put simply, to become critically informed citizens of the world."
(Just over a year ago, I managed to catch a rare performance of A Machine They're Secretly Building, which explores some of these ideas in a really interesting way. Strongly recommended. Check the proto_type website for UK tour dates.)

Zuboff's book concentrates on the corporate side of surveillance, although she does mention the common interest (elective affinity p 115) between the surveillance capitalists and the public security forces around the war on terrorism. She also mentions the increased ability of political actors to use the corporate instruments for political ends. So a more comprehensive genealogy of surveillance would have to trace the shifting power relations between corporate power, government power, media power and algorithmic power.

A good example of this kind of exploration took place at the PowerSwitch conference in March 2017, where I heard Ariel Ezrachi (author of a recent book on the Algorithm-Driven Economy) talking about "the end of competition as we know it" (see links below to video and liveblog).

But obviously there is much more on this topic than can be covered in one book. Although some reviewers (@bhaggart as well as @evgenymorozov) have noted a lack of intellectual depth and rigour, Shoshana Zuboff has nonetheless made a valuable contribution - both in terms of the weight of evidence she has assembled and also in terms of bringing these issues to a wider audience.




Peter Benson, The Concept of the Other from Kant to Lacan (Philosophy Now 127, August/September 2018)

Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016) - more links via publisher's page 

Henry A. Giroux, The Politics of Disimagination and the Pathologies of Power (Truth Out, 27 February 2013)

Blayne Haggart, Evaluating scholarship, or why I won’t be teaching Shoshana Zuboff’s The Age of Surveillance Capitalism (15 February 2019)

Jörg Metelmann, Screening Surveillance Capitalism, in Daniel Cuonz, Scott Loren, Jörg Metelmann (eds) Screening Economies: Money Matters and the Ethics of Representation (transcript Verlag, 2018)

Evgeny Morozov, Capitalism’s New Clothes (The Baffler, 4 February 2019)

Mark Poster, The Mode of Information (Polity Press, 1990). See also notes in Donko Jeliazkov and Roger Blumberg, Virtualities (PDF, undated)

Shoshana Zuboff, In The Age of the Smart Machine (1988)

Shoshana Zuboff, Wall Street's Economic Crimes Against Humanity (Bloomberg, 20 March 2009)

Shoshana Zuboff, A Digital Declaration (Frankfurter Allgemeine Zeitung, 15 September 2014)

Shoshana Zuboff, The Age of Surveillance Capitalism (UK Edition: Profile Books, 2019)

Wikipedia: Panopticism

Stanford Encyclopedia of Philosophy: Jacques Lacan, Michel Foucault

CRASSH PowerSwitch Conference (Cambridge, 31 March 2017) Panel 4: Algorithmic Power (via YouTube). See also liveblog by Laura James Power Switch - Conference Report.


Related posts: Power Switch (March 2017), The Price of Everything (May 2017), Witnessing Machines Built in Secret (November 2017), Pax Technica (November 2017), Big Data and Organizational Intelligence (November 2018), Insurance and the Veil of Ignorance (February 2019)

Sunday, November 18, 2018

Ethics in Technology - FinTech

Last Thursday, @ThinkRiseLDN (Rise London, a FinTech hub) hosted a discussion on Ethics in Technology (15 November 2018).

Since many of the technologies under discussion are designed to support the financial services industry, the core ethical debate is closely bound up with the business ethics of the finance sector and is not solely a matter of technology ethics. But like most other sectors, the finance sector is being disrupted by the opportunities and challenges posed by technological innovation, and this places a professional and moral responsibility on technologists to engage with a range of ethical issues.

(Clearly there are many ethical issues in the financial services industry besides technology. For example, my friends in the @LongFinance initiative are tackling the question of sustainability.)

The financial services industry has traditionally been highly regulated, although some FinTech innovations may be less well regulated for now. So people working in this sector may expect regulation - specifically principles-based regulation - to play a leading role in ethical governance. (Note: the UK Financial Services Authority set out its principles-based regulation strategy over ten years ago.)

Whether ethical questions can be reduced to a set of principles or rules is a moot point. In medical ethics, principles are generally held to be useful but not sufficient for resolving difficult ethical problems. (See McCormick for a good summary. See also my post on Practical Ethics.)

Nevertheless, there are undoubtedly some useful principles for technology ethics - for example, the principle that you can never foresee all the consequences of your actions, so you should avoid making irreversible technological decisions. In science fiction, this issue can be illustrated by a robot that goes rogue and cannot be switched off. @moniquebachner made the point that with a technology like Blockchain, you were permanently stuck, for good or ill, with your original design choices.
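As a rough illustration of why (a toy hash chain of my own devising, not any particular blockchain implementation), each record commits to the hash of everything that came before it, so quietly revising an early choice invalidates everything that follows.

import hashlib, json

def block_hash(record, prev_hash):
    # Hash a record together with the hash of the previous block
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64
    for record in records:
        h = block_hash(record, prev)
        chain.append({"record": record, "hash": h})
        prev = h
    return chain

def verify(chain):
    prev = "0" * 64
    for i, block in enumerate(chain):
        recomputed = block_hash(block["record"], prev)
        print(i, "valid" if recomputed == block["hash"] else "INVALID")
        prev = recomputed

chain = build_chain(["design choice A", "transaction 1", "transaction 2"])
verify(chain)                           # all three blocks check out

chain[0]["record"] = "design choice B"  # try to revise the original choice
verify(chain)                           # the tampered block and everything after it now fail

In a real distributed ledger the cost of unwinding history is higher still, because every participant holds a copy of the chain.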

Several of the large tech companies have declared principles for data and intelligence. (My summary here.) But declaring principles is the easy bit; these companies taking them seriously (or us trusting them to take them seriously) may be harder.

One of the challenges discussed by the panel was how to negotiate the asymmetry of power. If your boss or your client wants to do something that you are uncomfortable with, you can't just assert some ethical principles and expect her to change her mind. So rather than walk away from an interesting technical challenge, you give yourself an additional organizational challenge - how to influence the project in the right way, without sacrificing your own position.

Obviously that's an ethical dilemma in its own right. Should you compromise your principles in the hope of retaining some influence over the outcome, or could you persuade yourself that the project isn't so bad after all? There is an interesting play-off between individual responsibility and collective responsibility, which we are also seeing in politics (Brexit passim).

Sheryl Sandberg appears to offer a high-profile example of this ethical dilemma. She had been praised by feminists for being "the one reforming corporate boy’s club culture from the inside ... the civilizing force barely keeping the organization from tipping into the abyss of greed and toxic masculinity." Jessa Crispin now disagrees with this view: "It seems clear what Sandberg truly is instead: a team player. And her team is not the working women of the world. It is the corporate culture that has groomed, rewarded, and protected her throughout her career." "This is the end of corporate feminism", comments @B_Ehrenreich.

And talking of Facebook ...

The title of Cathy O'Neil's book Weapons of Math Destruction invites a comparison between the powerful technological instruments now in the hands of big business, and the arsenal of nuclear and chemical weapons that have been a major concern of international relations since the Second World War. During the so-called Cold War, these weapons were largely controlled by the two major superpowers, and it was these superpowers that dominated the debate. As these weapons technologies have proliferated however, attention has shifted to the possible deployment of these weapons by smaller countries, and it seems that the world has become much more uncertain and dangerous.

In the domain of data ethics, it is the data superpowers (Facebook, Google) that command the most attention. But while there are undoubtedly major concerns about the way these companies use their powers, we may at least hope that a combination of forces may help to moderate the worst excesses. Besides regulatory action, these forces might include public opinion, pressure from their own employees, and risk aversion among the large advertisers that provide the bulk of their income.

And in FinTech as with Data Protection, it will always be easier for regulators to deal with a small number of large players than with a very large number of small players. The large players will of course try to lobby for regulations that suit them, and may shift some operations into less strongly regulated jurisdictions, but in the end they will be forced to comply, more or less. Except that the ethically dubious stuff will always turn out to be led by a small company you've never heard of, and the large players will deny that they knew anything about it.

As I pointed out in my previous post on The Future of Political Campaigning, the regulators only have limited tools at their disposal, which limits and skews their ability to deal with the ethical ecosystem as a whole. If I had a hammer ...




Financial Services Authority, Principles-Based Regulation - Focusing on the Outcomes that Matter (FSA, April 2007)

Jessa Crispin, Feminists gave Sheryl Sandberg a free pass. Now they must call her out (Guardian, 17 November 2018)

Ian Harris, Commercial Ethics: Process or Outcome (Z/Yen, 2008)

Thomas R. McCormick, Principles of Bioethics (University of Washington, 2013)

Chris Yapp, Where does the buck stop now? (Long Finance, 28 October 2018)


Related posts: Practical Ethics (June 2018), Data and Intelligence Principles from Major Players (June 2018), The Future of Political Campaigning (November 2018)

Saturday, November 17, 2018

The Future of Political Campaigning

#democracydisrupted Last Tuesday, @Demos organized a discussion on The Future of Political Campaigning (13 November 2018). The panelists included the Information Commissioner (@ElizabethDenham) and the CEO of the Electoral Commission (@ClaireERBassett).

The presenting problem is a set of social and technological changes that disrupt the democratic process and some of the established mechanisms and assumptions that are supposed to protect it. Recent elections (including the Brexit referendum) have featured new methods of campaigning and new modes of propaganda. Voters are presented with a wealth of misinformation and disinformation on the Internet, while campaigners have new tools for targeting and influencing voters.

The regulators have some (limited) tools for dealing with these changes. The ICO can deal with organizations that misuse personal data, while the Electoral Commission can deal with campaigns that are improperly funded. But while the ICO in particular is demonstrating toughness and ingenuity in using the available regulatory instruments to maximum effect, these instruments are only indirectly linked to the problem of political misinformation. Bad actors in future will surely find new ways to achieve unethical political ends, out of the reach of these regulatory instruments.

@Jphsmith compared selling opposition to the "Chequers" Brexit deal with selling waterproof trousers. But if the trousers turn out not to be waterproof, there is legal recourse for the purchaser. Whereas there appears to be no direct accountability for political misinformation and disinformation. The ICO can deal with organizations that misuse personal data: that’s the main tool they’ve been provided with. What tool do they have for dealing with propaganda and false promises? Where is the small claims court I can go to when I discover my shiny new Brexit doesn’t hold water? (Twitter thread)

As I commented in my question from the floor, for the woman with a hammer, everything looks like a nail. Clearly misuse of data and illegitimate sources of campaign finance are problems, but they are not necessarily the main problem. And if the government and significant portions of the mass media (including the BBC) don't give these problems much airtime, downplay their impact on the democratic process, and (disgracefully) disparage and discredit those journalists who investigate them, notably @carolecadwalla, there may be insufficient public recognition of the need for reform, let alone enhanced and updated regulation. If antidemocratic forces are capable of influencing elections, they are surely also capable of persuading the man in the street that there is nothing to worry about.



Jaime Bartlett, Josh Smith, Rose Acton, The Future of Political Campaigning (Demos, 11 July 2018)

Carole Cadwalladr, Why Britain Needs Its Own Mueller (NYR Daily, 16 November 2018)

Nick Raikes, Online security and privacy: What an email address reveals (BBC News, 13 November 2018)

Josh Smith, A nation of persuadables: politics and campaigning in the age of data (Demos, 13 November 2018)

Jim Waterson, BBC women complain after Andrew Neil tweet about Observer journalist (Guardian, 16 November 2018)


Related posts: Security is downstream from strategy (March 2018), Ethical Communication in a Digital Age (November 2018)

Wednesday, November 7, 2018

YouTube Growth Hacking

Fascinating talk by @sophiehbishop at @DigiCultureKCL about YouTube growth hacks, and the "algorithm bros" that promote them (and themselves).



The subject of her talk was a small cadre of young men who have managed to persuade millions of followers (as well as some large corporations) that they can reveal the "secrets" of the YouTube algorithm.


I want to comment on two aspects of her work in particular. Firstly there is the question, addressed in her previous paper, of "anxiety, panic and self-optimization". When people create content on YouTube or similar platforms, they have an interest in getting their content viewed as widely as possible - indeed wide viewership ("going viral") is generally regarded as the measure of success. But these platforms are capricious, in the sense that they (deliberately) don't make it easy to manipulate this measure, and this generates a sense of precarity - not only among individual content providers but also among political and commercial organizations.

So when someone offers to tell you the secrets of success on YouTube, someone who is himself already successful on YouTube, it would be hard to resist the desire to learn these secrets. Or at least to listen to what they have to say. And risk-averse corporations may be willing to bung some consultancy money in their direction.

YouTube's own engineers describe the algorithm as "one of the largest scale and most sophisticated industrial recommendation systems in existence". Their models learn approximately one billion parameters and are trained on hundreds of billions of examples. The idea that a couple of amateurs without significant experience or funding can "reverse engineer" this algorithm stretches credibility, and Bishop points out several serious methodological flaws with their approach, while speculating that perhaps what really matters to the growth hacking community is not what the YouTube algorithm actually does but what the user thinks it does. She notes that the results of this "reverse engineering" experiment have been widely disseminated, and presented at an event sponsored by YouTube itself.
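For a sense of what would actually have to be reverse engineered: Covington et al describe a two-stage funnel, in which a candidate generation model narrows millions of videos down to a few hundred, and a separate ranking model orders that shortlist. The toy sketch below (my own, vastly simplified, with random numbers standing in for learned embeddings and features) shows only the shape of that funnel, not the models themselves.

import numpy as np

rng = np.random.default_rng(0)

# Toy corpus: 10,000 videos in a 32-dimensional embedding space.
# (Random stand-ins for learned embeddings; the production system
# learns roughly a billion parameters, per Covington et al.)
video_embeddings = rng.normal(size=(10_000, 32))
user_embedding = rng.normal(size=32)

# Stage 1: candidate generation - shortlist a few hundred videos
# by similarity to the user embedding.
similarity = video_embeddings @ user_embedding
candidates = np.argsort(similarity)[-200:]

# Stage 2: ranking - re-score only the shortlist with a richer
# (here, a stand-in linear) scoring model over extra features.
video_features = rng.normal(size=(10_000, 8))
ranking_weights = rng.normal(size=8)
scores = video_features[candidates] @ ranking_weights
top10 = candidates[np.argsort(scores)[::-1][:10]]

print("Recommended video ids:", top10)

Even at this toy scale, what an outside observer sees is the joint product of two interacting models and whatever features feed them, none of which is visible from the uploader's side - which is partly why claims to have reverse engineered the real thing deserve scepticism.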

What is the effect of disseminating this kind of material? I don't know if it helps to make YouTubers less anxious, or conversely makes them more anxious than they were already. No doubt YouTube is happy about anything that encourages people to devote even more time to creating sticky content for YouTube. A dashboard (in this case, YouTube's Creator Studio) provides a framing device, focusing people's attention on certain metrics (financial gains and social capital), and fostering the illusion that the metrics on the dashboard are really the only ones that matter.


The other aspect of Bishop's work I wanted to discuss is the apparent gender polarization on YouTube - not only polarization of content and who gets to see which content, but also a significantly different operating style for male and female content providers. The traditional feminist view (McRobbie, Meehan) is that this polarization is a response to the commercial demands of the advertisers. But other dimensions of polarization have become apparent more recently, including political extremism, and Zeynep Tufekci argues that YouTube may be one of the most powerful radicalizing instruments of the 21st century. This hints at a more deeply rooted schismogenesis.

Meanwhile, how much of this was intended or foreseen by YouTube is almost beside the point. Individuals and organizations may be held responsible for the consequences of their actions, including unforeseen consequences.



Sophie Bishop, Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm (Convergence, 24(1), 2018 pp 69–84)

Paul Covington, Jay Adams, Emre Sargin, Deep Neural Networks for YouTube Recommendations (Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pages 191-198)

Paul Lewis, 'Fiction is outperforming reality': how YouTube's algorithm distorts truth (Guardian, 2 Feb 2018)

Zeynep Tufekci, Opinion: YouTube, the Great Radicalizer (New York Times, 10 March 2018)

Wikipedia: Schismogenesis

Related posts: Ethical Communication in a Digital Age (November 2018), Polarization (November 2018)