
Thursday, December 10, 2020

The Social Dilemma

Just watched the documentary The Social Dilemma on Netflix, which takes a critical look at some of the tech giants that dominate our world today (although not Netflix itself, for some reason), largely from the perspective of some former employees who helped them achieve this dominance and are now having second thoughts. One of the most prominent members of this group is Tristan Harris, formerly with Google, now the president of an organization called the Center for Humane Technology. He and others have been airing these concerns for several years already - see for example Noah Kulwin's 2018 article (link below).

The documentary opens by asking the contributors to state the problem, and shows them all initially hesitating. By the end of the documentary, however, they are mostly making large statements about the morality of encouraging addictive behaviour, the propagation of truth and lies, the threat to democracy, the ease with which these platforms can be used by authoritarian rulers and other bad actors, and the need for regulation.

Quantity becomes quality. To some extent, the phenomena and affordances of social media can be regarded as merely scaled-up versions of previous social tools, including advertising and television: the maxim If you aren't paying, you are the product derives from a 1973 video about the power of commercial television. However, several of the contributors to the documentary observed that the power of the modern platforms and the wealth of the businesses that control these platforms is unprecedented, while noting that social media is far less regulated than other mass communication enterprises, including television and telecommunications.

Contributors doubted whether we could expect these enterprises, or the technology sector generally, to fix these problems on their own - especially given the focus on profit, growth and shareholder value that drives all enterprises within the capitalist system. Is it fair to ask them to reform capitalism? (Many years ago, the architect J.P. Eberhard noted a tendency to escalate even small problems to the point where the entire capitalist system comes into question, and argued that We Ought To Know The Difference.) So is regulation the answer?

Surprisingly enough, Facebook doesn't think so. In its response to the documentary, it complains:

The film’s creators do not include insights from those currently working at the companies or any experts that take a different view to the narrative put forward by the film.

As Pranav Malhotra notes, it's not hard to find experts who would offer a different perspective, in many cases offering far more fundamental and far-reaching criticisms of Facebook and its peers. Hey Facebook, careful what you wish for!

Last year, Tristan Harris appeared to call for a new interdisciplinary field of research, focused on exploring the interaction between technology and society. Several people including @ruchowdh pointed out that such a field was already well-established. (In response he said he already knew this, and apologized for his poor choice of words, blaming the Twitter character limit.)

So there is already an abundance of deep and interesting work that can help challenge the simplistic thinking of Silicon Valley in a number of areas including

  • Truth and Objectivity
  • Technological Determinism
  • Custodianship of Technology (for example Latour's idea that we should love our monsters - see also Adam Briggle's article)

These probably deserve a separate post each, if I can find time to write them. 



The Social Dilemma (dir Jeff Orlowski, Netflix 2020)

Wikipedia: The Social Dilemma, Television Delivers People

Stanford Encyclopedia of Philosophy: Ethics of Artificial Intelligence and Robotics, Phenomenological Approaches to Ethics and Information Technology, Philosophy of Technology


Adam Briggle, What can be done about our modern-day Frankensteins? (The Conversation, 26 December 2017)

Robert L. Carneiro, The transition from quantity to quality: A neglected causal mechanism in accounting for social evolution  (PNAS 97:23, 7 November 2000)

Rumman Chowdhury, To Really 'Disrupt,' Tech Needs to Listen to Actual Researchers (Wired, 26 June 2019)

Facebook, What the Social Dilemma Gets Wrong (2020)

Tristan Harris, How Technology Is Hijacking Your Mind - from a Magician and Google Design Ethicist (Thrive Global, 18 May 2016)

Noah Kulwin, The Internet Apologizes (New York Magazine, 16 April 2018)

John Lanchester, You Are The Product (London Review of Books, Vol. 39 No. 16, 17 August 2017)

Bruno Latour, Love Your Monsters: Why we must care for our technologies as we do our children (Breakthrough, 14 February 2012) 

Pranav Malhotra, The Social Dilemma Fails to Tackle the Real Issues in Tech (Slate, 18 September 2020)

Richard Serra and Carlota Fay Schoolman, Television Delivers People (1973) 

Zadie Smith, Generation Why? (New York Review of Books, 25 November 2010)

Siva Vaidhyanathan, Making Sense of the Facebook Menace (The New Republic, 11 January 2021)


Related posts: The Perils of Facebook (February 2009), We Ought to Know the Difference (April 2013), Rhyme or Reason: The Logic of Netflix (June 2017), On the Nature of Platforms (July 2017), Ethical Communication in a Digital Age (November 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Tuesday, October 8, 2019

Ethics of Transparency and Concealment

Last week I was in Berlin at the invitation of the IEEE to help develop standards for responsible technology (P7000). One of the working groups (P7001) is looking at transparency, especially in relation to autonomous and semi-autonomous systems. In this blogpost, I want to discuss some more general ideas about transparency.

In 1986 I wrote an article for Human Systems Management promoting the importance of visibility. There were two reasons I preferred this word. Firstly, "transparency" is a contronym - it has two opposite senses. When something is transparent, this either means you don't see it, you just see through it, or it means you can really see it. And secondly, transparency appears to be merely a property of an object, whereas visibility is about the relationship between the object and the viewer - visibility to whom?

(P7001 addresses this by defining transparency requirements in relation to different stakeholder groups.)

Although I wasn't aware of this when I wrote the original article, my concept of visibility shares something with Heidegger's concept of Unconcealment (Unverborgenheit). Heidegger's work seems a good starting point for thinking about the ethics of transparency.

Technology generally makes certain things available while concealing other things. (This is related to what Albert Borgmann, a student of Heidegger, calls the Device Paradigm.)
In our time, things are not even regarded as objects, because their only important quality has become their readiness for use. Today all things are being swept together into a vast network in which their only meaning lies in their being available to serve some end that will itself also be directed towards getting everything under control. Lovitt
Goods that are available to us enrich our lives and, if they are technologically available, they do so without imposing burdens on us. Something is available in this sense if it has been rendered instantaneous, ubiquitous, safe, and easy. Borgmann
I referred above to the two opposite meanings of the word "transparent". For Heidegger and his followers, the word "transparent" often refers to tools that can be used without conscious thought, or what Heidegger called ready-to-hand (zuhanden). In technology ethics, on the other hand, the word "transparent" generally refers to something (product, process or organization) being open to scrutiny, and I shall stick to this meaning for the remainder of this blogpost.

We are surrounded by technology, we rarely have much idea how most of it works, and usually cannot be bothered to find out. Thus when technological devices are designed to conceal their inner workings, this is often exactly what the users want. How then can we object to concealment?

The ethical problems of concealment depend on what is concealed by whom and from whom, why it is concealed, and whether, when and how it can be unconcealed.

Let's start with the why. Sometimes people deliberately hide things from us, for dishonest or devious reasons. This category includes so-called defeat devices that are intended to cheat regulations. Less clear-cut is when people hide things to avoid the trouble of explaining or justifying them.

(If something is not visible, then we may not be aware that there is something that needs to be explained. So even if we want to maintain a distinction between transparency and explainability, the two concepts are interdependent.)

People may also hide things for aesthetic reasons. The Italian civil engineer Riccardo Morandi designed bridges with the steel cables concealed, which made them difficult to inspect and maintain. The Morandi Bridge in Genoa collapsed in August 2018, killing 43 people.

And sometimes things are just hidden, not as a deliberate act but because nobody has thought it necessary to make them visible. (This is one of the reasons why a standard could be useful.)

We also need to consider the who. For whose benefit are things being hidden? In particular, who is pulling the strings, where is the funding coming from, and where are the profits going - follow the money. In technology ethics, the key question is Whom Does The Technology Serve?

In many contexts, therefore, the main focus of unconcealment is not understanding exactly how something works but being aware of the things that people might be trying to hide from you, for whatever reason. This might include being selective about the available evidence, or presenting the most common or convenient examples and ignoring the outliers. It might also include failing to declare potential conflicts of interest.

For example, the #AllTrials campaign for clinical trial transparency demands that drug companies declare all clinical trials in advance, rather than waiting until the trials are complete and then deciding which ones to publish.
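
As a toy illustration of why selective publication matters, here is a small simulation (my own sketch, not the AllTrials methodology; every parameter is an invented assumption). A treatment with no true effect is tested in many trials, but only the trials that happen to look positive get published.

    import random

    # Illustrative simulation of publication bias. All parameters are
    # invented for this sketch, not taken from any real clinical data.
    random.seed(1)

    def observed_effect(n_patients=100, true_effect=0.0):
        # Observed effect = true effect + sampling noise that shrinks
        # with the square root of the sample size.
        return true_effect + random.gauss(0, 1) / (n_patients ** 0.5)

    trials = [observed_effect() for _ in range(1000)]
    published = [t for t in trials if t > 0.1]  # only "impressive" results written up

    print(f"mean effect, all trials:     {sum(trials) / len(trials):+.3f}")
    print(f"mean effect, published only: {sum(published) / len(published):+.3f}")
    # The published record shows a clear benefit although the true effect
    # is zero - which is why registering trials in advance matters.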

Now let's look at the possibility of unconcealment. Concealment doesn't always mean making inconvenient facts impossible to discover, but may mean making them so obscure and inaccessible that most people don't bother, or creating distractions that divert people's attention elsewhere. So transparency doesn't just entail possibility, it requires a reasonable level of accessibility.

Sometimes too much information can also serve to conceal the truth. Onora O'Neill talks about the "cult of transparency" that fails to produce real trust.
Transparency can produce a flood of unsorted information and misinformation that provides little but confusion unless it can be sorted and assessed. It may add to uncertainty rather than to trust. Transparency can even encourage people to be less honest, so increasing deception and reducing reasons for trust. O'Neill
Sometimes this can be inadvertent. However, as Chesterton pointed out in one of his stories, this can be a useful tactic for those who have something to hide.
Where would a wise man hide a leaf? In the forest. If there were no forest, he would make a forest. And if he wished to hide a dead leaf, he would make a dead forest. And if a man had to hide a dead body, he would make a field of dead bodies to hide it in. Chesterton
Stohl et al call this strategic opacity (via Ananny and Crawford).

Another philosopher who talks about the "cult of transparency" is Shannon Vallor. However, what she calls the "Technological Transparency Paradox" seems to be merely a form of asymmetry: we are open and transparent to the social media giants, but they are not open and transparent to us.

In the absence of transparency, we are forced to trust people and organizations - not only for their honesty but also their competence and diligence. Under certain conditions, we may trust independent regulators, certification agencies and other institutions to verify these attributes on our behalf, but this in turn depends on our confidence in their ability to detect malfeasance and enforce compliance, as well as believing them to be truly independent. (So how transparent are these institutions themselves?) And trusting products and services typically means trusting the organizations and supply chains that produce them, in addition to any inspection, certification and official monitoring that these products and services have undergone.

Instead of seeing transparency as a simple binary (either something is visible or it isn't), it makes sense to discuss degrees of transparency, depending on stakeholder and context. For example, regulators, certification bodies and accident investigators may need higher levels of transparency than regular users. And regular users may be allowed to choose whether to make things visible or invisible. (Thomas Wendt discusses how Heideggerian thinking affects UX design.)
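
To make the idea of stakeholder-relative transparency concrete, here is a minimal sketch. The stakeholder groups and disclosure levels are my own illustrative assumptions, not the actual P7001 scheme.

    # Illustrative only: a hypothetical mapping from stakeholder groups to
    # levels of disclosure. Groups and levels are assumptions, not P7001.
    ARTEFACTS = [
        "plain-language purpose statement",       # level 1
        "configuration guide",                    # level 2
        "design and test dossier",                # level 3
        "full decision logs and internal state",  # level 4
    ]

    TRANSPARENCY_LEVEL = {
        "regular user": 1,
        "expert user": 2,
        "certification body": 3,
        "accident investigator": 4,
    }

    def required_disclosure(stakeholder):
        # Each stakeholder group is owed everything up to its level.
        return ARTEFACTS[:TRANSPARENCY_LEVEL[stakeholder]]

    print(required_disclosure("regular user"))
    print(required_disclosure("accident investigator"))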

Finally, it's worth noting that people don't only conceal things from others, they also conceal things from themselves, which leads us to the notion of self-transparency. In the personal world this can be seen as a form of authenticity; in the corporate world, it translates into ideas of responsibility, due diligence, and a constant effort to overcome wilful blindness.

If transparency and openness is promoted as a virtue, then people and organizations can make their virtue apparent by being transparent and open, and this may make us more inclined to trust them. We should perhaps be wary of organizations that demand or assume that we trust them, without providing good evidence of their trustworthiness. (The original confidence trickster asked strangers to trust him with their valuables.) The relationship between trust and trustworthiness is complicated. 



UK Department of Health and Social Care, Response to the House of Commons Science and Technology Committee report on research integrity: clinical trials transparency (UK Government Policy Paper, 22 February 2019) via AllTrials

Mike Ananny and Kate Crawford, Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability (New Media & Society, 2016) pp 1–17

Albert Borgmann, Technology and the Character of Contemporary Life (University of Chicago Press, 1984)

G.K. Chesterton, The Sign of the Broken Sword (The Saturday Evening Post, 7 January 1911)

Martin Heidegger, The Question Concerning Technology (Harper 1977) translated and with an introduction by William Lovitt

Onora O'Neill, Trust is the first casualty of the cult of transparency (Telegraph, 24 April 2002)

Cynthia Stohl, Michael Stohl and P.M. Leonardi, Managing opacity: Information visibility and the paradox of transparency in the digital age (International Journal of Communication 10, 2016) pp 123–137

Richard Veryard, The Role of Visibility in Systems (Human Systems Management 6, 1986) pp 167-175 (this version includes some further notes dated 1999)

Thomas Wendt, Designing for Transparency and the Myth of the Modern Interface (UX Magazine, 26 August 2013)

Stanford Encyclopedia of Philosophy: Heidegger, Technological Transparency Paradox

Wikipedia: Confidence Trick, Follow The Money, Ponte Morandi, Regulatory Capture, Willful Blindness


Related posts: Defeating the Device Paradigm (October 2015), Transparency of Algorithms (October 2016), Pax Technica (November 2017), Responsible Transparency (April 2019), Whom Does The Technology Serve (May 2019)

Monday, September 16, 2019

The Ethics of Diversion - Tobacco Example

What are the ethics of diverting people from smoking to vaping?

On the one hand, we have the following argument.
  • E-cigarettes ("vaping") offer a plausible substitute for smoking cigarettes.
  • Smoking is dangerous, and vaping is probably much less dangerous.
  • Many smokers find it difficult to give up, even if they are motivated to do so. So vaping provides a plausible exit route.
  • Observed reductions in the level of smoking can be partially attributed to the availability of alternatives such as vaping. (This is known as the diversion hypothesis.)
  • It is therefore justifiable to encourage smokers to switch from cigarettes to e-cigarettes.

Critics of this argument make the following points.
  • While the dangers of smoking are now well-known, some evidence is now emerging to suggest that vaping may also be dangerous. In the USA, a handful of people have died and hundreds have been hospitalized.
  • While some smokers may be diverted to vaping, there are also concerns that vaping may provide an entry path to smoking, especially for young people. This is known as the gateway or catalyst hypothesis.

Some defenders of vaping blame the potential health risks and the gateway effect not on vaping itself but on the wide range of flavours that are available. While these may increase the attraction of vaping to children, the flavour ingredients are chemically unstable and may produce toxic compounds. For this reason, President Trump has recently proposed a ban on flavoured e-cigarettes.

Juul, which dominates the e-cigarette market in the US, is currently being investigated by the FDA and federal prosecutors for its marketing, and the inappropriately named Mr Burns has just stepped down as CEO.

And elsewhere in the world, significant differences in regulation are emerging between countries. While some countries are looking to ban e-cigarettes altogether, the UK position (as presented by Public Health England and the MHRA) is to encourage e-cigarettes as a safe alternative to smoking. At some point in the future presumably, UK data can be compared with data from other countries to provide evidence for or against the UK position. Professor Simon Capewell of Liverpool University (quoted in the Observer) calls this a "bizarre national experiment".

While we await convincing data about outcomes, ethical reasoning may appeal to several different principles.

Firstly, the minimum interference principle. In this case, this means not restricting people's informed choice without good reason.

Secondly, the utilitarian principle. The benefit of helping a large number of people to reduce a known harm outweighs the possibility of causing a lesser but unknown harm to a smaller number of people.
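
The shape of this argument is simple expected-value arithmetic, as in the following sketch. Every figure is an invented assumption, purely to show the structure of the trade-off.

    # Crude expected-harm comparison for the utilitarian argument.
    # Every number below is a made-up assumption, for illustration only.
    switchers = 1_000_000       # assumed smokers diverted to vaping
    harm_avoided_each = 0.5     # assumed reduction in lifetime harm (arbitrary units)

    gateway_entrants = 50_000   # assumed young people drawn in via vaping
    harm_incurred_each = 1.0    # assumed lifetime harm of taking up smoking

    benefit = switchers * harm_avoided_each
    cost = gateway_entrants * harm_incurred_each
    print(f"expected benefit: {benefit:,.0f}  expected cost: {cost:,.0f}")
    # On these invented numbers the benefit dominates - but the cautionary
    # and conflict-of-interest principles below are reasons to distrust
    # the inputs, not just the conclusion.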

Thirdly, the cautionary principle. Even if vaping appears to be safer than traditional smoking, Professor Capewell reminds us of other things that were assumed to be safe - until we discovered that they weren't safe at all.

And finally, the conflict of interest principle. Elliott Reichardt, a researcher at the University of Calgary and a campaigner against vaping, argues that any study, report or campaign funded by the tobacco industry should be regarded with some suspicion.



Meanwhile, the traditional tobacco industry is hedging its bets - investing in e-cigarettes but doing well when vaping falters.



US Food and Drug Administration, Warning Letter to Juul Labs (FDA, 9 September 2019) via BBC News

Allan M. Brandt, Inventing Conflicts of Interest: A History of Tobacco Industry Tactics (Am J Public Health 102(1) January 2012) 63–71

Tom Chivers, Stop Hating on Vaping (Unherd, 13 September 2019) via @IanDunt

Jamie Doward, After six deaths in the US and bans around the world – is vaping safe? (Observer, 15 September 2019)

David Heath, Contesting the Science of Smoking (Atlantic, 4 May 2016)

Angelica Lavito, Juul built an e-cigarette empire. Its popularity with teens threatens its future (CNBC 4 August 2018)

David T. Levy, Kenneth E. Warner, K. Michael Cummings et al., Examining the relationship of vaping to smoking initiation among US youth and young adults: a reality check (Tobacco Control, 20 November 2018)

Jennifer Maloney, Federal Prosecutors Conducting Criminal Probe of Juul (Wall Street Journal, 23 September 2019)

Elliott Reichardt and Juliet Guichon, Vaping is an urgent threat to public health (The Conversation, 13 March 2019)

Tuesday, April 23, 2019

Decentred Regulation and Responsible Technology

In 2001-2, Julia Black published some papers discussing the concept of Decentred Regulation, with particular relevance to the challenges of globalization. In this post, I shall summarize her position as I understand it, and apply it to the topic of responsible technology.

Black identifies a number of potential failures in regulation, which are commonly attributed to command and control (CAC) regulation - regulation by the state through the use of legal rules backed by (often criminal) sanctions.

  • instrument failure - the instruments used (laws backed by sanctions) are inappropriate and unsophisticated
  • information and knowledge failure - governments or other authorities have insufficient knowledge to be able to identify the causes of problems, to design solutions that are appropriate, and to identify non-compliance
  • implementation failure - implementation of the regulation is inadequate
  • motivation failure and capture theory - those being regulated are insufficiently inclined to comply, and those doing the regulating are insufficiently motivated to regulate in the public interest

For Black, decentred regulation represents an alternative to CAC regulation, based on five key challenges. These challenges echo the ideas of Michel Foucault around governmentality, which Isabell Lorey (2015, p23) defines as "the structural entanglement between the government of a state and the techniques of self-government in modern Western societies".

  • complexity - emphasising both causal complexity and the complexity of interactions between actors in society (or systems), which are imperfectly understood and change over time
  • fragmentation - of knowledge, and of power and control. This is not just a question of information asymmetry; no single actor has sufficient knowledge, or sufficient control of the instruments of regulation.
  • interdependencies - including the co-production of problems and solutions by multiple actors across multiple jurisdictions (and amplified by globalization)
  • ungovernability - Black explains this in terms of autopoiesis, the self-regulation, self-production and self-organisation of systems. As a consequence of these (non-linear) system properties, it may be difficult or impossible to control things directly
  • the rejection of a clear distinction between public and private - leading to rethinking the role of formal authority in governance and regulation

In response to these challenges, Black describes a form of regulation with the following characteristics

  • hybrid - combining governmental and non-governmental actors
  • multifaceted - using a number of different strategies simultaneously or sequentially
  • indirect - this appears to link to what (following Teubner) she calls reflexive regulation - for example setting the decision-making procedures within organizations in such a way that the goals of public policy are achieved

And she asks if it counts as regulation at all, if we strip away much of what people commonly associate with regulation, and if it lacks some key characteristics, such as intentionality or effectiveness. Does regulation have to be what she calls "cybernetic", which she defines in terms of three functions: standard-setting, information gathering and behaviour modification? (Other definitions of "cybernetic" are available, such as Stafford Beer's Viable Systems Model.)
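
For what it's worth, Black's three functions can be sketched as a simple control loop. The code is purely illustrative (the functions are hers, the toy example is mine):

    # A minimal sketch of Black's three "cybernetic" functions:
    # standard-setting, information gathering, behaviour modification.
    def regulate(standard, observe, modify, system_state, steps=5):
        for _ in range(steps):
            observed = observe(system_state)         # information gathering
            if observed > standard:                  # compare with the standard
                system_state = modify(system_state)  # behaviour modification
        return system_state

    # Toy usage: push some harmful quantity down to a standard of 10 units.
    final = regulate(standard=10,
                     observe=lambda s: s,
                     modify=lambda s: s - 5,
                     system_state=32)
    print(final)  # 7: the loop has pushed the system below the standard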

Meanwhile, how does any of this apply to responsible technology? Apart from the slogan, what I'm about to say would be true of any large technology company, but I'm going to talk about Google, for no other reason than its former use of the slogan "Don't Be Evil". (This is sometimes quoted as "Do No Evil", but for now I shall ignore the difference between being evil and doing evil.) What holds Google to this slogan is not primarily government regulation (mainly US and EU) but mostly an interconnected set of other forces, including investors, customers (much of its revenue coming from advertising), public opinion and its own workforce. Clearly these stakeholders don't all have the same view on what counts as Evil, or what would be an appropriate response to any specific ethical concern.

If we regard each of these stakeholder domains as a large-scale system, each displaying complex and sometimes apparently purposive behaviour, then the combination of all of them can be described as a system of systems. Mark Maier distinguished between three types of System of Systems (SoS), which he called Directed, Collaborative and Virtual; Philip Boxer identifies a fourth type, which he calls Acknowledged.

  • Directed - under the control of a single authority
  • Acknowledged - some aspects of regulation are delegated to semi-autonomous authorities, within a centrally planned regime 
  • Collaborative - under the control of multiple autonomous authorities, collaborating voluntarily to achieve an agreed purpose
  • Virtual - multiple authorities with no common purpose
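
As a minimal sketch, the four types can be read as answers to three questions: is there a central regime, is regulation delegated, and is there a common purpose? The decision rule below is my own simplification, not Maier's formal criteria.

    from enum import Enum

    # Illustrative transcription of Maier's three SoS types plus Boxer's
    # fourth, taken from the bullet definitions above.
    class SoSType(Enum):
        DIRECTED = "single controlling authority"
        ACKNOWLEDGED = "regulation delegated within a centrally planned regime"
        COLLABORATIVE = "autonomous authorities with an agreed purpose"
        VIRTUAL = "multiple authorities, no common purpose"

    def classify(central_regime, delegated, common_purpose):
        if central_regime:
            return SoSType.ACKNOWLEDGED if delegated else SoSType.DIRECTED
        return SoSType.COLLABORATIVE if common_purpose else SoSType.VIRTUAL

    # The reading argued for below: no central regime, but a broadly
    # shared direction of travel across jurisdictions.
    print(classify(central_regime=False, delegated=False, common_purpose=True))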

Black's notion of "hybrid" clearly moves from the Directed type to one of the other types of SoS. But which one? Where technology companies are required to interpret and enforce some rules, under the oversight of a government regulator, this might belong to the Acknowledged type. For example, social media platforms being required to enforce some rules about copyright and intellectual property, or content providers being required to limit access to those users who can prove they are over 18. (Small organizations sometimes complain that this kind of regime tends to favour larger organizations, which can more easily absorb the cost of building and implementing the necessary mechanisms.) 

However, one consequence of globalization is that there is no single regulatory authority. In Data Protection, for example, the tech giants are faced with different regulations in different jurisdictions, and can choose whether to adopt a single approach worldwide, or to apply the stricter rules only where necessary. (So for example, Microsoft has announced it will apply GDPR rules worldwide, while other technology companies have apparently migrated personal data of non-EU citizens from Ireland to the US in order to avoid the need to apply GDPR rules to these data subjects.)

But although the detailed rules on privacy and other ethical issues vary significantly between countries and jurisdictions, there is a reasonably broad acceptance of the principle that some privacy is probably a Good Thing. Similarly, although dozens of organizations have published rival sets of ethical principles for AI or robotics or whatever, there appears to be a fair amount of common purpose between them, indicating that all these organizations are travelling (or pretending to travel) in more or less the same direction. Therefore it seems reasonable to regard this as the Collaborative type.


Decentred regulation raises important questions of agency and purpose. And if it is to maintain relevance and effectiveness in a rapidly changing technological world, there needs to be some kind of emergent / collective intelligence conferring the ability to solve not only downstream problems (making judgements on particular cases) but also upstream problems (evolving governance principles and practices).





Julia Black, Decentring Regulation: Understanding the Role of Regulation and Self-Regulation in a ‘Post-Regulatory’ World (Current Legal Problems, Volume 54, Issue 1, 2001) pp 103–146

Julia Black, Decentred Regulation (LSE Centre for Analysis of Risk and Regulation, 2002)

Philip Boxer, Architectures that integrate differentiated behaviours (Asymmetric Leadership, August 2011)

Martin Innes, Bethan Davies and Morag McDermont, How Co-Production Regulates (Social and Legal Studies, 2008)

Mark W. Maier, Architecting Principles for Systems-of-Systems (Systems Engineering, Vol 1 No 4, 1998)

Isabell Lorey, State of Insecurity (Verso 2015)

Gunther Teubner, Substantive and Reflexive Elements in Modern Law (Law and Society Review, Vol. 17, 1983) pp 239-285

Wikipedia: Don't Be Evil


Related posts: How Many Ethical Principles (April 2019), Algorithms and Governmentality (July 2019)

Sunday, November 18, 2018

Ethics in Technology - FinTech

Last Thursday, @ThinkRiseLDN (Rise London, a FinTech hub) hosted a discussion on Ethics in Technology (15 November 2018).

Since many of the technologies under discussion are designed to support the financial services industry, the core ethical debate is closely tied to the business ethics of the finance sector and is not solely a matter of technology ethics. But like most other sectors, the finance sector is being disrupted by the opportunities and challenges of technological innovation, and this places a professional and moral responsibility on technologists to engage with a range of ethical issues.

(Clearly there are many ethical issues in the financial services industry besides technology. For example, my friends in the @LongFinance initiative are tackling the question of sustainability.)

The Financial Services industry has traditionally been highly regulated, although some FinTech innovations may be less well regulated for now. So people working in this sector may expect regulation - specifically principles-based regulation - to play a leading role in ethical governance. (Note: the UK Financial Services Authority has been pursuing a principles-based regulation strategy for over ten years.)

Whether ethical questions can be reduced to a set of principles or rules is a moot point. In medical ethics, principles are generally held to be useful but not sufficient for resolving difficult ethical problems. (See McCormick for a good summary. See also my post on Practical Ethics.)

Nevertheless, there are undoubtedly some useful principles for technology ethics. For example, the principle that you can never foresee all the consequences of your actions, so you should avoid making irreversible technological decisions. In science fiction, this issue can be illustrated by a robot that goes rogue and cannot be switched off. @moniquebachner made the point that with a technology like Blockchain, you were permanently stuck, for good or ill, with your original design choices.

Several of the large tech companies have declared principles for data and intelligence. (My summary here.) But declaring principles is the easy bit; these companies taking them seriously (or us trusting them to take them seriously) may be harder.

One of the challenges discussed by the panel was how to negotiate the asymmetry of power. If your boss or your client wants to do something that you are uncomfortable with, you can't just assert some ethical principles and expect her to change her mind. So rather than walk away from an interesting technical challenge, you give yourself an additional organizational challenge - how to influence the project in the right way, without sacrificing your own position.

Obviously that's an ethical dilemma in its own right. Should you compromise your principles in the hope of retaining some influence over the outcome, or could you persuade yourself that the project isn't so bad after all? There is an interesting play-off between individual responsibility and collective responsibility, which we are also seeing in politics (Brexit passim).

Sheryl Sandberg appears to offer a high-profile example of this ethical dilemma. She had been praised by feminists for being "the one reforming corporate boy’s club culture from the inside ... the civilizing force barely keeping the organization from tipping into the abyss of greed and toxic masculinity." Crispin now disagrees with this view. "It seems clear what Sandberg truly is instead: a team player. And her team is not the working women of the world. It is the corporate culture that has groomed, rewarded, and protected her throughout her career." "This is the end of corporate feminism", comments @B_Ehrenreich.

And talking of Facebook ...

The title of Cathy O'Neil's book Weapons of Math Destruction invites a comparison between the powerful technological instruments now in the hands of big business, and the arsenal of nuclear and chemical weapons that have been a major concern of international relations since the Second World War. During the so-called Cold War, these weapons were largely controlled by the two major superpowers, and it was these superpowers that dominated the debate. As these weapons technologies have proliferated however, attention has shifted to the possible deployment of these weapons by smaller countries, and it seems that the world has become much more uncertain and dangerous.

In the domain of data ethics, it is the data superpowers (Facebook, Google) that command the most attention. But while there are undoubtedly major concerns about the way these companies use their powers, we may at least hope that a combination of forces may help to moderate the worst excesses. Besides regulatory action, these forces might include public opinion, corporate risk aversion from the large advertisers that provide the bulk of the income, as well as pressure from their own employees.

And in FinTech as with Data Protection, it will always be easier for regulators to deal with a small number of large players than with a very large number of small players. The large players will of course try to lobby for regulations that suit them, and may shift some operations into less strongly regulated jurisdictions, but in the end they will be forced to comply, more or less. Except that the ethically dubious stuff will always turn out to be led by a small company you've never heard of, and the large players will deny that they knew anything about it.

As I pointed out in my previous post on The Future of Political Campaigning, the regulators have only limited tools at their disposal, and this skews their engagement with the ethical ecosystem as a whole. If I had a hammer ...




Financial Services Authority, Principles-Based Regulation - Focusing on the Outcomes that Matter (FSA, April 2007)

Jessa Crispin, Feminists gave Sheryl Sandberg a free pass. Now they must call her out (Guardian, 17 November 2018)

Ian Harris, Commercial Ethics: Process or Outcome (Z/Yen, 2008)

Thomas R. McCormick, Principles of Bioethics (University of Washington, 2013)

Chris Yapp, Where does the buck stop now? (Long Finance, 28 October 2018)


Related posts: Practical Ethics (June 2018), Data and Intelligence Principles from Major Players (June 2018), The Future of Political Campaigning (November 2018)

Saturday, November 17, 2018

The Future of Political Campaigning

#democracydisrupted Last Tuesday, @Demos organized a discussion on The Future of Political Campaigning (13 November 2018). The panelists included the Information Commissioner (@ElizabethDenham) and the CEO of the Electoral Commission (@ClaireERBassett).

The presenting problem is social and technological changes that disrupt the democratic process and some of the established mechanisms and assumptions that are supposed to protect it. Recent elections (including the Brexit referendum) have featured new methods of campaigning and new modes of propaganda. Voters are presented with a wealth of misinformation and disinformation on the Internet, while campaigners have new tools for targeting and influencing voters.

The regulators have some (limited) tools for dealing with these changes. The ICO can deal with organizations that misuse personal data, while the Electoral Commission can deal with campaigns that are improperly funded. But while the ICO in particular is demonstrating toughness and ingenuity in using the available regulatory instruments to maximum effect, these instruments are only indirectly linked to the problem of political misinformation. Bad actors in future will surely find new ways to achieve unethical political ends, out of the reach of these regulatory instruments.

@Jphsmith compared selling opposition to the "Chequers" Brexit deal with selling waterproof trousers. But if the trousers turn out not to be waterproof, there is legal recourse for the purchaser. Whereas there appears to be no direct accountability for political misinformation and disinformation. The ICO can deal with organizations that misuse personal data: that’s the main tool they’ve been provided with. What tool do they have for dealing with propaganda and false promises? Where is the small claims court I can go to when I discover my shiny new Brexit doesn’t hold water? (Twitter thread)

As I commented in my question from the floor, for the woman with a hammer, everything looks like a nail. Clearly misuse of data and illegitimate sources of campaign finance are problems, but they are not necessarily the main problem. And if the government and significant portions of the mass media (including the BBC) don't give these problems much airtime, downplay their impact on the democratic process, and (disgracefully) disparage and discredit those journalists who investigate them, notably @carolecadwalla, there may be insufficient public recognition of the need for reform, let alone enhanced and updated regulation. If antidemocratic forces are capable of influencing elections, they are surely also capable of persuading the man in the street that there is nothing to worry about.



Jamie Bartlett, Josh Smith, Rose Acton, The Future of Political Campaigning (Demos, 11 July 2018)

Carole Cadwalladr, Why Britain Needs Its Own Mueller (NYR Daily, 16 November 2018)

Nick Raikes, Online security and privacy: What an email address reveals (BBC News, 13 November 2018)

Josh Smith, A nation of persuadables: politics and campaigning in the age of data (Demos, 13 November 2018)

Jim Waterson, BBC women complain after Andrew Neil tweet about Observer journalist (Guardian, 16 November 2018)


Related posts

Security is downstream from strategy (March 2018), Ethical Communication in a Digital Age (November 2018)

Monday, January 15, 2018

Carillion Struck By Lightning

@NilsPratley blames delusion in the boardroom (on a grand scale, he says) for Carillion's collapse. "In the end, it comes down to judgments made in the boardroom."

A letter to the editor of the Financial Times agrees.
"This situation has been caused, in part, by the unprofessional, fatalistic and blasé attitude to contract risk management of some senior executives in the UK construction industry."


Carillion is by no means the first company brought low by delusion (I've written about Enron on this blog, as well as in my book on organizational intelligence), and probably won't be the last.

And given that Carillion was the beneficiary of some very large public sector contracts, we could also talk about delusion and poor risk management in government circles. As @econtratacion points out, "the public sector had had information pointing towards Carillion's increasingly dire financial situation for a while".



As it happens, the Home Secretary was at the London Stock Exchange today, talking to female executives about gender diversity at board level. So I thought I'd just check the gender make-up of the Carillion board. According to the Carillion website, there were two female executives and two female non-executive directors in a board of twelve.

In the future, Amber Rudd would like half of all directors to be female. An earlier Government-backed review had recommended that at least a third should be female by 2020.

But compared to other large UK companies, the Carillion gender ratio wasn't too bad. "On paper, the directors looked well qualified", writes Kate Burgess in the Financial Times, noting that "the board ticked all the boxes in terms of good governance". But now even the Institute of Directors has expressed belated concerns about the effective governance at Carillion, and Burgess says the board fell into what she calls "a series of textbook traps".

So what kind of traps were these? The board paid large dividends to the shareholders and awarded large bonuses to themselves and other top executives, despite the fact that key performance targets were not met, and there was a massive hole in the pension fund. In other words, they looked after themselves first and the shareholders second, and to hell with pensioners and other stakeholders. Meanwhile, Larry Elliott notes that the directors of the company took steps to shield themselves from financial risk. These are not textbook traps, they are not errors of judgement, they are moral failings.

Of course we shouldn't rely solely on the moral integrity of company executives. If there is no regulation or regulator able to prevent a board behaving in this way, this points to a fundamental weakness in the financial system as a whole. As @RSAMatthew writes,
"There are many culprits in this tale. Lazy or ideologically blinkered ministers, incompetent public sector commissioners, cynical private sector providers signing 'suicide bids' on the assumption that they can renegotiate when things go wrong and, as always, a financial sector willing to arbitrage any profit regardless of consequences or ethics."

There is a strong case that diversity mitigates groupthink - but as I've argued in my earlier posts, this needs to be real diversity, not just symbolic or imaginary diversity (ticking boxes). And even if having more women or ethnic minorities on the board might possibly reduce errors of judgement, women as well as men can have moral failings. It's as if we imagined that Ivanka Trump was going to be a wise and restraining influence on her father, simply because of her gender.

As it happens, the remuneration director at Carillion was a woman. We may never know whether she was coerced or misled by her fellow directors or whether she participated enthusiastically in the gravy train. But we cannot say that having a woman in that position is automatically going to be better than having a man. Women on boards may be a necessary step, but it is not a sufficient one.





Martin Bentham, Amber Rudd: 'It makes no sense to have more men than women in the boardroom' (Evening Standard, 15 January 2018)

Mark Bull, A lesson on risk from Carillion’s collapse (FT Letters to the Editor, 16 January 2018)

Kate Burgess, Carillion’s board: misguided or incompetent? (FT, 17 January 2018) HT @AidanWard3

Larry Elliott, Four lessons the Carillion crisis can teach business, government and us (Guardian, 17 January 2018)

Vanessa Fuhrmans, Companies With Diverse Executive Teams Posted Bigger Profit Margins, Study Shows (WSJ, 18 January 2018)

Simon Goodley, Carillion's 'highly inappropriate' pay packets criticised (Guardian, 15 January 2018)

Nils Pratley, Blame the deluded board members for Carillion's collapse (Guardian, 15 January 2018)

Albert Sánchez-Graells, Some thoughts on Carillion's liquidation and systemic risk management in public procurement (15 January 2018)

Rebecca Smith, Women should hold one third of senior executive jobs at FTSE 100 firms by 2020, says Sir Philip Hampton's review (City Am, 6 November 2016)

Matthew Taylor, Is Carillion the end for Public Private Partnerships? (RSA, 16th January 2018)


Related posts

Explaining Enron (January 2010)
The Purpose of Diversity (January 2010)
Organizational Intelligence and Gender (October 2010)
Delusion and Diversity (October 2012)
Intelligence and Governance (February 2013)
More on the Purpose of Diversity (December 2014)


Updated 25 January 2018

Wednesday, November 14, 2012

Conflicting Narratives

@queenchristina_ writes an excellent article on Google, Starbucks, and Amazon, arguing that "for these multinationals immorality is now standard practice" (Independent 13 November 2012). See also Martin Hickman, Good Bean Counters (Independent 16 October 2012).

It is much too easy for British politicians, journalists and taxpayers to get a sense of moral outrage when they discover how little UK tax these American companies pay on their UK earnings. There may be nothing illegal about the fact that the coffee beans are purchased from a Starbucks subsidiary in Switzerland, or that the UK subsidiary pays a royalty for the use of the Starbucks brand to another Starbucks subsidiary in the Netherlands. By a strange coincidence, the Netherlands charges a very low tax rate on royalty payments. Of course there are many British companies that use similar devices to reduce their UK tax bill.

The word "account" essentially means "story". The Starbucks accountants have constructed a story in which Switzerland and the Netherlands are essential links in the Starbucks value chain. British politicians have constructed a different story in which Starbucks is ripping off its British hosts. The moral outrage comes from the clash between these two narratives.

When two narratives clash, it seems natural for us to want to impose our preferred narrative on the Other. Wouldn't it be grand if Starbucks saw the error of its ways and started to pay a fair rate of UK tax? Or wouldn't it be equally grand if the UK tax laws were changed to regulate against these tax avoidance schemes? Or from Starbucks's point of view, wouldn't it be grand if UK corporate tax rates were reduced, so it could simplify its value chain at no cost to its shareholders? (Obviously words like "grand" and "fair" depend on the narrative.)

Of course, what is more likely is that the politicians will issue some threat of tighter regulation, the companies will make some temporary gesture to alleviate public hostility, and that the media will move onto the next target. In the meantime, politicians and the media can make things uncomfortable for corporate executives in the public eye.

And here's a slightly older example - the attempts by the US Government to hold BP to account for the oil spill in the Gulf of Mexico. One BP executive complained that "The administration keeps pushing the boundaries on what we are responsible for." (Wall Street Journal 1 June 2010 via NakedCapitalism)

In any case, there are always going to be conflicting narratives. I was at a workshop in the City this morning discussing how externalities might affect the future of money and the future of commerce. We discussed a range of topics, from mega-cities to carbon trading. 

But what exactly are these externalities? Almost anything that one person thinks to be part of The System and another person thinks to be outside The System. As William P. Fisher, Jr points out, "If we have to articulate and communicate a message that people then have to act on, we remain a part of the problem and not part of the solution." (Reimagining Capitalism Again, Sept 2011).

Oliver Greenfield identifies the following challenge:

"The externalities created by companies - or, for that matter, nation states - in their pursuit of self-interest can seem rational at the local, country and even regional level.  But at a global level, in a closed system, externalities are costs. What is rational at a company or nation state level is irrational at a global level." (Green Economy Coalition, April 2012)

Thus we have conflicting narratives, which result from disagreement about system boundaries (including time horizon as a type of boundary). A true systems approach might give us a systematic way of playing contested narratives off against each other.



See also


William P. Fisher, Jr, Question Authority (Oct 2011)

José M. Ramos, Temporalities of the Commons: Toward Narrative Coherence and Strategic Vision (Nov 2012)

Linked-In discussion on Good Bean Counters

and my post on Regulation and Complexity (Oct 2012)

Saturday, December 31, 2011

The Group of Six

According to Buddhist tradition, there was a group of six monks who constantly behaved in ways that exasperated the Buddha, causing him to produce a series of monastic rules to regulate their conduct.

    "Six bhikkhus wearing wooden sandals, and each holding a staff with both hands, were walking to and fro on a big stone slab, making much noise. The Buddha hearing the noises asked Thera Ananda what was going on, and Thera Ananda told him about the six bhikkhus. The Buddha then prohibited the bhikkhus from wearing wooden sandals. He further exhorted the bhikkhus to restrain themselves both in words and deeds." Khuddaka Nikaya. The Dhammapada Stories. Translated by Daw Mya Tin, M.A., Burma Pitaka Association (1986)
    "...when the group-of-six bhikkhus went in a vehicle yoked with cows and bulls, they were criticized by the lay people. The Buddha then established a fault of Wrong-doing for a bhikkhu to travel in a vehicle; later illness was exempted from this guideline..." The Bhikkhus' Rules. A Guide for Laypeople compiled and explained by Bhikkhu Ariyesako 
    "when the general guidelines were first worked out, some group-of-six bhikkhus abused the system to impose penalties on innocent bhikkhus they didn't like (Mv.IX.3.1), so the Buddha formulated a number of checks to prevent the system from working against the innocent." Thanissaro Bhikkhu, Buddhist Monastic Code I, Chapter 6 Aniyata

See also "Six Monks, Group Of" in The Soka Gakkai Dictionary of Buddhism.

What puzzles me in this tradition is the apparent repetition of the Buddha's behaviour. Why does he keep defining more rules to guide the behaviour of the six errant monks (bhikkhus), when it is surely apparent that a more profound intervention ("enlightenment" perhaps) is required? Or is this tradition intended to demonstrate exactly that - the inadequacy of rules?

Wednesday, September 22, 2010

Bearing Limit and Financial Regulation

An excellent keynote address by Avinash Persaud at the Long Finance conference yesterday, in which he deployed a few apparently simple ideas about risk management to mount an eloquent and powerful critique of the Basel 3 regulatory regime.

Here is a crude summary of some of the key points of Persaud's argument:

1. Regulation should be counter-cyclical. Credit mistakes are made during the boom and exposed during the downturn. Regulation therefore needs to be stricter during the boom and relaxed during the downturn.

2. Basel 3 attempts to regulate risk in terms of risk sensitivity. This concept has several flaws.
  • It focuses on the private risks to banks and their shareholders, rather than the public risks to system and society.
  • It is based on the market price of risk, which is cyclical and therefore cannot support counter-cyclical regulation.
  • It assumes that all risk is homogeneous.
3. Financial risk is not homogeneous. There are different types of risk, which call for different kinds of hedging over different timescales. Persaud identified three types.
  • Credit risk denotes the risk that a given creditor will be unable to pay. This risk is mitigated by having a portfolio of uncorrelated creditors, and assuming that the failure of each creditor is a statistically independent event. (A numerical sketch of this assumption follows the summary.)
  • Liquidity risk denotes the risk that a given asset cannot be sold at short notice for the desired amount. This risk is mitigated by a preparedness to hold assets for long periods.
  • Market risk is a combination of credit risk and liquidity risk.
4. Banks are good at dealing with credit risk and bad at dealing with liquidity risk. Insurance companies and pension funds should be good at dealing with liquidity risk, provided they are not forced into inappropriate measures by stupid regulation.

5. Sustainable long-term investment entails liquidity risk. A regulatory regime that supports credit risk and fails to support liquidity risk tends to militate against sustainable long-term investment. But this is exactly the outcome of the Basel 3 regulations, according to Persaud. Instead, he argues, we need a regulatory regime that encourages firms to take appropriate long-term risk, according to their risk absorptive capacity.

6. The Basel 3 regulations force risk to be misallocated, because of a failure to appreciate time and its effect on risk. The goal of regulation should not be to reduce risk sensitivity but to increase risk absorptive capacity.

7. The Basel 3 regulations therefore represent a missed opportunity for financing sustainable activities and long-term finance.
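
To make the independence assumption behind point 3 concrete, here is a toy simulation (all parameters invented): as the number of uncorrelated creditors grows, the probability of a large portfolio loss shrinks.

    import random

    # Toy illustration of credit-risk diversification under the assumption
    # that creditor defaults are statistically independent events.
    random.seed(42)

    def portfolio_loss(n_creditors, p_default=0.05):
        # Fraction of an equal-weighted portfolio lost in one scenario.
        defaults = sum(random.random() < p_default for _ in range(n_creditors))
        return defaults / n_creditors

    def prob_big_loss(n_creditors, threshold=0.10, runs=5000):
        # Estimated probability that portfolio losses exceed the threshold.
        return sum(portfolio_loss(n_creditors) > threshold for _ in range(runs)) / runs

    for n in (10, 100, 1000):
        print(f"{n:5d} uncorrelated creditors: P(loss > 10%) = {prob_big_loss(n):.3f}")
    # Diversification makes extreme losses rare - which is exactly the
    # assumption that fails when defaults turn out to be correlated.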



Note: In our risk management work, we use the term Bearing Limit, which roughly corresponds to what Persaud calls Risk Absorptive Capacity.


Papers by Avinash Persaud:

Saturday, December 18, 2004

Feedback as Regulation

Can we trust software? Bruce Schneier has recently been advocating software liability as a trust mechanism. In the latest issue of Crypto-Gram (December 2004), Geoff Kuenning contributes a contrary opinion.

If you've never read it, you should track down the CACM article on the history of steam boilers that appeared some time in the 80's. The brief summary is that after steam power was invented, there were lots of nasty boiler explosions. In the UK, the problem was dealt with by regulation. In the U.S., free-market advocates succeeded in arguing that liability law was sufficient: boiler makers would lose a few lawsuits and would then have an incentive to develop safer boilers.



The result was that boiler-related deaths dropped to near zero in the UK and continued at high rates for 20 more years in the U.S., until finally we broke down and regulated the industry.



The problem with liability as a feedback mechanism is that the negative feedback is strongly disconnected from the original action. Liability can help, but it's not at all clear to me that simply making MS liable for all the worms in the world would cause them to start making secure software.

From a systems engineering point of view, feedback may be regarded as simply a different mode of regulation. One of the first feedback control systems, invented by James Watt, has become known as the Governor.



Feedback corresponds to a kind of Network Regulation, producing Network Trust. As Kuenning points out, this doesn't work effectively when there are strong disconnects between cause and effect. Kuenning contrasts this with a kind of Authority Regulation, and argues that because the latter was more effective at dealing with boiler safety, it would therefore be more effective at dealing with software quality and trustworthiness.
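
To see why such disconnects matter, here is a minimal control-loop sketch (an illustrative simulation, not a model of software liability). The same proportional controller that settles quickly with immediate feedback swings ever wider when the feedback arrives late.

    # Illustrative only: proportional feedback with and without delay, as a
    # crude stand-in for liability verdicts arriving years after a defect.
    def simulate(delay, steps=60, gain=0.8, drift=1.0):
        history = [0.0]
        for _ in range(steps):
            # The controller reacts to the state as it was `delay` steps ago.
            observed = history[max(0, len(history) - 1 - delay)]
            history.append(history[-1] + drift - gain * observed)
        return history

    for d in (0, 8):
        peak = max(abs(x) for x in simulate(delay=d))
        print(f"feedback delay {d}: max deviation from target = {peak:8.1f}")
    # With immediate feedback the system settles near its target; with
    # delayed feedback the corrections arrive too late and the swings grow.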



However, Authority Regulation can also suffer strong disconnects between cause and effect. Is there really an Authority anywhere in the world that can keep Microsoft in check?

Sunday, November 21, 2004

Drug Regulation

Dr David Graham, described by the FT as a safety official, told US senators last week that the FDA system was broken because of a conflict of interest.



According to the Financial Times, the FDA once enjoyed a high reputation. Few consumers challenged its judgement. The FT writes: "Such trust is one reason why the US public has been more willing than Europeans to accept foodstuffs containing genetically modified organisms."



So what has eroded this authority trust? The FT identifies two factors that may be relevant.

  1. One can detect a regulatory cycle in drug approval. Testing standards became tougher after Thalidomide, were relaxed when AIDS stimulated demand for new and experimental drugs, and so on.
  2. Division of responsibility and inconsistent handling of drugs at different stages of the innovation lifecycle. Approval follows one process; monitoring of approved drugs is done by a different department and follows a different process.

Oscillation and internal contradiction are both natural phenomena of complex systems, and tend to have a destabilizing effect on trust. One obvious response to this is to distrust complexity. But denial of complexity may merely create a false illusion of trust. Authentic trust may just have to accept oscillation and contradiction.



So whither the FDA - towards a restoration of authority, or towards an engagement with authenticity (if we can believe that)? Either way, a more intelligent and honest attitude to risk (as recommended by the FT) will be useful.



Source: FT Editorial No Pain-Free Option (November 20, 2004)

Friday, October 8, 2004

Trust as she is spoken

originally posted by Aidan

On the front page of the FT the other day is a big article saying how pissed off the FS sector is with the FSA for changing its mind about risk management regulations it was introducing. There was a raft of new recommendations in the Prudential Sourcebook about how the players were supposed to manage their operational risk in particular. In the event, the FSA said that the matter was being overtaken by EU regulation and they were dropping the matter. The industry said that they had already spent millions on compliance and that this was now wasted.


The FSA claims that it works with the industry to produce sensible regulation that is in the industry's interest to comply with. The industry in practice will only implement things that have official force, in case their base costs get out of line with the competition. That is, nobody will actually implement sensible measures for their own sake as part of responsible business management. I know this first hand as well.


Needless to say the FT article does not conclude that this shows that the whole process does not work and cannot work.

Friday, July 9, 2004

Excuses

Cross-posted from System Viability and Corporate Governance blog



Once upon a time, the standard excuse for bureaucratic incompetence was the computer. Sorry sir, we’d love to help you but the computer won’t let us, the computer is down, the computer is so scary you wouldn’t want to upset it now would you?

This excuse isn't used so much nowadays. This could mean that computer systems have gotten better, or it could simply mean there are now too many computer-literate people in the general population for this excuse to wash.

In the financial services sector, the new excuse is compliance. Sorry sir, you can’t do that here. This morning, a stock broker told me I cannot open an investment account for my children, because of compliance. Compliance with what, I asked. Of course, he didn’t know. I rang another stock broker, who had no problem with my request.

Friday, February 6, 2004

Exposure to knowledge

originally posted by John


Taking Richard’s point and briefly reflecting again on the history of English capitalism, I think now, just as I’ve thought for a long time, that caveat emptor has always to be the guiding rule when it comes to risk management in this context.


Knowledge is power for the people at the top and the commercial law of limited liability merely does what it says on the label - when Enron or the Maxwell empire collapses all you lose is your shareholding. The risk for small shareholders is of course that when your shareholding is your entire fortune and your entire future then collapse becomes a human tragedy.


It’s no small coincidence that the profession of ‘risk management’ is owned by guys who specialize in making sure that a financial institution’s financial exposure – the aggregate of its limited liability exposures – is never enough to threaten the institution’s future. They are also required to ensure that the institution’s rewards adequately reflect that exposure.


The market efficiency model I referred to earlier claimed that no decision could be taken by a FTSE100 firm without that decision instantly being reflected in its share price. Risk management analysis is (ostensibly) what underpins this claim. Risk managers ‘look unfavourably’ on plc leaders who act as sole owners or entrepreneurs. Whatever the true POSIWID of risk management in this context – and financial institutions never lose sight of the risk-reward nexus that ordinary people can often be blind to - it clearly exists to look after its own and its master’s interests.


Those of us who can’t afford risk management – and can’t do what those Irish guys are doing to Man Utd - are forced either to make a leap of faith and go with some kind of ‘fund’ or a ‘mutual’ or else stand the risk ourselves. Whatever our choice, we are forced to exist on second-hand knowledge, to wash our feet with our socks on and ultimately to put our fortunes in the hands of others. Funds and mutuals, institutions that try to minimize risk by ‘spreading’ their exposure, were traditionally claimed to be a ‘safe’ if less rewarding option for the little guy than was full-on exposure. When the FTSE crashed and Wall Street plummeted, these guys instantly revealed their POSIWID, cried ‘foul’ and stung millions of little guys just as Enron and Maxwell had.


Richard says the issue here is about bosses who behave as if they were the sole owners, trampling over the rights of the real owners. I don't necessarily agree. For example people used to swear by Tiny Rowland's freebooting trampling - just as they did about Maxwell's and just as they do about Murdoch and many another stockmarket 'star' - and fought off regulation as long as they possibly could. This is capitalism and the only rule is 'more is better'. That means that as long as they keep paying dividends and the share price keeps rising they can do precisely as they please. Caveat emptor.


My position then is that I'm pretty risk-indifferent as to who owns plcs - sic transit gloria mundi - I say. Ownership of knowledge though, that's an entirely different kettle of risk.