Thursday, May 13, 2021

Thinking with the majority - a new twist

I wrote somewhere once that thinking with the majority is an excellent description of Google. Because one of the ways something rises to the top of your search results is that lots of other people have already looked at it, liked or linked to it.
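The popularity mechanism described above can be illustrated with a toy sketch. To be clear, this is not Google's actual ranking algorithm, just a minimal PageRank-style power iteration over an invented link graph, showing how a page can rise to the top simply because lots of other pages point at it:

```python
# Toy illustration (not Google's real algorithm): a simplified
# PageRank-style score, where a page ranks highly because many
# other pages link to it. The link graph here is invented.
links = {
    "a": ["b", "c"],   # page "a" links to "b" and "c"
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],        # "c" collects the most inbound links
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # each page starts with a small baseline score...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...then shares its current score equally among its outlinks
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += share
        rank = new
    return rank

scores = pagerank(links)
top = max(scores, key=scores.get)  # "c", the most linked-to page
```

The point of the sketch is that nothing in the computation asks whether a page is true or useful: the score is entirely a function of what other pages already link to, which is precisely thinking with the majority.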

The phrase thinking with the majority comes from a remark by A.A. Milne, the author of Winnie the Pooh.

I wrote somewhere once that the third-rate mind was only happy when it was thinking with the majority, the second-rate mind was only happy when it was thinking with the minority, and the first-rate mind was only happy when it was thinking.

When I wrote about this topic previously, I thought that experienced users of Google and other search engines ought to be aware of how search rankings operated and some of the ways they could be gamed, and to be suitably critical of the fiction functioning as truth yielded by an internet search. And I never imagined that intelligent people would be satisfied with just thinking with the majority.

The sociologist Francesca Tripodi has been studying how people carry out research on the Internet, especially on politically charged topics. She observes how many people (even those we might expect to know better) are happy to regard search engines as a valid research tool, regarding the most popular webpages as having been verified by the wisdom of crowds. In her 2018 report for Data and Society, Tripodi quotes a journalist (!) explicitly articulating this belief.

I literally type it in Google, and read the first three to five articles that pop up, because those are the ones that are obviously the most clicked and the most read, if they’re at the top of the list, or the most popular news outlets. So, I want to get a good sense of what other people are reading. So, that’s pretty much my go-to.
In other words, thinking with the majority.

However, Professor Tripodi introduces a further twist. She demonstrates that politically slanted search terms produce politically slanted results, and if you go onto your favourite search engine with a politically motivated phrase, you are likely to see results that validate that phrase. She also notes that this phenomenon is not unique to Google, but is shared by all internet search engines including DuckDuckGo.

And this creates opportunities for politically motivated actors to plant phrases (perhaps into so-called data voids) to serve as attractors for those individuals who fondly imagine they are carrying out their own independent research. Tripodi observes a common idea that one should research a topic oneself rather than relying on experts, which she compares with the Protestant ethic of bible study and scriptural inference. And this idea seems particularly popular with those who identify themselves as thinking with the minority (sometimes called red pill thinking).

Zeus' inscrutable decree
Permits the will-to-disagree
To be pandemic.


Tripodi explains her findings in the following videos.

Tripodi has also presented evidence to the US Senate Judiciary Committee.


See also  

Joan Donovan, The True Costs of Misinformation - Producing Moral and Technical Order in a Time of Pandemonium (Berkman Klein Center for Internet and Society, January 2020)

Michael Golebiewski and danah boyd, Data Voids: Where Missing Data Can Easily Be Exploited (Data and Society, Updated version October 2019)

Francesca Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices (Data and Society, May 2018)

Wikipedia: Red pill and blue pill, Wisdom of the crowd 


Related posts: You don't have to be smart to search here ... (November 2008), Thinking with the Majority (March 2009)

Monday, April 26, 2021

On the invisibility of infrastructure

Infrastructure is boring, expensive, and usually someone else's responsibility/problem. Which is perhaps how the UK finds itself at what Jeremy Fleming, head of GCHQ, describes as a moment of reckoning. Simon Wardley analyses this in terms of digital sovereignty.

Digital sovereignty is all about us (as a collective) deciding which parts of this competitive space that we want to own, compete, defend, dominate and represent our values and our behaviours in. It's all about where are our borders in this space. ... Our responses all seem to include a slide into protectionism with claims that we need to build our own cloud industries.

Fleming is particularly focused on "the growing challenge from China", and expresses concern about the UK potentially losing control of "standards that shape our technology environment", which apparently "make sure that our liberal Western democratic views are baked into our technology". Whatever that means. Fleming's technological examples include digital currency and smart cities.

Fleming talks about the threats from Russia and China, and regards China's potential control of the underlying infrastructure as more fundamentally challenging than potential attacks from Russia as well as non-state actors.

Fleming notes the following characteristics of those he labels adversaries:

  • Potential to control the global operating system.
  • Early implementors of many of the emerging technologies that are changing the digital environment.
  • Bringing all elements of [...] power to control, influence, design and dominate markets. Often with the effect of pushing out smaller players and reducing innovation. 
  • Concerted campaigns to dominate international standards.

And continues

If [any of this] turns out to be insecure or broken or undemocratic, everyone is going to be facing a very difficult future.

It would be easy to hear these remarks as referring solely to China. But he also sounds a warning about corporate power, acknowledging that their commercial interests sometimes (!?) don't align with the interests of ordinary citizens. And with that in mind, it's easy to see how some of the adversarial characteristics listed above would apply equally to some of the Western tech giants.

If the goal is to bake Western values (whatever they are) into our technology infrastructure, it is not obvious that the Western tech giants can be trusted to do this. Smart City initiatives associated with Google's Sidewalk Labs have been cancelled in Portland and Toronto, following (although perhaps not entirely as a consequence of) democratic concerns about surveillance capitalism. However, Sidewalk Labs appears to be still active in a number of smaller smart city initiatives, as are Amazon Web Services, IBM and other major technology firms.

Fleming talks about standards, but at the same time he acknowledges that standards alone are too slow-changing and too weak to keep the adversaries at bay. "The nature of cyberspace makes the rules and standards more open to abuse." He talks about evolutionary change, using a version of Leon Megginson's formulation of natural selection: "it's those that are most able to adjust that prosper". (See my post on Arguments from Nature). But that very formulation seems to throw the initiative over to those tech firms that preach moving fast and breaking things. Can we therefore complain if our infrastructure is insecure, broken, and above all undemocratic?

For most of us, most of the time, infrastructure needs to be just there, taken for granted, ready to hand. Organizations providing these services are often established as monopolies, or turn into de facto monopolies, controlled not only (if at all) by market forces but by democratically accountable regulators and/or by technocratic specialists. However, the Western tech giants devote significant resources to lobbying against external regulation, resisting democratic control. And Smart City initiatives typically embed much the same values everywhere (civic paternalism, biopower).

So here is Fleming's dilemma. If you don't want China to make the running on smart cities, you have to forge alliances with other imperfectly trusted players, whose values are sometimes (!?) not aligned with yours. This moves away from the kind of positional strategy described in Wardley's maps, towards a more relational strategy.


Gordon Corera, GCHQ chief warns of tech 'moment of reckoning' (BBC News, 23 April 2021) via @sukhigill and @swardley

Jeremy Fleming, A world of possibilities: Leading the way in cyber and technology (Vincent Briscoe Lecture @ Imperial College, 23 April 2021) via YouTube.

Susan Leigh Star and Karen Ruhleder, Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces (Information Systems Research 7/1, March 1996)

Simon Wardley, Digital Sovereignty (22 October 2020)

Related posts: The Allure of the Smart City (April 2021)

Thursday, April 8, 2021

Creative Tension in Downing Street

Earlier posts on this blog have explored Creative Tension in the White House - from FDR to the Donald - and analysed it in terms of my OrgIntelligence framework. In this post, I want to look at the UK experience, drawing on a recent report in the Guardian.

Those who worked closely with him say Johnson encourages rows and tensions over policies as he considers all sides of the argument and figures out what he will do next. Some argue that it generates a creative energy in which he thrives and is the process by which he arrives at a final decision. Ask others, and they say he cannot make up his mind until options have been whittled down by time and after those he relies on walk out in exasperation. (Syal)

The article quotes several people talking about the Prime Minister's leadership style, based on various ideas about decision-making, risk and diversity. There are also some remarks about the ethical implications.

Previous articles about Mr Johnson's leadership discuss his management style with cabinet colleagues and advisers (Simpson), and his style when addressing the nation (Moss). Whatever he may think in private about the challenges of Brexit or COVID-19, and whatever difficulties he gets into when discussing solutions with his colleagues and advisers, the Prime Minister's instinct apparently leads him to present them to the public in extremely simple and confident terms.

Post-heroic leadership seems to be the order of the day. Stokes and Stern talk about the need to adopt a less gung-ho style when presenting the government's approach to wicked problems. They quote from a paper by Keith Grint advocating several supposedly anti-heroic behaviours: curiosity and sense-making ("asking questions"), bricolage ("clumsy solutions"), and ranking collective intelligence above individual genius.

The UK government's approach to the COVID-19 pandemic has sometimes seemed erratic and inconsistent. But given the complexity of the problem, and the volatile and ambiguous data on which decisions and policies were supposedly based, a more consistent and single-minded approach might not have turned out any better. 

In Greek myth, the Gordian knot stands for wicked problems, and Alexander's simple yet imaginative solution quickly resolves the problem. To the supporters of Brexit, this represents the only possible escape from European satrapy. Nothing post-heroic about Alexander. 

So what does that tell us about Alexander Boris de Pfeffel Johnson?


Keith Grint, Wicked Problems and Clumsy Solutions: The Role of Leadership (Clinical Leader 1/2, December 2008)

Gloria Moss, Is Boris Johnson's leadership style inclusive? (HR Magazine, 23 August 2019)

Per Morten Schiefloe, The Corona crisis: a wicked problem (Scandinavian Journal of Public Health, 2021; 49: 5–8)

Paul Simpson, What is Boris Johnson's leadership style? (Management Today, 11 October 2019)

Jon Stokes and Stefan Stern, Boris Johnson needs to show a ‘post-heroic’ style of leadership now (The Conversation, 27 April 2020)

Rajeev Syal, Does Boris Johnson stir up team conflict to help make up his mind? (The Guardian, 1 March 2021)

Related posts: Creative Tension in the White House (April 2017)

Sunday, March 28, 2021

Critical Hype and the Red Queen Effect

Thanks to @jjn1 I've just read a great piece by @STS_News (Lee Vinsel), called You’re Doing It Wrong: Notes on Criticism and Technology Hype, which expands on some points I've made on this blog and elsewhere. Vinsel identifies several problems with contemporary technology criticism.

  • A general willingness to take technology hype at face value, which infects technology critics as well as technology champions.
  • The lack of evidence for specific technological effects. In particular, Vinsel calls out two works I've discussed on this blog and elsewhere: Social Dilemma (Tristan Harris) and Surveillance Capitalism (Shoshana Zuboff). However, my posts concentrated on other issues with these works, and didn't discuss the evidence issue.
  • The lack of evidence for macroeconomic technological effects, including the popular belief that technological change is accelerating. (I call this the Red Queen Effect.)
  • Critical focus on the most glamorous and recent technologies, neglecting those that might be of more lasting significance to greater numbers of people. For my part, I am particularly wary of any innovation described as a paradigm shift, or as the Holy Grail of anything. I have also noted that academic studies of technology adoption are often focused on the most recent technologies, which means that the early adoption phase is much better understood than the late adoption phase.

I plan to return to some of these topics in future posts.


John Naughton, Is online advertising about to crash, just like the property market did in 2008? (The Guardian, 27 March 2021)

Lee Vinsel, You’re Doing It Wrong: Notes on Criticism and Technology Hype (Medium, 1 February 2021)

Thursday, December 24, 2020

Technological Determinism

Social scientists and social historians are naturally keen to produce explanations for social phenomena. Event B happened because of A.

Sometimes the explanation involves some form of technology. Lewis Mumford traced the start of the Industrial Revolution to the invention of the mechanical clock, while Marshall McLuhan talks about the great medieval invention of typography as "the take-off moment into the new spaces of the modern world" (McLuhan 1962, p 79).

These explanations are sometimes read as implying some form of technological determinism. For example, many people read McLuhan as a technological determinist.

McLuhan furnished [the tech industry] with a narrative of historical inevitability, a technological determinism that they could call on to negate the consequences of their inventions - if it was fated to happen anyway, is it really their fault?
Daub 2020 pp 47-48

Although sometimes McLuhan claimed the opposite. After Peter Drucker had sought an explanation for the basic change in attitudes, beliefs, and values that had released the Technological Revolution, McLuhan's 1962 book The Gutenberg Galaxy set out to answer this question.

Far from being deterministic, however, the present study will, it is hoped, elucidate a principal factor in social change which may lead to a genuine increase of human autonomy.
McLuhan 1962 p 3

As McLuhan has said, there is no inevitability so long as there is a willingness to contemplate what is happening.
Postman Weingartner 1969 p 20

Raymond Williams saw McLuhan's stance as 

an apparently sophisticated technological determinism which has the significant effect of indicating a social and cultural determinism: a determinism, that is to say, which ratifies the society and culture we now have, and especially its most powerful internal directions.
Williams, second edition p 120

Neil Postman himself made some statements that were much more clearly deterministic. 

Once a technology is admitted, it plays out its hand; it does what it is designed to do.
Postman 1992

But causal explanation doesn't always mean inevitability. Explanations in history and the social sciences often have to be understood in terms of tendencies, probabilities and propensities, other-things-being-equal.

There is also a common belief that technological change is irreversible. A good counter-example to this is Japan's reversion to the sword between 1543 and 1879, as documented by Noel Perrin. What's interesting about this example is that it shows that technology reversal is possible under certain sociopolitical conditions, and also that these conditions are quite rare.

What is rather more common is for sociopolitical forces to inhibit the adoption of technology in the first place. In my article on Productivity, I borrowed the example of continuous-aim firing from E.E. Morison. This innovation was initially resisted by the Navy hierarchy (both UK and US), despite tests demonstrating a massive improvement in firing accuracy, at least in part because it would have disrupted the established power relations and social structure on board ship.

Evolution or Revolution?

How to characterize the two examples of technology change I mentioned at the beginning of this post - the mechanical clock and moveable type? It is important to remember that this isn't about the invention of clocks and printing, since these technologies were known across the ancient world from China to Egypt, but about significant improvements to these technologies, which made them more readily available to more people. It was these improvements that made other social changes possible.

Technologists are keen to take the credit for the positive effects of their innovations, while denying responsibility for any negative effects. The narrative of technological determinism plays into this, suggesting that the negative effects were somehow inevitable, and there was therefore little point in resisting them.

The tech industry ... likes to imbue the changes it yields with the character of natural law.
Daub 2020 p 5

If new tech is natural, then surely it is foolish for individual consumers to resist it. The rhetoric of early adopters and late adopters suggests that the former are somehow superior to the latter. Why bother with old fashioned electricity meters or doorbells, if you can afford smart technology? Are you some kind of technophobe or luddite or what?

What's wrong with the idea of technological determinism is not that it is true or false, but that it misrepresents the relationship between technology and society, as if they were two separate domains exerting gravitational force on each other. In my work on technology adoption, I used to talk about technology-in-use. Recent writing on the philosophy of technology (especially Stiegler and his followers) refers to this as pharmacological, using the term in its ancient Greek sense rather than referring specifically to the drug industry. If you want to think of technology as a drug that alters its users' perception of reality, then perhaps it's not such a leap from the drug industry to the tech industry.

But the word alters isn't right here, because it implies the existence of some unaltered reality prior to technology. As Stiegler and others make clear, there is no reality prior to technology: our reality and our selves have always been part of a sociotechnical world. 

Donna Haraway sees determinism as a discourse (in the Foucauldian sense) rather than as a theory of power and control.

Technological determination is only one ideological space opened up by the reconceptions of machine and organism as coded texts through which we engage in the play of writing and reading the world.

As Rob Safer notes,

Human history for Haraway isn’t a rigid procession of cause determining effect, but a process of becoming that depends upon human history’s conception of itself, via the medium of myth.



Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020) 

Donna Haraway, Cyborg Manifesto (Socialist Review, 1985)

Marshall McLuhan, The Gutenberg Galaxy (University of Toronto Press, 1962) 

E.E. Morison, Men, Machines and Modern Times (MIT Press, 1966)

Lewis Mumford, Technics and Civilization (London: Routledge, 1934)

John Durham Peters, “You Mean My Whole Fallacy Is Wrong”: On Technological Determinism (Representations 140/1, November 2017, pp 10-26)

Noel Perrin, Giving up the gun (New Yorker, 13 November 1965), Giving up the gun (David R Godine, 1988)

Neil Postman, Technopoly: the surrender of culture to technology (Knopf, 1992)

Neil Postman and Charles Weingartner, Teaching as a Subversive Activity (Delacorte 1969) page references to Penguin 1971 edition

Jacob Riley, Technological Determinism, Control, and Education: Neil Postman and Bernard Stiegler (1 October 2013)

Federica Russo, Digital Technologies, Ethical Questions, and the Need of an Informational Framework (Philosophy and Technology 31, November 2018, pp 655-667)

Rob Safer, Haraway’s Theory of History in the Cyborg Manifesto (16 March 2015)

Richard Veryard, Demanding Higher Productivity (data processing 28/7, September 1986)

Raymond Williams, Television, Technology and Cultural Form (Routledge, 1974, 1990)

Related posts: Smart Guns (May 2014)

Friday, December 11, 2020

Evolution or Revolution 3

Let me start this post with some quotes from @adriandaub's book What Tech Calls Thinking.

Disruption has become a way to tell a story about the meaning of both discontinuity and continuity.

Daub p 119

One ought to be skeptical of unsubstantiated claims of something's being totally new and not following the hitherto established rules (of business, of politics, of common sense), just as one is skeptical of claims that something which really does feel and look unprecedented is simply a continuation of the status quo.

Daub pp 115-6

For example, Uber.

Uber claims to have revolutionized the experience of hailing a cab, but really that experience has stayed largely the same. What it has managed to get rid of were steady jobs, unions, and anyone other than Uber's making money on the whole enterprise.

Daub p 105

Clayton Christensen would agree. In an article restating his original definition of the term Disruptive Innovation, he put Uber into the category of what he calls Sustaining Innovation.

Uber’s financial and strategic achievements do not qualify the company as genuinely disruptive—although the company is almost always described that way.

HBR 2015

However, as I pointed out on Twitter earlier today, Christensen's use of the word disruptive has been widely appropriated by big tech vendors and big consultancies in an attempt to glamorize their marketing to big corporates. If you put the name of any of the big consultancies into an Internet search engine together with the word disruption, you can find many examples of this. Here's one picked at random: Discover how you can seize the upside of disruption across your industry.

The same experiment can be tried with other jargon terms, such as paradigm shift. By the way, Daub notes that Alex Karp, one of the founders of Palantir, wrote his doctoral dissertation on jargon - speech that is used more for the feelings it engenders and transports in certain quarters than for its informational content (Daub p 85).

@jchyip thinks we should try to stick to Christensen's original definitions. But although I don't approve of vendors fudging perfectly good technical terms for their own marketing purposes, there is sometimes a limit to the extent to which we can insist that such terms still carry their original meaning.

And to my mind this is not just a dispute about the meaning of the word disruptive but a question of which discourse shall prevail. I have long argued that claims of continuity and novelty are not always mutually exclusive, since they may simply be alternative descriptions of the same thing for different audiences. The choice of description is then a question of framing rather than some objective truth. As Daub notes

The way the term is used today really implies that whatever continuity is being disrupted deserved to be disrupted.

Daub p 119

For more on this, see the earlier posts in this series: Evolution or Revolution (May 2006), Evolution or Revolution 2 (March 2010)

In a comment below the March 2010 post, @cecildjx asked my opinion on the (relative) significance of the Internet versus the iPhone. Here's what I answered.

My argument is that our feelings about technology are fundamentally and systematically distorted by glamour and proximity. Of course we are often fascinated by the most-recent, and we tend to take the less-recent for granted, but that is an unreliable basis for believing that the recent is (or will turn out to be) more significant from a larger historical perspective.

What I really find interesting (from a socio-historical perspective) is how quickly technologies can shift from fascinating to taken-for-granted. Since I started work, my working life has been transformed by a range of tools, including word processing, spreadsheets, mobile phones, fax machines, email and the internet. Apart from a few developers working for Microsoft or Google, is anyone nowadays fascinated by word processors or spreadsheets? If we pay attention to the social changes brought about by the Internet, and ignore the social changes brought about by the word processor, then of course we will get a distorted view of the internet's importance. If we glamorize the iPhone while regarding older mobile telephones as uninteresting, we end up making a fetish of some specific design features of a particular product.

If we have a distorted sense of which innovations are truly disruptive or significant, we also have a distorted sense of technological change as a whole. There is a widespread belief that the pace of technological change is increasing, but this could be an illusion caused (again) by proximity. See my post on Rates of Evolution (September 2007), where I also note that some stakeholders have a vested interest in talking up the pace of technology change.

Clayton M. Christensen, Michael E. Raynor, and Rory McDonald, What Is Disruptive Innovation? (HBR Magazine, December 2015)

Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020)

Thanks to @jchyip for kicking off the most recent discussion.

Thursday, December 10, 2020

The Social Dilemma

Just watched the documentary The Social Dilemma on Netflix, which takes a critical look at some of the tech giants that dominate our world today (although not Netflix itself, for some reason), largely from the perspective of some former employees who helped them achieve this dominance and are now having second thoughts. One of the most prominent members of this group is Tristan Harris, formerly with Google, now the president of an organization called the Center for Humane Technology.

The documentary opens by asking the contributors to state the problem, and shows them all initially hesitating. By the end of the documentary, however, they are mostly making large statements about the morality of encouraging addictive behaviour, the propagation of truth and lies, the threat to democracy, the ease with which these platforms can be used by authoritarian rulers and other bad actors, and the need for regulation.

Quantity becomes quality. To some extent, the phenomena and affordances of social media can be regarded as merely scaled-up versions of previous social tools, including advertising and television: the maxim If you aren't paying, you are the product derives from a 1973 video about the power of commercial television. However, several of the contributors to the documentary observed that the power of the modern platforms and the wealth of the businesses that control these platforms is unprecedented, while noting that social media is far less regulated than other mass communication enterprises, including television and telecommunications.

Contributors doubted whether we could expect these enterprises, or the technology sector generally, to fix these problems on their own - especially given the focus on profit, growth and shareholder value that drives all enterprises within the capitalist system. Is it fair to ask them to reform capitalism? (Many years ago, the architect J.P. Eberhard noted a tendency to escalate even small problems to the point where the entire capitalist system comes into question, and argued that We Ought To Know The Difference.) So is regulation the answer?

Surprisingly enough, Facebook doesn't think so. In its response to the documentary, it complains

The film’s creators do not include insights from those currently working at the companies or any experts that take a different view to the narrative put forward by the film.

As Pranav Malhotra notes, it's not hard to find experts who would offer a different perspective, in many cases offering far more fundamental and far-reaching criticisms of Facebook and its peers. Hey Facebook, careful what you wish for!

Last year, Tristan Harris appeared to call for a new interdisciplinary field of research, focused on exploring the interaction between technology and society. Several people including @ruchowdh pointed out that such a field was already well-established. (In response he said he already knew this, and apologized for his poor choice of words, blaming the Twitter character limit.)

So there is already an abundance of deep and interesting work that can help challenge the simplistic thinking of Silicon Valley in a number of areas including

  • Truth and Objectivity
  • Technological Determinism
  • Custodianship of Technology (for example Latour's idea that we should Love Our Monsters - see also article by Adam Briggle)

These probably deserve a separate post each, if I can find time to write them.

The Social Dilemma (dir Jeff Orlowski, Netflix 2020)

Wikipedia: The Social Dilemma, Television Delivers People

Stanford Encyclopedia of Philosophy: Ethics of Artificial Intelligence and Robotics, Phenomenological Approaches to Ethics and Information Technology, Philosophy of Technology


Adam Briggle, What can be done about our modern-day Frankensteins? (The Conversation, 26 December 2017)

Robert L. Carneiro, The transition from quantity to quality: A neglected causal mechanism in accounting for social evolution  (PNAS 97:23, 7 November 2000)

Rumman Chowdhury, To Really 'Disrupt,' Tech Needs to Listen to Actual Researchers (Wired, 26 June 2019)

Facebook, What the Social Dilemma Gets Wrong (2020)

Tristan Harris, How Technology Is Hijacking Your Mind - from a Magician and Google Design Ethicist (Thrive Global, 18 May 2016)

John Lanchester, You Are The Product (London Review of Books, Vol. 39 No. 16, 17 August 2017)

Bruno Latour, Love Your Monsters: Why we must care for our technologies as we do our children (Breakthrough, 14 February 2012) 

Pranav Malhotra, The Social Dilemma Fails to Tackle the Real Issues in Tech (Slate, 18 September 2020)

Richard Serra and Carlota Fay Schoolman, Television Delivers People (1973) 

Zadie Smith, Generation Why? (New York Review of Books, 25 November 2010)

Siva Vaidhyanathan, Making Sense of the Facebook Menace (The New Republic, 11 January 2021)

Related posts: The Perils of Facebook (February 2009), We Ought to Know the Difference (April 2013), Rhyme or Reason: The Logic of Netflix (June 2017), On the Nature of Platforms (July 2017), Ethical Communication in a Digital Age (November 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)