Thursday, March 9, 2023

Technology in use

In many blogposts I have mentioned the distinction between technology as designed/built and technology in use.

I am not sure when I first used these exact terms. I presented a paper to an IFIP conference in 1995 where I used the terms technology-as-device and technology-in-its-usage. By 2002, I was using the terms "technology as built" and "technology in use" in my lecture notes for an Org Behaviour module I taught (together with Aidan Ward) at City University, with an explicit link to Argyris's distinction between espoused theory and theory-in-use.

Among other things, this distinction is important for questions of technology adoption and maturity. See the following posts 

I have also talked about system-as-designed versus system-in-use - for example in my post on Ecosystem SOA 2 (June 2010). See also Trusting the Schema (March 2023).

Related concepts include Inscription (Akrich) and Enacted Technology (Fountain). Discussion of these and further links can be found in the following posts:


Returning to the distinction between espoused theory and theory-in-use: in my post on the National Decision Model (May 2014) I also introduced the concept of theory-in-view, which (as I discovered more recently) is similar to Lolle Nauta's concept of exemplary situation.



Richard Veryard, IT Implementation or Delivery? Thoughts on Assimilation, Accommodation and Maturity. Paper presented to the first IFIP WG 8.6 Working Conference, on the Diffusion and Adoption of Information Technology, Oslo, October 1995. 

Richard Veryard and Aidan Ward, Technology and Change (City University 2002) 

Saturday, February 18, 2023

Hedgehog Innovation

According to Archilochus, the fox knows many things, but the hedgehog knows one big thing.

In his article on AI and the threat to middle-class jobs, Larry Elliott focuses on machine learning and robotics.

AI stands to be to the fourth industrial revolution what the spinning jenny and the steam engine were to the first in the 18th century: a transformative technology that will fundamentally reshape economies.

When people write about earlier waves of technological innovation, they often focus on one technology in particular - for example, the cluster of innovations associated with the adoption of electrification in a wide range of industrial contexts.

While AI may be an important component of the fourth industrial revolution, it is usually framed as an enabler rather than the primary source of transformation. Furthermore, much of the Industry 4.0 agenda is directed at physical processes in agriculture, manufacturing and logistics, rather than clerical and knowledge work. It tends to be framed as many intersecting innovations rather than one big thing.

There is also a question about the pace of technological change. Elliott notes a large increase in the number of AI patents, but as I've noted previously, I don't regard patent activity as a reliable indicator of innovation. The primary purpose of a patent is not to enable the inventor to exploit something, but to prevent anyone else freely exploiting it. And Ezrachi and Stucke provide evidence of other ways in which tech companies stifle innovation.

However, the AI Index Report does contain other measures of AI innovation that are more convincing.


AI Index Report (Stanford University, March 2022)

Larry Elliott, The AI industrial revolution puts middle-class workers under threat this time (Guardian, 18 February 2023)

Ariel Ezrachi and Maurice Stucke, How Big-Tech Barons Smash Innovation and how to strike back (New York: Harper, 2022)

Wikipedia: Fourth Industrial Revolution, The Hedgehog and the Fox

Related Posts: Evolution or Revolution (May 2006), It's Not All About (July 2008), Hedgehog Politics (October 2008), The New Economics of Manufacturing (November 2015), What does a patent say? (February 2023)

Sunday, January 22, 2023

Reasoning with the majority - chatGPT

#ThinkingWithTheMajority 

#chatGPT has attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar tools.

Virginia Dignum has discovered that it has a fundamental misunderstanding of basic propositional logic. In answer to her question, chatGPT claims that the statement "if the moon is made of cheese then the sun is made of milk" is false, and goes on to argue that "if the premise is false then any implication or conclusion drawn from that premise is also false". In her test, the algorithm persists in what she calls "wrong reasoning".
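For readers unfamiliar with the formalism, here is a minimal sketch (my own illustration, not from Dignum's post) of material implication as defined in the propositional calculus: the conditional "if p then q" is false only when p is true and q is false, so a false premise makes the whole conditional true, whatever the conclusion.

```python
# Truth table for material implication (p -> q).
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(f"p={p!s:<5} q={q!s:<5} p->q={implies(p, q)}")

# "The moon is made of cheese" is false, so the conditional is true
# regardless of whether the sun is made of milk.
assert implies(False, True) and implies(False, False)
```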

I can't exactly recall at what point in my education I was introduced to propositional calculus, but I suspect that most people are unfamiliar with it. If Professor Dignum were to ask a hundred people the same question, it is possible that the majority would agree with chatGPT.

In which case, chatGPT counts as what A.A. Milne once classified as a third-rate mind - "thinking with the majority". I have previously placed Google and other Internet services into this category.

Other researchers have tested chatGPT against known logical paradoxes. In one experiment (reported via LinkedIn) it recognizes the Liar Paradox when Epimenides is explicitly mentioned in the question, but apparently not otherwise. No doubt someone will be asking it about the baldness of the present King of France.

One of the concerns expressed about AI-generated text is that it might be used by students to generate coursework assignments. At the present state of the art, although AI-generated text may look plausible, it typically lacks coherence and would be unlikely to be awarded a high grade, but it could easily be awarded a pass mark. In any case, I suspect many students produce their essays by following a similar process, grabbing random ideas from the Internet and assembling them into a semi-coherent narrative without actually doing much real thinking.

There are two issues here for universities and business schools. Firstly, whether the use of these services counts as academic dishonesty, similar to using an essay mill, and how this might be detected, given that standard plagiarism detection software won't help much. Secondly, whether the possibility of passing a course without demonstrating correct and joined-up reasoning (aka "thinking") represents a systemic failure in the way students are taught and evaluated.


See also

Andrew Jack, AI chatbot’s MBA exam pass poses test for business schools (FT, 21 January 2023) HT @mireillemoret

Gary Marcus, AI's Jurassic Park Moment (CACM, 12 December 2022)

Christian Terwiesch, Would Chat GPT3 Get a Wharton MBA? (Wharton White Paper, 17 January 2023)

Related posts: Thinking with the Majority (March 2009), Thinking with the Majority - a New Twist (May 2021), Satanic Essay Mills (October 2021)

Wikipedia: ChatGPT, Entailment, Liar Paradox, Plagiarism, Propositional calculus 

Wednesday, August 17, 2022

Discipline as a Service

In my post on Ghetto Wifi (June 2010), I mentioned a cafe in East London that provided free coffee, free biscuits and free wifi, and charged customers for the length of time they occupied the table.

A cafe for writers has just opened in Tokyo, which in effect charges people for procrastination: you can't leave until you have completed the writing task you declared when you arrived.


Justin McCurry, No excuses: testing Tokyo’s anti-procrastination cafe (Guardian, 29 April 2022)

Related posts: The Value of Getting Things Done (January 2010), The Value of Time Management (January 2010)

Wednesday, April 20, 2022

Constructing POSIWID

I've just been reading Harish Jose's latest post A Constructivist's View of POSIWID. POSIWID stands for the maxim (THE) Purpose Of (A) System Is What It Does, which was coined by Stafford Beer.

Harish points out that there are many different systems with many different purposes, and the choice depends on the observer. His version of constructivism therefore goes from the observer to the system, and from the system to its purpose. The observer is king or queen, the system is a mental construct of the observer, and the purpose depends on what the observer perceives the system to be doing. This could be called Second-Order Cybernetics.

There is a more radical version of constructivism in which the observer (or perhaps the observation process) is also constructed. This could be called Third-Order Cybernetics.

When a thinker offers a critique of conventional thinking together with an alternative framework, I often find the critique more convincing than the framework. For me, POSIWID works really well as a way of challenging the espoused purpose of an official system. So I use POSIWID in reverse: if the system isn't doing this, then this is probably not its real purpose.

Another way of using POSIWID in reverse is to start from what is observed, and try to work out what system might have that as its purpose. If this seems to be the purpose of something, what is the system whose purpose it is?

This then also leads to insights on leverage points. If we can identify a system whose purpose is to maintain a given state, what are the options for changing this state?

As I've said before, the POSIWID principle is a good heuristic for finding alternative ways of understanding what is going on, as well as for seeing why certain classes of intervention are likely to fail. However, the moment you start to think of POSIWID as providing some kind of Truth about systems, you are on a slippery slope to producing conspiracy theories and all sorts of other rubbish.



Philip Boxer and Vincent Kenny, The Economy of Discourses: A Third-Order Cybernetics (Human Systems Management, 1990)

Harish Jose, A Constructivist's View of POSIWID (17 April 2022)

Related posts: Geese (December 2005), Methodological Syncretism (December 2010)

Related blog: POSIWID: Exploring the Purpose of Things

Tuesday, January 4, 2022

On Organizations and Machines

My previous post Where does learning take place? was prompted by a Twitter discussion in which some of the participants denied that organizational learning was possible or meaningful. Some argued that any organizational behaviour or intention could be reduced to the behaviours and intentions of individual humans. Others argued that organizations and other systems were merely social constructions, and therefore didn't really exist at all.

In a comment below my previous post, Sally Bean presented an example of collective learning being greater than the sum of individual learning. Although she came away from the reported experience having learnt some things, the organization as a whole appears to have learnt some larger things that no single individual may be fully aware of.

And the Kihbernetics Institute (I don't know if this is a person or an organization) offered a general definition of learning that would include collective as well as individual learning.

I think that's fairly close to my own notion of learning. However, some of the participants in the Twitter thread appear to prefer a much narrower definition of learning, in some cases specifying that it could only happen inside an individual human brain. Such a narrow definition would exclude not only organizational learning but also learning by animals and plants, as well as AI and machine learning.

As it happens, there are differing views among botanists about how to talk about plant intelligence. Some argue that the concept of plant neurobiology is based on superficial analogies and questionable extrapolations.


But in this post, I want to look specifically at machines and organizations, because there are some common questions about how we should talk about both of them, and some common ideas about how they may be governed. Norbert Wiener, the father of cybernetics, saw strong parallels between machines and human organizations, and this is also the first of Gareth Morgan's eight Images of Organization.

Margaret Heffernan talks about the view that organisations are like machines that will run well with the right components – so you design job descriptions and golden targets and KPIs, manage it by measurement, tweak it and run it with extrinsic rewards to keep the engines running. She calls this old-fashioned management theory.

Meanwhile, Jonnie Penn notes how artificial intelligence follows Herbert Simon's notion of (corporate) decision-making. Many contemporary AI systems do not so much mimic human thinking as they do the less imaginative minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level originality, ambition or morals.

The philosopher Gilbert Simondon observed two contrasting attitudes to machines.

First, a reduction of machines to the status of simple devices or assemblages of matter that are constantly used but granted neither significance nor sense; second, and as a kind of response to the first attitude, there emerges an almost unlimited admiration for machines. Schmidgen

On the one hand, machines are merely instruments, ready-to-hand as Heidegger puts it, entirely at the disposal of their users. On the other hand, they may appear to have a life of their own. Is this not like organizations or other human systems?




Amedeo Alpi et al, Plant neurobiology: no brain, no gain? (Trends in Plant Science, Vol 12 No 4, April 2007, pp 135-136)

Eric D. Brenner et al, Response to Alpi et al.: Plant neurobiology: the gain is more than the pain (Trends in Plant Science, Vol 12 No 7, July 2007, pp 285-286)

Anthea Lipsett, Interview with Margaret Heffernan: 'The more academics compete, the fewer ideas they share' (Guardian, 29 November 2018)

Gareth Morgan, Images of Organization (3rd edition, Sage 2006)

Jonnie Penn, AI thinks like a corporation—and that’s worrying (Economist, 26 November 2018)

Henning Schmidgen, Inside the Black Box: Simondon's Politics of Technology (SubStance, 2012, Vol. 41, No. 3, Issue 129 pp 16-31)

Geoffrey Vickers, Human Systems are Different (Harper and Row, 1983)


Related post: Where does learning take place? (January 2022)

Sunday, January 2, 2022

Where does learning take place?

This blogpost started with an argument on Twitter. Harish Jose quoted the organization theorist Ralph Stacey:

Organizations do not learn. Organizations are not humans. @harish_josev

This was reinforced by someone who tweets as SystemsNinja, suggesting that organizations don't even exist. 

Organisations don’t really exist. X-Company doesn’t lie awake at night worrying about its place in X-Market. @SystemsNinja


So we seem to have two different questions here. Let's start with the second question, which is an ontological one - what kinds of entities exist. The idea that something only exists if it lies awake worrying about things seems unduly restrictive. 

How can we talk about organizations or other systems if they don't exist in the first place? SystemsNinja quotes several leading systems thinkers (Churchman, Beer, Meadows) who talk about the negotiability of system boundaries, while Harish cites Ryle's concept of category mistake. But just because we might disagree about which systems we are talking about, or how to classify them, doesn't mean they are entirely imaginary. Geopolitical boundaries are sociopolitical constructions, sometimes leading to violent conflict, but geopolitical entities still exist even if we can't agree how to name them or draw them on the map.

Exactly what kind of existence is this? One way of interpreting the assertion that systems don't exist is to imagine that there is a dualistic distinction between a real/natural world and an artificial/constructed one, and to claim that systems only exist in the second of these two worlds. Thus Harish regards it as a category mistake to treat a system as a standalone objective entity. However, I don't think such a dualism survives the critical challenges of such writers as Karen Barad, Vinciane Despret, Bruno Latour and Gilbert Simondon. See also Stanford Encyclopedia: Artifact.

Even the idea that humans (aka individuals) belong exclusively to and can be separated from the real/natural world is problematic. See for example writings by Lisa Blackman, Roberto Esposito and Donna Haraway.

And even if we accept this dualism, what difference does it make? The implication seems to be that certain kinds of activity or attribute can only belong to entities in the real/natural world and not to entities in the artificial/constructed world, including such cognitive processes as perception, memory and learning.

So what exactly is learning, and what kinds of entity can perform this? We usually suppose that animals are capable of learning, and there have been some suggestions that plants can also learn. Viruses mutate and adapt - so can this also be understood as a form of learning? And what about so-called machine learning?

Some writers see human learning as primary and these other modes of learning as derivative in some way. Either because machine learning or organizational learning can be reduced to a set of individual humans learning stuff (thus denying the possibility or meaningfulness of emergent learning at the system level). Or because non-human learning is only metaphorical, not to be taken literally.

I don't follow this line. My own concepts of learning and intelligence are entirely general. I think it makes sense for many kinds of system (organizations, families, machines, plants) to perceive, remember and learn. But if you choose to understand this in metaphorical terms, I'm not sure it really matters.

Meanwhile learning doesn't necessarily have a definitive location. @systemsninja said I was confusing biological and viral systems with social ones. But where is the dividing line between the biological and the social? If the food industry teaches our bodies (plus gut microbiome) to be addicted to sugar and junk food, where is this learning located? If our collective response to a virus allows it to mutate, where is this learning located?

In an earlier blogpost, Harish Jose quotes Ralph Stacey's argument linking existence with location.

Organizations are not things because no one can point to where an organization is.

But this seems to be exactly the kind of category mistake that Ryle was talking about. Ryle's example was that you can't point to Oxford University as a whole, only to its various components, but that doesn't mean the university doesn't exist. So I think Ryle is probably on my side of the debate.

The category mistake behind the Cartesian theory of mind, on Ryle’s view, is based in representing mental concepts such as believing, knowing, aspiring, or detesting as acts or processes (and concluding they must be covert, unobservable acts or processes), when the concepts of believing, knowing, and the like are actually dispositional. Stanford Encyclopedia



Lisa Blackman, The Body (Second edition, Routledge 2021)

Roberto Esposito, Persons and Things (Polity Press 2015)

Harish Jose, The Conundrum of Autonomy in Systems (28 June 2020), The Ghost in the System (22 August 2021)

Bruno Latour, Reassembling the Social (2005)

Gilbert Simondon, On the mode of existence of technical objects (1958, trans 2016)

Richard Veryard, Modelling Intelligence in Complex Organizations (SlideShare 2011), Building Organizational Intelligence (LeanPub 2012)

Stanford Encyclopedia of Philosophy: Artifact, Categories, Feminist Perspectives on the Body

Related posts: Does Organizational Cognition Make Sense (April 2012), The Aim of Human Society (September 2021), On Organizations and Machines (January 2022)

And see Benjamin Taylor's response to this post here: https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/