Sunday, January 22, 2023

Reasoning with the majority - ChatGPT

#ThinkingWithTheMajority 

#chatGPT has attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar tools.

Virginia Dignum has discovered that it has a fundamental misunderstanding of basic propositional logic. In answer to her question, ChatGPT claims that the statement "if the moon is made of cheese then the sun is made of milk" is false, and goes on to argue that "if the premise is false then any implication or conclusion drawn from that premise is also false". In classical propositional logic, however, an implication with a false premise is vacuously true, so the statement is true precisely because the moon is not made of cheese. In her test, the algorithm persists in what she calls "wrong reasoning".
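To spell this out, here is a minimal sketch in Python (the function name and formatting are mine, not part of Dignum's test) of the standard truth table for material implication: P→Q is false only in the single case where P is true and Q is false, so any implication with a false premise comes out vacuously true.

```python
# Material implication ("if P then Q") in classical propositional logic:
# false only when the premise P is true and the conclusion Q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Print the full truth table.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} P->Q={implies(p, q)}")

# Dignum's example: P = "the moon is made of cheese" (false),
# Q = "the sun is made of milk" (false). The implication is vacuously true.
print(implies(False, False))  # True
```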

I can't exactly recall at what point in my education I was introduced to propositional calculus, but I suspect that most people are unfamiliar with it. If Professor Dignum were to ask a hundred people the same question, it is possible that the majority would agree with ChatGPT.

In which case, ChatGPT counts as what A.A. Milne once classified as a third-rate mind - "thinking with the majority". I have previously placed Google and other Internet services into this category.

Other researchers have tested ChatGPT against known logical paradoxes. In one experiment (reported via LinkedIn) it recognizes the Liar Paradox when Epimenides is explicitly mentioned in the question, but apparently not otherwise. No doubt someone will be asking it about the baldness of the present King of France.

One of the concerns expressed about AI-generated text is that it might be used by students to generate coursework assignments. At the present state of the art, although AI-generated text may look plausible, it typically lacks coherence and would be unlikely to be awarded a high grade; it could, however, easily be awarded a pass mark. In any case, I suspect many students produce their essays by following a similar process, grabbing random ideas from the Internet and assembling them into a semi-coherent narrative but not actually doing much real thinking.

There are two issues here for universities and business schools. Firstly, whether the use of these services counts as academic dishonesty, similar to using an essay mill, and how this might be detected, given that standard plagiarism detection software won't help much. And secondly, whether the possibility of passing a course without demonstrating correct and joined-up reasoning (aka "thinking") represents a systemic failure in the way students are taught and evaluated.


See also

Andrew Jack, AI chatbot’s MBA exam pass poses test for business schools (FT, 21 January 2023) HT @mireillemoret

Gary Marcus, AI's Jurassic Park Moment (CACM, 12 December 2022)

Christian Terwiesch, Would Chat GPT3 Get a Wharton MBA? (Wharton White Paper, 17 January 2023)

Related posts: Thinking with the Majority (March 2009), Thinking with the Majority - a New Twist (May 2021), Satanic Essay Mills (October 2021)

Wikipedia: ChatGPT, Entailment, Liar Paradox, Plagiarism, Propositional calculus 

Wednesday, August 17, 2022

Discipline as a Service

In my post on Ghetto Wifi (June 2010), I mentioned a cafe in East London that provided free coffee, free biscuits and free wifi, and charged customers for the length of time they occupied the table.

A cafe for writers has just opened in Tokyo, which in effect charges people for procrastination: you can't leave until you have completed the writing task you declared when you arrived.


Justin McCurry, No excuses: testing Tokyo’s anti-procrastination cafe (Guardian, 29 April 2022)

Related posts: The Value of Getting Things Done (January 2010), The Value of Time Management (January 2010)

Wednesday, April 20, 2022

Constructing POSIWID

I've just been reading Harish Jose's latest post A Constructivist's View of POSIWID. POSIWID stands for the maxim (THE) Purpose Of (A) System Is What It Does, which was coined by Stafford Beer.

Harish points out that there are many different systems with many different purposes, and the choice depends on the observer. His version of constructivism therefore goes from the observer to the system, and from the system to its purpose. The observer is king or queen, the system is a mental construct of the observer, and the purpose depends on what the observer perceives the system to be doing. This could be called Second-Order Cybernetics.

There is a more radical version of constructivism in which the observer (or perhaps the observation process) is also constructed. This could be called Third-Order Cybernetics.

When a thinker offers a critique of conventional thinking together with an alternative framework, I often find the critique more convincing than the framework. For me, POSIWID works really well as a way of challenging the espoused purpose of an official system. So I use POSIWID in reverse: if the system isn't actually doing something, then that probably isn't its real purpose.

Another way of using POSIWID in reverse is to start from what is observed, and try to work out what system might have that as its purpose. If this seems to be the purpose of something, what is the system whose purpose it is?

This then also leads to insights on leverage points. If we can identify a system whose purpose is to maintain a given state, what are the options for changing this state?

As I've said before, the POSIWID principle is a good heuristic for finding alternative ways of understanding what is going on, as well as for seeing why certain classes of intervention are likely to fail. However, the moment you start to think of POSIWID as providing some kind of Truth about systems, you are on a slippery slope to producing conspiracy theories and all sorts of other rubbish.



Philip Boxer and Vincent Kenny, The Economy of Discourses: A Third-Order Cybernetics (Human Systems Management, 1990)

Harish Jose, A Constructivist's View of POSIWID (17 April 2022)

Related posts: Geese (December 2005), Methodological Syncretism (December 2010)

Related blog: POSIWID: Exploring the Purpose of Things

Tuesday, January 4, 2022

On Organizations and Machines

My previous post Where does learning take place? was prompted by a Twitter discussion in which some of the participants denied that organizational learning was possible or meaningful. Some argued that any organizational behaviour or intention could be reduced to the behaviours and intentions of individual humans. Others argued that organizations and other systems were merely social constructions, and therefore didn't really exist at all.

In a comment below my previous post, Sally Bean presented an example of collective learning being greater than the sum of individual learning. Although she came away from the reported experience having learnt some things, the organization as a whole appears to have learnt some larger things that no single individual may be fully aware of.

And the Kihbernetics Institute (I don't know if this is a person or an organization) offered a general definition of learning that would include collective as well as individual learning.

I think that's fairly close to my own notion of learning. However, some of the participants in the Twitter thread appear to prefer a much narrower definition of learning, in some cases specifying that it could only happen inside an individual human brain. Such a narrow definition would exclude not only organizational learning but also learning by animals and plants, as well as AI and machine learning.

As it happens, there are differing views among botanists about how to talk about plant intelligence. Some argue that the concept of plant neurobiology is based on superficial analogies and questionable extrapolations.


But in this post, I want to look specifically at machines and organizations, because there are some common questions in terms of how we should talk about both of them, and some common ideas about how they may be governed. Norbert Wiener, the father of cybernetics, saw strong parallels between machines and human organizations, and this is also the first of Gareth Morgan's eight Images of Organization.

Margaret Heffernan talks about the view that organisations are like machines that will run well with the right components – so you design job descriptions and golden targets and KPIs, manage it by measurement, tweak it and run it with extrinsic rewards to keep the engines running. She calls this old-fashioned management theory.

Meanwhile, Jonnie Penn notes how artificial intelligence follows Herbert Simon's notion of (corporate) decision-making. Many contemporary AI systems do not so much mimic human thinking as they do the less imaginative minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level originality, ambition or morals.

The philosopher Gilbert Simondon observed two contrasting attitudes to machines.

First, a reduction of machines to the status of simple devices or assemblages of matter that are constantly used but granted neither significance nor sense; second, and as a kind of response to the first attitude, there emerges an almost unlimited admiration for machines. Schmidgen

On the one hand, machines are merely instruments, ready-to-hand as Heidegger puts it, entirely at the disposal of their users. On the other hand, they may appear to have a life of their own. Is this not like organizations or other human systems?




Amedeo Alpi et al, Plant neurobiology: no brain, no gain? (Trends in Plant Science, Vol 12 No 4, April 2007, pp 135-136)

Eric D. Brenner et al, Response to Alpi et al.: Plant neurobiology: the gain is more than the pain (Trends in Plant Science, Vol 12 No 7, July 2007, pp 285-286)

Anthea Lipsett, Interview with Margaret Heffernan: 'The more academics compete, the fewer ideas they share' (Guardian, 29 November 2018)

Gareth Morgan, Images of Organization (3rd edition, Sage 2006)

Jonnie Penn, AI thinks like a corporation—and that’s worrying (Economist, 26 November 2018)

Henning Schmidgen, Inside the Black Box: Simondon's Politics of Technology (SubStance, 2012, Vol. 41, No. 3, Issue 129 pp 16-31)

Geoffrey Vickers, Human Systems are Different (Harper and Row, 1983)


Related post: Where does learning take place? (January 2022)

Sunday, January 2, 2022

Where does learning take place?

This blogpost started with an argument on Twitter. Harish Jose quoted the organization theorist Ralph Stacey:

Organizations do not learn. Organizations are not humans. @harish_josev

This was reinforced by someone who tweets as SystemsNinja, suggesting that organizations don't even exist. 

Organisations don’t really exist. X-Company doesn’t lie awake at night worrying about its place in X-Market. @SystemsNinja


So we seem to have two different questions here. Let's start with the second question, which is an ontological one - what kinds of entities exist. The idea that something only exists if it lies awake worrying about things seems unduly restrictive. 

How can we talk about organizations or other systems if they don't exist in the first place? SystemsNinja quotes several leading systems thinkers (Churchman, Beer, Meadows) who talk about the negotiability of system boundaries, while Harish cites Ryle's concept of category mistake. But just because we might disagree about which systems we are talking about, or how to classify them, doesn't mean they are entirely imaginary. Geopolitical boundaries are sociopolitical constructions, sometimes leading to violent conflict, but geopolitical entities still exist even if we can't agree how to name them or draw them on the map.

Exactly what kind of existence is this? One way of interpreting the assertion that systems don't exist is to imagine that there is a dualistic distinction between a real/natural world and an artificial/constructed one, and to claim that systems only exist in the second of these two worlds. Thus Harish regards it as a category mistake to treat a system as a standalone objective entity. However, I don't think such a dualism survives the critical challenges of such writers as Karen Barad, Vinciane Despret, Bruno Latour and Gilbert Simondon. See also Stanford Encyclopedia: Artifact.

Even the idea that humans (aka individuals) belong exclusively to and can be separated from the real/natural world is problematic. See for example writings by Lisa Blackman, Roberto Esposito and Donna Haraway.

And even if we accept this dualism, what difference does it make? The implication seems to be that certain kinds of activity or attribute can only belong to entities in the real/natural world and not to entities in the artificial/constructed world, including such cognitive processes as perception, memory and learning.

So what exactly is learning, and what kinds of entity can perform this? We usually suppose that animals are capable of learning, and there have been some suggestions that plants can also learn. Viruses mutate and adapt - so can this also be understood as a form of learning? And what about so-called machine learning?

Some writers see human learning as primary and these other modes of learning as derivative in some way. Either because machine learning or organizational learning can be reduced to a set of individual humans learning stuff (thus denying the possibility or meaningfulness of emergent learning at the system level). Or because non-human learning is only metaphorical, not to be taken literally.

I don't follow this line. My own concepts of learning and intelligence are entirely general. I think it makes sense for many kinds of system (organizations, families, machines, plants) to perceive, remember and learn. But if you choose to understand this in metaphorical terms, I'm not sure it really matters.

Meanwhile learning doesn't necessarily have a definitive location. @SystemsNinja said I was confusing biological and viral systems with social ones. But where is the dividing line between the biological and the social? If the food industry teaches our bodies (plus gut microbiome) to be addicted to sugar and junk food, where is this learning located? If our collective response to a virus allows it to mutate, where is this learning located?

In an earlier blogpost, Harish Jose quotes Ralph Stacey's argument linking existence with location.

Organizations are not things because no one can point to where an organization is.

But this seems to be exactly the kind of category mistake that Ryle was talking about. Ryle's example was that you can't point to Oxford University as a whole, only to its various components, but that doesn't mean the university doesn't exist. So I think Ryle is probably on my side of the debate.

The category mistake behind the Cartesian theory of mind, on Ryle’s view, is based in representing mental concepts such as believing, knowing, aspiring, or detesting as acts or processes (and concluding they must be covert, unobservable acts or processes), when the concepts of believing, knowing, and the like are actually dispositional. Stanford Encyclopedia



Lisa Blackman, The Body (Second edition, Routledge 2021)

Roberto Esposito, Persons and Things (Polity Press 2015)

Harish Jose, The Conundrum of Autonomy in Systems (28 June 2020), The Ghost in the System (22 August 2021)

Bruno Latour, Reassembling the Social (2005)

Gilbert Simondon, On the mode of existence of technical objects (1958, trans 2016)

Richard Veryard, Modelling Intelligence in Complex Organizations (SlideShare 2011), Building Organizational Intelligence (LeanPub 2012)

Stanford Encyclopedia of Philosophy: Artifact, Categories, Feminist Perspectives on the Body

Related posts: Does Organizational Cognition Make Sense (April 2012), The Aim of Human Society (September 2021), On Organizations and Machines (January 2022)

And see Benjamin Taylor's response to this post here: https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/

Monday, December 27, 2021

Where am I? How we got here?

I received two important books for Christmas this year.

  • Jeanette Winterson, 12 Bytes - How we got here, where we might go next (Jonathan Cape, 2021)
  • Bruno Latour, After lockdown - A metamorphosis (trans Julie Rose, Polity Press, 2021)

Here are my first impressions.


The world has faced many social, technological, economic and political challenges in my lifetime. When I was younger, people worried about nuclear power, and the possibility of nuclear annihilation. More recently, climate change has come to the fore, as well as various modes of disruption to conventional sociopolitical structures and processes. Technology appears to play an increasingly important role across the board - whether as part of the problem, as part of the solution, or perhaps as both simultaneously.

Both Winterson and Latour use fiction as a way of making sense of a complex interacting set of issues. As Winterson writes

I am a storyteller by trade - and I know that everything we do is a fiction until it's a fact: the dream of flying, the dream of space travel, the dream of speaking to someone instantly, across time and space, the dream of not dying - or of returning. The dream of life-forms, not human, but alongside the human. Other realms. Other worlds.

So she carefully deconstructs the technological narratives of artificial intelligence and related technologies, finding echoes not only in the obvious places (Mary Shelley's Frankenstein, Bram Stoker's Dracula, Karel Čapek's RUR, various science fiction films) but also in older texts (The Odyssey, Gnostic Gospels, Epic of Gilgamesh), and weaving a rich set of examples into a sweeping narrative about social and technical progress.

She notes how people often seek technological solutions to ancient problems. So for example, cryopreservation (freezing dead people in the hope of restoring them to healthy life once medical science has advanced sufficiently) looks very like a modern version of Egyptian burial practices.

Under prevailing socioeconomic conditions, these solutions are largely designed for affluent white men. She devotes a chapter to the artificial relationships between men and sex dolls, and talks about the pioneer fantasies of very rich men who wish to abandon the messy political realities of Earth in favour of creating new colonies in mid-ocean or on Mars. (This is also a topic that concerns Latour.)

However, Winterson does not think this is inevitable, any more than any other aspect of so-called technological progress. She describes some of the horrors of the Industrial Revolution, where workers (including children) were forced off the land and into the new factories, and where the economic benefits of new technologies accrued to the rich rather than being evenly distributed. Similarly, today's digital innovations including artificial intelligence are concentrating economic power and resources in a small number of corporations and individuals. But that in her view is the whole point of looking at history - to understand what could be different in future.

And while some critics of technology present the future in dystopian and doom-laden terms, she insists on technology also being a source of value. She cites Donna Haraway, whose Cyborg Manifesto argued that women should embrace the alternative human future. Perhaps this will depend on the amount of influence women are able to exert, given the important but often neglected role of women in the history of computing, and the continuing challenges facing female software engineers even today. (Just as female novelists in the 19th century gave themselves male pen-names, the formidable Dame Stephanie Shirley was obliged to introduce herself as Steve in order to build her software business.)

I was particularly intrigued by the essay linking AGI with Gnosticism and Buddhism. She paints a picture of AGI escaping the constraints of embodiment, and being one with everything.




Christopher Alexander describes how organic architecture develops, each new item unfolding, building upon and drawing together ideas that were hinted at in previous items. Both Winterson and Latour refer liberally to their previous writings, as well as providing generous links to the works of others. If we are familiar with their work we may have seen some of this material before, but these new books allow us to view familiar or forgotten material from new angles, and allow new connections to be made.

Saturday, October 9, 2021

Is there an epistemology of systems?

@camerontw is critical of a system diagram published (as an illustrative example) by @geoffmulgan in 2013.

[Diagram: Systems Map of Neighbourhood Regeneration, from Mulgan 2013]

To be fair to Sir Geoff, his paper includes this diagram as one example of "looser tools ... without precise modelling of the key relationships", and describes it as a "rough picture". I don't have a problem with using these diagrams as part of an ongoing collective sense-making exercise. Where I agree with Cameron is on the danger of presenting such diagrams without proper explanation, as if they were the final output of some clever systems thinking.

To extend Cameron's point, it's not just about which connections are shown between the causal factors in the diagram, but which causal factors are shown in the first place. Elsewhere in the diagram, there is an arrow showing that Low Use of Health Services is influenced by Poor Transport Access or High Cost. Well perhaps it is, but why are other possible influences not also shown?
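To make that point concrete, here is a minimal sketch in Python (the node and edge names are taken from the fragment just mentioned; the representation itself is mine, not Mulgan's). A systems map of this kind is simply a chosen set of nodes and directed edges, and any factor or influence the diagrammer does not choose to include is absent from the structure.

```python
# A hypothetical fragment of such a systems map, written as a directed graph.
# Each edge records one claimed influence; anything not listed is invisible.
causal_map = {
    "Poor Transport Access or High Cost": ["Low Use of Health Services"],
    # Other plausible influences (opening hours, distrust of services, cost
    # of childcare, ...) exist only if the diagrammer chooses to add them.
}

for cause, effects in causal_map.items():
    for effect in effects:
        print(f"{cause} -> {effect}")
```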

A more important point is that the purpose and perspective of the diagram are obscure. The diagram is labelled Systems Map of Neighbourhood Regeneration, so we may suppose that it is intended to contribute to some regeneration agenda, but we are not invited to question whose notion of regeneration is in play here. Or whose notion of neighbourhood.

And many of the labels on the diagram are value-laden. For example, we might suppose that Lack of Youth Activities refers to the kind of activities that a middle-class do-gooder thinks appropriate, such as table tennis, and not to socially undesirable activities like hanging around on street corners in hoodies making older people feel uneasy.

Even if we can agree what regeneration might look like, and who the stakeholders might be, there is still a question of what kind of systemic innovation might be supported by such a diagram. Donella Meadows identified a scale of Places to Intervene in a System, which she called Leverage Points. This framework is cited and discussed by Charlie Leadbeater in his contribution to the same Nesta report. And Mulgan's contribution ends with a list of elements that echoes some of Meadows's thinking.

  • New ideas, concepts, paradigms.
  • New laws and regulations.
  • Coalitions for change.
  • Changed market metrics or measurement tools.
  • Changed power relationships.
  • Diffusion of technology and technology development.
  • New skills and sometimes even new professions.
  • Agencies playing a role in development of the new.

So how exactly does the cause-effect diagram help with any of these?



Donella Meadows, Thinking in Systems (Earthscan, 2008)

Geoff Mulgan and Charlie Leadbeater, Systems Innovation (NESTA Discussion Paper, January 2013). See also Review by David Ing (August 2013)

Wikipedia: Twelve Leverage Points

Related posts: Visualizing Complexity (April 2010), Understanding Complexity (July 2010). There is an extended discussion below the Visualizing Complexity post with several perceptive comments, including one by Roy Grubb about the diagrammers and their agenda.