Wednesday, August 17, 2022

Discipline as a Service

In my post on Ghetto Wifi (June 2010), I mentioned a cafe in East London that provided free coffee, free biscuits and free wifi, and charged customers for the length of time they occupied the table.

A cafe for writers has just opened in Tokyo, and it effectively charges people for procrastination: you can't leave until you have completed the writing task you declared when you arrived.


Justin McCurry, No excuses: testing Tokyo’s anti-procrastination cafe (Guardian, 29 April 2022)

Related posts: The Value of Getting Things Done (January 2010), The Value of Time Management (January 2010)

Wednesday, April 20, 2022

Constructing POSIWID

I've just been reading Harish Jose's latest post A Constructivist's View of POSIWID. POSIWID stands for the maxim (THE) Purpose Of (A) System Is What It Does, which was coined by Stafford Beer.

Harish points out that there are many different systems with many different purposes, and the choice depends on the observer. His version of constructivism therefore goes from the observer to the system, and from the system to its purpose. The observer is king or queen, the system is a mental construct of the observer, and the purpose depends on what the observer perceives the system to be doing. This could be called Second-Order Cybernetics.

There is a more radical version of constructivism in which the observer (or perhaps the observation process) is also constructed. This could be called Third-Order Cybernetics.

When a thinker offers a critique of conventional thinking together with an alternative framework, I often find the critique more convincing than the framework. For me, POSIWID works really well as a way of challenging the espoused purpose of an official system. So I use POSIWID in reverse: If the system isn't doing this, then it's probably not its real purpose.

Another way of using POSIWID in reverse is to start from what is observed, and try to work out what system might have that as its purpose. If this seems to be the purpose of something, what is the system whose purpose it is?

This then also leads to insights on leverage points. If we can identify a system whose purpose is to maintain a given state, what are the options for changing this state?
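
To make the logic of these reverse readings explicit, here is a rough schematic formalization of POSIWID and its two reverse uses. The notation is mine, not Beer's, and the implications should be read as defeasible - probably, rather than certainly.

```latex
% POSIWID: read the purpose of a system S off what S is observed to do
\mathrm{Purpose}(S) = \mathrm{Does}(S)

% First reverse use (contrapositive): if S is not doing X,
% then X is probably not the real purpose of S
\neg\,\mathrm{Does}(S,X) \;\Rightarrow\; \neg\,\mathrm{Purpose}(S,X)

% Second reverse use (abduction): given an observed state X,
% search for a system S whose purpose X would be
\mathrm{Observe}(X) \;\leadsto\; \text{find } S \text{ such that } \mathrm{Purpose}(S) = X
```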

As I've said before, the POSIWID principle is a good heuristic for finding alternative ways of understanding what is going on, as well as for seeing why certain classes of intervention are likely to fail. However, the moment you start to think of POSIWID as providing some kind of Truth about systems, you are on a slippery slope to producing conspiracy theories and all sorts of other rubbish.



Philip Boxer and Vincent Kenny, The Economy of Discourses: A Third-Order Cybernetics (Human Systems Management, 1990)

Harish Jose, A Constructivist's View of POSIWID (17 April 2022)

Related posts: Geese (December 2005), Methodological Syncretism (December 2010)

Related blog: POSIWID: Exploring the Purpose of Things

Tuesday, January 4, 2022

On Organizations and Machines

My previous post Where does learning take place? was prompted by a Twitter discussion in which some of the participants denied that organizational learning was possible or meaningful. Some argued that any organizational behaviour or intention could be reduced to the behaviours and intentions of individual humans. Others argued that organizations and other systems were merely social constructions, and therefore didn't really exist at all.

In a comment below my previous post, Sally Bean presented an example of collective learning being greater than the sum of individual learning. Although she came away from the reported experience having learnt some things, the organization as a whole appears to have learnt some larger things that no single individual may be fully aware of.

And the Kihbernetics Institute (I don't know if this is a person or an organization) offered a general definition of learning that would include collective as well as individual learning.

I think that's fairly close to my own notion of learning. However, some of the participants in the Twitter thread appear to prefer a much narrower definition of learning, in some cases specifying that it could only happen inside an individual human brain. Such a narrow definition would exclude not only organizational learning but also learning by animals and plants, as well as AI and machine learning.

As it happens, there are differing views among botanists about how to talk about plant intelligence. Some argue that the concept of plant neurobiology is based on superficial analogies and questionable extrapolations.


But in this post, I want to look specifically at machines and organizations, because there are some common questions in terms of how we should talk about both of them, and some common ideas about how they may be governed. Norbert Wiener, the father of cybernetics, saw strong parallels between machines and human organizations, and this is also the first of Gareth Morgan's eight Images of Organization.

Margaret Heffernan talks about the view that organisations are like machines that will run well with the right components – so you design job descriptions and golden targets and KPIs, manage it by measurement, tweak it and run it with extrinsic rewards to keep the engines running. She calls this old-fashioned management theory.

Meanwhile, Jonnie Penn notes how artificial intelligence follows Herbert Simon's notion of (corporate) decision-making. Many contemporary AI systems do not so much mimic human thinking as they do the less imaginative minds of bureaucratic institutions; our machine-learning techniques are often programmed to achieve superhuman scale, speed and accuracy at the expense of human-level originality, ambition or morals.

The philosopher Gilbert Simondon observed two contrasting attitudes to machines.

First, a reduction of machines to the status of simple devices or assemblages of matter that are constantly used but granted neither significance nor sense; second, and as a kind of response to the first attitude, there emerges an almost unlimited admiration for machines. Schmidgen

On the one hand, machines are merely instruments, ready-to-hand as Heidegger puts it, entirely at the disposal of their users. On the other hand, they may appear to have a life of their own. Is this not like organizations or other human systems?




Amedeo Alpi et al, Plant neurobiology: no brain, no gain? (Trends in Plant Science, Vol 12 No 4, April 2007, pp 135-136)

Eric D. Brenner et al, Response to Alpi et al.: Plant neurobiology: the gain is more than the pain (Trends in Plant Science, Vol 12 No 7, July 2007, pp 285-286)

Anthea Lipsett, Interview with Margaret Heffernan: 'The more academics compete, the fewer ideas they share' (Guardian, 29 November 2018)

Gareth Morgan, Images of Organization (3rd edition, Sage 2006)

Jonnie Penn, AI thinks like a corporation—and that’s worrying (Economist, 26 November 2018)

Henning Schmidgen, Inside the Black Box: Simondon's Politics of Technology (SubStance, Vol 41 No 3, 2012, pp 16-31)

Geoffrey Vickers, Human Systems are Different (Harper and Row, 1983)


Related post: Where does learning take place? (January 2022)

Sunday, January 2, 2022

Where does learning take place?

This blogpost started with an argument on Twitter. Harish Jose quoted the organization theorist Ralph Stacey:

Organizations do not learn. Organizations are not humans. @harish_josev

This was reinforced by someone who tweets as SystemsNinja, suggesting that organizations don't even exist. 

Organisations don’t really exist. X-Company doesn’t lie awake at night worrying about its place in X-Market. @SystemsNinja


So we seem to have two different questions here - whether organizations can learn, and whether organizations exist at all. Let's start with the second question, which is an ontological one - what kinds of entities exist. The idea that something only exists if it lies awake worrying about things seems unduly restrictive.

How can we talk about organizations or other systems if they don't exist in the first place? SystemsNinja quotes several leading systems thinkers (Churchman, Beer, Meadows) who talk about the negotiability of system boundaries, while Harish cites Ryle's concept of category mistake. But just because we might disagree about what system we are talking about or how to classify them doesn't mean they are entirely imaginary. Geopolitical boundaries are sociopolitical constructions, sometimes leading to violent conflict, but geopolitical entities still exist even if we can't agree how to name them or draw them on the map.

Exactly what kind of existence is this? One way of interpreting the assertion that systems don't exist is to imagine that there is a dualistic distinction between a real/natural world and an artificial/constructed one, and to claim that systems only exist in the second of these two worlds. Thus Harish regards it as a category mistake to treat a system as a standalone objective entity. However, I don't think such a dualism survives the critical challenges of such writers as Karen Barad, Vinciane Despret, Bruno Latour and Gilbert Simondon. See also Stanford Encyclopedia: Artifact.

Even the idea that humans (aka individuals) belong exclusively to and can be separated from the real/natural world is problematic. See for example writings by Lisa Blackman, Roberto Esposito and Donna Haraway.

And even if we accept this dualism, what difference does it make? The implication seems to be that certain kinds of activity or attribute can only belong to entities in the real/natural world and not to entities in the artificial/constructed world - including such cognitive processes as perception, memory and learning.

So what exactly is learning, and what kinds of entity can perform this? We usually suppose that animals are capable of learning, and there have been some suggestions that plants can also learn. Viruses mutate and adapt - so can this also be understood as a form of learning? And what about so-called machine learning?

Some writers see human learning as primary and these other modes of learning as derivative in some way. Either because machine learning or organization learning can be reduced to a set of individual humans learning stuff (thus denying the possibility or meaningfulness of emergent learning at the system level). Or because non-human learning is only metaphorical, not to be taken literally.

I don't follow this line. My own concepts of learning and intelligence are entirely general. I think it makes sense for many kinds of system (organizations, families, machines, plants) to perceive, remember and learn. But if you choose to understand this in metaphorical terms, I'm not sure it really matters.

Meanwhile, learning doesn't necessarily have a definitive location. @SystemsNinja said I was confusing biological and viral systems with social ones. But where is the dividing line between the biological and the social? If the food industry teaches our bodies (plus gut microbiome) to be addicted to sugar and junk food, where is this learning located? If our collective response to a virus allows it to mutate, where is this learning located?

In an earlier blogpost, Harish Jose quotes Ralph Stacey's argument linking existence with location.

Organizations are not things because no one can point to where an organization is.

But this seems to be exactly the kind of category mistake that Ryle was talking about. Ryle's example was that you can't point to Oxford University as a whole, only to its various components, but that doesn't mean the university doesn't exist. So I think Ryle is probably on my side of the debate.

The category mistake behind the Cartesian theory of mind, on Ryle’s view, is based in representing mental concepts such as believing, knowing, aspiring, or detesting as acts or processes (and concluding they must be covert, unobservable acts or processes), when the concepts of believing, knowing, and the like are actually dispositional. Stanford Encyclopedia



Lisa Blackman, The Body (Second edition, Routledge 2021)

Roberto Esposito, Persons and Things (Polity Press 2015)

Harish Jose, The Conundrum of Autonomy in Systems (28 June 2020), The Ghost in the System (22 August 2021)

Bruno Latour, Reassembling the Social (2005)

Gilbert Simondon, On the mode of existence of technical objects (1958, trans 2016)

Richard Veryard, Modelling Intelligence in Complex Organizations (SlideShare 2011), Building Organizational Intelligence (LeanPub 2012)

Stanford Encyclopedia of Philosophy: Artifact, Categories, Feminist Perspectives on the Body

Related posts: Does Organizational Cognition Make Sense (April 2012), The Aim of Human Society (September 2021), On Organizations and Machines (January 2022)

And see Benjamin Taylor's response to this post here: https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/

Monday, December 27, 2021

Where am I? How we got here

I received two important books for Christmas this year.

  • Jeanette Winterson, 12 Bytes - How we got here, where we might go next (Jonathan Cape, 2021)
  • Bruno Latour, After lockdown - A metamorphosis (trans Julie Rose, Polity Press, 2021)

Here are my first impressions.


The world has faced many social, technological, economic and political challenges in my lifetime. When I was younger, people worried about nuclear power, and the possibility of nuclear annihilation. More recently, climate change has come to the fore, as well as various modes of disruption to conventional sociopolitical structures and processes. Technology appears to play an increasingly important role across the board - whether as part of the problem, as part of the solution, or perhaps as both simultaneously.

Both Winterson and Latour use fiction as a way of making sense of a complex interacting set of issues. As Winterson writes

I am a storyteller by trade - and I know that everything we do is a fiction until it's a fact: the dream of flying, the dream of space travel, the dream of speaking to someone instantly, across time and space, the dream of not dying - or of returning. The dream of life-forms, not human, but alongside the human. Other realms. Other worlds.

So she carefully deconstructs the technological narratives of artificial intelligence and related technologies, finding echoes not only in the obvious places (Mary Shelley's Frankenstein, Bram Stoker's Dracula, Karel Čapek's RUR, various science fiction films) but also in older texts (The Odyssey, Gnostic Gospels, Epic of Gilgamesh), and weaving a rich set of examples into a sweeping narrative about social and technical progress.

She notes how people often seek technological solutions to ancient problems. So for example, cryopreservation (freezing dead people in the hope of restoring them to healthy life once medical science has advanced sufficiently) looks very like a modern version of Egyptian burial practices.

Under prevailing socioeconomic conditions, these solutions are largely designed for affluent white men. She devotes a chapter to the artificial relationships between men and sex dolls, and talks about the pioneer fantasies of very rich men who would abandon the messy political realities of Earth in favour of creating new colonies in mid-ocean or on Mars. (This is also a topic that concerns Latour.)

However, Winterson does not think this is inevitable, any more than any other aspect of so-called technological progress. She describes some of the horrors of the Industrial Revolution, where workers (including children) were forced off the land and into the new factories, and where the economic benefits of new technologies accrued to the rich rather than being evenly distributed. Similarly, today's digital innovations including artificial intelligence are concentrating economic power and resources in a small number of corporations and individuals. But that in her view is the whole point of looking at history - to understand what could be different in future.

And while some critics of technology present the future in dystopian and doom-laden terms, she insists on technology also being a source of value. She cites Donna Haraway, whose Cyborg Manifesto argued that women should embrace the alternative human future. Perhaps this will depend on the amount of influence women are able to exert, given the important but often neglected role of women in the history of computing, and the continuing challenges facing female software engineers even today. (Just as female novelists in the 19th century gave themselves male pen-names, the formidable Dame Stephanie Shirley was obliged to introduce herself as Steve in order to build her software business.)

I was particularly intrigued by the essay linking AGI with Gnosticism and Buddhism. She paints a picture of AGI escaping the constraints of embodiment, and being one with everything.




Christopher Alexander describes how organic architecture develops, each new item unfolding, building upon and drawing together ideas that were hinted at in previous items. Both Winterson and Latour refer liberally to their previous writings, as well as providing generous links to the works of others. If we are familiar with their work we may have seen some of this material before, but these new books allow us to view familiar or forgotten material from new angles, and allow new connections to be made.

Saturday, October 9, 2021

Is there an epistemology of systems?

@camerontw is critical of a system diagram published (as an illustrative example) by @geoffmulgan in 2013.

[Diagram: Systems Map of Neighbourhood Regeneration]

To be fair to Sir Geoff, his paper includes this diagram as one example of "looser tools ... without precise modelling of the key relationships", and describes it as a "rough picture". I don't have a problem with using these diagrams as part of an ongoing collective sense-making exercise. Where I agree with Cameron is the danger of presenting such diagrams without proper explanation, as if they were the final output of some clever systems thinking.

To extend Cameron's point, it's not just about which connections are shown between the causal factors in the diagram, but which causal factors are shown in the first place. Elsewhere in the diagram, there is an arrow showing that Low Use of Health Services is influenced by Poor Transport Access or High Cost. Well perhaps it is, but why are other possible influences not also shown?

A more important point is that the purpose and perspective of the diagram is obscure. The diagram is labelled Systems Map of Neighbourhood Regeneration, so we may suppose that it is intended to contribute to some regeneration agenda, but we are not invited to question whose notion of regeneration is in play here. Or whose notion of neighbourhood.

And many of the labels on the diagram are value-laden. For example, we might suppose that Lack of Youth Activities refers to the kind of activities that a middle-class do-gooder thinks appropriate, such as table tennis, and not to socially undesirable activities like hanging around on street corners in hoodies making older people feel uneasy.

Even if we can agree what regeneration might look like, and who the stakeholders might be, there is still a question of what kind of systemic innovation might be supported by such a diagram. Donella Meadows identified a scale of Places to Intervene in a System, which she called Leverage Points. This framework is cited and discussed by Charlie Leadbeater in his contribution to the same Nesta report. And Mulgan's contribution ends with a list of elements that echoes some of Meadows's thinking.

  • New ideas, concepts, paradigms.
  • New laws and regulations.
  • Coalitions for change.
  • Changed market metrics or measurement tools.
  • Changed power relationships.
  • Diffusion of technology and technology development.
  • New skills and sometimes even new professions.
  • Agencies playing a role in development of the new.

So how exactly does the cause-effect diagram help with any of these?



Donella Meadows, Thinking in Systems (Earthscan, 2008)

Geoff Mulgan and Charlie Leadbeater, Systems Innovation (NESTA Discussion Paper, January 2013). See also Review by David Ing (August 2013)

Wikipedia: Twelve Leverage Points

Related posts: Visualizing Complexity (April 2010), Understanding Complexity (July 2010). There is an extended discussion below the Visualizing Complexity post with several perceptive comments, including one by Roy Grubb about the diagrammers and their agenda.

Thursday, May 13, 2021

Thinking with the majority - a new twist

I wrote somewhere once that thinking with the majority is an excellent description of Google. Because one of the ways something rises to the top of your search results is that lots of other people have already looked at it, liked or linked to it.
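
For readers who want to see this mechanism, here is a minimal, illustrative sketch of link-based ranking in the spirit of PageRank. This is emphatically not Google's actual algorithm, which blends many other signals, but it shows the core idea: a page rises in the results simply because lots of other pages point at it.

```python
# Illustrative sketch of popularity-based ranking (PageRank-style).
# Not Google's actual algorithm; just the core "thinking with the
# majority" mechanism: rank flows along links from page to page.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # a page passes its rank on to the pages it links to
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # a page with no outgoing links shares its rank with everyone
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank

# Toy example: "popular" wins simply because more pages link to it
links = {
    "popular": [],
    "a": ["popular"],
    "b": ["popular"],
    "c": ["popular", "niche"],
    "niche": ["a"],
}
for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```

The damping factor models a reader who occasionally jumps to a random page rather than following links; the point to notice is that nothing in the calculation asks whether a page is true, only whether it is pointed at.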

The phrase thinking with the majority comes from a remark by A.A. Milne, the author of Winnie the Pooh.

I wrote somewhere once that the third-rate mind was only happy when it was thinking with the majority, the second-rate mind was only happy when it was thinking with the minority, and the first-rate mind was only happy when it was thinking.

When I wrote about this topic previously, I thought that experienced users of Google and other search engines ought to be aware of how search rankings operated and some of the ways they could be gamed, and to be suitably critical of the fiction functioning as truth yielded by an internet search. And I never imagined that intelligent people would be satisfied with just thinking with the majority. (Although I now suspect that Milne may have been having a dig at his friend G.K. Chesterton.)

The sociologist Francesca Tripodi has been studying how people carry out research on the Internet, especially on politically charged topics. She observes how many people (even those we might expect to know better) are happy to regard search engines as a valid research tool, regarding the most popular webpages as having been verified by the wisdom of crowds. In her 2018 report for Data and Society, Tripodi quotes a journalist (!) explicitly articulating this belief.

I literally type it in Google, and read the first three to five articles that pop up, because those are the ones that are obviously the most clicked and the most read, if they’re at the top of the list, or the most popular news outlets. So, I want to get a good sense of what other people are reading. So, that’s pretty much my go-to.

In other words, thinking with the majority.

However, Professor Tripodi introduces a further twist. She demonstrates that politically slanted search terms produce politically slanted results, and if you go onto your favourite search engine with a politically motivated phrase, you are likely to see results that validate that phrase. She also notes that this phenomenon is not unique to Google, but is shared by all internet search engines including DuckDuckGo.

And this creates opportunities for politically motivated actors to plant phrases (perhaps into so-called data voids) to serve as attractors for those individuals who fondly imagine they are carrying out their own independent research. Tripodi observes a common idea that one should research a topic oneself rather than relying on experts, which she compares with the Protestant ethic of bible study and scriptural inference. And this idea seems particularly popular with those who identify themselves as thinking with the minority (sometimes called red pill thinking).

Zeus' inscrutable decree
Permits the will-to-disagree
To be pandemic.



Tripodi explains her findings in a number of online videos, and has also presented evidence to the US Senate Judiciary Committee.

See also  

G.K. Chesterton, Heretics (1905), Orthodoxy (1908)

Joan Donovan, The True Costs of Misinformation - Producing Moral and Technical Order in a Time of Pandemonium (Berkman Klein Center for Internet and Society, January 2020)

Michael Golebiewski and danah boyd, Data Voids: Where Missing Data Can Easily Be Exploited (Data and Society, Updated version October 2019)

Francesca Tripodi, Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices (Data and Society, May 2018)

Wikipedia: Red pill and blue pill, Wisdom of the crowd 


Related posts: You don't have to be smart to search here ... (November 2008), Thinking with the Majority (March 2009)