NOW AVAILABLE: The draft of my book on Organizational Intelligence is now available on LeanPub (http://leanpub.com/orgintelligence). Please support this development by subscribing and commenting. Thanks.

Thursday, November 6, 2014

Corporate Grind

#QTWTAIN @lucykellaway asks: are workers who stay for years with the same companies unambitious and mediocre, or does the corporate grind make them so?

Her article addresses the perception that people seem to get dimmer the higher they go in the organization. If this perception is correct, there are several possible explanations:
  • rising quality of intake (each new cohort brighter than the one before, making their seniors look dim by comparison)
  • higher turnover among the more talented and ambitious (who may expect to get better opportunities elsewhere)
  • the dulling effect of corporate life
If it is true that organizations systematically lose their best people and/or turn good people into mediocrities, then according to Stafford Beer's POSIWID principle, this effect reveals the de facto purpose of the organization.


But perhaps the perception that people get dimmer as they get more experienced is wrong. Perhaps they simply display different forms of intelligence that are associated with collective excellence rather than individual brilliance. Clearly it would be natural for organizations to promote those kinds of intelligence that produce good corporate outcomes. However, it is likely that not everyone (especially fresh graduates) would see or appreciate these forms of intelligence.

According to the conventional metaphor, the corporate grind turns people into round pegs. When I was young, I used to think there was some kind of virtue in being a square peg: now I'm not so sure. However, there is undoubtedly a problem for any organization that cannot accommodate a few brilliant square pegs.




Lucy Kellaway, Why firms don't want you to be brilliant at your job (BBC Magazine 20 October 2014)

Working for the Machine

#orgintelligence The recent appointment of an algorithm to a Board of Directors raises the spectre of science fiction becoming fact. Although many commentators regarded the appointment as a publicity stunt, there has always been an undercurrent of fear about machine intelligence. Even the BBC (following Betteridge's Law of Headlines) succumbed to the alarmist headline Could a big data-crunching machine be your boss one day?

There are several useful ways that an algorithm might contribute to the collective intelligence of a Board of Directors. One is to provide an automated judgement on some topic, which can be put into the pot together with a number of human judgements. This is what seems to be planned by the company Deep Knowledge Ventures, whose Board of Directors is faced with a series of important investment decisions. Although each decision is unique, there are some basic similarities in the decision process that may be amenable to automation and machine learning.
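
To make the first contribution concrete, here is a minimal sketch of the "into the pot" idea, in which the algorithm's judgement is treated as one more weighted voice alongside the human directors. This is my own illustration - the member names, weights and decision threshold are invented, not anything Deep Knowledge Ventures has published.

    # Illustrative sketch only: pooling one algorithmic judgement with human
    # judgements as a weighted average. All names and figures are invented.

    def pooled_judgement(judgements, weights):
        """Combine scores in [0, 1] (0 = reject, 1 = invest) as a weighted average."""
        total = sum(weights.values())
        return sum(score * weights[member] for member, score in judgements.items()) / total

    judgements = {"director_a": 0.8, "director_b": 0.6, "algorithm": 0.3}
    weights = {"director_a": 1.0, "director_b": 1.0, "algorithm": 1.0}  # equal voice

    score = pooled_judgement(judgements, weights)
    print("invest" if score > 0.5 else "decline", round(score, 2))

On this toy scheme, the board could tune the algorithm's weight up or down as its track record accumulates.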

Another possible contribution is to evaluate other board members. According to the BBC article, IBM Watson could be programmed to analyse the contributions made by each board member for usefulness and accuracy. There are several ways such a feedback loop could enhance the collective intelligence of the Board.

  • Retrain individuals to improve their contributions in specific contexts.
  • Identify and eliminate individuals whose contribution is weak.
  • Identify and eliminate individuals whose contribution largely duplicates that of other members. In other words, promote greater diversity (see the sketch after this list).
  • Enable trial membership of individuals from a wider range of backgrounds, to see whether they can make a valuable contribution.
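
The third point, at least, lends itself to a simple mechanical test: two directors whose judgements closely track each other are adding less diversity than their headcount suggests. Here is a toy sketch of that idea - the data and the similarity threshold are invented, and this is not a description of how IBM Watson works.

    # Illustrative sketch only: flagging pairs of board members whose past
    # judgements closely track each other, as a crude proxy for low diversity.

    from itertools import combinations
    from statistics import correlation  # Python 3.10+

    history = {  # each member's scores on the same set of past decisions
        "director_a": [0.9, 0.2, 0.7, 0.4, 0.8],
        "director_b": [0.8, 0.3, 0.7, 0.5, 0.9],
        "director_c": [0.1, 0.9, 0.3, 0.8, 0.2],
    }

    for a, b in combinations(history, 2):
        r = correlation(history[a], history[b])
        if r > 0.9:
            print(f"{a} and {b} largely duplicate each other (r = {r:.2f})")

Usefulness and accuracy are harder to automate, because they require some later ground truth - how the decision actually turned out - against which each contribution can be scored.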


Organizational Intelligence is about an effective combination of human/social intelligence and machine intelligence. Remember this when people try to develop an us-versus-them narrative.



#QTWTAIN

Jamie Bartlett, Will Artificial Intelligence put my job at risk? (Spectator 6 June 2014)

Adrian Chen, Can an Algorithm Solve Twitter’s Credibility Problem? (New Yorker 5 May 2014)

John Rentoul, Will Artificial Intelligence put my job at risk? (Independent 6 June 2014)

Richard Veryard, Does Cameron's Dashboard App Improve the OrgIntelligence of Government? (23 January 2013)

Matthew Wall, Could a big data-crunching machine be your boss one day? (BBC News 9 October 2014)


Other Sources

Algorithm appointed board director (BBC News 16 May 2014)

Wednesday, June 25, 2014

Defensive denial

... often follows what Freud called "kettle logic".

"The problem doesn't exist, and anyway it isn't a problem for us, and anyway we're already dealing with it."

Imagine we ask a manager whether her organization experiences any of the Symptoms of Organizational Stupidity. Suppose she denies it, or says it doesn't matter. Guess what - defensive denial is one of the symptoms on the list.

In 2008, Blockbuster CEO Jim Keyes expressed some bewilderment at his competitor’s success.

"I’ve been frankly confused by this fascination that everybody has with Netflix. ... Netflix doesn’t really have or do anything that we can’t or don’t already do ourselves." 

Denial is a key phase in the hype curve for new ideas (especially but not exclusively technological ones). A concept is rejected as meaningless, dangerous and/or unnecessary, while simultaneously being bundled together with earlier concepts.

"That concept doesn't make sense, and even if it did it wouldn't be technologically feasible, and anyway we already have a perfectly good word for it and lots of people are already doing it so we don't need a new word."


Brian Klapper, When Corporations Cannot Adapt (aka Fear the Kid in the Black t-shirt) (January 2013)

Related Posts: The Dynamics of Hype (Feb 2013)

Sunday, May 18, 2014

National Decision Model

@antlerboy asks: does anyone know the theoretical underpinning of the (rather good) police decision-making model?

The National Decision Model (NDM) is a risk assessment framework, or decision-making process, used by police forces across the UK. It replaces the former Conflict Management Model. Some sources refer to it as the National Decision-Making Model.

Looking around the Internet, I have found two kinds of description of the model - top-down and bottom-up.

The top-down (abstract) description was published by the Association of Chief Police Officers (ACPO) sometime in 2012, and has been replicated in various items of police training material, including a page on the College of Policing website. It sets out five stages that officers can follow when making any type of decision - not just decisions about conflict management.

Some early responses from within the police regarded the NDM as an idealized model, only weakly connected to the practical reality of decision-making on the ground. See for example The NDM and decision making – what’s the reality? (Inspector Juliet Bravo, April 2012).

In contrast, the bottom-up (context-specific) description emerges when serving police officers discuss using the NDM. According to Mr Google, this discussion tends to focus on one decision in particular - to Taser or not to Taser.

"For me the Taser is a very important link in the National Decision Making Model . It bridges that gap between the baton and the normal firearm that has an almost certain risk of death when used." (The Peel Blog, July 2012). See also Use of Force - Decision Making (Police Geek, July 2012).

ACPO itself adopts this context-specific perspective in its Questions and Answers on Taser (February 2012, updated July 2013), where it is stated that Taser may be deployed and used as one of a number of tactical options only after application of the National Decision Model (NDM).

Of course, the fact that Taser-related decisions have a high Google ranking doesn't imply that these decisions represent the most active use of the National Decision Model. The most we can infer from this is that these are the decisions that police and others are most interested in discussing.

(Argyris and Schön introduced the distinction between Espoused Theory and Theory-In-Use. Perhaps we need a third category to refer to what people imagine to be the central or canonical examples of the theory. We might call it Theory-in-View or Theory-in-Gaze.)

In a conflict situation, a police officer often has to decide how much force to use. The officer needs to have a range of tools at his disposal and the ability to select the appropriate tool - in policing, this is known as a use-of-force continuum. More generally, it is an application of the principle of Requisite Variety.

In a particular conflict situation, the police can only use the tools they have at their disposal. The decision to use a Taser can only be taken if the police have the Taser and the training to use it properly, in which case the operational decision must follow the NDM.
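
As a toy illustration of requisite variety in this setting, selecting from a use-of-force continuum can be pictured as choosing the least forceful available option that is adequate to the threat. The option names and the numeric scale below are invented for the example - this is emphatically not police doctrine.

    # Toy illustration only: a use-of-force continuum as requisite variety.
    # Option names and the numeric force scale are invented, not police doctrine.

    CONTINUUM = [  # (option, force level), ordered least to most forceful
        ("presence", 1), ("dialogue", 2), ("baton", 3), ("taser", 4), ("firearm", 5),
    ]

    def select_option(threat, available):
        """Return the least forceful available option whose level meets the threat."""
        for name, level in CONTINUUM:
            if name in available and level >= threat:
                return name
        return None  # no adequate option: the officer lacks requisite variety

    print(select_option(threat=4, available={"presence", "dialogue", "baton"}))
    # prints None: without the Taser, the gap between baton and firearm is real

Removing the Taser from the available set leaves exactly the kind of gap the Peel Blog quote describes.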

More strategic decisions operate on a longer timescale - whether to equip police with certain equipment and training, what rules of engagement to apply, and so on. A truly abstract decision-making model would provide guidance for these strategic decisions as well as the operational decisions.

And that's exactly what the top-down description of NDM asserts. "It can be applied to spontaneous incidents or planned operations by an individual or team of people, and to both operational and non-operational situations."

Senior police officers have described the use of the NDM for non-conflict situations. For example, Adrian Lee (Chief Constable of Northants) gave a presentation on the Implications of NDM for Roads Policing (January 2012).

The NDM has also been adapted for use in other organizations. For example, Paul Macfarlane (ex-Strathclyde Police) has used the NDM to produce a model aimed at Business Continuity and Risk Management, which he calls Defensive Decision-Making.



How does the NDM relate to other decision-making models? According to Adrian Lee's presentation, the NDM is based on three earlier models:

  • The Conflict Management Model (CMM). For a discussion from 2011, see Police Oracle.
  • The SARA model (Scanning, Analysis, Response, Assessment) - which appears to be similar to the OODA loop.
  • Something called the PLANE model. (I tried Googling this, and I just got lots of Lego kits. If anyone has a link, please send.)

There is considerable discussion in the USA about the relevance of the OODA loop to policing, and this again focuses on conflict management situations (the "Active Shooter"). There are two important differences between combat (the canonical use of OODA) and conflict management. Firstly, the preferred outcome is not to kill the offender but to disarm him (either physically or psychologically). This means that you sometimes need to give the offender time to calm down and orient himself towards making the right decision, so it's not just about having a faster OODA loop than the other guy (although clearly some American cops think this is important). Secondly, there is a lot of talk about situation awareness and anticipation. For example, Dr. Mike Asken, a State Police psychologist, has developed a model called AAADA (Anticipating, Alerting, Assessing, Deciding and Acting). There is also a Cognitive OODA model I need to look into.

However, I interpret @antlerboy's request for theoretical underpinning as not just a historical question (what theories of decision-making were the creators of NDM consciously following) but a methodological question (what theories of decision-making would be relevant to NDM and any other decision models). But this post is already long enough, and the sun is shining outside, so I shall return to this topic another day.


Sources

Michael J. Asken, From OODA to AAADA ― A cycle for surviving violent police encounters (Dec 2010)

Erik P. Blasch et al, User Information Fusion Decision-Making Analysis with the C-OODA Model (Jan 2011)

Adrian Lee, Implications of NDM for Roads Policing (January 2012)

National Decision Model (ACPO, 2012?)

National Decision Model (College of Policing, 2013)

SARA model (Center for Problem-Oriented Policing)


Updated 19 May 2014

Saturday, April 26, 2014

On the true nature of knowledge

@pickover suggests that these two books, in theory, contain the sum total of all human knowledge. "The Joy of Logic", he remarks (via @DavidFCox).


"What they teach you at Harvard Business School" + "What they don't teach you at Harvard Business School"


Why is this wrong? Because knowledge doesn't follow the laws of elementary arithmetic. Adding two lots of knowledge together doesn't give you twice as much knowledge. (Does anyone really think that teaching children creationism as well as evolution will double their education?)

Knowledge is like light. When you add two light beams together, you may sometimes get more light. But you may also get puzzling patches of darkness. This is called interference. In high-school physics we learn that this is because light is a wave. If the two waves are out of phase, they cancel each other out.

(Curiously, uncertainty is also like light. When you add two pieces of uncertainty together, you may get less uncertainty. This is called hedging. Works best when the uncertainty is out of phase.)
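
The hedging point rests on a line of arithmetic: for two risky quantities X and Y, Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y), so a negative covariance - uncertainty "out of phase" - can make the whole less uncertain than either part. A toy numeric check, with invented figures:

    # Toy numeric check: two anti-correlated positions partly cancel,
    # so the combined variance is lower than either part's. Figures invented.

    from statistics import pvariance

    asset = [100, 110, 90, 105, 95]  # value of one holding over time
    hedge = [50, 45, 55, 47, 53]     # an offsetting position, moving the other way

    portfolio = [a + h for a, h in zip(asset, hedge)]

    print(pvariance(asset), pvariance(hedge), pvariance(portfolio))
    # approximately 50.0, 13.6 and 11.6: the combination is steadier than either part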


Obviously these two books are out of phase.


Related posts

Does Big Data Release Information Energy? (April 2014)

Thursday, April 18, 2013

We Ought to Know the Difference

Is systems thinking really possible? Here's one reason why it might not be.

One of the concerns of systems thinking is the need to avoid the so-called environmental fallacy - the blunder of ignoring or not understanding the effects of the environment of a system. This is why, when systems thinkers are asked to tackle a concrete situation in detail, they often hesitate, insisting that it is wrong to look at the detail before understanding the context.

The trouble with this is that there is always a larger context, so this hesitation leads to an infinite regress and inability to formulate practical inroads into a complex situation. Many years ago, I read a brilliant essay by J.P. Eberhard called "We Ought to Know the Difference", which contains a widely quoted example of a doorknob. As I recall, Eberhard's central question is a practical one - how do we know when to expand the scope of the problem, and how do we know when to stop.

C West Churchman went more deeply into this question. In his book The Systems Approach and its Enemies (1979), he presents an ironic picture of the systems thinker as hero.

If the intellect is to engage in the heroic adventure of securing improvement in the human condition, it cannot rely on “approaches,” like politics and morality, which attempt to tackle problems head-on, within the narrow scope. Attempts to address problems in such a manner simply lead to other problems, to an amplification of difficulty away from real improvement. Thus the key to success in the hero’s attempt seems to be comprehensiveness. Never allow the temptation to be clear, or to use reliable data, or to “come up to the standards of excellence,” divert you from the relevant, even though the relevant may be elusive, weakly supported by data, and requiring loose methods.

Like Eberhard, Churchman seeks to reconcile the heroic stance of the systems thinker with the practical stance of other approaches. But we ought to know the difference.



This is an extract from my eBook on Next Practice Enterprise Architecture. Draft available from LeanPub.


John P. Eberhard, "We Ought to Know the Difference," Emerging Methods in Environmental Design and Planning, Gary T. Moore, ed. (MIT Press, 1970) pp 364-365

See extract here - The Warning of the Doorknob. The same extract can be found in many places, including Ed Yourdon's Modern Structured Analysis (first published 1989).

See also

Nicholas Berente, C West Churchman: Champion of the Systems Approach

Jeff Lindsay, Avoiding environmental fallacy with systems thinking (December 2012)


Updated May 14 2013

Saturday, March 30, 2013

From Enabling Prejudices to Sedimented Principles

In my post From Sedimented Principles to Enabling Prejudices (March 2013) I distinguished the category of design heuristics from other kinds of principle. Following Peter Rowe, I call these Enabling Prejudices.

Rowe also uses the concept of Sedimented Principles, which he attributes to the French philosopher Maurice Merleau-Ponty, one of the key figures of phenomenology. As far as I can make out, Merleau-Ponty never used the exact term "sedimented principles", but he does talk a great deal about "sedimentation".
In phenomenology, the word "sedimentation" generally refers to cultural habituations that settle out of awareness into prereflective practices. Something like the "unconscious". (Professor James Morley, personal communication)
"On the basis of past experience, I have learned that doorknobs are to be turned. This ‘knowledge’ has sedimentated into my habitual body. While learning to play the piano, or to dance, I am intensely focused on what I am doing, and subsequently, this ability to play or to dance sedimentates into an habitual disposition." (Stanford Encyclopedia of Philosophy: Merleau-Ponty)

This relates to some notions of tacit knowledge, attributed to Michael Polanyi. There are two models used in the knowledge management world that talk about tacit/explicit knowledge, and they present two slightly different notions of internalization.

Some critics (notably Wilson) regard the SECI model as flawed, because Nonaka has confused Polanyi's notion of tacit knowledge with the much weaker concept of implicit knowledge. There are some deep notions of the "unconscious" here, which may produce conceptual traps for the unwary.

Conceptual quibbles aside, there are several important points here. Firstly, enabling prejudices may start as consciously learned patterns, but can gradually become internalized, and perhaps not just implicit and habitual but tacit and unconscious. (The key difference here is how easily the practitioner can explain and articulate the reasoning behind some design decision.)

Secondly, to the extent that these learned patterns are regarded as "best practices", it may be necessary to bring them back into full consciousness (whatever that means) so they can be replaced by "next practices".




Bryan Lawson, How Designers Think (1980, 4th edition 2005)

Peter Rowe, Design Thinking (MIT Press 1987)

T.D. Wilson, The nonsense of 'knowledge management', Information Research 8(1), paper no. 144 (2002)

Thanks to my friend Professor James Morley for help with Merleau-Ponty and sedimentation.