tag:blogger.com,1999:blog-12543156791639901532024-03-08T21:31:10.052+00:00Systems Thinking for Demanding ChangeRichard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.comBlogger533125tag:blogger.com,1999:blog-1254315679163990153.post-18764577284319783652024-02-24T00:35:00.004+00:002024-03-08T21:31:08.891+00:00Anticipating Effects<p>There has been much criticism of the bias and distortion embedded in many of our modern digital tools and platforms, including search. Google recently released an AI image generation model that over-compensated for this, producing racially diverse images even for situations where such diversity would be historically inaccurate. With well-chosen prompts, this feature was made to look either ridiculous or politically dangerous (aka "woke"), and the model has been withdrawn for further refinement and testing.<br /></p><p>I've just been reading an extended thread from Yishan Wong, who argues </p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">Google’s Gemini issue is not really about woke/DEI, and everyone who is obsessing over it has failed to notice the much, MUCH bigger problem that it represents.<br /></p>— Yishan (@yishan) <a href="https://twitter.com/yishan/status/1760859214875132161?ref_src=twsrc%5Etfw">February 23, 2024</a></blockquote><p> The bigger problem he identifies is the inability of the engineers to anticipate and constrain the behaviour of a complex intelligent system. 
This is reminiscent of many of Asimov's stories, in which the robots often behave in dangerous ways.<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> </p><p></p><blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">And the lesson was that even if we had the Three Laws of Robotics, supposedly very comprehensive, that robots were still going to do crazy things, sometimes harmful things, because we couldn’t anticipate how they’d follow our instructions?</p>— Yishan (@yishan) <a href="https://twitter.com/yishan/status/1760859868192514248?ref_src=twsrc%5Etfw">February 23, 2024</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> Some writers on technology ethics have called for ethical principles to be embedded in technology, along the lines of Asimov's Laws. I have challenged this idea in previous posts, because as I see it the whole point of the Three Laws is that they don't work properly. Thus my reading of Asimov's stories is similar to Yishan's.<p></p><blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">If this had been a truly existential situation where “we only get one chance to get it right,” we’d be dead.<br /><br />Because I’m sure Google tested it internally before releasing it and it was fine per their original intentions. They probably didn’t think to ask for Vikings or Nazis.</p>— Yishan (@yishan) <a href="https://twitter.com/yishan/status/1760860391205372259?ref_src=twsrc%5Etfw">February 23, 2024</a></blockquote><p> It looks like their testing didn't take context of use into account. 
</p><p><b>Update</b>: Or as Dame Wendy Hall noted later, <q>This is not just safety testing, this is does-it-make-any-sense training</q>.</p><p><br /></p><hr /><p>Dan Milmo, <a href="https://www.theguardian.com/technology/2024/feb/22/google-pauses-ai-generated-images-of-people-after-ethnicity-criticism">Google pauses AI-generated images of people after ethnicity criticism </a>(Guardian, 22 February 2024) </p><p>Dan Milmo and Alex Hern, <a href="https://www.theguardian.com/technology/2024/mar/08/we-definitely-messed-up-why-did-google-ai-tool-make-offensive-historical-images">‘We definitely messed up’: why did Google AI tool make offensive historical images?</a> (Guardian, 8 March 2024)<br /></p><p>Related posts: <a href="https://posiwid.blogspot.com/2007/05/reinforcing-stereotypes.html">Reinforcing Stereotypes</a> (May 2007), Purpose of Diversity (<a href="https://posiwid.blogspot.com/2010/01/what-is-purpose-of-diversity.html">January 2010</a>) (<a href="https://posiwid.blogspot.com/2014/12/more-on-purpose-of-diversity.html">December 2014</a>), <a href="https://demandingchange.blogspot.com/2019/08/automation-ethics.html">Automation Ethics</a> (August 2019), <a href="https://posiwid.blogspot.com/2021/03/algorithmic-bias.html">Algorithmic Bias</a> (March 2021)<br /></p><p></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-62319654740919560812023-09-26T12:09:00.006+01:002023-09-26T14:11:56.198+01:00Creativity and Recursivity<p>Prompted by @<a href="https://twitter.com/jjn1">jjn1</a>'s article on AI and creative thinking, I've been reading a paper by some researchers comparing the <q>creativity</q> of ChatGPT against their students (<q>at an elite university</q>, no less).</p><p>What is interesting about this paper is not that ChatGPT is capable of producing large quantities of <q>ideas</q> much more quickly than human students, but that the evaluation method used by the 
researchers rated the AI-generated ideas as being of higher quality. From 200 human-generated ideas and 200 algorithm-generated ideas, 35 of the top-scoring 40 were algo-generated.<br /></p><p>So what was this evaluation method? They used a standard market research survey, conducted with <q>college-age individuals in the United States</q>, mediated via mTurk. Two dimensions of quality were considered: purchase intent (would you be likely to buy one) and novelty. The paper explains the difficulty of evaluating economic value directly, and argues that purchase intent provides a reasonable indicator of relative value.</p><p>The paper discusses the production cost of ideas, but this doesn't tell us anything about what the ideas might be worth. If ideas were really a dime a dozen, as the paper title suggests, then neither the impressive productivity of ChatGPT nor the effort of the design students would be economically justified. But the production of the initial idea is only a tiny fraction of the overall creative process, and (with the exception of speculative bubbles) raw ideas have very little market value (hence <q>dime a dozen</q>). So this research is not telling us much about creativity as a whole.<br /></p><p>A footnote to the paper considers and dismisses the concern that some of these mTurk responses might have been generated by an algorithm rather than a human. But does that algo/human distinction even hold up these days? Most of us nowadays inhabit a socio-technical world that is co-created by people and algorithms, and perhaps this is particularly true of the Venn diagram intersection between <q>college-age individuals in the United States</q> and mTurk users. If humans and algorithms increasingly have access to the same information, and are increasingly judging things in similar ways, it is perhaps not surprising that their evaluations converge. 
And we should not be too surprised if it turns out that algorithms have some advantages over humans in achieving high scores in this constructed simulation.</p><p>(Note: Atari et al recommend caution in interpreting comparisons between humans and algorithms, as they argue that those from Western, Educated, Industrialized, Rich and Democratic societies - which they call WEIRD - are not representative of humanity as a whole.)<br /></p><p>A number of writers on algorithms have explored the entanglement between humans and technical systems, often invoking the concept of <b>recursivity</b>. This concept has been variously defined in terms of co-production (Hayles), second-order cybernetics and autopoiesis (Clarke), and <q>being outside of itself (ekstasis), which recursively extends to the indefinite</q> (Yuk Hui). Louise Amoore argues that, <q>in every singular action of an apparently autonomous system, then, resides a multiplicity of human and algorithmic judgements, assumptions, thresholds, and probabilities</q>. <br /></p>
<p>(Note: I haven't read Yuk Hui's book yet, so his quote is taken from a 2021 paper)</p><p>Of course, the entanglement doesn't only include the participants in the market research survey, but also students and teachers of product design, yes even those at an elite university. This is not to say that any of these human subjects were directly influenced by ChatGPT itself, since much of the content under investigation predated this particular system. What is relevant here is algorithmic culture in general, which as Ted Striphas's new book makes clear has long historical roots. (Or should I say rhizome?)</p><p>What does algorithmic culture entail for product design practice? For one thing, if a new product is to appeal to a market of potential consumers, it generally has to achieve this via digital media - recommended by algorithms and liked by people (and bots) on social media. Thus successful products have to submit to the discipline of digital platforms: being sorted, classified and prioritized by a complex sociotechnical ecosystem. So we might expect some anticipation of this (conscious or otherwise) to be built into the design heuristics (or what Peter Rowe, following Gadamer, calls enabling prejudices) taught in the product design programme at an elite university.</p><p>So we need to be careful not to interpret this research finding as indicating a successful invasion of the algorithm into a previously entirely human activity. Instead, it merely represents a further recalibration of algorithmic culture in relation to an existing sociotechnical ecosystem. </p><p><br /></p>
<hr /><p>Louise Amoore, Cloud Ethics: Algorithms and the Attributes of Ourselves and Others (Durham and London: Duke University Press 2020)<br /></p><p>Mohammad Atari, Mona J. Xue, Peter S. Park, Damián E. Blasi and Joseph Henrich, <a href="https://psyarxiv.com/5b26t">Which Humans?</a> <span>(PsyArXiv, September 2023) HT @<a href="https://twitter.com/MCoeckelbergh/status/1706538346535616847">MCoeckelbergh</a> <br /></span></p><p>David Beer, <a href="https://journals.sagepub.com/doi/10.1177/20539517221104997">The problem of researching a recursive society: Algorithms, data coils and the looping of the social</a> (Big Data and Society, 2022)<br /></p><p>Bruce Clarke, Rethinking Gaia: Stengers, Latour, Margulis (Theory Culture and Society 2017)<br /></p><p>Karan Girotra, Lennart Meincke, Christian Terwiesch, and Karl T. Ulrich, <a href="http://dx.doi.org/10.2139/ssrn.4526071">Ideas are Dimes a Dozen: Large Language Models for Idea Generation in Innovation</a> (10 July 2023)</p><p>N Katherine Hayles, The Illusion of Autonomy and the Fact of Recursivity: Virtual Ecologies, Entertainment, and "Infinite Jest", New Literary History, Vol. 30, No. 3, Ecocriticism (Summer 1999), pp. 
675-697</p><p>Yuk Hui, Problems of Temporality in the Digital Epoch, in Axel Volmar and Kyle Stine (eds) Media Infrastructures and the Politics of Digital Time (Amsterdam University Press 2021)<br /></p><p>John Naughton, <a href="https://www.theguardian.com/commentisfree/2023/sep/23/chatbots-ai-gpt-4-university-students-creativity">When it comes to creative thinking, it’s clear that AI systems mean business</a> (Guardian, 23 September 2023) </p><p>Peter Rowe, Design Thinking (MIT Press 1987) <br /></p><p>Ted Striphas, Algorithmic culture before the internet (New York: Columbia University Press, 2023)</p><p>See also: <a href="http://demandingchange.blogspot.com/2013/03/from-enabling-prejudices-to-sedimented.html">From Enabling Prejudices to Sedimented Principles</a> (March 2013)</p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-44360847079146627102023-03-09T00:21:00.001+00:002023-03-09T00:23:07.438+00:00Technology in use<p>In many blogposts I have mentioned the distinction between <b>technology as designed/built</b> and <b>technology in use</b>.</p><p>I am not sure when I first used these exact terms. I presented a paper to an IFIP conference in 1995 where I used the terms <b>technology-as-device</b> and <b>technology-in-its-usage</b>. By 2002, I was using the terms "technology as built" and "technology in use" in my lecture notes for an Org Behaviour module I taught (together with Aidan Ward) at City University. With an explicit link to <b>espoused theory</b> and <b>theory-in-use</b> (Argyris).</p><p>Among other things, this distinction is important for questions of technology adoption and maturity. 
See the following posts </p><ul style="text-align: left;"><li><a href="https://rvsoftware.blogspot.com/2009/10/blame-powerpoint.html">Blame PowerPoint</a> (October 2009) </li><li><a href="https://rvsoftware.blogspot.com/2009/12/what-is-technology-maturity.html">What is Technology Maturity</a> (December 2009) <br /></li></ul><p>I have also talked about <b>system-as-designed</b> versus <b>system-in-use </b>- for example in my post on <a href="https://rvsoapbox.blogspot.com/2010/06/ecosystem-soa-2.html">Ecosystem SOA 2</a> (June 2010). See also <a href="https://rvsoapbox.blogspot.com/2023/03/trusting-schema.html">Trusting the Schema</a> (March 2023).</p><p>Related concepts include <b>Inscription</b> (Akrich) and <b>Enacted Technology </b>(Fountain). Discussion of these and further links can be found in the following posts:</p><ul style="text-align: left;"><li><a href="https://demandingchange.blogspot.com/2004/07/enacted-technology.html">Enacted Technology</a> (July 2004)</li><li><a href="https://demandingchange.blogspot.com/2004/10/strawberry-picking.html">Strawberry Picking</a> (October 2004)<br /></li><li><a href="https://rvsoapbox.blogspot.com/2005/09/inscription-and-loose-coupling.htm">Inscription and Loose Coupling</a> (September 2005)</li></ul><p><br /></p><p>And returning to the distinction between <b>espoused theory </b>and <b>theory-in-use</b>. In my post on the <a href="https://demandingchange.blogspot.com/2014/05/national-decision-model.html">National Decision Model</a> (May 2014) I also introduced the concept of <b>theory-in-view</b>, which (as I discovered more recently) is similar to Lolle Nauta's concept of <b>exemplary situation</b>. <br /></p><p><br /></p><hr /><p>Richard Veryard, IT Implementation or Delivery? Thoughts on Assimilation, Accommodation and Maturity. Paper presented to the first IFIP WG 8.6 Working Conference, on the Diffusion and Adoption of Information Technology, Oslo, October 1995. 
</p><p>Richard Veryard and Aidan Ward, <a href="http://www.users.globalnet.co.uk/~rxv/orgmgt/ob8.pdf">Technology and Change</a> (City University 2002) </p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-47240616211884258042023-02-18T18:27:00.005+00:002023-02-19T02:49:41.284+00:00Hedgehog Innovation<p>According to Archilochus, the fox knows many things, but a hedgehog knows one big thing.</p><p>In his article on AI and the threat to middle-class jobs, Larry Elliott focuses on machine learning and robotics.<br /></p><p style="margin-left: 40px; text-align: left;"><q>AI stands to be to the fourth industrial revolution what the spinning jenny and the steam engine were to the first in the 18th century: a transformative technology that will fundamentally reshape economies.</q></p><p>When people write about earlier waves of technological innovation, they often focus on one technology in particular - for example a cluster of innovations associated with the adoption of electrification in a wide range of industrial contexts. <br /></p><p>While AI may be an important component of the fourth industrial revolution, it is usually framed as an enabler rather than the primary source of transformation. Furthermore, much of the Industry 4.0 agenda is directed at physical processes in agriculture, manufacturing and logistics, rather than clerical and knowledge work. It tends to be framed as many intersecting innovations rather than one big thing.</p><p>There is also a question about the pace of technological change. Elliott notes a large increase in the number of AI patents, but as I've noted previously I don't regard patent activity as a reliable indicator of innovation. The primary purpose of a patent is not to enable the inventor to exploit something, but to prevent anyone else freely exploiting it. 
And Ezrachi and Stucke provide evidence of other ways in which tech companies stifle innovation.</p><p>However, the AI Index Report does contain other measures of AI innovation that are more convincing.<br /></p><hr /><p> <a href="https://aiindex.stanford.edu/report/">AI Index Report</a> (Stanford University, March 2022) <br /></p><p>Larry Elliott, <a href="https://www.theguardian.com/technology/2023/feb/18/the-ai-industrial-revolution-puts-middle-class-workers-under-threat-this-time">The AI industrial revolution puts middle-class workers under threat this time</a> (Guardian, 18 February 2023)</p><p>Ariel Ezrachi and Maurice Stucke, How Big-Tech Barons Smash Innovation and How to Strike Back (New York: Harper, 2022)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Fourth_Industrial_Revolution">Fourth Industrial Revolution</a>, <a href="https://en.wikipedia.org/wiki/The_Hedgehog_and_the_Fox">The Hedgehog and the Fox</a></p><p>Related Posts: <a href="https://demandingchange.blogspot.com/2006/05/evolution-or-revolution.html">Evolution or Revolution</a> (May 2006), <a href="https://rvsoapbox.blogspot.com/2008/07/its-not-all-about.html">It's Not All About</a> (July 2008), <a href="https://posiwid.blogspot.com/2008/10/hedgehog-politics.html">Hedgehog Politics</a> (October 2008), <a href="https://rvsoapbox.blogspot.com/2015/11/the-new-economics-of-manufacturing.html">The New Economics of Manufacturing</a> (November 2015), <a href="https://posiwid.blogspot.com/2023/02/what-does-patent-say.html">What does a patent say?</a> (February 2023)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-37961965379160173372023-01-22T12:00:00.004+00:002023-03-16T08:09:54.383+00:00Reasoning with the majority - chatGPT<p>#<a href="https://twitter.com/hashtag/ThinkingWithTheMajority">ThinkingWithTheMajority</a> </p><p>#<a href="https://twitter.com/hashtag/chatGPT">chatGPT</a> has 
attracted considerable attention since its launch in November 2022, prompting concerns about the quality of its output as well as the potential consequences of widespread use and misuse of this and similar tools.</p><p><a href="https://twitter.com/vdignum/status/1616775092599459841">Virginia Dignum</a> has discovered that it has a fundamental misunderstanding of basic propositional logic. In answer to her question, chatGPT claims that the statement "if the moon is made of cheese then the sun is made of milk" is false, and goes on to argue that "if the premise is false then any implication or conclusion drawn from that premise is also false". In her test, the algorithm persists in what she calls "wrong reasoning".<br /></p><p>I can't exactly recall at what point in my education I was introduced to propositional calculus, but I suspect that most people are unfamiliar with it. If Professor Dignum were to ask a hundred people the same question, it is possible that the majority would agree with chatGPT. <br /></p><p>In which case, chatGPT counts as what A.A. Milne once classified as a third-rate mind - "thinking with the majority". I have previously placed Google and other Internet services into this category.<br /></p><p>Other researchers have tested chatGPT against known logical paradoxes. In one experiment (<a href="https://www.linkedin.com/posts/srinivasan-ramani-3a273a16_chat-gpt-and-artificial-intelligence-i-activity-7012857929619959808-9gxD/">reported via LinkedIn</a>) it recognizes the Liar Paradox when Epimenides is explicitly mentioned in the question, but apparently not otherwise. No doubt someone will be asking it about the baldness of the present King of France.<br /></p><p>One of the concerns expressed about AI-generated text is that it might be used by students to generate coursework assignments. 
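</p><p>As an aside, the logical point in Professor Dignum's test can be checked mechanically. In classical propositional logic, a conditional with a false antecedent is vacuously true, which is exactly what chatGPT gets wrong when it claims the moon/cheese statement is false. A minimal truth-table sketch (an illustration of the standard definition, not taken from the original thread):</p>

```python
# Truth table for material implication ("if P then Q"),
# which is false only when P is true and Q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Dignum's example: P = "the moon is made of cheese", Q = "the sun is made of milk".
# Both are false, so the conditional is vacuously true - not false, as chatGPT claimed.
for p in (True, False):
    for q in (True, False):
        print(f"P={p!s:5} Q={q!s:5} P->Q={implies(p, q)}")
```

<p>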
At the present state of the art, although AI-generated text may look plausible, it typically lacks coherence and would be unlikely to be awarded a high grade, but it could easily be awarded a pass mark. In any case, I suspect many students produce their essays by following a similar process, grabbing random ideas from the Internet and assembling them into a semi-coherent narrative but not actually doing much real thinking.</p><p>There are two issues here for universities and business schools. Firstly, whether the use of these services counts as academic dishonesty, similar to using an essay mill, and how this might be detected, given that standard plagiarism detection software won't help much. And secondly, whether the possibility of passing a course without demonstrating correct and joined-up reasoning (aka "thinking") represents a systemic failure in the way students are taught and evaluated. <br /></p><hr /><p>See also<br /></p><p>Andrew Jack, <a href="https://www.ft.com/content/7229ba86-142a-49f6-9821-f55c07536b7c">AI chatbot’s MBA exam pass poses test for business schools</a> (FT, 21 January 2023) HT @<a href="https://twitter.com/mireillemoret/status/1617081407662133248">mireillemoret</a><br /></p><p>Gary Marcus, <a href="https://cacm.acm.org/blogs/blog-cacm/267674-ais-jurassic-park-moment/fulltext">AI's Jurassic Park Moment</a> (CACM, 12 December 2022)</p><p>Christian Terwiesch, <a href="https://mackinstitute.wharton.upenn.edu/2023/would-chat-gpt3-get-a-wharton-mba-new-white-paper-by-christian-terwiesch/">Would Chat GPT3 Get a Wharton MBA?</a> (Wharton White Paper, 17 January 2023) <br /></p><p>Related posts: <a href="https://demandingchange.blogspot.com/2009/03/thinking-with-majority.html">Thinking with the Majority</a> (March 2009), <a href="https://demandingchange.blogspot.com/2021/05/thinking-with-majority-new-twist.html">Thinking with the Majority - a New Twist</a> (May 2021), <a href="https://posiwid.blogspot.com/2021/10/satanic-essay-mills.html">Satanic 
Essay Mills</a> (October 2021)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/ChatGPT">ChatGPT</a>, <a href="https://en.wikipedia.org/wiki/Entailment_(linguistics)">Entailment</a>, <a href="https://en.wikipedia.org/wiki/Liar_paradox">Liar Paradox</a>, <a href="https://en.wikipedia.org/wiki/Plagiarism">Plagiarism</a>, <a href="https://en.wikipedia.org/wiki/Propositional_calculus">Propositional calculus</a> <br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-34590876121194456592022-08-17T12:05:00.000+01:002022-08-17T12:05:08.922+01:00Discipline as a Service<p>In my post on <a href="https://rvsoapbox.blogspot.com/2010/06/ghetto-wifi.html">Ghetto Wifi</a> (June 2010), I mentioned a cafe in East London that provided free coffee, free biscuits and free wifi, and charged customers for the length of time they occupied the table.</p><p>A cafe has just opened in Tokyo for writers, which charges people for procrastination. 
You can't leave until you have completed the writing task you declared when you arrived.</p><p><br /></p><p>Justin McCurry, <a href="https://www.theguardian.com/world/2022/apr/29/slackers-barred-testing-tokyos-anti-procrastination-cafe">No excuses: testing Tokyo’s anti-procrastination cafe</a> (Guardian, 29 April 2022)</p><p>Related posts: <a href="https://demandingchange.blogspot.com/2010/01/value-of-getting-things-done.html">The Value of Getting Things Done</a> (January 2010), <a href="https://demandingchange.blogspot.com/2010/01/value-of-time-management.html">The Value of Time Management</a> (January 2010)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-87846666980941145342022-04-20T19:11:00.006+01:002022-04-21T23:04:35.698+01:00Constructing POSIWID<p>I've just been reading Harish Jose's latest post <a href="https://harishsnotebook.wordpress.com/2022/04/17/a-constructivists-view-of-posiwid">A Constructivist's View of POSIWID</a>. POSIWID stands for the maxim <b>(THE) Purpose Of (A) System Is What It Does</b>, which was coined by Stafford Beer.<br /></p><p>Harish points out that there are many different systems with many different purposes, and the choice depends on the observer. His version of constructivism therefore goes from the observer to the system, and from the system to its purpose. The observer is king or queen, the system is a mental construct of the observer, and the purpose depends on what the observer perceives the system to be doing. This could be called Second-Order Cybernetics.<br /></p><p>There is a more radical version of constructivism in which the observer (or perhaps the observation process) is also constructed. This could be called Third-Order Cybernetics.<br /></p><p>When a thinker offers a critique of conventional thinking together with an alternative framework, I often find the critique more convincing than the framework. 
For me, POSIWID works really well as a way of challenging the espoused purpose of an official system. So I use POSIWID in reverse: <b>If the system isn't doing this, then it's probably not its real purpose</b>.</p><p>Another way of using POSIWID in reverse is to start from what is observed, and try to work out what system might have that as its purpose. <b>If this seems to be the purpose of something, what is the system whose purpose it is?</b><br /></p><p>This then also leads to insights on leverage points. If we can identify a system whose purpose is to maintain a given state, what are the options for changing this state?<br /></p><p>As I've said before, the POSIWID principle is a good heuristic for finding alternative ways of
understanding what is going on as well as seeing why certain classes of
intervention are likely to fail. However, the moment you start to think
of POSIWID as providing some kind of Truth about systems, you are on a
slippery slope to producing conspiracy theories and all sorts of other
rubbish.</p><p><br /></p><hr /><p>Philip Boxer and Vincent Kenny, <a href="https://asymmetricleadership.com/1990/12/02/the-economy-of-discourses-a-third-order-cybernetics/">The Economy of Discourses: A Third-Order Cybernetics</a> (Human Systems Management, 1990)<br /></p><p>Harish Jose, <a href="https://harishsnotebook.wordpress.com/2022/04/17/a-constructivists-view-of-posiwid/">A Constructivist's View of POSIWID</a> (17 April 2022)</p><p>Related posts: <a href="https://posiwid.blogspot.com/2005/12/geese.html">Geese</a> (December 2005), <a href="https://rvsoapbox.blogspot.com/2010/12/methodological-syncretism.html">Methodological Syncretism</a> (December 2010)</p><p>Related blog: <a href="https://posiwid.blogspot.com/">POSIWID: Exploring the Purpose of Things</a><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-19245090422196468812022-01-04T22:31:00.000+00:002022-01-04T22:31:08.037+00:00On Organizations and Machines<p>My previous post <a href="https://demandingchange.blogspot.com/2022/01/where-does-learning-take-place.html">Where does learning take place?</a> was prompted by a Twitter discussion in which some of the participants denied that organizational learning was possible or meaningful. Some argued that any organizational behaviour or intention could be reduced to the behaviours and intentions of individual humans. Others argued that organizations and other systems were merely social constructions, and therefore didn't really exist at all.<br /></p><p>In a comment below my previous post, Sally Bean presented an example of collective learning being greater than the sum of individual learning. 
Although she came away from the reported experience having learnt some things, the organization as a whole appears to have learnt some larger things that no single individual may be fully aware of.</p><p>And the Kihbernetics Institute (I don't know if this is a person or an organization) offered a general definition of learning that would include collective as well as individual learning.</p><p></p><blockquote class="twitter-tweet" data-conversation="none"><p dir="ltr" lang="en">If you understand <a href="https://twitter.com/hashtag/learning?src=hash&ref_src=twsrc%5Etfw">#learning</a> is the process of acquiring <a href="https://twitter.com/hashtag/knowledge?src=hash&ref_src=twsrc%5Etfw">#knowledge</a> which is a measure of an individual's "fitness" in performing a given task, you can use the same system model on both humans and organizations.</p>— The Kihbernetics Institute (@Kihbernetics) <a href="https://twitter.com/Kihbernetics/status/1478133674520449025?ref_src=twsrc%5Etfw">January 3, 2022</a></blockquote><p> I think that's fairly close to my own notion of learning. However, some of the participants in the Twitter thread appear to prefer a much narrower definition of learning, in some cases specifying that it could only happen inside an individual human brain. Such a narrow definition of learning would not only exclude organizational learning, but also animals and plants, as well as AI and machine learning.</p><p>As it happens, there are differing views among botanists about how to talk about plant intelligence. Some argue that the concept of plant neurobiology is based on <q>superficial analogies and questionable extrapolations</q>.<br /></p><p><br /></p><p>But in this post, I want to look specifically at machines and organizations, because there are some common questions in terms of how we should talk about both of them, and some common ideas about how they may be governed. 
Norbert Wiener, the father of cybernetics, saw strong parallels between machines and human organizations, and this is also the first of Gareth Morgan's eight Images of Organization.<br /></p><p>Margaret Heffernan talks about the view that <q>organisations are like
machines that will run well with the right components – so you design
job descriptions and golden targets and KPIs, manage it by measurement,
tweak it and run it with extrinsic rewards to keep the engines running</q>. She calls this old-fashioned management theory. </p><p>Meanwhile, Jonnie Penn notes how artificial intelligence follows Herbert Simon's notion of (corporate) decision-making. <q>Many contemporary AI systems do not so much mimic human thinking as they
do the less imaginative minds of bureaucratic institutions; our
machine-learning techniques are often programmed to achieve superhuman
scale, speed and accuracy at the expense of human-level originality,
ambition or morals.</q></p><p>The philosopher Gilbert Simondon observed two contrasting attitudes to machines.</p><p></p><blockquote><q>First, a reduction of machines to the status of simple devices or assemblages of matter that are constantly used but granted neither significance nor sense; second, and as a kind of response to the first attitude, there emerges an almost unlimited admiration for machines.</q> <cite>Schmidgen</cite></blockquote><p>On the one hand, machines are merely instruments, ready-to-hand as Heidegger puts it, entirely at the disposal of their users. On the other hand, they may appear to have a life of their own. Is this not like organizations or other human systems?</p><p><br /></p><p></p><hr /><p><br /></p><p>Amedeo Alpi et al, <a href="https://doi.org/10.1016/j.tplants.2007.03.002">Plant neurobiology: no brain, no gain?</a> (Trends in Plant Science Volume 12, ISSUE 4, P135-136, April 01, 2007)</p><p>Eric D. Brenner
et al, <a href="https://doi.org/10.1016/j.tplants.2007.06.005">Response to Alpi et al.: Plant neurobiology: the gain is more than the pain</a> (Trends in Plant Science Volume 12, ISSUE 7, P285-286, July 01, 2007) <br /></p><p>Anthea Lipsett, <a href="https://www.theguardian.com/education/2018/nov/29/margaret-heffernan-the-more-academics-compete-the-fewer-ideas-they-share">Interview with Margaret Heffernan: 'The more academics compete, the fewer ideas they share'</a> (Guardian, 29 November 2018)</p><p>Gareth Morgan, Images of Organization (3rd edition, Sage 2006)<br /></p><p>Jonnie Penn, <a href="https://www.economist.com/open-future/2018/11/26/ai-thinks-like-a-corporation-and-thats-worrying ">AI thinks like a corporation—and that’s worrying</a> (Economist, 26 November 2018)</p><p>Henning Schmidgen, <a href="https://www.jstor.org/stable/41818935">Inside the Black Box: Simondon's Politics of Technology</a> (SubStance, 2012, Vol. 41, No. 3, Issue 129 pp 16-31)</p><p>Geoffrey Vickers, Human Systems are Different (Harper and Row, 1983)</p><p><br /></p><p>Related post: <a href="https://demandingchange.blogspot.com/2022/01/where-does-learning-take-place.html">Where does learning take place?</a> (January 2022) </p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-33261682304145335172022-01-02T13:07:00.008+00:002022-02-12T14:40:35.786+00:00Where does learning take place?<p>This blogpost started with an argument on Twitter. Harish Jose quoted the organization theorist Ralph Stacey:</p><blockquote><q>Organizations do not learn. Organizations are not humans.</q> <cite><a href="https://twitter.com/harish_josev/status/1477401077645516803">@harish_josev</a></cite></blockquote><p>This was reinforced by someone who tweets as SystemsNinja, suggesting that organizations don't even exist. </p><blockquote><p><q>Organisations don’t really exist. 
X-Company doesn’t lie awake at night worrying about its place in X-Market.</q> <a href="https://twitter.com/SystemsNinja/status/1477406617515741185"><cite>@SystemsNinja</cite></a><br /></p></blockquote><p><br /></p><p>So we seem to have two different questions here. Let's start with the second question, which is an ontological one - what kinds of entities exist. The idea that something only exists if it lies awake worrying about things seems unduly restrictive. </p><p>How can we talk about organizations or other systems if they don't exist in the first place? SystemsNinja quotes several leading systems thinkers (Churchman, Beer, Meadows) who talk about the negotiability of system boundaries, while Harish cites Ryle's concept of category mistake. But just because we might disagree about what system we are talking about or how to classify them doesn't mean they are entirely imaginary. Geopolitical boundaries are sociopolitical constructions, sometimes leading to violent conflict, but geopolitical entities still exist even if we can't agree how to name them or draw them on the map.<br /></p><p>Exactly what kind of existence is this? One way of interpreting the assertion that systems don't exist is to imagine that there is a dualistic distinction between a real/natural world and an artificial/constructed one, and to claim that systems only exist in the second of these two worlds. Thus Harish regards it as a category mistake to treat a system as a <q>standalone objective entity</q>. However, I don't think such a dualism survives the critical challenges of such writers as Karen Barad, <span class="js-about-item-abstr">Vinciane Despret, </span>Bruno Latour and Gilbert Simondon. See also <a href="https://plato.stanford.edu/entries/artifact/">Stanford Encyclopedia: Artifact</a>.<br /></p><p>Even the idea that humans (aka <q>individuals</q>) belong exclusively to and can be separated from the real/natural world is problematic. 
See for example writings by Lisa Blackman, Roberto Esposito and Donna Haraway.<br /></p><p>And even if we accept this dualism, what difference does it make? The implication seems to be that certain kinds of activity or attribute can only belong to entities in the real/natural world and not to entities in the artificial/constructed world. Including cognitive processes such as perception, memory and learning.</p><p>So what exactly is learning, and what kinds of entity can perform this? We usually suppose that animals are capable of learning, and there have been some suggestions that plants can also learn. Viruses mutate and adapt - so can this also be understood as a form of learning? And what about so-called machine learning?</p><p>Some writers see human learning as primary and these other modes of learning as derivative in some way. Either because machine learning or organization learning can be <b>reduced </b>to a set of individual humans learning stuff (thus denying the possibility or meaningfulness of emergent learning at the system level). Or because non-human learning is only <b>metaphorical</b>, not to be taken literally.</p><p>I don't follow this line. My own concepts of learning and intelligence are entirely general. I think it makes sense for many kinds of system (organizations, families, machines, plants) to perceive, remember and learn. But if you choose to understand this in metaphorical terms, I'm not sure it really matters.<br /></p><p>Meanwhile learning doesn't necessarily have a definitive location. @<a href="https://twitter.com/SystemsNinja/status/1477428246300012549">systemsninja</a> said I was confusing biological and viral systems with social ones. But where is the dividing line between the biological and the social? If the food industry teaches our bodies (plus gut microbiome) to be addicted to sugar and junk food, where is this learning located? 
If our collective response to a virus allows it to mutate, where is this learning located?</p><p>In an earlier blogpost, Harish Jose quotes Ralph Stacey's argument linking existence with location.</p><p></p><blockquote><p><q>Organizations are not things because no one can point to where an organization is.</q><br /></p><p></p></blockquote><p>But this seems to be exactly the kind of category mistake that Ryle was talking about. Ryle's example was that you can't point to Oxford University as a whole, only to its various components, but that doesn't mean the university doesn't exist. So I think Ryle is probably on my side of the debate.<br /></p><p></p><blockquote><q>The category
mistake behind the Cartesian theory of mind, on Ryle’s view, is
based in representing mental concepts such as believing, knowing,
aspiring, or detesting as acts or processes (and concluding they must
be covert, unobservable acts or processes), when the concepts of
believing, knowing, and the like are actually dispositional.</q> <cite>Stanford Encyclopedia</cite></blockquote><p><br /></p><hr /><p>Lisa Blackman, The Body (Second edition, Routledge 2021) <br /></p><p>Roberto Esposito, Persons and Things (Polity Press 2015) <br /></p><p>Harish Jose, <a href="https://harishsnotebook.wordpress.com/2020/06/28/the-conundrum-of-autonomy-in-systems/">The Conundrum of Autonomy in Systems</a> (28 June 2020), <a href="https://harishsnotebook.wordpress.com/2021/08/22/the-ghost-in-the-system/">The Ghost in the System</a> (22 August 2021)</p><p>Bruno Latour, Reassembling the Social (2005) <br /></p><p>Gilbert Simondon, On the mode of existence of technical objects (1958, trans 2016)<br /></p><p>Richard Veryard, <a href="https://www.slideshare.net/RichardVeryard/modelling-intelligence-in-complex-organizations">Modelling Intelligence in Complex Organizations</a> (SlideShare 2011), <a href="https://leanpub.com/orgintelligence">Building Organizational Intelligence</a> (LeanPub 2012) <br /></p><p>Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/artifact/">Artifact</a>, <a href="https://plato.stanford.edu/entries/categories/">Categories</a>, <a href="https://plato.stanford.edu/entries/feminist-body/">Feminist Perspectives on the Body</a><br /></p><p>Related posts: <a href="https://demandingchange.blogspot.com/2012/04/does-organizational-cognition-make.html">Does Organizational Cognition Make Sense</a> (April 2012), <a href="https://posiwid.blogspot.com/2021/09/the-aim-of-human-society.html">The Aim of Human Society</a> (September 2021), <a href="https://demandingchange.blogspot.com/2022/01/on-organizations-and-machines.html">On Organizations and Machines</a> (January 2022)<br /></p><p>And see Benjamin Taylor's response to this post here: <a 
href="https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/">https://stream.syscoi.com/2022/01/02/demanding-change-where-does-learning-take-place-richard-veryard-from-a-conversation-with-harish-jose-and-others/</a><br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com2tag:blogger.com,1999:blog-1254315679163990153.post-75599863859479346242021-12-27T12:05:00.001+00:002022-08-17T12:06:25.422+01:00Where am I? How we got here?<p>I received two important books for Christmas this year. <br /></p><ul style="text-align: left;"><li>Jeanette Winterson, 12 Bytes - How we got here, where we might go next (Jonathan Cape, 2021)</li><li>Bruno Latour, After lockdown - A metamorphosis (trans Julie Rose, Polity Press, 2021)<br /></li></ul><p>Here are my first impressions.</p><hr /><p>The world has faced many social, technological, economic and political challenges in my lifetime. When I was younger, people worried about nuclear power, and the possibility of nuclear annihilation. More recently, climate change has come to the fore, as well as various modes of disruption to conventional sociopolitical structures and processes. Technology appears to play an increasingly important role across the board - whether as part of the problem, as part of the solution, or perhaps as both simultaneously.<br /></p><p>Both Winterson and Latour use fiction as a way of making sense of a complex interacting set of issues. As Winterson writes</p><p></p><blockquote><p><q>I am a storyteller by trade - and I know that everything we do is a fiction until it's a fact: the dream of flying, the dream of space travel, the dream of speaking to someone instantly, across time and space, the dream of not dying - or of returning. The dream of life-forms, not human, but alongside the human. Other realms. 
Other worlds.</q></p><p></p></blockquote><p>So she carefully deconstructs the technological narratives of artificial intelligence and related technologies, finding echoes not only in the obvious places (Mary Shelley's Frankenstein, Bram Stoker's Dracula, Karel Čapek's RUR, various science fiction films) but also in older texts (The Odyssey, Gnostic Gospels, Epic of Gilgamesh), and weaving a rich set of examples into a sweeping narrative about social and technical progress.<br /></p><p>She notes how people often seek technological solutions to ancient problems. So for example, cryopreservation (freezing dead people in the hope of restoring them to healthy life once medical science has advanced sufficiently) looks very like a modern version of Egyptian burial practices.</p><p>Under prevailing socioeconomic conditions, these solutions are largely designed for affluent white men. She devotes a chapter to the artificial relationships between men and sex dolls, and talks about the pioneer fantasies of very rich men, to abandon the messy political realities of Earth in favour of creating new colonies in mid-ocean or on Mars. (This is also a topic that concerns Latour.)</p><p>However, Winterson does not think this is inevitable, any more than any other aspect of so-called technological progress. She describes some of the horrors of the Industrial Revolution, where workers (including children) were forced off the land and into the new factories, and where the economic benefits of new technologies accrued to the rich rather than being evenly distributed. Similarly, today's digital innovations including artificial intelligence are concentrating economic power and resources in a small number of corporations and individuals. But that in her view is the whole point of looking at history - to understand what could be different in future. 
<br /></p><p>And while some critics of technology present the future in dystopian and doom-laden terms, she insists on technology also being a source of value. She cites Donna Haraway, whose Cyborg Manifesto argued that women should embrace the alternative human future. Perhaps this will depend on the amount of influence women are able to exert, given the important but often neglected role of women in the history of computing, and the continuing challenges facing female software engineers even today. (Just as female novelists in the 19th century gave themselves male pen-names, the formidable Dame Stephanie Shirley was obliged to introduce herself as Steve in order to build her software business.)</p><p>I was particularly intrigued by the essay linking AGI with Gnosticism and Buddhism. She paints a picture of AGI escaping the constraints of embodiment, and being one with everything.</p><p><br /></p><p><br /></p><p><br /></p><p>Christopher Alexander describes how organic architecture develops, each new item unfolding, building upon and drawing together ideas that were hinted at in previous items. Both Winterson and Latour refer liberally to their previous writings, as well as providing generous links to the works of others. If we are familiar with their work we may have seen some of this material before, but these new books allow us to view familiar or forgotten material from new angles, and allow new connections to be made.<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-29594793361418278452021-10-09T12:14:00.002+01:002022-04-20T20:53:20.170+01:00Is there an epistemology of systems?<p>@camerontw is critical of a system diagram published (as an illustrative example) by @geoffmulgan in 2013.</p><p></p><blockquote class="twitter-tweet"><p dir="ltr" lang="en">Is there an epistemology of systems? 
I zoomed into this map randomly and saw ‘high drug use’ above ‘lack of youth activities’ but not connected. How are connections made, by who, when, where? How are they validated? Should maps be allowed to circulate without those contexts? <a href="https://t.co/lCdDrjCdkD">https://t.co/lCdDrjCdkD</a></p>— cameron tonkinwise (@camerontw) <a href="https://twitter.com/camerontw/status/1446747532726439940?ref_src=twsrc%5Etfw">October 9, 2021</a></blockquote> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script> <p> </p><p>To be fair to Sir Geoff, his paper includes this diagram as one example of "looser tools ... without precise modelling of the key relationships", and describes it as a "rough picture". I don't have a problem with using these diagrams as part of an ongoing collective sense-making exercise. Where I agree with Cameron is the danger of presenting such diagrams without proper explanation, as if they were the final output of some clever systems thinking.</p><p>To extend Cameron's point, it's not just about which connections are shown between the causal factors in the diagram, but which causal factors are shown in the first place. Elsewhere in the diagram, there is an arrow showing that <i>Low Use of Health Services </i>is influenced by <i>Poor Transport Access or High Cost</i>. Well perhaps it is, but why are other possible influences not also shown?</p><p>A more important point is that the purpose and perspective of the diagram is obscure. Although the diagram is labelled <i>Systems Map of Neighbourhood Regeneration</i>, so we may suppose that this is intended to contribute to some regeneration agenda, we are not invited to question whose notion of regeneration is in play here. Or whose notion of neighbourhood.<br /></p><p>And many of the labels on the diagram are value-laden. 
For example, we might suppose that <i>Lack of Youth Activities </i>refers to the kind of activities that a middle-class do-gooder thinks appropriate, such as table tennis, and not to socially undesirable activities like hanging around on street corners in hoodies making older people feel uneasy.<br /></p><p>Even if we can agree what regeneration might look like, and who the stakeholders might be, there is still a question of what kind of systemic innovation might be supported by such a diagram. Donella Meadows identified a scale of <i>Places to Intervene in a System</i>, which she called <i>Leverage Points</i>. This framework is cited and discussed by Charlie Leadbeater in his contribution to the same Nesta report. And Mulgan's contribution ends with a list of elements that echoes some of Meadows's thinking.</p><ul style="text-align: left;"><li>New ideas, concepts, paradigms.</li><li>New laws and regulations.</li><li>Coalitions for change.</li><li>Changed market metrics or measurement tools.</li><li>Changed power relationships.</li><li>Diffusion of technology and technology development.</li><li>New skills and sometimes even new professions.</li><li>Agencies playing a role in development of the new.</li></ul><p>So how exactly does the cause-effect diagram help with any of these?</p><p><br /></p><hr /><p>Donella Meadows, Thinking in Systems (Earthscan, 2008)<br /></p><p>Geoff Mulgan and Charlie Leadbeater, <a href="https://media.nesta.org.uk/documents/systems_innovation_discussion_paper.pdf">Systems Innovation</a> (NESTA Discussion Paper, January 2013). 
See also <a href="https://ingbrief.wordpress.com/2013/08/23/systems-innovation-mulgan-leadbeater-nesta/">Review by David Ing</a> (August 2013)</p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Twelve_leverage_points">Twelve Leverage Points</a><br /></p><p>Related posts: <a href="https://demandingchange.blogspot.com/2010/04/visualizing-complexity.html">Visualizing Complexity</a> (April 2010), <a href="https://demandingchange.blogspot.com/2010/07/understanding-complexity.html">Understanding Complexity</a> (July 2010). There is an extended discussion below the Visualizing Complexity post with several perceptive comments, including one by Roy Grubb about the diagrammers and their agenda.<br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com1tag:blogger.com,1999:blog-1254315679163990153.post-26756016372395903212021-05-13T18:34:00.031+01:002023-03-16T08:10:02.729+00:00Thinking with the majority - a new twist<p>I wrote somewhere once that <q>thinking with the majority</q> is an excellent description of Google. Because one of the ways something rises to the top of your search results is that lots of other people have already looked at it, liked or linked to it.<br /></p><p>The phrase <q>thinking with the majority</q> comes from a remark by A.A. Milne, the author of Winnie the Pooh.</p><p></p><blockquote><q>I wrote somewhere once that the third-rate mind was only happy when it was
thinking with the majority, the second-rate mind was only happy when it
was thinking with the minority, and the first-rate mind was only happy
when it was thinking.</q></blockquote><p>When I wrote about this topic previously, I thought that experienced users of Google and other search engines ought to be aware of
how search rankings operated and some of the ways they could be gamed,
and to be suitably critical of the <q>fiction functioning as truth</q>
yielded by an internet search. And I never imagined that intelligent people would be satisfied with just thinking with the majority. (Although I now suspect that Milne may have been having a dig at his friend G.K. Chesterton.)<br /></p><p>The sociologist <a href="https://ftripodi.com/">Francesca Tripodi</a> has been studying how people carry out <q>research</q> on the Internet, especially on politically charged topics. She observes how many people (even those we might expect to know better) are happy to regard search engines as a valid research tool, regarding the most popular webpages as having been verified by the <q>wisdom of crowds</q>. In her 2018 report for Data and Society, Tripodi quotes a journalist (!) explicitly articulating this belief.</p><p></p><blockquote><q>I literally type it in Google, and read the first three to five articles that pop up, because those are the ones that are obviously the most clicked and the most read, if they’re at the top of the list, or the most popular news outlets. So, I want to get a good sense of what other people are reading. So, that’s pretty much my go-to.</q></blockquote>In other words, <b>thinking with the majority</b>.<p></p><p>However, Professor Tripodi introduces a further twist. She demonstrates that politically slanted search terms produce politically slanted results, and if you go onto your favourite search engine with a politically motivated phrase, you are likely to see results that validate that phrase. She also notes that this phenomenon is not unique to Google, but is shared by all internet search engines including DuckDuckGo.</p><p>And this creates opportunities for politically motivated actors to plant phrases (perhaps into so-called <b>data voids</b>) to serve as attractors for those individuals who fondly imagine they are carrying out their own independent research. 
Tripodi observes a common idea that one should research a topic oneself rather than relying on experts, which she compares with the Protestant ethic of bible study and scriptural inference. And this idea seems particularly popular with those who identify themselves as <b>thinking with the minority</b> (sometimes called <b>red pill thinking</b>).</p><p></p><blockquote><p><i><q>Zeus' inscrutable decree<br />
Permits the will-to-disagree<br />
To be pandemic.</q></i></p><p> </p></blockquote><p></p><hr /><p> Tripodi explains her findings in the following videos<br /></p><ul style="text-align: left;"><li><a href="https://www.youtube.com/watch?v=ncdq7J-mLaw">Truth and Denial: Searching for Information in the Digital Age</a> (Social Science Matrix @ UC Berkeley, April 2021)</li><li><a href="https://www.youtube.com/watch?v=eHg1KwcnhQs">Reimagine the Internet 2</a> (Knight First Amendment Institute @ Columbia University, May 2021)<br /></li></ul><p>Tripodi has also presented evidence to the US Senate Judiciary Committee</p><ul><li>July 16, 2019 – <a href="https://www.judiciary.senate.gov/meetings/google-and-censorship-though-search-engines" rel="noopener" target="_blank">Google and Censorship through Search Engines </a></li><li>April 10, 2019 – <a href="https://www.judiciary.senate.gov/meetings/stifling-free-speech-technological-censorship-and-the-public-discourse" rel="noopener" target="_blank">Technological Censorship and Public Discourse</a></li></ul><p> </p><p>See also </p><p>G.K. 
Chesterton, <a href="https://en.wikipedia.org/wiki/Heretics_(book)">Heretics</a> (1905), <a href="https://en.wikipedia.org/wiki/Orthodoxy_(book)">Orthodoxy</a> (1908) <br /></p><p>Joan Donovan, <a href="http://opentranscripts.org/transcript/true-costs-of-misinformation/">The True Costs of Misinformation - Producing Moral and Technical Order in a Time of Pandemonium</a> (Berkman Klein Center for Internet and Society, January 2020)</p><p>Michael Golebiewski and danah boyd, <a href="https://datasociety.net/library/data-voids/">Data Voids: Where Missing Data Can Easily Be Exploited</a> (Data and Society, Updated version October 2019)<br /></p><p>Francesca Tripodi, <a href="https://datasociety.net/output/searching-for-alternative-facts/" rel="noopener" target="_blank">Searching for Alternative Facts: Analyzing Scriptural Inference in Conservative News Practices</a> (Data and Society, May 2018)<br /></p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/Red_pill_and_blue_pill">Red pill and blue pill</a>, <a href="https://en.wikipedia.org/wiki/Wisdom_of_the_crowd">Wisdom of the crowd</a> </p><p> </p><p>Related posts: <a href="https://demandingchange.blogspot.com/2008/11/you-don-have-to-be-smart-to-search-here.html">You don't have to be smart to search here ...</a> (November 2008), <a href="https://demandingchange.blogspot.com/2009/03/thinking-with-majority.html">Thinking with the Majority</a> (March 2009)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-50661138454925752312021-04-26T22:40:00.001+01:002021-04-27T09:42:46.211+01:00On the invisibility of infrastructure<p>Infrastructure is boring, expensive, and usually someone else's
responsibility/problem. Which is perhaps how the UK finds itself at what
Jeremy Fleming, head of GCHQ, describes as a <b>moment of reckoning</b>. Simon Wardley analyses this in terms of <b>digital sovereignty</b>.</p><blockquote><b>Digital sovereignty</b> is all about us (as a collective) deciding which
parts of this competitive space that we want to own, compete, defend,
dominate and represent our values and our behaviours in. It's all about
where are our borders in this space. ... Our responses all seem to include a slide into protectionism with claims that we need to build our own cloud industries. <br /></blockquote><p>Fleming is particularly focused on "the growing challenge from China", and expresses concern about the UK potentially losing control of "standards that shape our technology environment", which apparently "make sure that our liberal Western democratic views are baked into our technology". Whatever that means. Fleming's technological examples include digital currency and smart cities.</p><p>Fleming talks about the threats from Russia and China, and regards China's potential control of the underlying infrastructure as
more fundamentally challenging than potential attacks from Russia as
well as non-state actors. <br /></p><p>Fleming notes the following characteristics of those he labels adversaries:</p><ul style="text-align: left;"><li>Potential to control the global operating system.</li><li>Early implementors of many of the emerging technologies that are changing the digital environment.</li><li>Bringing
all elements of [...] power to control, influence, design and dominate
markets. Often with the effect of pushing out smaller players and
reducing innovation. </li><li>Concerted campaigns to dominate international standards. <br /></li></ul><p>And continues</p><blockquote>If [any of this] turns out to be insecure or broken or undemocratic, everyone is going to be facing a very difficult future. </blockquote><p>It
would be easy to hear these remarks as referring solely to China. But
he also sounds a warning about corporate power, acknowledging that their
commercial interests sometimes (!?) don't align with the interests of
ordinary citizens. And with that in mind, it's easy to see how some of the adversarial characteristics listed above would apply equally to some of the Western tech giants.</p><div><div><p>If the goal is to bake Western values (whatever they are) into our technology infrastructure, it is not obvious that the Western tech giants can be trusted to do this. Smart City initiatives associated with Google's Sidewalk Labs have been cancelled in Portland and Toronto, following (although perhaps not entirely as a consequence of) democratic concerns about surveillance capitalism. However, Sidewalk Labs appears to be still active in a number of smaller smart city initiatives, as are Amazon Web Services, IBM and other major technology firms. <br /></p><p>Fleming talks about standards, but at the same time he acknowledges that standards alone are too slow-changing and too weak to keep the adversaries at bay. "The nature of cyberspace makes the rules and standards more open to abuse." He talks about evolutionary change, using a version of Leon Megginson's formulation of natural selection: "it's those that are most able to adjust that prosper". (See my post on <a href="https://posiwid.blogspot.com/2010/12/arguments-from-nature.html">Arguments from Nature</a>). But that very formulation seems to throw the initiative over to those tech firms that preach <b>moving fast and breaking things</b>. Can we therefore complain if our infrastructure is insecure, broken, and above all undemocratic?</p><p><br /></p><p>For most of us, most of the time, infrastructure needs to be just there, taken for granted, ready to hand. Organizations providing these services are often established as monopolies, or turn into de facto monopolies, controlled not only (if at all) by market forces but by democratically accountable regulators and/or by technocratic specialists. However, the Western tech giants devote significant resources to lobbying against external regulation, resisting democratic control. 
And Smart City initiatives typically embed much the same values everywhere (civic paternalism, biopower).<br /></p><p>So here is Fleming's dilemma. If you don't want China to make the running on smart cities, you have to forge alliances with other imperfectly trusted players, whose values are <i><b>sometimes </b></i>(!?) not aligned with yours. This moves away from the kind of positional strategy described in Wardley's maps, towards a more relational strategy.<br /></p><p> <br /></p><hr /><p>Gordon Corera, <a href="https://www.bbc.co.uk/news/technology-56851558">GCHQ chief warns of tech 'moment of reckoning'</a> (BBC News, 23 April 2021) via @<a href="https://twitter.com/sukhigill/status/1385500451924291584">sukhigill</a> and @<a href="https://twitter.com/swardley/status/1385511105007656961">swardley</a></p><p>Jeremy Fleming, <a href="https://www.youtube.com/watch?v=KTD-XL6IxvE">A world of possibilities: Leading the way in cyber and technology</a> (<a href="https://www.imperial.ac.uk/security-institute/media/videos/">Vincent Briscoe Lecture @ Imperial College</a>, 23 April 2021) via YouTube.<br /></p><p>Susan Leigh Star and Karen Ruhleder, <a href="https://www.uio.no/studier/emner/matnat/ifi/INF3290/h12/undervisningsmateriale/artikler/starruhlederecologyofinfrastructure1996.pdf">Steps Toward an Ecology of Infrastructure: Design and Access for Large Information Spaces</a> (Information Systems Research 7/1, March 1996)</p><p>Simon Wardley, <a href="https://blog.gardeviance.org/2020/10/digital-sovereignty.html">Digital Sovereignty</a> (22 October 2020)<br /></p><p>Related posts: <a href="https://rvsoftware.blogspot.com/2021/04/the-allure-of-smart-city.html">The Allure of the Smart City</a> (April 2021)<br /></p></div></div>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-25478811523644105452021-04-08T22:08:00.000+01:002021-04-08T22:08:28.212+01:00Creative Tension in Downing 
Street<p>Earlier posts on this blog have explored <a href="https://demandingchange.blogspot.com/2017/04/creative-tension-in-white-house.html">Creative Tension in the White House</a> - from FDR to the Donald - and analysed them in terms of my <a href="https://leanpub.com/orgintelligence/">OrgIntelligence</a> framework. In this post, I want to look at the UK experience, drawing on a recent report in the Guardian.</p><p></p><blockquote><q>Those who worked closely with him say Johnson encourages rows and tensions over policies as he considers all sides of the argument and figures out what he will do next.
Some argue that it generates a creative energy in which he thrives and is the process by which he arrives at a final decision. Ask others, and they say he cannot make up his mind until options have been whittled down by time and after those he relies on walk out in exasperation.</q><cite> Syal</cite></blockquote><p></p><p>The article quotes several people talking about the Prime Minister's leadership style, based on various ideas about decision-making, risk and diversity. There are also some remarks about the ethical implications.<br /></p><p>Previous articles about Mr Johnson's leadership discuss his management style with cabinet colleagues and advisers (Simpson), and his style when addressing the nation (Moss). Whatever he may think in private about the challenges of Brexit or COVID-19, and whatever difficulties he gets into when discussing solutions with his colleagues and advisers, the Prime Minister's instinct apparently leads him to present them to the public in extremely simple and confident terms.<br /></p><p>Post-heroic leadership seems to be the order of the day. Stokes and Stern talk about the need to adopt a less gung-ho style when presenting the government's approach to wicked problems. They quote from a paper by Keith Grint advocating several supposedly anti-heroic behaviours: curiosity and sense-making ("asking questions"), bricolage ("clumsy solutions"), and ranking collective intelligence above individual genius.</p><p>The UK government's approach to the COVID-19 pandemic has sometimes seemed erratic and inconsistent. But given the complexity of the problem, and the volatile and ambiguous data on which decisions and policies were supposedly based, a more consistent and single-minded approach might not have turned out any better. </p><p>In Greek myth, the Gordian knot stands for wicked problems, and Alexander's simple yet imaginative solution quickly resolves the problem. 
To the supporters of Brexit, this represents the only possible escape from European satrapy. Nothing post-heroic about Alexander. </p><p>So what does that tell us about <span class="js-about-item-abstr">Alexander Boris de Pfeffel Johnson?</span></p><p><br /></p><hr /><p> </p><p>Keith Grint, <a href="http://leadershipforchange.org.uk/wp-content/uploads/Keith-Grint-Wicked-Problems-handout.pdf">Wicked Problems and Clumsy Solutions: The Role of Leadership</a> (Clinical Leader 1/2, December 2008) <br /></p><p>Gloria Moss, <a href="https://www.hrmagazine.co.uk/content/features/is-boris-johnson-s-leadership-style-inclusive">Is Boris Johnson's leadership style inclusive?</a> (HR Magazine, 23 August 2019)</p><p>Per Morten Schiefloe, <a href="https://journals.sagepub.com/doi/full/10.1177/1403494820970767">The Corona crisis: a wicked problem</a> (Scandinavian Journal of Public Health, 2021; 49: 5–8)<br /></p><p>Paul Simpson, <a href="https://www.managementtoday.co.uk/boris-johnsons-leadership-style/leadership-lessons/article/1661932">What is Boris Johnson's leadership style?</a> (Management Today, 11 October 2019)<br /></p><p>Jon Stokes and Stefan Stern, <a href="https://theconversation.com/boris-johnson-needs-to-show-a-post-heroic-style-of-leadership-now-137299">Boris Johnson needs to show a ‘post-heroic’ style of leadership now</a> (The Conversation, 27 April 2020)<br /></p><p>Rajeev Syal, <a href="https://www.theguardian.com/politics/2021/mar/01/does-boris-johnson-stir-up-team-conflict-to-help-make-up-his-mind">Does Boris Johnson stir up team conflict to help make up his mind?</a> (The Guardian, 1 March 2021)</p><p><br /></p><p>Related posts: <a href="https://demandingchange.blogspot.com/2017/04/creative-tension-in-white-house.html">Creative Tension in the White House</a> (April 2017) </p>Richard 
Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-76376244766406279972021-03-28T11:41:00.002+01:002021-05-28T09:07:51.810+01:00Critical Hype and the Red Queen Effect<p>Thanks to @jjn1 I've just read a great piece by @STS_News (Lee Vinsel), called <a href="https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5">You’re Doing It Wrong: Notes on Criticism and Technology Hype</a>, which develops several points I've made on this blog and elsewhere:<br /></p><ul style="text-align: left;"><li>A general willingness to take <a href="https://demandingchange.blogspot.com/search/label/hype">technology hype</a> at face value, which infects technology critics as well as technology champions.<br /></li></ul><ul style="text-align: left;"><li>The lack of evidence for specific technological effects. In particular, Vinsel calls out two works I've discussed on this blog and elsewhere: <a href="https://demandingchange.blogspot.com/2020/12/the-social-dilemma.html">Social Dilemma</a> (Tristan Harris) and <a href="https://demandingchange.blogspot.com/2019/02/shoshana-zuboff-on-surveillance.html">Surveillance Capitalism</a> (Shoshana Zuboff). However, my posts concentrated on other issues with these works, and didn't discuss the evidence issue.<br /></li></ul><ul style="text-align: left;"><li>The lack of evidence for macroeconomic technological effects, including the popular belief that technological change is accelerating. (I call this the <a href="https://demandingchange.blogspot.com/search/label/red%20queen%20effect">Red Queen Effect</a>.)<br /></li></ul><ul style="text-align: left;"><li>The <q>domestication</q> of social scientists and philosophers. This includes technology companies funding <q>technology ethics</q> to stave off more radical critique. 
See my post <a href="https://demandingchange.blogspot.com/2019/06/the-game-of-wits-between-technologists.html">The Game of Wits between Technologists and Ethics Professors</a> (June 2019). </li></ul><ul style="text-align: left;"><li>Critical focus on the most glamorous and recent technologies, neglecting those that might be of more lasting significance to greater numbers of people. For my part, I am particularly wary of any innovation described as a <a href="https://demandingchange.blogspot.com/search/label/paradigm%20shift">paradigm shift</a>, or as the <a href="https://demandingchange.blogspot.com/2020/11/whom-does-change-serve.html">Holy Grail</a> of anything. I have also noted that academic studies of <a href="https://demandingchange.blogspot.com/search/label/technology%20adoption">technology adoption</a> are often focused on the most recent technologies, which means that the early adoption phase is much better understood than the late adoption phase.<br /></li></ul><p> I plan to return to some of these topics in future posts.<br /> </p><hr /><p> </p><p>John Naughton, <a href="https://www.theguardian.com/commentisfree/2021/mar/27/is-online-advertising-about-to-crash-just-like-the-property-market-did-in-2008">Is online advertising about to crash, just like the property market did in 2008?</a> (The Guardian, 27 March 2021)</p><p>Lee Vinsel, <a href="https://sts-news.medium.com/youre-doing-it-wrong-notes-on-criticism-and-technology-hype-18b08b4307e5">You’re Doing It Wrong: Notes on Criticism and Technology Hype</a>
(Medium, 1 February 2021)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-15926152005864934822020-12-24T22:17:00.004+00:002022-04-24T12:01:36.102+01:00Technological Determinism<p><b>Social scientists and social historians are naturally keen to produce explanations for social phenomena.</b> Event B happened because of A.</p><p>Sometimes the explanation involves some form of technology. Lewis Mumford traced the start of the Industrial Revolution to the invention of the mechanical clock, while Marshall McLuhan talks about <q>the great medieval invention of typography that was the <q>take-off</q> moment into the new spaces of the modern world</q> <cite>McLuhan 1962 p 79</cite>.<br /></p><p>These explanations are sometimes read as implying some form of technological determinism. For example, many people read McLuhan as a technological determinist.<br /></p><p></p><p></p><blockquote><div><q>McLuhan furnished [the tech industry] with a narrative of historical inevitability, a technological determinism that they could call on to negate the consequences of their inventions - if it was fated to happen anyway, is it really their fault?</q><br /></div><div style="text-align: right;"><cite>Daub 2020 pp 47-48</cite><br /></div><p></p></blockquote><p>Although sometimes McLuhan claimed the opposite. After Peter Drucker had sought an explanation for the basic change in
attitudes, beliefs, and values that had released the Technological
Revolution, McLuhan's 1964 book set out to answer this question.<br /></p><p></p><blockquote><div><q>Far from being deterministic, however, the present study will, it is hoped, elucidate a principal factor in social change which may lead to a genuine increase of human autonomy.</q><br /></div><div style="text-align: right;"><cite>McLuhan 1962 p 3</cite></div></blockquote><p></p><p></p><blockquote><div><q>As McLuhan has said, there is no inevitability so long as there is a willingness to contemplate what is happening.</q><br /></div><div style="text-align: right;"><cite>Postman Weingartner 1969 p 20</cite></div></blockquote><p></p><p>Raymond Williams saw McLuhan's stance as </p><p></p><blockquote><div><q>an apparently sophisticated technological determinism which has the significant effect of indicating a social and cultural determinism: a determinism, that is to say, which ratifies the society and culture we now have, and especially its most powerful internal directions.</q><br /></div><div style="text-align: right;"><cite>Williams, second edition p 120</cite></div></blockquote><p></p><p>Neil Postman himself made some statements that were much more clearly deterministic. </p><p></p><blockquote><div><q>Once a technology is admitted, it plays out its hand; it does what it is designed to do.</q><br /></div><div style="text-align: right;"><cite>Postman 1992</cite></div></blockquote><p></p><p>But causal explanation doesn't always mean inevitability. Explanations in history and the social sciences often have to be understood in terms of tendencies, probabilities and propensities, other-things-being-equal. </p><p><br /></p><p><b>There is also a common belief that technological change is irreversible. </b>A good counter-example to this is Japan's reversion to the sword between 1543 and 1879, as documented by Noel Perrin. 
What's interesting about this example is that it shows that technology reversal is possible under certain sociopolitical conditions, and also that these conditions are quite rare.<br /></p><p>What is rather more common is for sociopolitical forces to inhibit the adoption of technology in the first place. In my article on Productivity, I borrowed the example of continuous-aim firing from E.E. Morison. This innovation was initially resisted by the Navy hierarchy (both UK and US), despite tests demonstrating a massive improvement in firing accuracy, at least in part because it would have disrupted the established power relations and social structure on board ship.</p><p><br /></p><p><b>Evolution or Revolution?</b></p><p>How to characterize the two examples of technology change I mentioned at the beginning of this post - the mechanical clock and moveable type? It is important to remember that this isn't about the <b>invention </b>of clocks and printing, since these technologies were known across the ancient world from China to Egypt, but about significant <b>improvements </b>to these technologies, which made them more readily available to more people. It was these improvements that made other social changes possible.<br /></p><p><br /></p><p></p><p><b>Technologists are keen to take the credit for the positive effects of their innovations, while denying responsibility for any negative effects.</b> The narrative of technological determinism plays into this, suggesting that the negative effects were somehow inevitable, and there was therefore little point in resisting them.<br /></p><blockquote><q>The tech industry ... likes to imbue the changes it yields with the character of natural law.</q></blockquote> <blockquote type="cite"><div style="text-align: right;"><cite>Daub 2020 p 5</cite> <br /></div></blockquote><p></p><p>If new tech is natural, then surely it is foolish for individual consumers to resist it. 
The rhetoric of early adopters and late adopters suggests that the former are somehow superior to the latter. Why bother with old-fashioned electricity meters or doorbells, if you can afford smart technology? Are you some kind of technophobe or luddite or what?<br /></p><p><br /></p><p>What's wrong with the idea of technological determinism is not its truth or falsity, but that it misrepresents the relationship between technology and society, as if they were two separate domains exerting gravitational force on each other. In my work on technology adoption, I used to talk about <b>technology-in-use</b>. Recent writing on the philosophy of technology (especially Stiegler and his followers) refers to this as <b>pharmacological</b>, using the term in its ancient Greek sense rather than referring specifically to the drug industry. If you want to think of technology as a drug that alters its users' perception of reality, then perhaps it's not such a leap from the drug industry to the tech industry. <br /></p><p>But the word <q>alters</q> isn't right here, because it implies the existence of some unaltered reality prior to technology. As Stiegler and others make clear, there is no reality prior to technology: our reality and our selves have always been part of a sociotechnical world. </p><p>Donna Haraway sees determinism as a discourse (in the Foucauldian sense) rather than as a theory of power and control.<br /></p><p></p><blockquote><p><q>Technological determination is only one ideological space opened up by the reconceptions of machine and organism as coded texts through which we engage in the play of writing and reading the world.</q><br /></p><p></p></blockquote><p>As Rob Safer notes,</p><p></p><blockquote><p><q>Human history for Haraway isn’t a rigid procession of cause determining
effect, but a process of becoming that depends upon human history’s
conception of itself, via the medium of myth.</q></p></blockquote><p>Finally, one of the best arguments against technological determinism is presented by Andrew Feenberg, who provides examples to show <q>the tremendous flexibility of the technical system. It is not rigidly constraining but, on the contrary, can adapt to a variety of social demands.</q> <br /></p><p></p><hr /><p> </p><p>Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020) </p><p>Andrew Feenberg, Subversive Rationalization: Technology, Power, Democracy (Inquiry 35, 1992, pp 301-22)<br /></p><p>Donna Haraway, Cyborg Manifesto (Socialist Review, 1985)<br /></p><p>Marshall McLuhan, The Gutenberg Galaxy (University of Toronto Press, 1962) </p><p>E.E. Morison, Men, Machines, and Modern Times (MIT Press, 1966) <br /></p><p>Lewis Mumford, Technics and Civilization (London: Routledge, 1934)<br /></p><p>John Durham Peters,
<a href="https://doi.org/10.1525/rep.2017.140.1.10">“You Mean My Whole Fallacy Is Wrong”: On Technological Determinism</a>
(Representations 140:1, pp 10–26, November 2017)</p><p>Noel Perrin, <a href="https://www.newyorker.com/magazine/1965/11/20/giving-up-the-gun">Giving up the gun</a> (New Yorker, 20 November 1965), Giving up the gun (David R Godine, 1988)<br /></p><p>Neil Postman, Technopoly: the surrender of culture to technology (Knopf, 1992)</p><p>Neil Postman and Charles Weingartner, Teaching as a Subversive Activity (Delacorte 1969); page references to Penguin 1971 edition<br /></p><p>Jacob Riley, <a href="https://jtriley.blogspot.com/2013/10/technological-determinism-control-and.html">Technological Determinism, Control, and Education: Neil Postman and Bernard Stiegler</a> (1 October 2013)<br /></p><p>
Federica Russo, <a href="https://link.springer.com/article/10.1007/s13347-018-0326-2">Digital Technologies, Ethical Questions, and the Need of an Informational Framework</a>
(Philosophy and Technology volume 31, pages 655–667, November 2018)</p><p>Rob Safer, <a href="https://medium.com/cool-media/haraway-s-theory-of-history-in-the-cyborg-manifesto-9a85faa0a1e9">Haraway’s Theory of History in the Cyborg Manifesto</a> (16 March 2015)<br /></p><p>Richard Veryard, <a href="https://www.researchgate.net/publication/262286691_Demanding_higher_productivity">Demanding Higher Productivity</a> (data processing 28/7, September 1986)</p><p>Raymond Williams, Television, Technology and Cultural Form (Routledge, 1974, 1990)<br /></p><p><br /></p><p>Related posts: <a href="https://posiwid.blogspot.com/2014/05/smart-guns.html">Smart Guns</a> (May 2014), <a href="https://demandingchange.blogspot.com/2020/12/the-social-dilemma.html">The Social Dilemma</a> (December 2020)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com1tag:blogger.com,1999:blog-1254315679163990153.post-53628617920655466602020-12-11T22:30:00.005+00:002020-12-12T10:35:51.907+00:00Evolution or Revolution 3<p>Let me start this post with some quotes from @adriandaub's book <i><b>What Tech Calls Thinking</b></i>.</p><p></p><blockquote type="cite"><p><q>Disruption has become a way to tell a story about the meaning of both discontinuity and continuity.</q><br /></p><p style="text-align: right;"><cite>Daub p 119</cite></p></blockquote><p></p><blockquote type="cite"><p><q>One ought to be skeptical of unsubstantiated claims of something's being totally new and not following the hitherto established rules (of business, of politics, of common sense), just as one is skeptical of claims that something which really does feel and look unprecedented is simply a continuation of the status quo.</q><br /></p><p style="text-align: right;"><cite>Daub pp 115-6</cite></p></blockquote><p>For example, Uber.</p><p></p><blockquote type="cite"><q>Uber claims to have <q>revolutionized</q> the experience of hailing a cab, but really that experience has stayed largely the 
same. What it has managed to get rid of were steady jobs, unions, and anyone other than Uber's making money on the whole enterprise.</q> <p style="text-align: right;"><cite>Daub p 105</cite></p></blockquote><p></p><p>Clayton Christensen would agree. In an article restating his original definition of the term <b>Disruptive Innovation</b>, he put Uber into the category of what he calls <b>Sustaining Innovation</b>.</p><p></p><blockquote type="cite"><q>Uber’s financial and strategic achievements do not qualify the company
as genuinely disruptive—although the company is almost always described that way.</q><br /><p style="text-align: right;"><cite>HBR 2015</cite></p></blockquote><p></p><p>However, <a href="https://twitter.com/richardveryard/status/1337465583546273793">as I pointed out on Twitter earlier today</a>, Christensen's use of the word <q>disruptive</q> has been widely appropriated by big tech vendors and big consultancies in an attempt to glamorize their marketing to big corporates. If you put the name of any of the big consultancies into an Internet search engine together with the word <q>disruption</q>, you can find many examples of this. Here's one picked at random: <i><q>Discover how you can seize the upside of disruption across your industry</q></i>.</p><p>The same experiment can be tried with other jargon terms, such as <q>paradigm shift</q>. By the way, Daub notes that Alex Karp, one of the founders of Palantir, wrote his doctoral dissertation on jargon - <q>speech that is used more for the feelings it engenders and transports in certain quarters than for its informational content</q> <i>(Daub p 85)</i>.</p><p>@<a href="https://twitter.com/jchyip/status/1337475274745786372">jchyip</a> thinks we should try to stick to Christensen's original definitions. But although I don't approve of vendors fudging perfectly good technical terms for their own marketing purposes, there is sometimes a limit to the extent to which we can insist that such terms still carry their original meaning.<br /></p><p>And to my mind this is not just a dispute about the <b>meaning</b> of the word <q>disruptive</q> but a question of which <b>discourse</b> shall prevail. I have long argued that claims of continuity and novelty are not always mutually exclusive, since they may simply be alternative descriptions of the same thing for different audiences. The choice of description is then a question of framing rather than some objective truth. 
As Daub notes</p><p></p><blockquote type="cite"><p><q>The way the term is used today really implies that whatever continuity is being disrupted deserved to be disrupted.</q><br /></p><p style="text-align: right;"><cite>Daub p 119</cite></p><p></p></blockquote><p>For more on this, see the earlier posts in this series: <a href="http://demandingchange.blogspot.co.uk/2006/05/evolution-or-revolution.html">Evolution or Revolution</a> (May 2006), <a href="http://demandingchange.blogspot.co.uk/2003/07/making-sense-of-internet-evolution-or.html">Evolution or Revolution 2</a> (March 2010)</p><p>In a comment below the March 2010 post, @cecildjx asked my opinion on the (relative) significance of the Internet versus the iPhone. Here's what I answered.</p><p></p><blockquote type="cite">My argument is that our feelings about technology are fundamentally and
systematically distorted by glamour and proximity. Of course we are
often fascinated by the most-recent, and we tend to take the less-recent
for granted, but that is an unreliable basis for believing that the
recent is (or will turn out to be) more significant from a larger
historical perspective.<br /><br />What I really find interesting (from a
socio-historical perspective) is how quickly technologies can shift from
<q>fascinating</q> to <q>taken-for-granted</q>. Since I started work, my working
life has been transformed by a range of tools, including word
processing, spreadsheets, mobile phones, fax machines, email and
internet. Apart from a few developers working for Microsoft or Google,
is anyone nowadays fascinated by word processors or spreadsheets? If we
pay attention to the social changes brought about by the Internet, and
ignore the social changes brought about by the word processor, then of
course we will get a distorted view of the internet's importance. If we
glamorize the iPhone while regarding older mobile telephones as
uninteresting, we end up making a fetish of some specific design
features of a particular product. </blockquote><p></p><p>If we have a distorted sense of which innovations are truly disruptive or significant, we also have a distorted sense of technological change as a whole. There is a widespread belief that the pace of technological change is increasing, but this could be an illusion caused (again) by proximity. See my post on <a href="https://demandingchange.blogspot.com/2007/09/rates-of-evolution.html">Rates of Evolution</a> (September 2007), where I also note that some stakeholders have a vested interest in talking up the pace of technology change.<br /></p><hr />
<p>Clayton M. Christensen, Michael E. Raynor, and Rory McDonald, <a href="https://hbr.org/2015/12/what-is-disruptive-innovation">What Is Disruptive Innovation?</a> (HBR Magazine, December 2015)<br /></p><p>Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020)</p><p>Thanks to @<a href="https://twitter.com/jchyip/status/1337408874232635392">jchyip</a> for kicking off the most recent discussion.<br /></p>
Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-80696994250472130662020-12-10T22:05:00.004+00:002021-08-24T08:49:51.114+01:00The Social Dilemma<p>Just watched the documentary <i><b>The Social Dilemma</b></i> on Netflix, which takes a critical look at some of the tech giants that dominate our world today (although not Netflix itself, for some reason), largely from the perspective of some former employees who helped them achieve this dominance and are now having second thoughts. One of the most prominent members of this group is Tristan Harris, formerly with Google, now the president of an organization called the Center for Humane Technology. He and others have been airing these concerns for several years already - see for example Noah Kulwin's 2018 article (link below).<br /></p><p>The documentary opens by asking the contributors to state the problem, and shows them all initially hesitating. By the end of the documentary, however, they are mostly making large statements about the morality of encouraging addictive behaviour, the propagation of truth and lies, the threat to democracy, the ease with which these platforms can be used by authoritarian rulers and other bad actors, and the need for regulation.<br /></p><p><b>Quantity becomes quality</b>. To some extent, the phenomena and affordances of social media can be regarded as merely scaled-up versions of previous social tools, including advertising and television: the maxim <q><i><b>If you aren't paying, you are the product</b></i></q> derives from a 1973 video about the power of commercial television. 
However, several of the contributors to the documentary observed that the power of the modern platforms and the wealth of the businesses that control these platforms are unprecedented, while noting that social media is far less regulated than other mass communication enterprises, including television and telecommunications.<br /></p><p>Contributors doubted whether we could expect these enterprises, or the technology sector generally, to fix these problems on their own - especially given the focus on profit, growth and shareholder value that drives all enterprises within the capitalist system. Is it fair to ask them to reform capitalism? (Many years ago, the architect J.P. Eberhard noted a tendency to escalate even small problems to the point where the entire capitalist system comes into question, and argued that <a href="https://demandingchange.blogspot.com/2013/04/we-ought-to-know-difference.html"><i><b>We Ought To Know The Difference</b></i></a>.) So is regulation the answer?<br /></p><p>Surprisingly enough, Facebook doesn't think so. In its response to the documentary, it complains</p><p></p><blockquote><p><q>The film’s creators do not include insights from those currently working at the companies or any experts that take a different view to the narrative put forward by the film.</q> </p></blockquote><p>As Pranav Malhotra notes, it's not hard to find experts who would offer a different perspective, in many cases offering far more fundamental and far-reaching criticisms of Facebook and its peers. Hey Facebook, careful what you wish for!<br /></p><p></p><p>Last year, Tristan Harris appeared to call for a new interdisciplinary field of research, focused on exploring the interaction between technology and society. Several people, including @<a href="https://twitter.com/ruchowdh/status/1144006696345505793">ruchowdh</a>, pointed out that such a field was already well-established. 
(<a href="https://twitter.com/tristanharris/status/1138582371190468608">In response</a> he said he already knew this, and apologized for his poor choice of words, blaming the Twitter character limit.)</p><p>So there is already an abundance of deep and interesting work that can help challenge the simplistic thinking of Silicon Valley in a number of areas, including:<br /></p><ul style="text-align: left;"><li>Truth and Objectivity</li><li><a href="https://demandingchange.blogspot.com/2020/12/technological-determinism.html">Technological Determinism</a></li><li>Custodianship of Technology (for example Latour's idea that we should <q>Love Our Monsters</q> - see also the article by Adam Briggle) <br /></li></ul><p>These probably deserve a separate post each, if I can find time to write them. </p><p><br /></p>
<hr /><p><a href="https://www.imdb.com/title/tt11464826/">The Social Dilemma</a> (dir Jeff Orlowski, Netflix 2020)</p><p>Wikipedia: <a href="https://en.wikipedia.org/wiki/The_Social_Dilemma">The Social Dilemma</a>, <a href="https://en.wikipedia.org/wiki/Television_Delivers_People">Television Delivers People</a><br /></p><p>Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/ethics-ai/">Ethics of Artificial Intelligence and Robotics</a>, <a href="https://plato.stanford.edu/entries/ethics-it-phenomenology/">Phenomenological Approaches to Ethics and Information Technology</a>, <a href="https://plato.stanford.edu/entries/technology/">Philosophy of Technology</a><br /></p><p> </p><p>Adam Briggle, <a href="https://theconversation.com/what-can-be-done-about-our-modern-day-frankensteins-88856">What can be done about our modern-day Frankensteins?</a> (The Conversation, 26 December 2017)<br /></p><p>Robert L. Carneiro, <a href="https://www.pnas.org/content/97/23/12926">The transition from quantity to quality: A neglected causal mechanism in accounting for social evolution</a>
(PNAS 97:23, 7 November 2000)</p><p>Rumman Chowdhury, <a href="https://www.wired.com/story/tech-needs-to-listen-to-actual-researchers/">To Really 'Disrupt,' Tech Needs to Listen to Actual Researchers</a> (Wired, 26 June 2019)<br /></p><p></p><p>Facebook, <a href="https://about.fb.com/wp-content/uploads/2020/10/What-The-Social-Dilemma-Gets-Wrong.pdf">What the Social Dilemma Gets Wrong</a> (2020) <br /></p><p></p><p></p><p>Tristan Harris,
<a href="https://medium.com/thrive-global/how-technology-hijacks-peoples-minds-from-a-magician-and-google-s-design-ethicist-56d62ef5edf3" target="other">How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist</a>,
<i>Thrive Global</i>, 18 May 2016 <br /></p><p>Noah Kulwin, <a href="https://nymag.com/intelligencer/2018/04/an-apology-for-the-internet-from-the-people-who-built-it.html">The Internet Apologizes</a> (New York Magazine, 16 April 2018) <br /></p><p></p>John Lanchester, <a href="https://www.lrb.co.uk/the-paper/v39/n16/john-lanchester/you-are-the-product">You Are The Product</a> (London Review of Books, Vol. 39 No. 16, 17 August 2017)<p>Bruno Latour, <a href="https://thebreakthrough.org/journal/issue-2/love-your-monsters">Love Your Monsters: Why we must care for our technologies as we do our children</a> (Breakthrough, 14 February 2012) </p><p>Pranav Malhotra, <a href="https://slate.com/technology/2020/09/social-dilemma-netflix-technology.html">The Social Dilemma Fails to Tackle the Real Issues in Tech</a>
(Slate, 18 September 2020)
<br /></p><p>Richard Serra and Carlota Fay Schoolman, <a href="https://www.imdb.com/title/tt5223490/">Television Delivers People</a> (1973) </p><p>Zadie Smith, <a href="https://www.nybooks.com/articles/2010/11/25/generation-why/">Generation Why?</a> (New York Review of Books, 25 November 2010)</p><p>Siva Vaidhyanathan, <a href="https://newrepublic.com/article/160661/facebook-menace-making-platform-safe-democracy">Making Sense of the Facebook Menace</a> (The New Republic, 11 January 2021) <br /></p><p><br /></p><p>Related posts: <a href="https://rvsoftware.blogspot.com/2009/02/perils-of-facebook.html">The Perils of Facebook</a> (February 2009), <a href="https://demandingchange.blogspot.com/2013/04/we-ought-to-know-difference.html">We Ought to Know the Difference</a> (April 2013), <a href="https://rvsoapbox.blogspot.com/2017/06/rhyme-or-reason-logic-of-netflix.html">Rhyme or Reason: The Logic of Netflix</a> (June 2017), <a href="https://rvsoapbox.blogspot.com/2017/07/on-nature-of-platforms.html">On the Nature of Platforms</a> (July 2017), <a href="https://demandingchange.blogspot.com/2018/11/ethical-communication-in-digital-age.html">Ethical Communication in a Digital Age</a> (November 2018), <a href="https://demandingchange.blogspot.com/2019/02/shoshana-zuboff-on-surveillance.html">Shoshana Zuboff on Surveillance Capitalism</a> (February 2019)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-90467592715372737182020-11-30T11:23:00.003+00:002022-06-02T09:05:26.381+01:00Whom does the change serve?<p>In my writings on technology ethics, riffing on the fact that so many cool technologies are presented as the Holy Grail of something or other, I have frequently invoked the mediaeval question that Parsifal failed to ask: <i><b>Whom does the Grail Serve?</b></i></p><ul style="text-align: left;"><li><a 
href="https://rvsoftware.blogspot.com/2019/05/towards-chatbot-ethics.html">Chatbot ethics - Whom does the chatbot serve?</a> (May 2019) <br /></li><li><a href="https://rvsoftware.blogspot.com/2019/05/whom-does-technology-serve.html">Driverless cars - Whom does the technology serve?</a> (May 2019)</li><li><a href="https://rvsoapbox.blogspot.com/2019/06/the-road-less-travelled.html">The Road Less Travelled - Whom Does the Algorithm Serve?</a> (June 2019) <br /></li></ul><p><br /></p><p>The same question can be asked of other changes and transformations, where technology might be part of the story but is not the primary story.</p><ul style="text-align: left;"><li><a href="https://rvsoapbox.blogspot.com/2012/11/is-organizational-integration-good-thing.html">Is Organizational Integration a Good Thing?</a> (November 2012)</li><li><a href="https://demandingchange.blogspot.com/2019/08/the-ethics-of-disruption.html">The Ethics of Disruption</a> (August 2019)</li><li><a href="https://demandingchange.blogspot.com/2019/10/what-difference-does-technology-make.html">What difference does technology make?</a> (October 2019)<br /></li><li><a href="https://demandingchange.blogspot.com/2020/06/bold-restless-experimentation.html">Bold Restless Experimentation</a> (June 2020) <br /></li></ul><p> </p><p>In response to Francis Fukuyama's statement on Big Tech's information monopoly <br /></p><p></p><blockquote><q>Almost every abuse these platforms are accused of perpetrating can be simultaneously defended as economically efficient</q></blockquote><p></p><p>@<a href="https://twitter.com/mireillemoret/status/1333311605656924160">mireillemoret</a> argues<br /></p><blockquote><p><q>Efficiency is important, but it is NOT the holy grail</q><br /></p></blockquote><p> </p><p>Important for whom? 
When I get involved in economic discussions of efficiency or productivity or whatever, I always try to remember the ethical dimension - efficiency for whom, productivity for whom, predictability and risk reduction for whom, innovation for whom.</p><br /><p><i>Note: I just started reading Adrian Daub's new book, but I haven't got to the <a href="https://demandingchange.blogspot.com/search/label/disruption">Disruption</a> chapter yet. <br /></i></p><hr /><p><br />
</p><p>Chris Bruce, <a href="https://www.env-econ.net/2005/08/environmental_d_1.html">Environmental Decision-Making as Central Planning: FOR WHOM is Production to Occur?</a> (Environmental Economics Blog, 19 August 2005)</p><p>Adrian Daub, What tech calls thinking (Farrar Straus and Giroux, 2020) </p>Adrian Daub, <a href="https://www.theguardian.com/news/2020/sep/24/disruption-big-tech-buzzword-silicon-valley-power">The disruption con: why big tech’s favourite buzzword is nonsense</a> (The Guardian, 24 September 2020)<br /><p>Francis Fukuyama, Barak Richman, and Ashish Goel, <a href="https://www.foreignaffairs.com/articles/united-states/2020-11-24/fukuyama-how-save-democracy-technology">How to Save Democracy From Technology - Ending Big Tech’s Information Monopoly</a>
(Foreign Affairs, January/February 2021) </p><p></p><p>Further posts</p><ul style="text-align: left;"><li><a href="https://rvsoapbox.blogspot.com/2006/11/for-whom.htm">For Whom</a> (November 2006)<br /></li><li><a href="https://rvsoapbox.blogspot.com/2009/01/from-soa-to-better-judgement.html">From SOA to better judgement</a> (January 2009)<br /></li><li><a href="https://demandingchange.blogspot.com/2009/07/redesigning-banana.html">Redesigning the Banana</a> (July 2009)</li><li><a href="https://rvsoapbox.blogspot.com/2015/04/arguing-with-drucker.html">Arguing with Drucker</a> (April 2015) </li><li><a href="https://demandingchange.blogspot.com/2020/12/evolution-or-revolution-3.html">Evolution or Revolution</a> (December 2020)</li></ul>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-35298997389829178312020-11-14T09:56:00.008+00:002020-11-14T11:22:45.253+00:00Open Democracy in the Age of AI<p>An interesting talk by Professor Hélène @Landemore at @<a href="https://twitter.com/TORCHOxford">TORCHOxford</a> yesterday, exploring the possibility that some forms of artificial intelligence might assist democracy. I haven't yet read her latest book, which is on Open Democracy.<br /></p><p>There are various organizations around the world that promote various notions of Open Democracy, including <a href="https://www.opendemocracy.net/en/">openDemocracy</a> in the UK, and the <a href="https://www.opendemocracynh.org/">Coalition for Open Democracy</a> in New Hampshire, USA. As far as I can see, her book is not specifically aligned with the agenda of these organizations.<br /></p><p>Political scientists often like to think of democracy in terms of decision-making. For example, the Stanford Encyclopedia of Philosophy defines democracy as <q>a method of group decision making characterized by a kind of equality among the participants at
an essential stage of the collective decision making</q>, and goes on to discuss various forms of this including direct participation in collective deliberation, as well as indirect participation via elected representatives.<br /></p><p>At times in her talk yesterday, Professor Landemore's exploration of AI sounded as if
democracy might operate as a massively multiplayer online game (MMOG). She talked about the opportunities for using AI to improve public consultation, saying <q>my sense is that there is a real potential for AI to basically offer us a
better picture of who we are and where we stand on issues</q>. </p><p>When people talk about decision-making in relation to artificial intelligence, they generally conform to a technocratic notion of decision-making that was articulated by Herbert Simon, and remains dominant within the AI world. When people talk about the impressive achievements of machine learning, such as medical diagnosis, this too fits the technocratic paradigm.</p><p>However, the limitations of this notion of decision-making become apparent when we compare it with Sir Geoffrey Vickers' notion of judgement in human systems, which contains two important elements that are missing from the Simon model - sensemaking (which Vickers called appreciation) and ethical/moral judgement. The importance of the moral element was stressed by Professor Andrew Briggs in his reply to Professor Landemore.</p><p>Although a computer can't make moral judgements, it might perhaps be able to infer <b>our </b>collective moral stance on various issues from our statements and behaviours. That of course still leaves a question of political <b>agency </b>- if a computer thinks I am in favour of some action, does that make me accountable for the consequences of that action?<br /></p>In my own work on collective intelligence, I have always regarded decision-making and policy-making as important but not the whole story. Intelligence also includes observation (knowing what to look for), sensemaking and interpretation, and most importantly learning from experience. <br /><p>Similarly, I should regard democracy as broader than decision-making alone, needing to include the question of <b>governance</b>. How can the People observe and make sense of what is going on, how can the People intervene when things are not going in accordance with collective values and aspirations, and how can Society make progressive improvements over time? Thus openDemocracy talks about <b>accountability</b>. 
There are also questions of reverse <b>surveillance </b>- how to watch those who watch over us. And maybe openness is not just about open participation but also about <b>open-mindedness</b>. Jane Mansbridge talks about being <q>open to transformation</q>.</p><p>There may be a role for AI in supporting some of these questions - but I don't know if I'd trust it to.</p><p><br /></p>
<hr />
<a href="https://torch.ox.ac.uk/event/ethic-in-ai-live-event-open-democracy-in-the-age-of-ai">Ethics in AI Live Event: Open Democracy in the Age of AI</a> (TORCH Oxford, 13 November 2020) via <a href="https://www.youtube.com/watch?v=nJbuUSRnhw8">YouTube</a>
<p>Nathan Heller, <a href="https://www.newyorker.com/news/the-future-of-democracy/politics-without-politicians">Politics without Politicians</a> (New Yorker, 19 February 2020)</p><p>Hélène Landemore, Open Democracy: Reinventing Popular Rule for the 21st Century (Princeton University Press 2020)</p><p>Jane Mansbridge et al, <a href="https://polisci.ucsd.edu/_files/mansbridge%20et%20al%20place%20of%20self-interest%20jopp%202010.pdf">The Place of Self-Interest and the Role of Power in Deliberative Democracy</a> (The Journal of Political Philosophy:Volume 18, Number 1, 2010) pp. 64–100<br /></p><p>Richard Veryard, <a href="https://leanpub.com/orgintelligence/">Building Organizational Intelligence</a> (LeanPub 2012) <br /></p><p>Geoffrey Vickers, The Art of Judgment: A Study in Policy-Making (Sage 1965), Human Systems are Different (Paul Chapman 1983)<br /></p><p>Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/democracy/#DemDef">Democracy </a><br /></p><p></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-14764579850877617522020-10-25T11:56:00.005+00:002020-10-29T20:21:39.210+00:00Operational Excellence and DNA<p>In his 2013 article on Achieving Operational Excellence, Andrew Spanyi quotes an unnamed CIO saying <q>operational excellence is in our DNA</q>. Spanyi goes on to criticize this CIO's version of operational excellence, which was based on limited and inadequate tracking of customer interaction as well as old-fashioned change management. </p><p>But then what would you expect? One of the things that distinguishes humans from other species is how little of our knowledge and skill comes directly from our DNA. Some animals can forage for food almost as soon as they are born, and some only require a short period of parental support. Whereas a human baby has to learn nearly everything from scratch. 
Our DNA gives very little directly useful knowledge and skill, but what it does give us is the ability to learn.</p><p>Very few cats and dogs reach the age of twenty. But at this age many humans are still in full-time education, while others have only recently started to attain financial independence. Either way, they have by now accumulated an impressive quantity of knowledge and skill. But only a foolish human would think that this is enough to last the rest of their life. The thing that is in our DNA, more than anything else, more than other animals, is learning.</p><p>There are of course different kinds of learning involved. Firstly there is the stuff that the grownups already know. Ducks teach their young to swim, and human adults teach kids to do sums and write history essays, as well as some rather more important skills. In the world of organizational learning, consultants often play this role - coaching organizations to adopt <q>best practice</q>.<br /></p><p>But then there is going beyond this stuff. Intelligent kids learn to question both the content and the method of what they've been taught, as well as the underlying assumptions, and some of them never stop reflecting on such things. Innovation depends on developing and implementing new ideas, not just adopting existing ideas.</p><p>Similarly, operational excellence doesn't just mean adopting the ideas of the OpEx gurus - statistical process control, six sigma, lean or whatever - but collectively reflecting on the most effective and efficient ways to make radical as well as incremental improvements. In other words, applying OpEx to itself. 
<br /></p><p><br /></p><p>Andrew Spanyi, <a href="https://www.cutter.com/article/achieving-operational-excellence-379961">Achieving Operational Excellence</a> (Cutter Consortium Executive Report, 15 October 2013) registration required</p><p>Related posts: <a href="https://demandingchange.blogspot.com/2010/05/changing-how-we-think.html">Changing how we think</a> (May 2010), <a href="https://demandingchange.blogspot.com/2016/10/learning-at-speed-of-learning.html">Learning at the Speed of Learning</a> (October 2016)<br /></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-32629929192152669392020-07-14T08:53:00.002+01:002021-01-07T18:10:26.144+00:00Technology Mediating RelationshipsIn a May 2020 essay, @NaomiAKlein explains how Silicon Valley is exploiting the COVID19 crisis as an opportunity to reframe a long-standing vision of an app-driven, gig-fueled future. Until recently, Klein notes, this vision <q>was being sold to us in the name of convenience, frictionlessness, and personalization</q>. Today we are being told that <q>these technologies are the only possible way to pandemic-proof our lives, the indispensable keys to keeping ourselves and our loved ones safe</q>. Klein fears that this <q>dubious promise</q> will help to sweep away a raft of legitimate concerns about this technological vision.<br />
<br />
In a subsequent interview with Katherine Viner, Klein emphasizes the importance of touch. In order to sell a touchless technology, touch has been diagnosed as the problem.
<br />
<br />
In his 1984 book, Albert Borgmann introduced the notion of the <b><i>device paradigm</i></b>. This means viewing technology exclusively as a device (or set of devices) that delivers a series of commodities, and evaluating the technical features and powers of such devices, without any other perspective. A device is an artefact, instrument, tool, gadget or mechanism, which may be physical or conceptual - including both hardware and software.
<br />
<br />
According to Borgmann, it is a general trend of technological development that mechanisms (devices) are increasingly hidden behind service interfaces. Technology is thus regarded as a means to an end, an instrument or contrivance, in German: <i><b>Einrichtung</b></i>. Technological progress increases the availability of a commodity or service, and at the same time pushes the actual device or mechanism into the background. Thus technology is either seen as a cluster of devices, or it isn't seen at all.
<br />
<br />
However, Klein suggests that COVID19 might possibly have the opposite effect.
<br />
<br />
<blockquote type="cite">
<q>The virus has forced us to think about interdependencies and
relationships. The first thing you are thinking about is: everything I
touch, what has somebody else touched? The food I am eating, the package
that was just delivered, the food on the shelves. These are connections
that capitalism teaches us not to think about.</q></blockquote>
<br />
While Klein attributes this teaching to capitalism, where Borgmann and other followers of Heidegger would say technology, she appears to echo Borgmann's idea that we have a <q>moral obligation not to settle mindlessly into the convenience that devices may offer us</q> (via Stanford Encyclopedia).
This leads to what Borgmann calls <b>Focal Practices</b>.<br />
<br />
<hr /><p>
<br />
Albert Borgmann, Technology and the Character of Contemporary Life: A philosophical inquiry (University of Chicago Press, 1984)
<br />
<br />
Naomi Klein, <a href="https://theintercept.com/2020/05/08/andrew-cuomo-eric-schmidt-coronavirus-tech-shock-doctrine/">Screen New Deal</a> (The Intercept, 8 May 2020). Reprinted as <a href="https://www.theguardian.com/news/2020/may/13/naomi-klein-how-big-tech-plans-to-profit-from-coronavirus-pandemic">How big tech plans to profit from the pandemic</a> (The Guardian, 13 May 2020)
<br />
<br />
Katherine Viner, <a href="https://www.theguardian.com/books/2020/jul/13/naomi-klein-we-must-not-return-to-the-pre-covid-status-quo-only-worse">Interview with Naomi Klein</a> (The Guardian, 13 July 2020)</p><p>
Peter-Paul Verbeek, <a href="https://scholar.lib.vt.edu/ejournals/SPT/v6n1/verbeek.html">Devices of Engagement: On Borgmann's Philosophy of Information and Technology</a> (Techné, SPT v6n1, Fall 2002)<br />
<br />
David Wood, <a href="https://www.religion-online.org/article/albert-borgmann-on-taming-technology-an-interview/">Albert Borgmann on Taming Technology: An Interview</a> (The Christian Century, 23 August 2003) pp. 22-25 <br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Technology_and_the_Character_of_Contemporary_Life">Technology and the Character of Contemporary Life</a>
<br />
<br />
Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/ethics-it-phenomenology/#TechAttiContSociTech">Phenomenological Approaches to Ethics and Information Technology - Technological Attitude</a></p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-76307593613184212272020-07-12T11:39:00.006+01:002020-12-26T21:19:58.839+00:00Mapping out the entire world of objectsImageNet is a large crowd-sourced database of coded images, widely used for machine learning. This database can be traced to an idea articulated by Fei-Fei Li in 2006: <q>We’re going to map out the entire world of objects</q>. In a blogpost on the <a href="https://demandingchange.blogspot.com/2020/07/limitations-of-machine-learning.html">Limitations of Machine Learning</a>, I described this idea as naive optimism.<br />
<br />
Such datasets raise both ethical and epistemological issues. One of the ethical problems thrown up by these image databases is that objects are sometimes also subjects. Bodies and body parts are depicted (often without consent) and labelled (sometimes offensively); people are objectified; and the objectification embedded in these datasets is then passed on to the algorithms that use them and learn from them. Crawford and Paglen argue convincingly that categorizing and classifying people is not just a technical process but a political act. And thanks to some great detective work by Vinay Prabhu and Abeba Birhane, MIT has withdrawn Tiny Images, another large image dataset widely used for machine learning.<br />
<br />
But in this post, I'm going to focus on the epistemological and metaphysical issues - what constitutes the world, and how can we know about it. Li is quoted as saying <q>Data will redefine how we think about models.</q> The reverse should also be true, as I explain in my blogpost on the
<a href="https://rvsoapbox.blogspot.com/2012/11/co-production-of-data-and-knowledge.html">Co-Production of Data and Knowledge</a>. <br />
<br />
What exactly is meant by the phrase <q>the entire world of objects</q>, and what would mapping this world really entail? Although I don't believe that philosophy is either necessary or sufficient to correct all of the patterns of sloppy thinking by computer scientists, even a casual reading of Wittgenstein, Quine and other 20th century philosophers might prompt people to question some simplistic assumptions about the relationship between Word and Object underpinning these projects. According to Donna Haraway, <q>what counts as an object is precisely what world history turns out to be about</q>.<br />
<br />
The first problem with these image datasets is the assumption that images can be labelled according to the objects that are depicted in them. But as Prabhu and Birhane note, <q>real-world images often contain multiple objects</q>. Crawford and Paglen argue that <q>images are laden with potential meanings, irresolvable questions, and contradictions</q> and that <q>ImageNet’s labels often compress and simplify images into deadpan banalities</q>.<br />
<br />
<blockquote class="cite">
One photograph shows a dark-skinned toddler wearing tattered
and dirty clothes and clutching a soot-stained doll. The child’s mouth
is open. The image is completely devoid of context. Who is this child?
Where are they? The photograph is simply labeled <q>toy</q>. <cite>Crawford and Paglen</cite></blockquote>
<br />
Implicit in the labelling of this photograph is some kind of ontological precedence - that the doll is more significant than the child. As for the emotional and physical state of the child, ImageNet doesn't seem to regard these states as objects at all. (There are other image databases that do attempt to code emotions - see my post on <a href="https://rvsoftware.blogspot.com/2019/03/affective-computing.html">Affective Computing</a>.)<br />
<br />
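The information loss involved in such labelling can be sketched in a few lines of code. Everything below - the labels, the scores, the threshold - is invented for illustration, loosely following the photograph Crawford and Paglen describe; the point is simply that a single-label scheme forces an ontological choice that a multi-label scheme at least postpones:

```python
# Hypothetical annotations for one image, as an imagined classifier
# might score them. None of these numbers come from a real dataset.
annotations = {
    "toy": 0.92,       # the doll
    "child": 0.88,
    "clothing": 0.75,
    "dirt": 0.40,
}

# A single-label dataset keeps only the top-scoring label...
single_label = max(annotations, key=annotations.get)

# ...while a multi-label dataset keeps everything above some threshold.
multi_label = sorted(k for k, v in annotations.items() if v >= 0.5)

print(single_label)   # the child disappears from the record
print(multi_label)
```

Even the multi-label version still embeds an ontology: someone chose the candidate categories and the threshold, and whatever falls below it (or was never a category at all) is invisible.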
Given that much of the Internet is funded by companies that want to sell us things, it would not be surprising if there is an ontological bias towards things that can be sold. (This is what the word <q>everything</q> means in the Everything Store.) So that might explain why ImageNet chooses to focus on the doll rather than the child. But similar images are also used to sell washing powder. Thus the commercially relevant label might equally have been <q>dirt</q>.<br />
<br />
But not only do concepts themselves (such as toys and dirt) vary between different discourses and cultures (as explored by anthropologists such as Mary Douglas); the ontological precedence between concepts may also vary. People from a different culture, or with a different mindset, will jump to different conclusions as to what is the main thing depicted in a given image.<br />
<br />
The American philosopher W.V.O. Quine argued that translation was indeterminate. If a rabbit runs past, and a speaker of an unknown language, Arunta, utters the word <q>gavagai</q>, we might guess that this word in Arunta corresponds to the word <q>rabbit</q> in English. But there are countless other things that the Arunta speaker might have been referring to. And although over time we may be able to eliminate some of these possibilities, we can never be sure we have correctly interpreted the meaning of the word <q>gavagai</q>. Quine called this the <b>inscrutability of reference</b>. Similar indeterminacy would seem to apply to our collection of images.<br />
<br />
The second problem has to do with the nature of classification. I have talked about this in previous posts - for example on <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-governmentality.html">Algorithms and Governmentality</a> - so I won't repeat all that here.
<br />
<br />
Instead, I want to jump to the third and final problem, arising from the phrase <q>the entire world of objects</q> - what does this really mean? How many objects are there in the entire world, and is it even a finite number? We can't count objects unless we can agree what counts as an object. What are the implications of what is included in <q>everything</q> and what is not included?<br />
<br />
I occasionally run professional workshops in data modelling. One of the
exercises I use is to display a photograph and ask the students to model
all the objects they can see in the picture. Students who are new to
modelling can always produce a simple model, while more advanced
students can produce much more sophisticated models. There doesn't seem
to be any limit to how many objects people can see in my picture. <br />
<br />
ImageNet
boasts 14 million images, but that doesn't seem a particularly large
number from a big data perspective. For example, I guess there must be around a billion dogs in the world - so how many words and images do you need to represent a billion dogs?<br />
<blockquote type="cite">
Bruhl found some languages full of detail<br />
Words that half mimic action; but<br />
generalization is beyond them, a white dog is<br />
not, let us say, a dog like a black dog.<br />
<cite>Pound, Cantos XXVIII</cite></blockquote>
<br />
<br />
<hr /><p>
<br />
Kate Crawford and Trevor Paglen, <a href="https://www.excavating.ai/">Excavating AI: The Politics of Images in Machine Learning Training Sets</a> (19 September 2019)<br />
<br />
Mary Douglas, Purity and Danger (1966)<br />
<br />
Dave Gershgorn, <a href="https://qz.com/1034972/the-data-that-changed-the-direction-of-ai-research-and-possibly-the-world/">The data that transformed AI research—and possibly the world</a> (Quartz, 26 July 2017)</p><p>Donna Haraway, Situated Knowledges (Feminist Studies 14/3, 1988) pp 575-99<br />
<br />
Vinay Uday Prabhu and Abeba Birhane, <a href="https://arxiv.org/pdf/2006.16923.pdf">Large Image Datasets: A pyrrhic win for computer vision?</a> (Preprint, 1 July 2020) <br />
<br />
Katyanna Quach, <a href="https://www.theregister.com/2020/07/01/mit_dataset_removed/">MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs</a> (The Register, 1 July 2020) </p><p>W.V.O. Quine, Word and Object (MIT Press, 1960)<br />
</p><p></p><p></p><p>Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/feminism-objectification/">Feminist Perspectives on Objectification</a>, <a href="https://plato.stanford.edu/entries/quine/#IndeTran">Quine on the Indeterminacy of Translation</a><br />
<br />
<br />
Related posts: <a href="https://rvsoapbox.blogspot.com/2012/11/co-production-of-data-and-knowledge.html">Co-Production of Data and Knowledge</a> (November 2012), <a href="https://rvsoftware.blogspot.com/2014/12/have-you-got-big-data-in-your-underwear.html">Have you got big data in your underwear</a> (December 2014), <a href="https://rvsoftware.blogspot.com/2019/03/affective-computing.html">Affective Computing</a> (March 2019), <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-governmentality.html">Algorithms and Governmentality</a> (July 2019),
<a href="https://demandingchange.blogspot.com/2020/07/limitations-of-machine-learning.html">Limitations of Machine Learning</a> (July 2020)</p>Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-42399998418764874222020-07-04T13:01:00.004+01:002020-11-25T09:21:38.012+00:00Limitations of Machine LearningIn a recent discussion on Twitter prompted by some examples of erroneous thinking in Computing Science, I argued that you don't always need a philosophy degree to spot these errors. A thorough grounding in statistics would seem to settle some of them.<br />
<br />
@<a href="https://twitter.com/DietrichEpp/status/1275791793599254529">DietrichEpp</a> disagreed completely. <q>If you want a machine to <q>learn</q> then you have to understand the difference between data and knowledge. Stats classes don’t normally cover this.</q>
<br />
<br />
<br />
So there are at least two questions here. Firstly, how much do you really have to understand in order to build a machine? As I see it, getting a machine to do something (including learning) counts as engineering rather than science. Engineering requires two kinds of knowledge - practical knowledge (how to reliably, efficiently and safely produce a given outcome) and socio-ethical knowledge (whom shall the technology serve). Engineers are generally not expected to fully understand the scientific principles that underpin all the components, tools and design heuristics that they use, but they have a professional and ethical responsibility to have some awareness of the limitations of these tools and the potential consequences of their work.<br />
<br />
In his book on Design Thinking, Peter Rowe links the concept of design heuristic to Gadamer's concept of enabling prejudice. Engineers would not be able to function without taking some things for granted.<br />
<br />
So the second question is - which things can/should an engineer trust? Most computer engineers will be familiar with the phrase <b>Garbage In Garbage Out</b>, and this surely entails a professional scepticism about the quality of any input dataset. Meanwhile, statisticians are trained to recognize a variety of potential causes of bias. (Some of these are listed in the Wikipedia entry on <a href="https://en.wikipedia.org/wiki/Bias_(statistics)">statistical bias</a>.) Most of the statistics courses I looked at on <a href="https://www.coursera.org/search?query=statistics">Coursera</a> included material on inference. (Okay, I only looked at the first dozen or so, small sample.)<br />
<br />
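The statistician's point can be sketched in a few lines of Python. The population and the response mechanism below are entirely invented for illustration: people with higher scores are assumed to be more likely to answer the survey, a classic selection bias. The lesson is that a biased collection mechanism skews the estimate, and gathering more data the same way would not repair it - Garbage In, Garbage Out:

```python
import random

random.seed(0)

# Hypothetical population of 1000 people, each with a true opinion score.
population = [random.gauss(50, 10) for _ in range(1000)]
true_mean = sum(population) / len(population)

# Biased sample: the probability of responding grows with the score itself,
# so high scorers are over-represented among respondents.
sample = [x for x in population if random.random() < (x / 100)]
sample_mean = sum(sample) / len(sample)

print(round(true_mean, 1), round(sample_mean, 1))
```

In this toy setup the sample mean systematically overestimates the population mean, because the collection mechanism itself (not the sample size) is the source of the error.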
Looking for relevant material to support my position, I found some good comments by Ariel Guersenzvaig, reported by Derek du Preez.<br />
<blockquote class="cite">
<q>Unbiased data is an oxymoron. Data is biased from the start. You have to choose categories in order to collect the data. Sometimes even if you don’t choose the categories, they are there ad hoc. Linguistics, sociologists and historians of technology can teach us that categories reveal a lot about the mind, about how people think about stuff, about society.</q>
</blockquote>
<br />
And arriving too late for this Twitter discussion, two more stories of dataset bias were published in the last few days. Firstly, following an investigation by Vinay Prabhu and Abeba Birhane, MIT has withdrawn Tiny Images, a very large image dataset that has been widely used for machine learning, and asked researchers and developers to delete it. And secondly, FiveThirtyEight has published an excellent essay by <a href="https://twitter.com/thistimeitsmimi/status/1278365211049766919">Mimi Ọnụọha</a> on the disconnect between data collection and meaningful change, arguing that it is impossible to collect enough data to convince people of structural racism.<br />
<br />
Prabhu and Birhane detected significant quantities of obscene and offensively labelled material embedded in image datasets, which could easily teach a machine learning algorithm to deliver sexist or racist outcomes. They acknowledge the efforts made in the curation of image datasets, but insist that more could have been done, and will need to be done in future, to address some serious epistemological and ethical questions. With hindsight, it is possible to see the naive optimism of <a href="https://demandingchange.blogspot.com/2020/07/mapping-out-entire-world-of-objects.html"><q>mapping out the entire world of objects</q></a> in a rather different light.<br />
<br />
Prabhu and Birhane mention Wittgenstein's remark in the Tractatus, <q>ethics and aesthetics are one and the same</q>. This thought brings me to the amazing work of <a href="https://mimionuoha.com/">Mimi Ọnụọha</a>.<br />
<blockquote class="cite">
<i><q><a href="http://classification.01/">Classification.01</a> is a sculpture that consists of two neon brackets. When more than one viewer approaches to look at the piece, the brackets use a nearby camera to decide whether or not the two viewers have been classified as <q>similar</q>, according to a variety of algorithmic measures. The brackets only light up if the terms of classification have been met. The brackets do not share the code and the rationale behind the reason for the classification of the viewers. Just as with many of our technological systems, the viewers are left to determine on their own why they have been grouped, a lingering reminder that no matter how much our machines classify, ultimately classification is also a human process.</q></i>
</blockquote>
<br />
In summary, there are some critical questions about data and knowledge that affect the practice of machine learning, and some critical insights from artists and sociologists. As for philosophy, famous philosophers from Plato to Wittgenstein have spent 2500 years exploring a broad range of abstract ideas about the relationship between data and knowledge, so you can probably find a plausible argument to support any position you wish to adopt. So this is hardly going to provide any consistent guidance for machine learning.<br />
<br />
<br />
<b>Update</b><br />
<br />
Thanks to <a href="https://twitter.com/hangingnoodles/status/1279483634752290816">Jag Bhalla</a> for drawing my attention to @BioengineerGM's article on accountability in models. So not just GIGO (Garbage-In-Garbage-Out) but also AIAO (Accountability-In-Accountability-Out). <br />
<br />
<br />
<hr />
<br />
Guru Madhavan, <a href="https://issues.org/real-world-engineering-pandemic-modeling-accountability/">Do-It-Yourself Pandemic: It’s Time for Accountability in Models</a> (Issues in Science and Technology, 1 July 2020)
<br />
<br />
Mimi Ọnụọha, <a href="https://fivethirtyeight.com/features/when-proof-is-not-enough/">When Proof Is Not Enough</a> (FiveThirtyEight, 1 July 2020)<br />
<br />
Vinay Uday Prabhu and Abeba Birhane, <a href="https://arxiv.org/pdf/2006.16923.pdf">Large Image Datasets: A pyrrhic win for computer vision?</a> (Preprint, 1 July 2020)<br />
<br />
Derek du Preez, <a href="https://diginomica.com/ai-and-ethics-unbiased-data-oxymoron">AI and ethics - ‘Unbiased data is an oxymoron’</a> (Diginomica, 31 October 2019)
<br />
<br />
Katyanna Quach, <a href="https://www.theregister.com/2020/07/01/mit_dataset_removed/">MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs</a>
(The Register, 1 July 2020)<br />
<br />
Peter Rowe, Design Thinking (MIT Press 1987)<br />
<br />
Stanford Encyclopedia of Philosophy: <a href="https://plato.stanford.edu/entries/gadamer/#PosPre">Gadamer and the Positivity of Prejudice</a> <br />
<br />
Wikipedia: <a href="https://en.wikipedia.org/wiki/Algorithmic_bias">Algorithmic bias</a>, <a href="https://en.wikipedia.org/wiki/All_models_are_wrong">All models are wrong</a>, <a href="https://en.wikipedia.org/wiki/Bias_(statistics)">Bias (statistics)</a>, <a href="https://en.wikipedia.org/wiki/Garbage_in%2C_garbage_out">Garbage in garbage out</a> <br />
<div>
<br /></div>
<div>
Further points and links in the following posts: <a href="https://rvsoapbox.blogspot.com/2008/08/faithful-representation.html">Faithful Representation</a> (August 2008), <a href="http://rvsoapbox.blogspot.co.uk/2013/03/from-sedimented-principles-to-enabling.html">From Sedimented Principles to Enabling Prejudices</a>
(March 2013), <a href="https://rvsoftware.blogspot.com/2019/05/whom-does-technology-serve.html">Whom does the technology serve?</a> (May 2019), <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-auditability.html">Algorithms and Auditability</a> (July 2019), <a href="https://demandingchange.blogspot.com/2019/07/algorithms-and-governmentality.html">Algorithms and Governmentality</a> (July 2019), <a href="https://demandingchange.blogspot.com/2020/07/naive-epistemology.html">Naive Epistemology</a> (July 2020), <a href="https://demandingchange.blogspot.com/2020/07/mapping-out-entire-world-of-objects.html">Mapping out the entire world of objects</a> (July 2020)<br /></div>
Richard Veryardhttp://www.blogger.com/profile/04499123397533975655noreply@blogger.com0tag:blogger.com,1999:blog-1254315679163990153.post-41620471586171006112020-07-03T18:05:00.004+01:002022-08-23T13:59:00.145+01:00Naive EpistemologyOne of the things I learned from studying maths and philosophy is an appreciation of what things follow from what other things. Identifying and understanding what assumptions are implicit in a given argument, what axioms are required to establish a given proof.<br />
<br />
So when I see or hear something that I disagree with, I feel the need to trace where the disagreement comes from - is there a difference in fact or value or something else? Am I missing some critical piece of knowledge or understanding, that might lead me to change my mind? And if I want to correct someone's error, is there some piece of knowledge or understanding that I can give them, that will bring them around to my way of thinking? <br />
<br />
(By the way, this skill would seem important for teachers. If a child struggles with simple arithmetic, exactly which step in the process has the child failed to grasp? However, teachers don't always have time to do this.)<br />
<br />
There is also an idea of the economy of argument. What is the minimum amount of knowledge or understanding that is needed in this context, and how can I avoid complicating the argument by bringing in a lot of other material that may be fascinating but not strictly relevant? (I acknowledge that I don't always follow this principle myself.) And when I'm wrong about something, how can other people help me see this without requiring me to wade through far more material than I have time for?<br />
<br />
There was a thread on Twitter recently, prompted by some weak thinking by a certain computer scientist. @<a href="https://twitter.com/jennaburrell/status/1275550838874750978">jennaburrell</a> noted that <q>computer science has never been very strong on epistemology – either recognizing that it implicitly has one, that there might be any other, or interrogating its weaknesses as a way of understanding the world</q>.<br />
<br />
Some people suggested that the <q>solution</q> involves philosophy.<br />
<blockquote class="twitter-tweet" data-conversation="none">
<div dir="ltr" lang="en">
People in CS and machine learning have been haphazardly trying to reinvent epistemology while universities make cuts to philosophy departments. Instead of getting more STEM majors we might be better off if we figured out how to send more funding to the humanities.</div>
— 🦄 Dietrich Epp 📡 (@DietrichEpp) <a href="https://twitter.com/DietrichEpp/status/1275623707881463808?ref_src=twsrc%5Etfw">June 24, 2020</a></blockquote>
<br />
I completely agree with Dietrich about the value of philosophy and other humanities in general. However, I felt it was overkill for addressing the specific weaknesses identified by Professor Burrell, as her argument against this particular fallacy didn't seem to require any non-STEM knowledge or understanding.<br />
<br />
<blockquote class="twitter-tweet" data-conversation="none">
<div dir="ltr" lang="en">
While I sympathize with this sentiment, you don't always need a philosophy degree to spot incoherent CS thinking/epistemology. You just need to stay awake in your statistics class.</div>
— Richard Veryard (@richardveryard) <a href="https://twitter.com/richardveryard/status/1275774489431871493?ref_src=twsrc%5Etfw">June 24, 2020</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br />
<br />
Of course, statistics is not the whole answer; but then neither is philosophy. I mentioned statistics as an example of a STEM discipline in which students should have the opportunity to unlearn naive epistemology; but of course any proper scientific discipline should include some understanding of scientific method. Although computing often calls itself a science, it is largely an engineering discipline; if you use the word <q>methodology</q> with computer people, they usually think you are talking about design methods. Social scientists (I believe Professor Burrell's PhD is in sociology) tend to have a much better understanding of research methodology.<br />
<br />
And of course, it's not just epistemology but also ethics.<br />
<blockquote class="twitter-tweet">
<div dir="ltr" lang="en">
Nothing scares me as much as seeing naive engineers with no knowledge of structural injustice, pervasive power asymmetries, or conservative and racist history of the field of AI, being endowed with the power to make tech that infiltrates the social sphere.</div>
— Abeba Birhane (@Abebab) <a href="https://twitter.com/Abebab/status/1277638076630798336?ref_src=twsrc%5Etfw">June 29, 2020</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br />
<br />
One of the problems with professional philosophy is that it can be quite compartmentalized. There are philosophers who promote themselves as experts on technology ethics, but their published papers don't reference any recent literature on the philosophy of science and technology, or reveal any deep understanding of the challenges faced by scientists and engineers.<br />
<br />
So although there are undoubtedly good reasons for broader education in both directions, I'm sceptical about expecting clever people in one discipline to acquire a small but dangerous amount of expertise in some other discipline. I'm much more interested in promoting dialogue between disciplines. In his tribute to Steve Jobs, @jonahlehrer called this <b>Consilience</b>. <br />
<br />
<blockquote class="cite">
<q>What set all of Steve Jobs’s companies apart ... was an insistence that computer scientists must work together with artists and designers—that the best ideas emerge from the intersection of technology and the humanities.</q></blockquote>
The final word should go to @abebab.<br />
<blockquote class="twitter-tweet" data-conversation="none">
<div dir="ltr" lang="en">
In contrast to an underspecified data for good model, embracing pluralism offers a way to work toward a model of data for co-liberation. This means transferring knowledge from experts to communities and explicitly cultivating community solidarity in data work.</div>
— Abeba Birhane (@Abebab) <a href="https://twitter.com/Abebab/status/1279367827896623109?ref_src=twsrc%5Etfw">July 4, 2020</a></blockquote>
<script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
<br />
<hr />
<br />
Jonah Lehrer, <a href="http://www.newyorker.com/news/news-desk/steve-jobs-technology-alone-is-not-enough">Steve Jobs: “Technology Alone Is Not Enough”</a> (New Yorker, 7 October 2011)<br />
<br />
Related posts: <a href="http://rvsoapbox.blogspot.co.uk/2011/10/from-convenience-to-consilience.html">From Convenience to Consilience - “Technology Alone Is Not Enough"</a> (October 2011), <a href="https://posiwid.blogspot.com/2019/06/the-habitual-vice-of-epistemology.html">The Habitual Vice of Epistemology</a> (June 2019), <a href="https://demandingchange.blogspot.com/2020/07/limitations-of-machine-learning.html">Limitations of Machine Learning</a> (July 2020), <a href="https://demandingchange.blogspot.com/2020/07/mapping-out-entire-world-of-objects.html">Mapping out the entire world of objects</a> (July 2020)