Thursday, December 24, 2020

Technological Determinism

Social scientists and social historians are naturally keen to produce explanations for social phenomena. Event B happened because of A.

Sometimes the explanation involves some form of technology. Lewis Mumford traced the start of the Industrial Revolution to the invention of the mechanical clock, while Marshall McLuhan saw the great medieval invention of typography as the take-off moment into the new spaces of the modern world (McLuhan 1962, p 79).

These explanations are sometimes read as implying some form of technological determinism. For example, many people read McLuhan as a technological determinist.

McLuhan furnished [the tech industry] with a narrative of historical inevitability, a technological determinism that they could call on to negate the consequences of their inventions - if it was fated to happen anyway, is it really their fault?
Daub 2020 pp 47-48

Yet McLuhan sometimes claimed the opposite. After Peter Drucker had sought an explanation for the basic change in attitudes, beliefs, and values that had released the Technological Revolution, McLuhan's 1962 book set out to answer this question.

Far from being deterministic, however, the present study will, it is hoped, elucidate a principal factor in social change which may lead to a genuine increase of human autonomy.
McLuhan 1962 p 3

As McLuhan has said, there is no inevitability so long as there is a willingness to contemplate what is happening.
Postman Weingartner 1969 p 20

Raymond Williams saw McLuhan's stance as 

an apparently sophisticated technological determinism which has the significant effect of indicating a social and cultural determinism: a determinism, that is to say, which ratifies the society and culture we now have, and especially its most powerful internal directions.
Williams, second edition p 120

Neil Postman himself made some statements that were much more clearly deterministic. 

Once a technology is admitted, it plays out its hand; it does what it is designed to do.
Postman 1992

But causal explanation doesn't always mean inevitability. Explanations in history and the social sciences often have to be understood in terms of tendencies, probabilities and propensities, other things being equal.


There is also a common belief that technological change is irreversible. A good counter-example to this is Japan's reversion to the sword between 1543 and 1879, as documented by Noel Perrin. What's interesting about this example is that it shows that technology reversal is possible under certain sociopolitical conditions, and also that these conditions are quite rare.

What is rather more common is for sociopolitical forces to inhibit the adoption of technology in the first place. In my article on Productivity, I borrowed the example of continuous-aim firing from E.E. Morison. This innovation was initially resisted by the Navy hierarchy (both UK and US), despite tests demonstrating a massive improvement in firing accuracy, at least in part because it would have disrupted the established power relations and social structure on board ship.


Evolution or Revolution?

How to characterize the two examples of technology change I mentioned at the beginning of this post - the mechanical clock and moveable type? It is important to remember that this isn't about the invention of clocks and printing, since these technologies were known across the ancient world from China to Egypt, but about significant improvements to these technologies, which made them more readily available to more people. It was these improvements that made other social changes possible.


Technologists are keen to take the credit for the positive effects of their innovations, while denying responsibility for any negative effects. The narrative of technological determinism plays into this, suggesting that the negative effects were somehow inevitable, and there was therefore little point in resisting them.

The tech industry ... likes to imbue the changes it yields with the character of natural law.
Daub 2020 p 5

If new tech is natural, then surely it is foolish for individual consumers to resist it. The rhetoric of early adopters and late adopters suggests that the former are somehow superior to the latter. Why bother with old-fashioned electricity meters or doorbells, if you can afford smart technology? Are you some kind of technophobe or Luddite or what?


What's wrong with the idea of technological determinism is not a question of its being true or false, but that it misrepresents the relationship between technology and society, as if they were two separate domains exerting gravitational force on each other. In my work on technology adoption, I used to talk about technology-in-use. Recent writing on the philosophy of technology (especially Stiegler and his followers) refers to this as pharmacological, using the term in its ancient Greek sense rather than referring specifically to the drug industry. If you want to think of technology as a drug that alters its users' perception of reality, then perhaps it's not such a leap from the drug industry to the tech industry.

But the word alters isn't right here, because it implies the existence of some unaltered reality prior to technology. As Stiegler and others make clear, there is no reality prior to technology: our reality and our selves have always been part of a sociotechnical world. 

Donna Haraway sees determinism as a discourse (in the Foucauldian sense) rather than as a theory of power and control.

Technological determination is only one ideological space opened up by the reconceptions of machine and organism as coded texts through which we engage in the play of writing and reading the world.

As Rob Safer notes,

Human history for Haraway isn’t a rigid procession of cause determining effect, but a process of becoming that depends upon human history’s conception of itself, via the medium of myth.


Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020) 

Donna Haraway, Cyborg Manifesto (Socialist Review, 1985)

Marshall McLuhan, The Gutenberg Galaxy (University of Toronto Press, 1962) 

E.E. Morison, Men, Machines, and Modern Times (MIT Press, 1966)

Lewis Mumford, Technics and Civilization (London: Routledge, 1934)

John Durham Peters, “You Mean My Whole Fallacy Is Wrong”: On Technological Determinism (Representations 140:1, pp 10-26, November 2017)

Noel Perrin, Giving Up the Gun (New Yorker, 13 November 1965); Giving Up the Gun (David R Godine, 1988)

Neil Postman, Technopoly: The Surrender of Culture to Technology (Knopf, 1992)

Neil Postman and Charles Weingartner, Teaching as a Subversive Activity (Delacorte 1969) page references to Penguin 1971 edition

Jacob Riley, Technological Determinism, Control, and Education: Neil Postman and Bernard Stiegler (1 October 2013)

Federica Russo, Digital Technologies, Ethical Questions, and the Need of an Informational Framework (Philosophy and Technology 31, pp 655-667, November 2018)

Rob Safer, Haraway’s Theory of History in the Cyborg Manifesto (16 March 2015)

Richard Veryard, Demanding Higher Productivity (data processing 28/7, September 1986)

Raymond Williams, Television, Technology and Cultural Form (Routledge, 1974, 1990)


Related posts: Smart Guns (May 2014)

Friday, December 11, 2020

Evolution or Revolution 3

Let me start this post with some quotes from @adriandaub's book What Tech Calls Thinking.

Disruption has become a way to tell a story about the meaning of both discontinuity and continuity.

Daub p 119

One ought to be skeptical of unsubstantiated claims of something's being totally new and not following the hitherto established rules (of business, of politics, of common sense), just as one is skeptical of claims that something which really does feel and look unprecedented is simply a continuation of the status quo.

Daub pp 115-6

For example, Uber.

Uber claims to have revolutionized the experience of hailing a cab, but really that experience has stayed largely the same. What it has managed to get rid of were steady jobs, unions, and anyone other than Uber's making money on the whole enterprise.

Daub p 105

Clayton Christensen would agree. In an article restating his original definition of the term Disruptive Innovation, he put Uber into the category of what he calls Sustaining Innovation.

Uber’s financial and strategic achievements do not qualify the company as genuinely disruptive—although the company is almost always described that way.

HBR 2015

However, as I pointed out on Twitter earlier today, Christensen's use of the word disruptive has been widely appropriated by big tech vendors and big consultancies in an attempt to glamorize their marketing to big corporates. If you put the name of any of the big consultancies into an Internet search engine together with the word disruption, you can find many examples of this. Here's one picked at random: Discover how you can seize the upside of disruption across your industry.

The same experiment can be tried with other jargon terms, such as paradigm shift. By the way, Daub notes that Alex Karp, one of the founders of Palantir, wrote his doctoral dissertation on jargon - speech that is used more for the feelings it engenders and transports in certain quarters than for its informational content (Daub p 85).

@jchyip thinks we should try to stick to Christensen's original definitions. But although I don't approve of vendors fudging perfectly good technical terms for their own marketing purposes, there is sometimes a limit to the extent to which we can insist that such terms still carry their original meaning.

And to my mind this is not just a dispute about the meaning of the word disruptive but a question of which discourse shall prevail. I have long argued that claims of continuity and novelty are not always mutually exclusive, since they may simply be alternative descriptions of the same thing for different audiences. The choice of description is then a question of framing rather than some objective truth. As Daub notes

The way the term is used today really implies that whatever continuity is being disrupted deserved to be disrupted.

Daub p 119

For more on this, see the earlier posts in this series: Evolution or Revolution (May 2006), Evolution or Revolution 2 (March 2010)

In a comment below the March 2010 post, @cecildjx asked my opinion on the (relative) significance of the Internet versus the iPhone. Here's what I answered.

My argument is that our feelings about technology are fundamentally and systematically distorted by glamour and proximity. Of course we are often fascinated by the most-recent, and we tend to take the less-recent for granted, but that is an unreliable basis for believing that the recent is (or will turn out to be) more significant from a larger historical perspective.

What I really find interesting (from a socio-historical perspective) is how quickly technologies can shift from fascinating to taken-for-granted. Since I started work, my working life has been transformed by a range of tools, including word processing, spreadsheets, mobile phones, fax machines, email and internet. Apart from a few developers working for Microsoft or Google, is anyone nowadays fascinated by word processors or spreadsheets? If we pay attention to the social changes brought about by the Internet, and ignore the social changes brought about by the word processor, then of course we will get a distorted view of the internet's importance. If we glamorize the iPhone while regarding older mobile telephones as uninteresting, we end up making a fetish of some specific design features of a particular product.

If we have a distorted sense of which innovations are truly disruptive or significant, we also have a distorted sense of technological change as a whole. There is a widespread belief that the pace of technological change is increasing, but this could be an illusion caused (again) by proximity. See my post on Rates of Evolution (September 2007), where I also note that some stakeholders have a vested interest in talking up the pace of technology change.


Clayton M. Christensen, Michael E. Raynor, and Rory McDonald, What Is Disruptive Innovation? (HBR Magazine, December 2015)

Adrian Daub, What Tech Calls Thinking (Farrar Straus and Giroux, 2020)

Thanks to @jchyip for kicking off the most recent discussion.

Thursday, December 10, 2020

The Social Dilemma

Just watched the documentary The Social Dilemma on Netflix, which takes a critical look at some of the tech giants that dominate our world today (although not Netflix itself, for some reason), largely from the perspective of some former employees who helped them achieve this dominance and are now having second thoughts. One of the most prominent members of this group is Tristan Harris, formerly with Google, now the president of an organization called the Center for Humane Technology.

The documentary opens by asking the contributors to state the problem, and shows them all initially hesitating. By the end of the documentary, however, they are mostly making large statements about the morality of encouraging addictive behaviour, the propagation of truth and lies, the threat to democracy, the ease with which these platforms can be used by authoritarian rulers and other bad actors, and the need for regulation.

Quantity becomes quality. To some extent, the phenomena and affordances of social media can be regarded as merely scaled-up versions of previous social tools, including advertising and television: the maxim If you aren't paying, you are the product derives from a 1973 video about the power of commercial television. However, several of the contributors to the documentary observed that the power of the modern platforms and the wealth of the businesses that control these platforms is unprecedented, while noting that social media is far less regulated than other mass communication enterprises, including television and telecommunications.

Contributors doubted whether we could expect these enterprises, or the technology sector generally, to fix these problems on their own - especially given the focus on profit, growth and shareholder value that drives all enterprises within the capitalist system. (Many years ago, the architect J.P. Eberhard noted a tendency to escalate even small problems to the point where the entire capitalist system comes into question, and argued that We Ought To Know The Difference.) So is regulation the answer?

Surprisingly enough, Facebook doesn't think so. In its response to the documentary, it complains

The film’s creators do not include insights from those currently working at the companies or any experts that take a different view to the narrative put forward by the film.

As Pranav Malhotra notes, it's not hard to find experts who would offer a different perspective, in many cases offering far more fundamental and far-reaching criticisms of Facebook and its peers. Hey Facebook, careful what you wish for!

Last year, Tristan Harris appeared to call for a new interdisciplinary field of research, focused on exploring the interaction between technology and society. Several people including @ruchowdh pointed out that such a field was already well-established. (In response he said he already knew this, and apologized for his poor choice of words, blaming the Twitter character limit.)

So there is already an abundance of deep and interesting work that can help challenge the simplistic thinking of Silicon Valley in a number of areas including

  • Truth and Objectivity
  • Technological Determinism
  • Custodianship of Technology (for example Latour's idea that we should Love Our Monsters - see also article by Adam Briggle)

These probably deserve a separate post each, if I can find time to write them.


The Social Dilemma (dir Jeff Orlowski, Netflix 2020)

Wikipedia: The Social Dilemma, Television Delivers People

Stanford Encyclopedia of Philosophy: Ethics of Artificial Intelligence and Robotics, Phenomenological Approaches to Ethics and Information Technology, Philosophy of Technology

 

Adam Briggle, What can be done about our modern-day Frankensteins? (The Conversation, 26 December 2017)

Robert L. Carneiro, The transition from quantity to quality: A neglected causal mechanism in accounting for social evolution  (PNAS 97:23, 7 November 2000)

Rumman Chowdhury, To Really 'Disrupt,' Tech Needs to Listen to Actual Researchers (Wired, 26 June 2019)

Facebook, What the Social Dilemma Gets Wrong (2020)

Tristan Harris, “How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist”, Thrive Global, 18 May 2016. 

John Lanchester, You Are The Product (London Review of Books, Vol. 39 No. 16, 17 August 2017)

Bruno Latour, Love Your Monsters: Why we must care for our technologies as we do our children (Breakthrough, 14 February 2012) 

Pranav Malhotra, The Social Dilemma Fails to Tackle the Real Issues in Tech (Slate, 18 September 2020)

Richard Serra and Carlota Fay Schoolman, Television Delivers People (1973) 

Zadie Smith, Generation Why? (New York Review of Books, 25 November 2010)

Siva Vaidhyanathan, Making Sense of the Facebook Menace (The New Republic, 11 January 2021)


Related posts: The Perils of Facebook (February 2009), We Ought to Know the Difference (April 2013), Rhyme or Reason: The Logic of Netflix (June 2017), On the Nature of Platforms (July 2017), Ethical Communication in a Digital Age (November 2018), Shoshana Zuboff on Surveillance Capitalism (February 2019)

Monday, November 30, 2020

Whom does the change serve?

In my writings on technology ethics, riffing on the fact that so many cool technologies are presented as the Holy Grail of something or other, I have frequently invoked the mediaeval question that Parsifal failed to ask: Whom does the Grail serve?


The same question can be asked of other changes and transformations, where technology might be part of the story but is not the primary story.

 

In response to Francis Fukuyama's statement on Big Tech's information monopoly

Almost every abuse these platforms are accused of perpetrating can be simultaneously defended as economically efficient

@mireillemoret argues

Efficiency is important, but it is NOT the holy grail

 

Important for whom? When I get involved in economic discussions of efficiency or productivity or whatever, I always try to remember the ethical dimension - efficiency for whom, productivity for whom, predictability and risk reduction for whom, innovation for whom.


Note: I just started reading Adrian Daub's new book, but I haven't got to the Disruption chapter yet.



Chris Bruce, Environmental Decision-Making as Central Planning: FOR WHOM is Production to Occur? (Environmental Economics Blog, 19 August 2005)

Adrian Daub, What tech calls thinking (Farrar Straus and Giroux, 2020) 

Adrian Daub, The disruption con: why big tech’s favourite buzzword is nonsense (The Guardian, 24 September 2020)

Francis Fukuyama, Barak Richman, and Ashish Goel, How to Save Democracy From Technology - Ending Big Tech’s Information Monopoly (Foreign Affairs, January/February 2021) 


Saturday, November 14, 2020

Open Democracy in the Age of AI

An interesting talk by Professor Hélène @Landemore at @TORCHOxford yesterday, exploring the possibility that some forms of artificial intelligence might assist democracy. I haven't yet read her latest book, which is on Open Democracy.

There are various organizations around the world that promote various notions of Open Democracy, including openDemocracy in the UK, and the Coalition for Open Democracy in New Hampshire, USA. As far as I can see, her book is not specifically aligned with the agenda of these organizations.

Political scientists often like to think of democracy in terms of decision-making. For example, the Stanford Encyclopedia of Philosophy defines democracy as a method of group decision making characterized by a kind of equality among the participants at an essential stage of the collective decision making, and goes on to discuss various forms of this including direct participation in collective deliberation, as well as indirect participation via elected representatives.

At times in her talk yesterday, Professor Landemore's exploration of AI sounded as if democracy might operate as a massive multiplayer online game (MMOG). She talked about the opportunities for using AI to improve public consultation, saying my sense is that there is a real potential for AI to basically offer us a better picture of who we are and where we stand on issues.

When people talk about decision-making in relation to artificial intelligence, they generally conform to a technocratic notion of decision-making that was articulated by Herbert Simon, and remains dominant within the AI world. When people talk about the impressive achievements of machine learning, such as medical diagnosis, this also fits this technocratic paradigm.

However, the limitations of this notion of decision-making become apparent when we compare it with Sir Geoffrey Vickers' notion of judgement in human systems, which contains two important elements that are missing from the Simon model - sensemaking (which Vickers called appreciation) and ethical/moral judgement. The importance of the moral element was stressed by Professor Andrew Briggs in his reply to Professor Landemore.

Although a computer can't make moral judgements, it might perhaps be able to infer our collective moral stance on various issues from our statements and behaviours. That of course still leaves a question of political agency - if a computer thinks I am in favour of some action, does that make me accountable for the consequences of that action?

In my own work on collective intelligence, I have always regarded decision-making and policy-making as important but not the whole story. Intelligence also includes observation (knowing what to look for), sensemaking and interpretation, and most importantly learning from experience.

Similarly, I should regard democracy as broader than decision-making alone, needing to include the question of governance. How can the People observe and make sense of what is going on, how can the People intervene when things are not going in accordance with collective values and aspirations, and how can Society make progressive improvements over time? Thus openDemocracy talks about accountability. There are also questions of reverse surveillance - how to watch those who watch over us. And maybe openness is not just about open participation but also about open-mindedness. Jane Mansbridge talks about being open to transformation.

There may be a role for AI in supporting some of these questions - but I don't know if I'd trust it to.



Ethics in AI Live Event: Open Democracy in the Age of AI (TORCH Oxford, 13 November 2020) via YouTube

Nathan Heller, Politics without Politicians (New Yorker, 19 February 2020)

Hélène Landemore, Open Democracy: Reinventing Popular Rule for the 21st Century (Princeton University Press 2020)

Jane Mansbridge et al, The Place of Self-Interest and the Role of Power in Deliberative Democracy (The Journal of Political Philosophy, Volume 18, Number 1, 2010, pp 64-100)

Richard Veryard, Building Organizational Intelligence (LeanPub 2012)

Geoffrey Vickers, The Art of Judgment: A Study in Policy-Making (Sage 1965), Human Systems are Different (Paul Chapman 1983)

Stanford Encyclopedia of Philosophy: Democracy

Sunday, October 25, 2020

Operational Excellence and DNA

In his 2013 article on Achieving Operational Excellence, Andrew Spanyi quotes an unnamed CIO saying operational excellence is in our DNA. Spanyi goes on to criticize this CIO's version of operational excellence, which was based on limited and inadequate tracking of customer interaction as well as old-fashioned change management.

But then what would you expect? One of the things that distinguishes humans from other species is how little of our knowledge and skill comes directly from our DNA. Some animals can forage for food almost as soon as they are born, and some require only a short period of parental support, whereas a human baby has to learn nearly everything from scratch. Our DNA gives us very little directly useful knowledge and skill, but what it does give us is the ability to learn.

Very few cats and dogs reach the age of twenty. But at this age many humans are still in full-time education, while others have only recently started to attain financial independence. Either way, they have by now accumulated an impressive quantity of knowledge and skill. But only a foolish human would think that this is enough to last the rest of their life. The thing that is in our DNA, more than anything else, more than other animals, is learning.

There are of course different kinds of learning involved. Firstly there is the stuff that the grownups already know. Ducks teach their young to swim, and human adults teach kids to do sums and write history essays, as well as some rather more important skills. In the world of organizational learning, consultants often play this role - coaching organizations to adopt best practice.

But then there is going beyond this stuff. Intelligent kids learn to question both the content and the method of what they've been taught, as well as the underlying assumptions, and some of them never stop reflecting on such things. Innovation depends on developing and implementing new ideas, not just adopting existing ideas.

Similarly, operational excellence doesn't just mean adopting the ideas of the OpEx gurus - statistical process control, six sigma, lean or whatever - but collectively reflecting on the most effective and efficient ways to make radical as well as incremental improvements. In other words, applying OpEx to itself.


Andrew Spanyi, Achieving Operational Excellence (Cutter Consortium Executive Report, 15 October 2013) registration required

Related posts: Changing how we think (May 2010), Learning at the Speed of Learning (October 2016)

Tuesday, July 14, 2020

Technology Mediating Relationships

In a May 2020 essay, @NaomiAKlein explains how Silicon Valley is exploiting the COVID19 crisis as an opportunity to reframe a long-standing vision of an app-driven, gig-fueled future. Until recently, Klein notes, this vision was being sold to us in the name of convenience, frictionlessness, and personalization. Today we are being told that these technologies are the only possible way to pandemic-proof our lives, the indispensable keys to keeping ourselves and our loved ones safe. Klein fears that this dubious promise will help to sweep away a raft of legitimate concerns about this technological vision.

In a subsequent interview with Katherine Viner, Klein emphasizes the importance of touch. In order to sell a touchless technology, touch has been diagnosed as the problem.

In his 1984 book, Albert Borgmann introduced the notion of the device paradigm. This means viewing technology exclusively as a device (or set of devices) that deliver a series of commodities, and evaluating the technical features and powers of such devices, without having any other perspective. A device is an artefact or instrument or tool or gadget or mechanism, which may be physical or conceptual. (Including hardware and software.)

According to Borgmann, it is a general trend of technological development that mechanisms (devices) are increasingly hidden behind service interfaces. Technology is thus regarded as a means to an end, an instrument or contrivance, in German: Einrichtung. Technological progress increases the availability of a commodity or service, and at the same time pushes the actual device or mechanism into the background. Thus technology is either seen as a cluster of devices, or it isn't seen at all.

However, Klein suggests that COVID19 might possibly have the opposite effect.

The virus has forced us to think about interdependencies and relationships. The first thing you are thinking about is: everything I touch, what has somebody else touched? The food I am eating, the package that was just delivered, the food on the shelves. These are connections that capitalism teaches us not to think about.

While Klein attributes this teaching to capitalism, where Borgmann and other followers of Heidegger would say technology, she appears to echo Borgmann's idea that we have a moral obligation not to settle mindlessly into the convenience that devices may offer us (via Stanford Encyclopedia). This leads to what Borgmann calls Focal Practices.



Albert Borgmann, Technology and the Character of Contemporary Life: A philosophical inquiry (University of Chicago Press, 1984)

Naomi Klein, Screen New Deal (The Intercept, 8 May 2020). Reprinted as How big tech plans to profit from the pandemic (The Guardian, 13 May 2020)

Katherine Viner, Interview with Naomi Klein (The Guardian, 13 July 2020)

Peter-Paul Verbeek, Devices of Engagement: On Borgmann's Philosophy of Information and Technology (Techné, SPT v6n1, Fall 2002)

David Wood, Albert Borgmann on Taming Technology: An Interview (The Christian Century, 23 August 2003) pp. 22-25

Wikipedia: Technology and the Character of Contemporary Life

Stanford Encyclopedia of Philosophy: Phenomenological Approaches to Ethics and Information Technology - Technological Attitude