An interesting talk by Professor Hélène @Landemore at @TORCHOxford yesterday, exploring the possibility that some forms of artificial intelligence might assist democracy. I haven't yet read her latest book, which is on Open Democracy.
There are several organizations around the world that promote various notions of Open Democracy, including openDemocracy in the UK and the Coalition for Open Democracy in New Hampshire, USA. As far as I can see, her book is not specifically aligned with the agenda of these organizations.
Political scientists often like to think of democracy in terms of decision-making. For example, the Stanford Encyclopedia of Philosophy defines democracy as "a method of group decision making characterized by a kind of equality among the participants at an essential stage of the collective decision making", and goes on to discuss various forms of this, including direct participation in collective deliberation as well as indirect participation via elected representatives.
At times in her talk yesterday, Professor Landemore's exploration of AI sounded as if democracy might operate as a massive multiplayer online game (MMOG). She talked about the opportunities for using AI to improve public consultation, saying "my sense is that there is a real potential for AI to basically offer us a better picture of who we are and where we stand on issues".
When people talk about decision-making in relation to artificial intelligence, they generally conform to a technocratic notion of decision-making that was articulated by Herbert Simon and remains dominant within the AI world. The impressive achievements of machine learning, such as medical diagnosis, also fit this technocratic paradigm.
However, the limitations of this notion of decision-making become apparent when we compare it with Sir Geoffrey Vickers' notion of judgement in human systems, which contains two important elements that are missing from the Simon model - sensemaking (which Vickers called appreciation) and ethical/moral judgement. The importance of the moral element was stressed by Professor Andrew Briggs in his reply to Professor Landemore.
Although a computer can't make moral judgements, it might perhaps be able to infer our collective moral stance on various issues from our statements and behaviours. That of course still leaves a question of political agency - if a computer thinks I am in favour of some action, does that make me accountable for the consequences of that action?
Similarly, I would regard democracy as broader than decision-making alone, needing also to include questions of governance. How can the People observe and make sense of what is going on? How can the People intervene when things are not going in accordance with collective values and aspirations? And how can Society make progressive improvements over time? Thus openDemocracy talks about accountability. There are also questions of reverse surveillance - how to watch those who watch over us. And maybe openness is not just about open participation but also about open-mindedness. Jane Mansbridge talks about being "open to transformation".
There may be a role for AI in addressing some of these questions - but I'm not sure I would trust it to do so.
Ethics in AI Live Event: Open Democracy in the Age of AI (TORCH Oxford, 13 November 2020) via YouTube
Nathan Heller, Politics without Politicians (New Yorker, 19 February 2020)
Hélène Landemore, Open Democracy: Reinventing Popular Rule for the 21st Century (Princeton University Press 2020)
Jane Mansbridge et al, The Place of Self-Interest and the Role of Power in Deliberative Democracy (The Journal of Political Philosophy, Volume 18, Number 1, 2010) pp. 64-100
Richard Veryard, Building Organizational Intelligence (LeanPub 2012)
Geoffrey Vickers, The Art of Judgment: A Study in Policy-Making (Sage 1965), Human Systems are Different (Paul Chapman 1983)
Stanford Encyclopedia of Philosophy: Democracy