Sunday, November 3, 2024

Influencing the Habermas Machine

In my previous post Towards the Habermas Machine, I talked about a large language model (LLM) developed by Google DeepMind for generating a consensus position from a collection of individual views, named after Jürgen Habermas.

Given that democratic deliberation relies on knowledge of various kinds, followers of Habermas might be interested in how knowledge is injected into discourse. Habermas argued that mutual understanding depends upon a background stock of cultural knowledge that is always already familiar to agents, but this clearly has to be supplemented by knowledge about the matter in question.

For example, we might expect a discussion about appropriate speed limits to be informed by reliable or unreliable beliefs about the effects of a given speed limit on journey times, accident rates, pollution, and so on. In traditional discussion forums, it is extremely common for people to present themselves as having some special knowledge or authority, which supposedly gives extra weight to their opinions, and we might expect something similar to happen in a tech-enabled version.

For many years, the Internet has been distorted by Search Engine Optimization (SEO), which means that the results of an internet search are largely driven by commercial interests of various kinds. Researchers have recently raised a similar issue in relation to large language models, namely Generative Engine Optimization (GEO). Meanwhile, other researchers have found that LLMs (like many humans) are more impressed by superficial jargon than by proper research.

So we might reasonably assume that various commercial interests (car manufacturers, insurers, oil companies, etc.) will be looking for ways to influence the outputs of the Habermas Machine on the speed limit question by flooding the Internet with "knowledge" (a regime of truth) in the appropriate format. Meanwhile, the background stock of cultural knowledge is now presumably co-extensive with the entire Internet.

Is there anything that the Habermas Machine can do to manage the quality of the knowledge used in its deliberations?


Footnote: Followers of Habermas can't agree on the encyclopedia entry, so there are two rival versions.

Footnote: The relationship between knowledge and discourse goes much wider than Habermas, so interest in this question is certainly not limited to his followers. I might need to write a separate post about the Foucault Machine.


Pranjal Aggarwal et al, GEO: Generative Engine Optimization (arXiv v3, 28 June 2024)

Callum Bains, The chatbot optimisation game: can we trust AI web searches? (Observer, 3 November 2024)

Alexander Wan, Eric Wallace, Dan Klein, What Evidence Do Language Models Find Convincing? (arXiv v2, 9 August 2024)

Stanford Encyclopedia of Philosophy: Jürgen Habermas (v1, 2007); Jürgen Habermas (v2, 2023)
