A while ago, I was lucky enough to attend a presentation on a Google DeepMind project called “The Habermas Machine”. It’s a really intriguing use of LLM technology – basically, you take a group of people who disagree with each other and ask them what they think about an issue. Then you feed their answers into a model, which tries to produce a statement of minimal agreement that all of them might sign up to. They score the extent to which they actually agree with it (which trains the model), and explain what it is that they don’t like about the statement. This second round allows the model to come up with another, better version, and it also clarifies to the participants what the other side’s reasons are for disagreeing with them.
It’s called “The Habermas Machine” because it’s meant to, loosely speaking, do a similar job to Jürgen Habermas’ “ideal speech situation”. In tests, there seems to be decent evidence not only that the machine is better than a human moderator at coming up with consensus statements, but that the machine-moderated process leads to more convergence of opinions among the actual participants. (I think I might have predicted this; the model obviously has a “flat” affect and, unlike a human being, isn’t always leaking clues from its intonation and body language about what it really thinks of the participants. That might suggest that as LLMs get better at simulating human responses, they could become worse for this purpose!)
There’s really a lot to say and think about here. But it’s Friday and I’m a facetious person, so instead I’m going to share the notes I’ve been making, ever since seeing the presentation, on which other philosophers and social theorists might also benefit from having machines made out of them.
The Giddens Machine – in accordance with the principle of double hermeneutics, it’s the Habermas Machine, but only for reaching agreement on interpretations of Habermas.
The Goffman Machine – after your side lost on the Habermas Machine, it comes along and generates a set of reasons why you shouldn’t feel so bad about that and should come back for another go.
The Bourdieu Machine – you type your views into it, and then it repeats them with slight and subtle adjustments to make you sound more middle class.
The Fourcade/Healy Machine – it gives you a score, then makes you do the work of finding out how to change your views so as to increase your score. Finding equilibrium for the machine is your job now.
The Gambetta Machine – instead of finding a consensus, it selects the most awful version of each conflicting view, and then everyone switches to that in order to show how committed they are.
The Austin Machine – instead of telling the machine “I agree with this statement”, you have to tick a box saying “I hereby agree with this statement”.
The Grice Machine – like the Habermas one, but via conversational implicature it aims to create consensus among all the views that you haven’t expressed rather than the ones you have.
The Derrida Machine – everyone keeps asserting the same statements, but the AI brings them into agreement by changing the meaning of the words themselves.
The Crenshaw Machine – in each round the machine finds a new issue to divide up the group in a different way. Equilibrium is reached when everyone realises they’re on their own and need to get along with each other anyway.
It’s just a bit of fun. (Except the Goffman Machine – I think that could be very useful indeed in a lot of discursive situations; the flat affect of the machine would be a benefit there too.) If you got all the jokes first time, I’m impressed! Have a good weekend everyone.
Omg I miss Unfogged so much
The key problem with the Bourdieu machine is that by using it you are inherently disqualified from doing so