Planned programming has been interrupted, because I want to recommend that you all read this essay in The Economist by Farrell & Shalizi (sounds like an upmarket delicatessen, actually the rising stars of political science). It’s on a subject that I got rather obsessed with a couple of years ago (and if I interviewed you for a podcast, I really do still intend to publish it!). That is to say, an idea that keeps on being rediscovered independently by people as various as science fiction writers, computer scientists and postcolonial theorists: the idea that organisations (corporations, states, bureaucracies in general) are themselves forms of artificial intelligence.
The extent to which one regards this as philosophically coherent is certainly debatable. When push comes to shove (as in a really irritating podcast interview, sorry guys, I was still very pandemic-brained), nobody is really prepared to say that General Motors is literally an intelligent agent, potentially capable of feeling remorse or joy. But, as Stafford Beer used to say, there is an important sense in which “The Purpose Of A System Is What It Does”; you can be agnostic about inner states and still say that if something behaves so as to achieve particular outcomes, and adjusts its organisation to maintain that outcome-directed behaviour in a changing world, then it’s acting in a way that can reasonably be described in the same sort of language you’d use for a person with goals. When a 737 MAX crashes into the sea, it’s much more natural to say “Boeing did that” than to search around for some individual bod within the company to blame.
Anyway, that’s by the by, because Farrell & Shalizi (sounds like a maverick cop duo, actually the team bringing quantitative rigour to international relations) are mainly talking about the relationship between these jokily attributed, not-quite-artificial intelligences and the normal kind like ChatGPT. And I think they’ve got that relationship spot-on; the robots are likely to become our colleagues, not our overlords. I mean our colleagues in the same way that if you work in the advertising industry, the Google algorithm is one of your key audiences, and indeed if you work in SEO, you basically spend your working day marketing to robots.
They will become our colleagues and our middle managers, because they do the same thing as us. In a recent post about history, I tried to introduce a concept that Stafford Beer called “variety amplification”. The term is borrowed from electronic engineering, and means much the same thing as it does for an amplifier circuit: by using a small source of imagination or cognition to modulate a much larger reservoir, you can create the illusion of greatly increasing your own capacity. The simple example would be that of an annoying boss, who realises that “saying no” is a cognitively cheap operation compared to “making a useful suggestion”, and consequently spends the whole working day forcing his subordinates to come up with ideas until they find one he likes.
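You can see the shape of the trick in a few lines of code. This is a toy sketch of my own (not anything Beer wrote down; the quality scores and the approval threshold are invented for illustration): the boss’s entire contribution is one cheap comparison, applied over and over to a pool of expensive subordinate effort.

```python
import random

def subordinate_proposal() -> float:
    """Expensive: each proposal costs somebody a day of imagination.
    Here a proposal is just a quality score drawn at random."""
    return random.random()

def boss_approves(proposal: float, threshold: float = 0.95) -> bool:
    """Cheap: 'saying no' is a single comparison."""
    return proposal >= threshold

def amplified_decision(max_rounds: int = 1000) -> tuple[float, int]:
    """The boss 'produces' a high-quality idea by vetoing everyone
    else's ideas until one happens to clear the bar."""
    for rounds in range(1, max_rounds + 1):
        idea = subordinate_proposal()
        if boss_approves(idea):
            return idea, rounds
    return idea, max_rounds

idea, cost = amplified_decision()
print(f"accepted an idea of quality {idea:.3f} after {cost} expensive proposals")
```

One trivial accept/reject rule, applied to enough raw material, reliably delivers output far better than the boss could have produced alone; the amplification comes entirely from the size of the reservoir being modulated.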
And in the recent post about art, I suggested that generative AIs work in a similar way; they take a load of already existing content, recombine it more or less at random and shake the recombinations through a series of filters calibrated to ensure that what drops out at the end is going to look reasonably like (a string of numbers that can be interpreted as) art. AI and management are doing a lot of the same thing – they are techniques for taking unimaginably huge lumps of information and (literally) making them manageable.
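Here’s the same generate-and-filter shape as a sketch (purely illustrative, and emphatically not how any actual model is implemented; the corpus and the filters are made up):

```python
import random

CORPUS = ["the sea", "a grey dawn", "remorse", "the algorithm",
          "an empty office", "joy", "the balance sheet"]

def recombine(corpus: list[str], n: int = 3) -> str:
    """Recombination: splice together fragments of existing content."""
    return " / ".join(random.choices(corpus, k=n))

# A chain of cheap filters, each a crude stand-in for "looks like art".
FILTERS = [
    lambda s: len(s) > 15,                   # long enough to seem considered
    lambda s: "the" in s,                    # vaguely English
    lambda s: len(set(s.split(" / "))) == 3, # no lazy repetition
]

def generate() -> str:
    """Keep recombining at random until a candidate survives every filter."""
    while True:
        candidate = recombine(CORPUS)
        if all(f(candidate) for f in FILTERS):
            return candidate

print(generate())
```

Nothing in the pipeline knows what art is; the filters only know what failing to look like art looks like, which is the boss’s veto again, at industrial scale.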
Even people who hate management science often have a soft spot for Alfred Chandler. In his book “Strategy and Structure”, in the course of a history of the development of the modern American corporation, he formed a theory that (massive oversimplification incoming) the one determines the other – companies (and by extension, bureaucracies in general) restructure and reorganise themselves because they have strategic goals (either explicit or emergent), and in order to achieve these goals, they need to be able to handle their flow of information. Reorganisations happen and structures change in order to ensure that every level of management is capable of handling the information that comes into it (after having been attenuated by lower levels of the organisation handling problems themselves rather than bothering the boss).
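That attenuation is easy to see in a toy model (mine, not Chandler’s; the problem counts and the escalation rate are invented): each layer of management resolves most problems locally and passes the remainder upward, so the flow of information shrinks by roughly an order of magnitude per level.

```python
import random

def attenuate(problems: int, layers: int, handle_rate: float = 0.9) -> list[int]:
    """Each management layer resolves a fraction of its incoming problems
    locally and escalates the rest to the level above."""
    escalated = [problems]
    for _ in range(layers):
        survivors = sum(1 for _ in range(escalated[-1])
                        if random.random() > handle_rate)
        escalated.append(survivors)
    return escalated

# 10,000 shop-floor problems rising through three layers of management:
print(attenuate(10_000, 3))  # e.g. [10000, 1014, 97, 8] -- the boss sees about ten
```

On this view, a viable organisational structure is one where the handle rate at each level keeps the escalated flow within what the level above can actually process; change the technology of information handling and the set of viable structures changes with it.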
Consequently, the best organisational structure at any given time – which you’d guess would be the one that organisations would tend towards under mild pressure – will very much depend on what other techniques of information handling are available. Henry and Cosma (sounds like a slapstick comedy duo, actually the inventors of the theory of “cognitive democracy”) are basically telling us that the current generation of AI should be seen first and foremost as a new technique for the amplification of management capacity. And this means that it’s likely to drive significant organisational change, of one kind or another. The robots are perhaps neither our overlords nor our colleagues; perhaps they’re a bunch of management consultants that we’ve hired.