the most important number
and why we won't get it
It is time to think once more about “AI in General Management”. I have seen a few posts recently (no names, no pack drill) about an idea which I can’t really decide whether I agree with or not.
This is the thesis that, to put it in vaguely cybernetic terms, a really good LLM can work as a high capacity variety attenuator and amplifier, as well as a translator and transducer[1]. If you can boil a huge report down to three bullet points, but also expand the boss’s gnomic remarks to a full policy document, then this makes a lot of different forms of organisation possible. Particularly, it very much weakens the case for hierarchy, and I can see why a lot of people posting on the subject are interested in the possibility of it allowing for very small organisations indeed to tackle tasks which have previously required much bigger ones.
But can it work? I think that’s a question which “smart skepticism” would definitely tell you to sidestep; it might be a matter of unknown technological limits or it might be a matter of unknown intrinsic limits to the model, but it’s definitely not something to express a firm opinion on either way unless you’re cool with being made a fool of by history.
One of the things it might depend on, though, is another question that’s on my mind. Which is that a lot of the AI-enabled organisational models we’re talking about seem to rely on having most of the output produced most of the time by LLMs, with a knowledgeable human being checking them. Checking yes/no whether something is OK is a much faster job than creating it, so you get the big saving.
But … checking other people’s work is also a much crapper job than doing things yourself. It’s not cognitively demanding in terms of bandwidth, but it’s very cognitively demanding in terms of exhausting attention. Which ought to be worrying, because we know (quite spectacularly, from the world of self-driving cars), that it is very very difficult to keep paying attention when you’re monitoring a system that is meant to be A-OK most of the time, but needs you to be constantly aware because it sometimes screws up in a way that requires immediate action.
So, the number that I am interested in is something like “How many words of normal business English per day can a manager read and check for accuracy and sense, without their mind wandering and without going mad?”. There are strange echoes of Frederick Winslow Taylor here, whose fame and reputation largely rested on one case study in which he found, by trial and error, the optimal number of breaks for workers to take while loading pig iron.
I would guess that the place to look for this number might be on the editorial desks of newspapers, or in investment bank compliance departments. But I also suspect that it’s going to be difficult to get a stable, homeostatic answer. Because there will be variation between individuals, and everyone is going to want to pretend to be a 10x super-supervisor. Every company is going to find ways to convince itself that “our people are special, they can do much better than the rated output”.
And the nature of the problem that’s been set up is that if you fake it in this way, you won’t be aware that you’ve done something wrong, potentially for quite a while. As I said to someone this week, the difference between your legs and your judgement is that when your legs stop working, you’re immediately aware of the fact.
[1] I’ll explain some of the technical meanings here in a future “Beer Tasting Notes” post, but for the time being the only one that isn’t a fairly straightforward English word is “transduction”, which is a word Stafford Beer took from the cellular biology of the eye. The idea is that signals aren’t simply transmitted; only the part of the signal for which there is a structure already present to receive it gets through. Some animals can’t see colours, for example. I am not yet quite sure what is really gained here over the concept of information being “lost in translation”, but I’ll have another read to make sure.

"Which ought to be worrying, because we know..."
I thought you were going to say, "because we know that air traffic control is one of the most mentally exhausting and difficult professions."
I don’t know Stafford Beer’s work hugely well and wouldn’t dream of saying what he meant. But as a former biochemist, signal transduction is the process whereby a signal from outside a cell is detected and acted on. Most transduction pathways start with signalling molecules outside the cell binding to receptors on the cell surface, which in turn then trigger a response inside the cell, typically a cascade of reactions. In the eye the external signal starts not with a molecule binding to a receptor but a photon of light falling on a cone or rod cell, which eventually triggers a nerve impulse.
So I have always understood the organisational metaphor to be the process whereby one entity detects and then acts on signals that come from outside. And soft-systems style you can look at transduction at multiple granularities/levels of system.
(Relatedly, “transduction” without the leading “signal” also means the process of inserting DNA into a target cell - which can be done e.g. by viruses, or by biologists wanting to alter a cell to make it do something it doesn’t currently do. But I doubt that’s the one in this context, since it doesn’t often happen in the eye.)