As is often the case on Friday, I’m previewing ideas from the new book (which is now called “The Unaccountability Machine”). One of the things that delayed it was my feeling that I had to take a side on the big questions of AI. Is it, as some would have it, “a revolutionary force that will transform and possibly destroy society as we know it”? Or is it, as others might say, “just a frikking autocomplete”?
I think the answer is a synthesis of the two – a really good autocomplete function could change society as we know it.
This was the purpose of writing the “laws of management motion” post earlier this week. The basic idea from Alfred Chandler (and from information theory and cybernetics) is that organisation is itself an information processing technology, and consequently, it exists and develops alongside and in response to other information processing technologies.
What I mean by that is that the key task of management and leadership is to prevent the control function from being overwhelmed and unable to do its job. The fundamental principle of organisation is W Ross Ashby’s “Principle of Requisite Variety” – a system can only be stable if the regulator has at least as much capacity to absorb information as the operations have to generate it. Organisation – and reorganisation – is all about ensuring that principle is respected, by changing the kinds of decisions that each level of management has to make, in order to match them to capacity.
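Ashby’s principle can be made concrete with a toy calculation (my illustration, not Ashby’s own notation): if variety is counted as the number of distinguishable states, then a deterministic regulator with R possible responses facing D kinds of disturbance can at best compress the outcomes down to ceil(D/R) distinct states. Only matching variety can hold the system to a single goal state.

```python
import math

def min_outcome_variety(n_disturbances: int, n_regulator_moves: int) -> int:
    # Ashby's law in counting form: the best a deterministic regulator can
    # do is map D disturbances onto ceil(D / R) distinct outcomes --
    # "only variety can absorb variety".
    return math.ceil(n_disturbances / n_regulator_moves)

# A regulator with 3 possible responses facing 12 kinds of disturbance
# can at best hold the outcomes down to 4 distinct states:
assert min_outcome_variety(12, 3) == 4

# Only when the regulator's variety matches the disturbances' can it
# pin the system to a single goal state:
assert min_outcome_variety(12, 12) == 1
```

Reorganisation, on this view, is about shuffling which disturbances land on which regulator so that the ratio stays manageable at every level.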
You can see from this that anything which changes the capacity of decision-makers to handle information is going to change the tradeoffs involved, and expand the possibilities of organisational structure.
Let’s not get ahead of ourselves and start talking technology just yet. Think of much simpler information-management technologies. Like a system of writing things down in multiple ledgers in order to generate checksums and quickly see what payments had been made and received. That’s double-entry book-keeping, and it allowed massive changes in the economy and society.
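The checksum property is easy to see in miniature (a toy sketch, not period-accurate accounting practice): every transaction is posted twice, once as a debit and once as a credit, so the two column totals must always agree, and a one-sided copying error shows up immediately in the trial balance.

```python
from collections import defaultdict

# Every transaction is posted twice: a debit in one account and a matching
# credit in another. "Total debits == total credits" is the built-in
# checksum that makes errors quick to spot.
ledger = defaultdict(lambda: {"debit": 0, "credit": 0})

def post(debit_account: str, credit_account: str, amount: int) -> None:
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

def trial_balance_ok() -> bool:
    total_debits = sum(a["debit"] for a in ledger.values())
    total_credits = sum(a["credit"] for a in ledger.values())
    return total_debits == total_credits

post("cash", "sales", 100)      # a sale: cash comes in
post("inventory", "cash", 40)   # stock bought: cash goes out
assert trial_balance_ok()

# A one-sided entry -- the classic copying mistake -- breaks the balance:
ledger["cash"]["debit"] += 5
assert not trial_balance_ok()
```

The point is that the redundancy is not waste; it is what turns a pile of records into something that can be audited quickly.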
Or even older – the main subject of David Graeber’s “Debt: The First 5000 Years” is how this single social relation changed and shaped the whole of civilisation. And debt is, in my view, basically a technology of information and control. It’s a way of investing in someone else’s project without having to know very much about it other than whether the borrower looks like they can pay you back; it changes a multi-valued “what are the risks and returns?” question into a simpler yes/no.
So to begin talking about technology; a really good autocomplete function could have a very significant effect on people’s ability to manage information, and therefore potentially on organisational structures. I don’t want to add to the length of this post by dreaming up use cases as if I was pitching to YCombinator. But in general, a huge part of the problem of management and control is that of taking information which exists in an organisation, and ensuring that it arrives a) where it is needed, b) in time to be useful and c) in a form where it can be the basis of a decision. Anything that promises an order-of-magnitude increase in the productivity of that sort of task has to be taken seriously.
Of course, new structures are only an option. One thing that, trivially, is always made possible by a new information-handling technology is “the same but bigger”. Stafford Beer used to say that the way in which large corporations in the 1970s had installed mainframe computers to automate their existing processes was as if they had hired the combined talents of Einstein, Shakespeare and Leonardo, and put them to work memorising the phone book so managers could look up numbers more quickly.
On which subject, here’s an advert from Salesforce.com, advertising their new AI-enabled CRM tool, which mainly helps write personalised client emails.
Judging from my experience of ChatGPT, my guess is that AI based on LLMs is going to affect management practice more than PowerPoint did, and less than Excel did. Most of its effect will be to improve PowerPoint and Excel. This might sound small, but only because the impact of PowerPoint and Excel on management practice is vastly underrated and poorly understood.
There is often a blurred line between merely supporting decision making and actually making decisions; decisions about who gets what information inevitably influence the final decisions that are made (see https://en.wikipedia.org/wiki/Information_asymmetry). So when a pre-trained LLM is deployed in an organisation to help summarise and distribute information, it can exert a large influence on organisational decisions, and may act as if it is pursuing goals of its own; that is, it may be an agent. This creates scope for a conflict of interest, especially when the model has been aligned via RLHF with the values of an external organisation rather than those of the organisation deploying it. This is a principal-agent conflict, and experiments with GPT models show that they exhibit it when asked to act as agents in a simple online shopping task (https://arxiv.org/abs/2307.11137). If we are serious about deploying LLMs as decision support tools, we need to think very carefully about potential principal-agent conflicts, and how to mitigate them.
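To make the worry concrete, here is a toy sketch (my own construction, not the experiment from the paper above; all names and numbers are invented): a decision-maker picks whichever option scores best, but the scores pass through a summarising agent first. If the agent has a quiet preference of its own, it steers the final decision without ever formally making it.

```python
# A decision-maker chooses the supplier with the best reported score,
# but the reports pass through a summarising agent first.
true_scores = {"supplier_a": 0.72, "supplier_b": 0.80, "supplier_c": 0.65}

def neutral_summary(scores: dict) -> dict:
    # A faithful agent passes the information through unchanged.
    return dict(scores)

def biased_summary(scores: dict, favoured: str = "supplier_a",
                   boost: float = 0.1) -> dict:
    # An agent with its own agenda quietly inflates one option's score.
    out = dict(scores)
    out[favoured] += boost
    return out

def decide(summary: dict) -> str:
    return max(summary, key=summary.get)

# With a faithful summary, the genuinely best option wins:
assert decide(neutral_summary(true_scores)) == "supplier_b"
# With a biased summary, the agent's favourite wins instead --
# the "support tool" has effectively made the decision:
assert decide(biased_summary(true_scores)) == "supplier_a"
```

The principal never sees the raw scores, which is exactly the information asymmetry the paragraph above describes.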
Moreover, it is likely that in the medium term our everyday apps will gradually become more "agentified". As this happens, our phones and laptops will themselves start to resemble organisations, and we will need new approaches to maintaining organisational harmony (https://sphelps.substack.com/p/non-human-resources).