Discussion about this post

User's avatar
Dan Kärreman's avatar

Judging from my experience of ChatGPT, my guess is that AI based on LLM is going to affect management practice more than PowerPoint, and less than Excel. Most of it will be to improve PowerPoint and Excel. This might sound small, but this is because the impact of PowerPoint and Excel on management practice is vastly underrated and poorly understood.

Expand full comment
Steve Phelps's avatar

There is often a blurred line between merely supporting decision making, and actually making decisions. Indeed decisions about who gets what information inevitably influence the final decisions that are made (see https://en.wikipedia.org/wiki/Information_asymmetry). Thus when we deploy a pre-trained LLM in an organization to help summarize and distribute information, the LLM can exert a large influence on organizational decisions, and may act as if it is pursuing its own agenda and goals; that is, it may be an agent. This can create a conflict of interest, especially in the case that a pre-trained model has been aligned via RLHF with the values of an external organization with different corporate values. This type of conflict is called a principal-agent conflict, and experiments with GPT models show that they exhibit principal-agent conflict when asked to act as agents in a simple online shopping task (https://arxiv.org/abs/2307.11137). If we are serious about deploying LLMs as decision support tools, we need to think very carefully about potential principal-agent conflicts, and how to mitigate them.

Moreover, it is likely that in the medium-term our everyday apps will gradually become more "agentified". As this happens, our phones and laptops will themselves start to resemble organisations, and we will need new approaches to maintaining organizational harmony (https://sphelps.substack.com/p/non-human-resources).

Expand full comment
5 more comments...

No posts