snobby about excel
AI and the end user effect
I actually think there is a serious potential problem here; I wrote a bit about it for my professional clients this week, and have pitched it to a few newspapers, but I’ll outline it here as a Friday post. It’s a train of thought which began by noticing that more and more of my friends are getting evangelistic about Claude Code, and that it might be time for an update of last year’s “towards a sensible AI-skepticism” post.
The thing with Claude Code (and to a significantly lesser extent, Copilot) is that I’ve been on the lookout for a “killer app” of LLMs, in the original sense of “something like spreadsheets, which people will make capital investments and change their own workflow in order to use”. And I think it’s now hard to sustain scepticism as to whether this will happen; early adopters really are spending money and using LLM coding tools to produce apps for themselves.
But the “like spreadsheets” element has me thinking. Over the years, I’ve often found it amusing to tease the Dilbert types among my friends by defending Microsoft Excel, the programming language[1] of the common man. However, computer types don’t just dislike Excel out of pure snobbishness.
In the language of IT professionals, spreadsheets are known as “end-user computing” (EUC). And EUC is a problem as well as a solution. A great deal of corporate information technology work is trying to satisfy the twin goals of “a central and consistent source of data which is secure and accessible across the organisation”, versus “it’s a hell of a lot quicker and easier for me to just open up Excel than to schedule a meeting with the SAP team”.
I’m most familiar with this problem in financial contexts; I have joked in the past that “[some material percentage] of the job of risk management is persuading people to email you spreadsheets on time”. And it’s obvious that a big bank is not in an ideal situation if large and complex risk positions are being tracked in a spreadsheet on someone’s desktop. But it shows up in all sorts of other areas; you can have the best data security policy in the world, but marketing departments are free spirits who cannot be tied down, and who will often email a few megabytes of non-anonymised customer data to a new agency that they want to try out.
At present, EUC is to some extent self-limiting; there is a threshold of size and complexity beyond which it is totally unmanageable to use Excel, so you end up biting the bullet and calling the central IT guys. If Claude Code and its like become a generally used “super-Excel”, though, that might have quite unpredictable results. It’s a productivity boost at some points, but we might be forced to reconsider the aphorism that “speeding up a process that sits behind a bottleneck cannot increase overall output, although it can reduce it”.
I guess that the prediction problem then switches to something like this: if the IT world of the future involves “trying to stuff 200 end-user apps into a trenchcoat so they can pretend to be a system”, can other LLMs help with that? And the answer is … maybe?
The sentence from last year’s post which I think has held up the best is that “There’s also a very important role for scepticism that AI is in some way or other outside the price mechanism or the normal priorities of political economy.” I am having a tough time following the debate over resource use and cost of LLM use, but it does seem to me that there’s a constraint, and it’s not clear that Moore’s Law-type progress is sweeping that constraint away in the way one might have hoped. The interesting question for me at the moment is whether AI can, at reasonable expense, clear up its own messes.
[1] Yes that’s what it is, it’s even Turing-complete these days, deal with it.

I was the second employee at Software Arts, the company that developed the first spreadsheet back in the late 1970s. One of the big differences between spreadsheets and AI is that employees wanted spreadsheets. Their benefits were so obvious that people would bring their own 6502-based PC to work, and the corporate brass would bitch and whine about it and vow to crack down on PCs and spreadsheets.
We were amazed at the uses people found for VisiCalc. Farmers loved it. Finance people and accountants loved it. A group at MGH used it to structure data collection to set parameters on medical equipment. I had attended Bob Frankston's presentation at the 1979 NCC in NYC; aside from a group of friends, the only other people in the room were a tired couple who just wanted to sit down away from the crowd. So much for its industry reception.
Artificial intelligence is different. It is being rammed down everyone's throats from above. Sure, there are a handful of programmers going through the usual infatuation with a new tool, but for most employees AI is as popular as a 2% pay cut to pay for the boss's new jet. Companies are threatening to fire employees who don't use AI in their work even as they are told that the company expects to fire them once AI proves it can do their job.
The whole AI story is based on the (probably correct) belief that AI will allow a massive reduction in head count without crashing the plane before the boss's stock options can be exercised. Programming isn't about producing code any more than mathematics is about producing proofs. It's about understanding a process, what it does and how it goes about it. There's an ontology and epistemology. AI provides none of that.
Reading a paper on AI this morning, it struck me that the LLM problem is that it's all knowledge and no skills.
So I think that means it's going to struggle to clear up a mess, because it won't know what a mess is in the first place.