finally we have created the silver bullet
from Fred Brooks' classic essay "No Silver Bullet"
I was going to do something else this Friday, but the relentless tide of “AI is coming for your job, AI is going to cause mass unemployment, what will you do when AI makes you obsolete” articles has provoked me sufficiently (I won’t link to them as there are so many and I’m not picking fights). Basically, as I said on social media, if your best idea for what AI can do in the workplace is “replace a hundred human beings with a server rack doing the same thing”, you’ve got no business calling yourself a techno-optimist[1].
In fact, I’m so angry I’m going to write a bullet point list because there are so many unconnected mistakes being made.
Probably most importantly, unemployment is not an equilibrium (even Keynes ended up having to agree with Pigou on this). If thirty per cent of the workforce is willing to work but unable to find a job, that means someone can employ them and get rich. If nobody can think of how to employ several million educated workers, then maybe ask the artificial intelligence, if you think it’s so smart.
(Caveat). As you can see from my mention of Keynes above, transitory or cyclical unemployment can last long enough to be unpleasant and have bad consequences. But this is not a new economic policy problem!
Another point at the macro level – investment is made in the anticipation of profit. We can’t get to a situation where investment in technology puts 30% of the population out of work, simply because once it’s put 20% of the population out of work we are in a historic Great Depression and nobody is investing in anything any more.
(Non-caveat). “Oh, but Danny, Silicon Valley VCs don’t think that way”. I disagree. For one thing, yes they do, they just think that if they hyperscale they can deter others and develop a monopoly. (If anything this might work the other way; it is a bit rich for Microsoft, with its track record of using FUD and bullying to stop any new technology challenging its monopoly on selling software to middle managers, to say that AI will make middle managers obsolete). For another, “investment” isn’t just “overpaying for startup equity”. Datacentres have to be built, connected to the grid and cooled; real resources have to be diverted to investment rather than consumption, and this doesn’t happen when there’s no clear path to selling the output.
I have argued in the past that people are overestimating the organisational-level benefits of AI because they are extrapolating from individual experiences, and speeding up production behind a bottleneck doesn’t increase output (although it might reduce it) – there’s a toy sketch of this point after the list. But one thing I haven’t emphasised enough is that bottlenecks are not natural obstacles – they are, in most cases, the consequence of increasing production until you hit a bottleneck. If AI removes a bunch of bottlenecks, that won’t be used to produce the same output faster and cheaper; it will be used to produce a lot more output until a new bottleneck is reached and requires human intervention. (Weirdly, there was a two-week period after the announcement of DeepSeek when all the techbros were wailing at their share prices and shouting “it’s Jevons Paradox, you idiots”, but this was very quickly forgotten.)
And competitive equilibrium is likely to mean that this will happen sooner rather than later. Like Marc Rubinstein, I’ve been really impressed by the ability of an LLM to make a spreadsheet financial model in a few minutes rather than a few hours. But … that just means that you spend a few more hours tweaking the model. Because if you don’t, then your competition will; what this means is that you can no longer sell a spreadsheet model that doesn’t have a lot of industry knowledge built in. Something which was always a bit commodified is now completely valueless unless at least as much human input goes into adding non-data insights to it. Again, people who spend a load of time in other contexts talking about building “moats” seem to think firms will forget about the importance of this when they get a bit of AI.
Even the individual level anecdotes don’t, if you look at them carefully, support the labour-replacing predictions anything like as strongly as one might think. For example, take Mike Konczal’s “Me And My AI” post. He’s sped up his workflow, and used the extra productivity to start following up lots of little ideas that he otherwise wouldn’t have the time to do. But … either these ideas will be dead ends (in which case no harm done but no benefit either), or they will be productive new projects (in which case, that looks like it’s going to generate more work for Mike, not less). Seriously, read that post and ask yourself – does this look like a path which is going to lead to Mike making one of his colleagues redundant because he can do their work as well as his own, or a path that’s going to lead to him trying to hire another colleague to do his current work while he follows up his new projects?
Which gets me to the crux; I gave this post that title intentionally, because what the AI-employment-doomers seem to actually believe is “at last, we have invented the mythical man-month, from Fred Brooks’ famous essay The Mythical Man-Month”. Labour-time isn’t fungible. In most cases, sparing me half an hour on my job doesn’t mean that I can pick up half an hour of my desk-neighbour’s. (In fact, reorganising your processes to make something like this even slightly possible is an incredibly difficult and often traumatic business).
Time isn’t even necessarily fungible in my own job. As I mentioned a few posts ago, I have now set up my workflow so that I can look up references to European banking regulation really quickly. It’s great, I would never go back. But what I seem to be finding is that apparently I used to multitask a little bit; while looking for references, I would be thinking about what the reference was needed for and what I was going to say about it once I found it.
Now, it is massively nicer to have the ref immediately and then have ten minutes’ thinking time, rather than CTRL-F’ing and blinding in frustration for ten minutes then going “yep, that’s what I wanted”. But it’s still the same ten minutes. I wrote in the past about workplace leisure, and the fact that most office time is always going to be wasted because of the nature of the process. It seems to me that the main effect of AI is likely to be that routine administrative tasks will become less tedious and white-collar jobs more pleasant, rather than any reduction in demand for white-collar labour. There’s techno-optimism for you!
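To make the bottleneck point from the list above concrete, here is a toy sketch in Python. The stage names (“drafting” and “review”) and all the rates are invented purely for illustration, but they capture the logic: output is capped by the slowest stage, so speeding up the upstream stage on its own just piles up work-in-progress, and output only rises once the constraint itself moves.

```python
# Toy two-stage process: drafting feeds review.
# All names and numbers are hypothetical, chosen only to illustrate
# that throughput is set by the bottleneck, not by the fastest stage.

def weekly_output(draft_rate: float, review_rate: float) -> float:
    """Finished items per week are capped by the slowest stage."""
    return min(draft_rate, review_rate)

baseline     = weekly_output(draft_rate=10, review_rate=8)   # 8 finished per week
ai_drafting  = weekly_output(draft_rate=50, review_rate=8)   # still 8 -- drafts just pile up
new_capacity = weekly_output(draft_rate=50, review_rate=40)  # 40 -- output moves only when
                                                             # the bottleneck itself does

print(baseline, ai_drafting, new_capacity)  # 8 8 40
```

And the Jevons-flavoured corollary from the list applies here too: once the cheap drafting capacity exists, the likely response is to attempt far more projects rather than to produce the same number faster, until some new stage becomes the binding constraint.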
Well, what that lacked in brevity it made up for in incoherence. Normal service will be resumed next week, if the good Lord spares me and the Singularity tarries in its coming. Have a good weekend folks.
[1] Also directed at “techno optimists” who think there is a birthrate crisis and the only solution is “tradwives and such like”. Catch yourself on, son, build a robot or something if you like robots so much.

The 1950s/60s comparison is instructive here. The productivity and communications improvements since then have been astronomical – and yet unemployment hasn't materially shifted. What seems to happen is that expected productivity simply recalibrates upward to consume the available pool, and we prove remarkably good at encouraging consumption of whatever surplus emerges to maintain demand. The bottleneck moves, the baseline rises, and we're back where we started – just faster.
Which makes your point about work becoming less tedious rather than less frequent feel like the realistic ceiling of what technology actually delivers. Historically, that might even be the best we should expect.
Will Mandidis (who has been on a tear recently) made a related, complexifying point on the relationship between thinking you're being productive vs actually being productive: “The market for feeling productive is orders of magnitude larger than the market for being productive.” Which is just to say it might be even worse than you describe! https://minutes.substack.com/p/tool-shaped-objects