38 Comments
James Cham:

Will Mandidis (who has been on a tear recently) made a related, complexifying point on the relationship between thinking you're being productive vs actually being productive: “The market for feeling productive is orders of magnitude larger than the market for being productive.” Which is just to say it might be even worse than you describe! https://minutes.substack.com/p/tool-shaped-objects

Dan Davies:

Or alternatively better! Feeling productive is nice and valuable; in retrospect I didn't write anything like enough about the Goldman Sachs junior banker strike of 2021 because it was really interesting. It was really obvious to me that they weren't actually objecting to the hours they were being asked to work - the thing they couldn't stand was being treated like their time was of zero value.

Dan Davies:

By the way, thanks very much for the recommendation; that looks really good.

Matt Woodward:

A major confounding factor in a lot of these discussions is that we are, in general, shit at measuring productivity in software development.

There are only two data points I’m aware of on the actual productivity impact of AI on coding, and both are pretty weak results in and of themselves, but both have the critical finding that the subjects thought the AI made their productivity go up when it actually made it go down. I don’t think they’re particularly strong results on the quantitative impact of AI on coding, but I do think they’re reasonable evidence that self-reported productivity increases are not trustworthy, which makes the whole discussion suspect.

Links:

https://arxiv.org/abs/2507.09089 (old data, old models, small sample size etc)

https://mikelovesrobots.substack.com/p/wheres-the-shovelware-why-ai-coding (close to anecdata)

Simon Kinahan:

We used to have an actual academic literature on programmer productivity but abandoned it in favor of a bunch of nonsense about backlogs and story points. Since your LLM can’t attend a standup meeting, how will you tell how many stories it completed this week?

Jim Grafton:

The 1950s/60s comparison is instructive here. The productivity and communications improvements since then have been astronomical - and yet unemployment hasn't materially shifted. What seems to happen is that expected productivity simply recalibrates upward to consume the available pool, and we prove remarkably good at encouraging consumption of whatever surplus emerges to maintain demand. The bottleneck moves, the baseline rises, and we're back where we started - just faster.

Which makes your point about work becoming less tedious rather than less frequent feel like the realistic ceiling of what technology actually delivers. Historically, that might even be the best we should expect.

Jim Grafton:

So this, combined with responding to a post on LinkedIn, got me thinking, and I wrote this:

https://abitofidletime.substack.com/p/we-didnt-lose-it-to-ai

Dan Riley:

I'm reminded of Weizenbaum's observation that computers were largely used to scale up existing business processes beyond the point where they would be infeasible to implement manually, and thus acted as a conservative force.

Trevor Petch:

The employment activity most obviously and immediately replaceable by AI/LLM is surely the production of articles/commentary/columns.

NickS (WA):

I'm still confused about the creation or not of the silver bullet. Going back to his essay, Brooks writes:

"I believe the hard part of building software to be the specification, design, and testing of this conceptual construct, not the labor of representing it and testing the fidelity of the representation. We still make syntax errors, to be sure; but they are fuzz compared to the conceptual errors in most systems.

If this is true, building software will always be hard. There is inherently no silver bullet."

That seems like a related point to your post about Excel. Can we, "stuff 200 end user apps into a trenchcoat so they can pretend to be a system"? If so, then it would be Brooks' silver bullet. If not then his point is still relevant (though it's clearly also true that we have seen orders of magnitude improvement in software development since Brooks' era).

Philip Koop:

LOL.

We get regular training sessions on the use of our agentic coding models and recently there was a change in tenor: we are to avoid "vibe-coding" and try for "spec-based coding". There is a new process with stages Constitution->Specification->Plan->Tasks; i.e. there is a bunch of scaffolding we can now load into our environment to help the LLMs along and also *we* are to help them by analyzing a project, decomposing it into smaller tasks, and clearly specifying what each task needs to do. There are also keywords (analogous to HTML) we can use to guide the LLMs and tell them how to use data inputs. In other words, "agentic coding" is starting to look more and more like "coding". Another level of abstraction, a higher-level high-level language.

But as you note, the part of the work left up to me always constituted the majority of the task (leaving out all the palaver and negotiation with clients, which is a plurality if not a majority.) If AI could get my coding time down to zero, that would account for maybe 10% of my work. The problem is that the pre-LLM software tools are already very good.

tom flemming:

"We were hoping the Agents would help with the difficult stuff, but all they do is write code..."

I also went back to that passage in NSB last week, and I wonder if the interesting possibility isn't *spec-fettling* rather than vibe-coding. I.e., focus on getting the LLMs to help with the conceptual construct rather than the code.

NickS (WA):

I'm of two minds on that. First, I think it's a ways off. Currently spotting problems in the spec requires a lot of domain expertise which isn't the strength of LLMs.

On the other hand, I spend a lot of my time thinking about, "if we change X will it cause any problems for existing functionality" and I could imagine a case in which AI Agents would be helpful in answering that question.

On the third hand, an important part of getting a good spec is being able to explain to various stakeholders what decisions are reflected in the spec, and why those decisions were made.

Sam Tobin-Hochstadt:

The central question about software productivity is to what degree the points Brooks makes are about communication. The main reasons that adding staff to a late project makes it later are that

1. The existing staff spends its time on communicating with the new people (both at the beginning and on-going)

2. It's challenging or impossible in many cases to do two parts of a project in parallel and then fit them together.

One reason that AI programming tools _might_ not be subject to the same dynamics is that if they enable a small team to do more, you can significantly reduce human-to-human communication. That assumes you don't introduce lots of additional AI-to-AI communication, but I think it's genuinely plausible.

Another reason is that if AIs are faster to produce code, you don't have to parallelize as much to get productivity improvement which alleviates some of the second problem.
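The first dynamic is quantitative: Brooks observes in The Mythical Man-Month that n people have n(n-1)/2 pairwise communication channels, so communication overhead grows quadratically while added labour grows only linearly. A toy sketch of that arithmetic (my illustration, not anything from Brooks verbatim):

```python
def channels(n: int) -> int:
    """Pairwise communication paths among n team members: n*(n-1)/2."""
    return n * (n - 1) // 2

# Doubling headcount roughly quadruples the channels to maintain.
for n in (3, 6, 12):
    print(f"{n:>2} people -> {channels(n):>2} channels")
```

Halving a team from 6 to 3 cuts channels from 15 to 3, which is one way to see why a small AI-augmented team might escape the late-project trap.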

TW:

Most of the commentary I've read focuses on AI for tasks, perhaps inevitably, since so much of it is written by developers. As the often sole humanities voice among them, I believe the real "destruction of work" will be to the underlying structures of modern work. In turn, we'll see new structures and assumptions shake out. After all, nobody knew or cared what a "bottleneck" was until the Industrial Age's tooling and capacities created them.

More immediately, I'm seeing that AI has given each of my startup clients their own software development agency. Their fumbling, inexpert attempts have already dramatically reduced operations time and effort, enabling them to do the fabled "high-value" work such as contacting potential investors, talking directly to customers, etc. It will only get better. A lot better.

From this lens, it's not so much that Salesforce is no longer of value. It's just that it's of much less value than it was. It's very expensive for Salesforce to deliver what it delivers. And much of what it delivers is for customers who aren't you, but you're paying for those features anyhow. Not sure what you really, truly need? If only there was some kind of demiurge you could pester with questions...

Paul Davies:

The bullet point about bottlenecks here is super interesting. I remember asking you after reading Unacc'y Machine if maybe AI would be good for spotting and helping to solve all the exceptions that slow processes, companies, etc down. But it's the other way around, isn't it? All the repetitive elements (even complicated ones) can maybe be run by AI, but it's more the exceptions and bottlenecks (and the original goals or purposes you want to pursue) that need to be managed by the humans.

Dan Davies:

Yes; I was going to be writing the sequel about that, but then I switched tracks to "the problem factory"

David Higham:

On a technical/ historical point, did Keynes ever agree with Pigou? I thought it was Don Patinkin who used real balances to argue that even with the liquidity trap and/or interest inelastic aggregate demand (the two “special cases” of Mr Keynes and the Classics) unemployment disequilibrium would eventually correct itself? That was the basis of what we used to call the neo classical synthesis and what Clower (with his dual decision hypothesis) and then Leijonhufvud (Keynesian Economics and the Economics of Keynes) criticised because it underplayed the information failures intrinsic to a monetary economy. As you say, the basic problem in macroeconomics has always been to explain why market mechanisms allow unemployment disequilibrium, hence the modern interest in the micro foundations of macroeconomic behaviour: crudely, everything’s about information failures and how long they can persist.

Blissex:

«crudely, everything’s about information failures and how long they can persist»

But for example in the model by Keynes there is no information failure: it just happens that the "so-called Say's law" does not apply because there is a major investment commodity that requires no labour to make, and that commodity also happens to be the most liquid commodity ("money" and similar) so high preference for liquidity necessarily means labour is unemployed.

Blissex:

«Patinkin who used real balances to argue that even with the liquidity trap and/or interest inelastic aggregate demand (the two “special cases” of Mr Keynes and the Classics) unemployment disequilibrium would eventually correct itself?»

A lot of results in "Economics" come with a hidden qualification: "in this specially constructed model": a lot of the skill in doing Economics in academia is to figure out the details of a model in which it is easy proving what one wants to prove.

John Quiggin:

Also, channelling Keynes, shorter working hours.

Matt Woodward:

See, the fact that Keynes was wrong about that is more or less the same thing Mr Davies is saying in the first bullet point, from my POV. Only the fundamental isn’t “businesses will employ them for something”, it’s “people will find a way to convert their available time into comfort and/or status”. We could likely be working 15 hour weeks like Keynes predicted, if we were happy to settle for 1930s standard of living, but we’re not. Some people will find things to do that other people find valuable enough to give them things in exchange, and everyone else will join in in order to keep up.

John Quiggin:

For a long while, he wasn't wrong. Working hours declined a lot in most places after 1930, plus vacations, parental leave and so on. The end of that coincided with the defeat of unions. It's far from obvious that "we" have chosen this outcome.

Matt Woodward:

I’d be interested to see the data on that, if you’ve got it available.

John Quiggin:

Here's the first hit I found. If you have access to Deep Research or similar, you can put together a comprehensive history with international comparisons. I should have mentioned that the US is almost unique in having little or no parental leave, as well as very short vacations.

https://eh.net/encyclopedia/hours-of-work-in-u-s-history/

Matt Woodward:

Those data seem like they’re showing pretty flat numbers since the 30s?

John Quiggin:

Big drop during Depression due to short-time working, followed by rebound and then decline. Relevant comparison is not "the 1930s" but 1930.

Mark:

Dan, thank you. I read this today [https://www.transformernews.ai/p/the-left-is-missing-out-on-ai-sanders-doctorow-bender-bores] and felt unwell afterwards. In fact, I do not think I was even able to understand the article. It was like something from another world and deeply unserious and irresponsible. Thank you for offering a little optimism.

Rajan Patel:

Hello Dan, might you have come across "Capitalism in the Age of Robots"? It's a lecture that Adair Turner gave in 2018. Among other things, I thought you might find his three-part explanation of the Solow Paradox interesting (proliferation of low-productivity jobs, proliferation of zero-sum competitive activities, and GDP's shortcomings as a meaningful signal). Apologies if you're already aware! https://www.ineteconomics.org/uploads/papers/Paper-Turner-Capitalism-in-the-Age-of-Robots.pdf

Blissex:

«Probably most importantly, unemployment is not an equilibrium (even Keynes ended up having to agree with Pigou on this). If there is thirty per cent of the workforce willing to work but unable to find a job»

This is a strong argument that the 30-60% unemployment rates in third world countries that have persisted for dozens or hundreds of years were and are a "conspiracy theory" :-).

«that means someone can employ them and get rich.»

That proves that it is impossible that thirty percent of the workforce can "disappear" or become "underemployed" and that never happened in the past; for example that it is a myth that about 30-40% of the workforce were actually redundant in many eras and they were underemployed as status-symbol servants or to pointlessly share work on the farm. :-)

«If nobody can think of how to employ several millions of educated workers, then maybe ask the artificial intelligence if you think it’s so smart.»

I can think of that: as servants or temporary labourers.

«Another point at the macro level – investment is made in the anticipation of profit. We can’t get to a situation where investment in technology puts 30% of the population out of work, simply because once it’s put 20% of the population out of work we are in a historic Great Depression and nobody is investing in anything any more.»

The big question is why would that matter to *investors* if their needs are fully satisfied. Why would they invest if employing 70% of the population plus "AI" still makes enough stuff to live fabulously?

In the limit, suppose that "AI" were able to completely replace human workers in giving investors fabulous lives: why would investors care to invest in any human-employing venture? For investors, hiring humans is only a means to an end, their own living standards.

Peak horse did happen.

Actually the issue is not "AI" replacing working-class humans who become totally unemployed but the *cost* of human workers vs. the cost of "AI" workers. As to this the typical wage in much of the world is $500-$5,000 per year and I doubt that "AI" can compete with that. My guess is that the office workers displaced by "AI" in London may well be able to find work at £1-£2 per hour.

Blissex:

«The big question is why would that matter to *investors* if their needs are fully satisfied. Why would they invest if employing 70% of the population plus "AI" still makes enough stuff to live fabulously?»

Consider the past where the main capital goods were not oil and machinery but farmland and livestock: if an investor called "the Earl" owned enough farmland and livestock to live fabulously by employing just 50% of the population of an area, why ever would they invest to employ the remaining 50%? This is not a theoretical point, consider what happened in Ireland in the late 1840s. Like in many similar episodes equilibrium was reached in a few years and unemployment disappeared indeed (by reducing the supply of labour...).

Kenny Fraser:

Absolutely excellent. AI will not replace people. This is not how the world works. Think about change and leverage not substitution. This post summarises the basics 100x better than I ever could.

Edwin Roorda:

Can you expand on the following?

"...real resources have to be diverted to investment rather than consumption, and this doesn’t happen when there’s no clear path to selling the output."

I agree with the point, but how do the massive investments get repaid at the reliable multi year 10%+ Net Profit levels these organizations and their backers expect as their due?

I've had a finance career, including a stint in private equity.

These investors are the "smart money" folks supposedly. How is ROI achieved beyond vague hand waving?