I was the second employee at Software Arts, the company that developed the first spreadsheet back in the late 1970s. One of the big differences between spreadsheets and AI is that employees wanted spreadsheets. Their benefits were so obvious that people would bring their own 6502-based PCs to work, and the corporate brass would bitch and whine about it and vow to crack down on PCs and spreadsheets.
We were amazed at the uses people found for VisiCalc. Farmers loved it. Finance people and accountants loved it. A group at MGH used it to structure data collection to set parameters on medical equipment. I had attended Bob Frankston's presentation at the 1979 NCC in NYC, and aside from a group of friends, the only other people in the room were a tired couple who just wanted to sit down away from the crowd. So much for its industry reception.
Artificial intelligence is different. It is being rammed down everyone's throats from above. Sure, there are a handful of programmers going through the usual infatuation with a new tool, but for most employees AI is as popular as a 2% pay cut to pay for the boss's new jet. Companies are threatening to fire employees who don't use AI in their work, even as those same employees are told that the company expects to fire them once AI proves it can do their jobs.
The whole AI story is based on the (probably correct) belief that AI will allow a massive reduction in head count without crashing the plane before the boss's stock options can be exercised. Programming isn't about producing code any more than mathematics is about producing proofs. It's about understanding a process, what it does and how it goes about it. There's an ontology and epistemology. AI provides none of that.
Not so long ago I would have agreed with you, and a lot of these top down initiatives are ridiculous, but I really am seeing people using Claude Code on their own initiative. It's not anywhere near the enthusiasm for spreadsheets and web browsers but it is a real thing.
That's one industry. How many farmers, M&A specialists, chemists, medical professionals, florists, newspaper editors, developers and so on are also adopting the technology? There's a lot of machine learning work in the various sciences, but it's in support of other work much like curve fitting.
I can imagine AI being useful in the mechanics of writing software. Software is written in highly regular language with relatively clear semantics. From what I've heard, though, once one gets beyond trivial examples, one winds up in a Danny Dunn and the Homework Machine situation where the job becomes iterative prompt engineering as opposed to iterative coding.
I spent decades developing software, so I've seen a lot of fads come and go. There were reasoning systems like PROLOG and Planner. There were countless programming-by-example systems, visual-programming systems and programming-by-results systems. Various elements of these have survived, but I can't even remember the names of most of these savior technologies and languages. (More than once, I had to explain why point and click wasn't going to replace software development.)
Maybe I'm being overly cynical, but I remember the vibes more than the substance, and there is still no magic today. The hard part is understanding what the software needs to do, and spreadsheets are, as you noted, one of the best ways of embedding that local knowledge.
AI is really good for small, mediocre work, of which there is quite a bit in the modern corporation, and Excel often inhabits that niche. Whether such activities will be affordable when AI is forced to charge a real market price remains to be seen. I'm a little skeptical given the short lifespans of GPUs and the incredible energy costs.
My policy on all programming fads is to wait until the hype has died down and see what remains. Usually it's nothing, but every now and then you get a moderate success that delivers a small fraction of what was promised. The IT industry is incredibly faddish, and it's best not to pay too much attention to its enthusiasms.
Way too cynical a take, in my opinion. Still not clear how the financing pencils out but lots and lots of people are choosing to use these tools on their own.
Reading a paper on AI this morning it struck me that the LLM problem is that it's all knowledge and no skills.
So I think that means it's going to struggle to clear up a mess, because it won't know what a mess is in the first place.
One of the great unanswerables at the moment for projects I've had sight of is - "is the cost of this going to keep getting cheaper?" If it does, lots of things are possible in the realms of workflows that ameliorate the error modes - but really only people on the inside of the big providers (OpenAI, MS, Google, Anthropic) have a good sense - and they aren't actually telling us. (Mix of commercial confidentiality and they don't quite know, they are betting on yes, but... the incentives are for them to keep betting.)
That of course is before you get to the idea that marketing can whip up their own app with access to sensitive data. I don't see that we've solved cybersecurity as a problem in general enough for this not to be a kind of fundamental danger in some situations.
Google's unexpected performance, and the huge leap they've made in the past two years, has a lot to do with their custom Tensor Processing Units (TPUs), and the general sense I get is that there's a lot more low-hanging fruit there, for them anyway. And you can get a pretty good idea based on the literature, which admittedly is extremely technically complex.
But the thing to keep an eye on isn't processing speed. It's the reshaping of entire workflows as what's valuable gets redefined. That's where the real cost savings will accrue. Meaner and much, much leaner.
Alternatively, watch for real-world knowledge being embedded in the models. AI promoters like to pretend that AI does everything with pure reason, but Rodney Brooks pointed out that successful applications take advantage of the same things we do. For example, machine vision systems using convolutional algorithms embed the idea of size and position invariance: things look the same wherever they appear in one's visual field. Human optical systems have this hard-wired in neurons, as do machine vision systems. When humans walk, there's a lot of embedded knowledge about forces, moments, elasticity and harmonics. Robots don't walk that way, which is why they don't mix walking robots with human performers in demos.
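Brooks's point about built-in invariance can be seen directly in the math: convolution is translation-equivariant, so a feature produces the same response wherever it appears, just at a shifted position. A minimal sketch in Python/NumPy (the toy `conv1d` helper is mine, for illustration, not taken from any particular vision library):

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D sliding dot product (cross-correlation, as in CNNs)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

# A tiny "edge detector" kernel and a signal with a single bump at position 3.
kernel = np.array([1.0, -1.0])
signal = np.zeros(10)
signal[3] = 1.0

# Shift the bump two places to the right...
shifted = np.roll(signal, 2)

# ...and the detector's response shifts by the same two places:
# same feature, new position. Position invariance comes for free.
resp = conv1d(signal, kernel)
resp_shifted = conv1d(shifted, kernel)
assert np.allclose(np.roll(resp, 2), resp_shifted)
```

This structural assumption is baked into the architecture before any training happens, which is exactly the kind of embedded knowledge Brooks was describing.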
When AI gets past its pure-reason hubris, we'll see more and more knowledge embedding, and that's when we'll start seeing real payoffs. I say this as the proud owner of an AI-based rice cooker that was introduced in the 1970s. There is a place for AI, but right now it's driven by financial types.
As someone who sees a lot of startups--the main friction against more successful embedding is the nature of the customers themselves. It doesn't make sense to build humanoid robots unless there are customers, and many of the obvious use cases are in dinosaur companies. To make things worse, it's not just the dinosaurs...I've learned a lot of founders *don't like* how you sell to dinosaurs, as it involves a lot of seemingly pointless schmoozing. "Once you're in, you're in" is true of these industries, but definitely not in software, so the mental model is lacking. That said, there are some real bright spots if you offer warehouses to go with your nifty new warehousing software.
A pedantic note: "Copilot" is a brand, not a model. So in my employment, I have access to a "Copilot" associated with the Microsoft office environment, and an unrelated "Copilot" integrated with my Microsoft development environment, Visual Studio (it also has GitHub integration). The development "Copilot" has a long list of models from different providers, including 8 GPT models, 5 Claude models, a Gemini model, and a Grok model. Most of these models are "premium" models, meaning that my employer gets charged for each query I make of them and the number of queries I am allowed is capped.
Anyway, I am aware of and impressed by the number of expert programmers who say that they are now able to program mostly using agent models. However, that is not yet my personal experience. I don't know whether that is because of my ineptness with the agents or the nature of the work that I do.
I know several expert programmers who make similar claims. I've also noticed that they've also started talking like members of a cult (or in some cases Meth addicts). Those things are probably unrelated.
FWIW this scepticism is what software biz owners and lenders have been talking about this week. The most cartoon version unsurprisingly is from the Blue Owl Co-CEO, but there were similar sentiments on the KKR and Ares calls too. Lipschultz: "Our software companies have hundreds of millions of dollars of average EBITDA that are deeply embedded in Fortune 500 work processes. And for those, we've got to pause and think, there's not just a matter of technology, there's the adoption of behavior. And for those on the call that are thinking Fortune 500 companies are going to take all their software and just rip it out and just say, I'll just ask ChatGPT, that's simply not the way it works."
I think they may be overestimating the intelligence of the average CTO.
Personal computers were the first big instance of this. Corporate IT departments resisted them as long as they could, then tried central control with some, but far from complete, success. WFH has made their problems even harder.
Great minds! Net Interest - out later today - addresses a similar theme.
The dislike of Excel isn't about snobbishness (and Turing completeness is not a good measure of what is and isn't a programming language). It's similar to the dislike of academic code: we have a whole body of hard-won experience about what kind of trouble you're storing up for the future, and it's frustrating to see that ignored. Yes, there is a huge benefit to empowering users and meeting them where they are. But there's also a huge cost to a system where a routine everyday operation like copy-pasting a number will silently and invisibly change a calculation that you thought had a particular meaning into a different, unrelated calculation that definitely has no particular meaning.
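For readers who haven't been bitten by this: the silent rewrite happens because spreadsheets adjust relative cell references on copy-paste, while `$`-pinned references stay put. A toy Python model of that rule (the `shift_formula` helper is illustrative only, handling row offsets and nothing like real spreadsheet grammar):

```python
import re

def shift_formula(formula, row_offset):
    """Toy model of spreadsheet copy-paste: relative row references
    (A1, B2, ...) shift by the offset; absolute ones (A$1) do not."""
    def bump(m):
        col, dollar, row = m.group(1), m.group(2), int(m.group(3))
        return f"{col}{dollar}{row}" if dollar else f"{col}{row + row_offset}"
    return re.sub(r"([A-Z]+)(\$?)(\d+)", bump, formula)

# Pasting "=A1*B1" three rows down silently rewrites both references:
assert shift_formula("=A1*B1", 3) == "=A4*B4"    # a different calculation now
assert shift_formula("=A$1*B1", 3) == "=A$1*B4"  # the $ pins the row
```

The rewrite is usually what you want, which is precisely why the cases where it isn't are so invisible.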
Regarding real world limitations,
I thought this was important: https://stratechery.com/2026/tsmc-risk/
Thanks for organizing and clarifying some thoughts I've been having for a while. I agree that AI is not independent of the price mechanism. No question that it can do some impressive stuff, but can it do the things you need it to at a price which both you can afford AND the AI provider can at least break even? Especially if the externalities are priced in at least to an extent rather than the rest of society bearing those.
This is the unavoidable financial reality. Can the hyperscalers (NY Expat's comment above) monetize their investments at the margins they are used to? I haven't seen a model where they can.
Will they accept lower overall margins across the business for greater overall revenue? The ratios still "smell off" to this finance pro, but declining margins paired with growing revenues is normal in maturing industries.
There are enormous "big disaster" potentialities in the current compute build out around AI. It could work, but there's a lot of faith in the bet, which amounts to a "This time it's different" rationalization to me. IMO this size bet hasn't been made at this interlocking global scale before. There are large government level impacts with international level risks involved. It's exciting, but so is sky diving.
This may be a little pedantic, but as a long term spreadsheet hater, the key problem with spreadsheets is lack of “row” permanence…of course once you have row permanence, then you actually are getting close to a database…which is superior in every way. Unfortunately software companies mostly gave up on trying to make friendly databases for end users.
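The row-permanence complaint can be made concrete: spreadsheet rows are identified by position, database rows by a stable key. A minimal Python sketch of the difference (a list and a dict as stand-ins, not any real database API):

```python
# Spreadsheet-style: rows are identified by position, so inserting a row
# changes what "row 2" refers to.
rows = ["Alice", "Bob", "Carol"]
assert rows[1] == "Bob"
rows.insert(1, "Zed")
assert rows[1] == "Zed"        # "row 2" now means something else entirely

# Database-style: rows are identified by a stable key, so inserting a new
# row never changes what an existing reference points to.
table = {101: "Alice", 102: "Bob", 103: "Carol"}
table[104] = "Zed"
assert table[102] == "Bob"     # the reference survives the insert
```

Every formula that refers to "row 2" in the first model is one insert away from meaning something different, which is the failure mode keys were invented to prevent.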
Ironically, I suspect a big use case for Claude will be maintaining orphaned spreadsheets and VBA macros after their creator has left
I have a good sense for when I need to RTFM but LLMs do not. They are useful but you have to treat them as the Dunning-Kruger machines they are. This incidentally is why I don't think they spell the end of reading. I now spend much more time reading the manual because the amount of time spent writing code has gone down so much.
I have resisted using Copilot, but recently had occasion to ask it to write an executive summary for a document that was poorly structured and very poorly written—almost to the point of making it impossible to figure out what the writer was trying to say. Copilot produced a very serviceable executive summary that really helped me cut through the turgid prose and poor logic of the original document. When asked to rewrite some of the most turgid and incomprehensible paragraphs, it did a very good job: better than many copy editors (and lawyers) I've worked with over the years have done. I relate this anecdote not to "demonstrate" that an AI can replace human analysts and authors—just to point out that a lot of the slop produced by Human Intelligences is less than good. Like Excel, it seems that Copilot is a "good enough" editor to be better than the human authors. Not to mention that it did what I asked it to do and what the human authors refused to do—write a short, concise, clear executive summary.
There are only a few things I've observed to be fundamentally true about technology, and the biggest is "cheaper and crappier always wins." Always. Excel is a poster child for this.
"C & C" wins because outcomes are generally more valuable than tools--using a mare's nest of spreadsheets to get the answer rather than trying to find a time to meet with the SAP team, who you don't like anyway. The killer twist here is that if you know what you're doing, you use more complex tools, and you begin to think in terms of the tool, not outcomes. But outcomes are almost inevitably more valuable than tools, especially since use of a tool will tend to reduce one's set of perceived potential outcomes. New entrants don't have this problem.
But safety! Security! Regulation! Sure. Salesforce will give you that. But, and it's a crucial but, the value they provide will shift toward those areas and away from, you know, actually tracking a customer base. Salesforce doesn't have the new thing in their DNA, and it's a terrible business to enter anyway--or get shoved into. The thing Bob hacked together in a couple hours works fine, and it does *exactly* what the sales guys want it to do at a fraction of the cost and the learning curve. If they want another feature, they can ask it to provide that. Oh, and if it doesn't work, you can ask it why and tell it to fix itself. No $$$ Salesforce Consultant call required. But in this new world, you'd better be good at making sales.
Remember, right this minute is as bad as this software will ever be.
> Remember, right this minute is as bad as this software will ever be.
I'm not sure that's a safe assumption. In my experience software rarely improves over time.