My experience thus far has been that AI ends up functioning very similarly to offshoring. You need your processes to work a certain way for it to accomplish much, but it's often a bad idea to structure your processes in that manner.
Hands up who’s reading this in work 🙌
It's fairly well-established in the literature that about a third of people are extroverted, a third introverted, and the rest somewhere in between. Behavioral-researcher me believes that the pandemic made clear how overrepresented extroverts are among managers and leaders. Extroverts believe value is added by being around other people, and that vaporized overnight with work-from-home. No wonder they keep pushing for RTO, citing reasons that millions of others find irrelevant, puzzling, or just wrong. "Mentorship"? "Collaboration"? On the Excel marketing dashboard you pull from BI, clean up, and throw over the wall every Friday?
I love the paragraph that begins "I would actually be quite interested ..." and I am going to steal that concept.
My speciality is in software testing, and the claims that have been made about AI in that domain are, for the most part, fairly ludicrous.
For certain developers, there may be a job shift coming, and many of them probably won't like it: a "promotion" to manager of an "intern" that has trouble learning. https://developsense.com/blog/2023/11/to-the-developer-about-your-impending-promotion
I agree with the conclusion, but not the line of reasoning. In my office job (fancy lawyering), there was a fair amount of schmoozing, but not much hanging-around time. There was always something to do, but just a little lower and/or to the left on the urgency-importance matrix. AI (in its current form) could have been a very useful way to rummage through the files, which would have made me somewhat more productive. Somewhat. Schmoozing is still important, and takes time. And as my teenboy has proven to me with gaming, remote workers can schmooze with ease.
Lowering the cost of something merely means that more of it happens. Adopting AI would mean you spend more of your time commanding AI to rummage through the files.
This is especially the case with open-ended things like rummaging through the files. You can always do a bit more.
Interesting. Did your firm turn away jobs (i.e. were you capacity constrained, and was there unmet demand)?
I was an in-house lawyer in a shop that didn't believe in outside counsel. Nice work, if you can get it! There was unmet demand, but it was for things that were neither urgent nor important nor even fun to do. That's an advantage of being in-house, at least if the clients trust you.
Another absolute banger; you rule Dan!
You are no doubt familiar with the Kaya identity, Dan:
CO2 produced equals: population, times GDP per person, times CO2 per unit of GDP.
Production is similar. There is an intermediate variable that most economists ignore: work intensity. The Kaya identity for work is
Output equals value produced per keystroke, times keystrokes per minute, times minutes at work.
or, shorter,
Output equals productivity times work intensity times duration.
Early adopters of AI have reported loving it because it dramatically reduces work intensity.
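In symbols (my notation, purely illustrative, nothing official):

```latex
% The output identity above, restated in symbols; notation is illustrative only
\[
  \text{Output}
    = \underbrace{\frac{\text{value}}{\text{keystroke}}}_{\text{productivity}}
      \times \underbrace{\frac{\text{keystrokes}}{\text{minute}}}_{\text{work intensity}}
      \times \underbrace{\text{minutes at work}}_{\text{duration}}
\]
```

Writing it out makes explicit that "loving it" can mean a drop in the middle factor rather than a rise in measured output.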
There is a fair amount of discussion inside some organisations I’ve had contact with about the possibility that AI strips out the work done by the young 'uns, and that this could create expertise supply problems later on. Of course, as others have noted, there are so far few pilots where total replacement has occurred.
AI/ML works well for me as a general fancy-autocomplete tool when I'm writing code. This definitely saves me a few hours a week of scutwork, but I can write the code, so I can prompt the tools very specifically for what I want. The impact here is much like a good spell-checker. It saves a little time and adds a little convenience.
OTOH, part of my team's business is training and testing AI/ML/NLP models for specific information extraction and/or classification tasks within a fairly narrowly circumscribed set of domains. There are absolutely things we can do now, very quickly, that would take a team of postdocs a long time to do. But there are two big barriers standing in the way of doing this more than we currently do.
The first is that these organisations often have quite reasonable expectations around transparency, explainability, and accuracy, and they'd rather publish almost nothing than publish, say, 50,000 transcripts of which 5% contain some egregious mistake. Ironically, they were much more comfortable with absolutely _massive_ error rates when they were just banging out OCR for printed books and newspapers.
The second is that one of the ways in which academics and people in similar institutions acquire prestige and power is by having lots of postdocs and PhD students working for them. So, they'd rather spend 60%-70% of their grant on hiring people, and quibble with us about spending 10% of the budget on "IT" because that's just what makes rational career sense for them.
Jobs measured solely by accurate output are rare, and yes, they will be replaced. Most jobs are measured somewhat by output and much more by reliable (?) signals, e.g. being able to hire a bunch of postdocs. That this holds true even in academia, a job supposedly defined, at least at first blush, by output, has grim implications for automation.
It would seem odd that you'd pay more attention to the noise the machine makes than to the widgets it cranks out. But over the long term, that may be a much wiser approach.
I think that’s almost certainly true. Also, the decision making system predates a lot of these kinds of inputs and, for a long time, the data being received was almost universally negative. So the concerns about cost and consequences aren’t obviously wrong, just slow to respond to changing signals.
I'd be interested to know whether a third and fourth set of barriers are being considered: the cost of testing the models AND the output from them; and the cost of the consequences when they mess up — which is inevitable considering their by-design unreliability.
Now wait just a minute, does this succinctly explain the productivity slowdown of the last 20 years? Is this widely known by everybody and I just missed it? *blinks*
There's some overlap with a previous column ("Made up numbers"), in that the figure of 22% is derived, largely, from estimates provided to them by a fine-tuned GPT-4 model, trained on human estimates over a subset of the data (arguably more made-up numbers):
"To assess the ability of AI to perform each of these tasks, we began with a subset of around 200 tasks and used these to fine-tune a version of GPT-4 to give answers that closely matched both our own expert assessments and academic and empirical evidence regarding AI’s current automation capabilities."
"Once trained on this subset we then used this model to analyse the entire set of tasks in the O*NET database, providing estimates of the amount of time that could be saved using AI and the type of AI most likely to be used to achieve these time savings."
I've never understood Baumol's orchestra example. It seems completely wrong.
Before recording and radio, an orchestral performance's audience was limited to maybe two thousand people, once. With technology, a performance can reach tens of millions, multiple times. I regularly listen to recordings of orchestral performances from the 1960s, and to radio-transmitted performances from the other side of the world. Experiencing these performances would have been impossible or insanely expensive for me before.
It seems to me that technology has increased the productivity of orchestras by at least three orders of magnitude, which is why orchestral music is nearly free now.
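As a rough back-of-envelope check (purely illustrative figures):

```latex
% Illustrative: broadcast reach vs. concert-hall reach
\[
  \frac{10{,}000{,}000 \text{ listeners per broadcast}}{2{,}000 \text{ listeners per hall}}
    = 5{,}000 \approx 10^{3.7}
\]
```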
Just like light. (Baumol's claim about music is exactly opposed to Nordhaus's claim about the cost of light.)
This is similar to technology's effect on agriculture, which is head and shoulders above any effect on manufacturing.
None of this is to say that Baumol is wrong in general; just that it's a very poor example.
It depends on what one considers the produced good to be. If the produced good is "an orchestra plays some specific symphony", then Baumol holds. But if the produced good is "someone hears the performance of the symphony" then recording and broadcasting advances have dramatically improved productivity.
If no-one paid for orchestral performances then they wouldn't take place. Just performing an action is not economic unless there is an exchange. No exchange, no cost, nothing that Baumol can say about it in his capacity as an economist.
The good is always that which is received by the consumer.
I was thinking "remote work" just as I got to the point where you mentioned it. For the reasons you describe, it seems as if the two are complements. Rather than wander to the next office (or another desk in the open plan nightmare) to ask for help on some minor task, you get the AI to do it while you attend to a household chore.
I would be interested to know if Dan's employers could have hired him out for, say, lower-level troubleshooting (or problem defining, or whatever...) to businesses in industries going through their own crisis writ small. The skills may not be transferable, the time and effort may not be worth it, etc.
There is a long lag between the introduction of new ICT and the point at which those who get the most out of it understand it well enough to adopt it usefully, often after resolving a mismatch between how developers propose it be used and what the intended users actually want and need.
This delay is ongoing for AI, exacerbated by a faster rate of new innovations causing scope creep, not just the much-deplored overhyping.
The main problem with hiring Dan out is that it's not his employers' job...unless their job is to hire out ninjas/firemen, in which case that's their *only* job, not using him to firefight their own fires.
Dan and other internal experts are really expensive fire extinguishers. If you have one in your house, you certainly wouldn't rent it out. And if you *did*, you'd really be thinking your own house won't burn down...in which case you wouldn't have bought the expensive fire extinguisher to begin with. Metaphorically speaking, successful consultants tend not to spend much time talking to companies with expensive fire extinguishers on their walls.
I guess I was thinking that Dan wouldn't fill exactly the same role when hired out. To use your metaphor, he wouldn't be hired out (or hire himself out) as a fire extinguisher, but as a consultant to smaller firms who can't afford their own fire extinguisher but can pay for advice on doing their own fire extinguishing, with the understanding that he would have to drop everything the moment his primary employer needed him. I realise that the kind of role we are talking about here might not lend itself to this. The primary employer might worry this could help competitors, or that sensitive information could be leaked. Maybe the ability to keep highly paid fixers on the books is a symptom of outsized corporate bureaucracy grown to the point where efficiency doesn't matter nearly so much as maintaining a position of power in its industry. They have no concern about whether or not they are getting the most value out of Dan's skills and talents; they view him purely in terms of his primary function, and the cost of working through a fixed bureaucratic structure to extract maximum value isn't worth the effort.
I'm part way through Graeber's "Bullshit Jobs" and while his description of many office jobs is similar, his interpretation is naturally very different. Yours rings more true to my own hours of hanging around (but of course that could be wishful thinking).