<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Dan Davies - "Back of Mind"]]></title><description><![CDATA[A newsletter of quiet contrarianism, slow analysis and ambient ideas]]></description><link>https://backofmind.substack.com</link><image><url>https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png</url><title>Dan Davies - &quot;Back of Mind&quot;</title><link>https://backofmind.substack.com</link></image><generator>Substack</generator><lastBuildDate>Thu, 16 Apr 2026 20:56:58 GMT</lastBuildDate><atom:link href="https://backofmind.substack.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Dan Davies]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[backofmind@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[backofmind@substack.com]]></itunes:email><itunes:name><![CDATA[Dan Davies]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dan Davies]]></itunes:author><googleplay:owner><![CDATA[backofmind@substack.com]]></googleplay:owner><googleplay:email><![CDATA[backofmind@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dan Davies]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[the most important number]]></title><description><![CDATA[and why we won't get it]]></description><link>https://backofmind.substack.com/p/the-most-important-number</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-most-important-number</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 15 Apr 
2026 11:27:10 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It is time to think once more about &#8220;AI in General Management&#8221;. I have seen a few posts recently (no names, no pack drill) about an idea which I can&#8217;t really decide whether I agree with or not.</p><p>This is the thesis that, to put it in vaguely cybernetic terms, a really good LLM can work as a high capacity variety attenuator and amplifier, as well as a translator and transducer<a href="#_ftn1">[1]</a>. If you can boil a huge report down to three bullet points, but also expand the boss&#8217;s gnomic remarks into a full policy document, then this makes a lot of different forms of organisation possible. In particular, it very much weakens the case for hierarchy, and I can see why a lot of people posting on the subject are interested in the possibility of it allowing very small organisations indeed to tackle tasks which have previously required much bigger ones.</p><p>But can it work? I think that&#8217;s a question which &#8220;<a href="https://backofmind.substack.com/p/toward-a-sensible-ai-skepticism">smart skepticism</a>&#8221; would definitely tell you to sidestep; it might be a matter of unknown technological limits or it might be a matter of unknown intrinsic limits to the model, but it&#8217;s definitely not something to express a firm opinion on either way unless you&#8217;re cool with being made a fool of by history.</p><p>One of the things it might depend on, though, is another question that&#8217;s on my mind. Which is that a lot of the AI-enabled organisational models we&#8217;re talking about seem to rely on having most of the output produced most of the time by LLMs, with a knowledgeable human being checking them. Checking yes/no whether something is OK is a much faster job than creating it, so you get the big saving.</p><p>But &#8230; checking other people&#8217;s work is also a much <em>crapper</em> job than doing things yourself. It&#8217;s not cognitively demanding in terms of bandwidth, but it&#8217;s very cognitively demanding in terms of exhausting attention. 
Which ought to be worrying, because we know (quite spectacularly, from the world of self-driving cars) that it is very, very difficult to keep paying attention when you&#8217;re monitoring a system that is meant to be A-OK most of the time, but needs you to be constantly aware because it sometimes screws up in a way that requires immediate action.</p><p>So, the number that I am interested in is something like &#8220;How many words of normal business English per day can a manager read and check for accuracy and sense, without their mind wandering and without going mad?&#8221;. There are strange echoes of Frederick Winslow Taylor here, whose fame and reputation largely rested on one case study in which he found, by trial and error, the optimal number of breaks for workers to take while loading pig iron.</p><p>I would guess that the place to look for this number might be on the editorial desks of newspapers, or in investment bank compliance departments. But I also suspect that it&#8217;s going to be difficult to get a stable, homeostatic answer. Because there will be variation between individuals, and everyone is going to want to pretend to be a 10x super-supervisor. Every company is going to find ways to convince itself that &#8220;our people are special, they can do much better than the rated output&#8221;.</p><p>And the nature of the problem that&#8217;s been set up is that if you fake it in this way, you won&#8217;t be aware that you&#8217;ve done something wrong, potentially for quite a while. 
As I said to someone this week, the difference between your legs and your judgement is that when your legs stop working, you&#8217;re immediately aware of the fact.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> I&#8217;ll explain some of the technical meanings here in a future &#8220;Beer Tasting Notes&#8221; post, but for the time being the only one that isn&#8217;t a fairly straightforward English word is &#8220;transduction&#8221;, which is a word Stafford Beer took from the cellular biology of the eye. The idea is that signals aren&#8217;t simply transmitted; only the part of the signal for which a receiving structure is already present gets through. Some animals can&#8217;t see colours, for example. I am not yet quite sure what is really gained here over the concept of information being &#8220;lost in translation&#8221; but I&#8217;ll have another read to make sure.</p>]]></content:encoded></item><item><title><![CDATA[brain donors of the firm]]></title><description><![CDATA[giving it all away and paying for the privilege]]></description><link>https://backofmind.substack.com/p/brain-donors-of-the-firm</link><guid isPermaLink="false">https://backofmind.substack.com/p/brain-donors-of-the-firm</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 08 Apr 2026 13:29:27 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I had a useful argument with a mate recently about the controversy over <a href="https://www.theguardian.com/technology/2026/mar/12/palantirs-nhs-england-contract-opens-door-to-government-abuse-of-power-health-bosses-told">Palantir signing a contract with the National Health Service</a> to provide AI and data visibility across all of its very many incompatible systems. The substance of the debate wasn&#8217;t really about concerns over spying or anything; it was more like me saying &#8220;the NHS is so huge it should hardly be outsourcing anything at all&#8221; and him saying something like &#8220;that&#8217;s mad, come on, even NASA doesn&#8217;t roll its own SQL databases&#8221;. 
It has been worrying at my mind for a while, and so now I am going to bother you lot with it.</p><p>Of course, the specific company Palantir raises a bunch of issues, but I don&#8217;t think you need to go down that rabbit hole. Because in my view, the important thing is not the actual ethics and behaviour, but the risk that&#8217;s being taken. Even if Palantir were the most upstanding company in the world, with a CEO who really believed in &#8220;don&#8217;t be evil&#8221;, it would be problematic for the NHS to be getting into bed with them. And the reason for that is that Palantir&#8217;s business model is based on a very strong version of &#8220;vendor lock-in&#8221;, where they have many more &#8220;forward-deployed engineers&#8221; than the usual IT consultancy, and where much more of the understanding and process knowledge is kept on their side of the organisational boundary than on the client&#8217;s side.</p><p>Straight away, this is ringing alarm bells for me, because it&#8217;s breaking almost all of Mariana Mazzucato and Rosie Collington&#8217;s sensible principles for governance of public sector consultancy contracts &#8211; transparent pricing, a clear end date, knowledge transfer as part of the specification. But even more so, when you&#8217;ve got vendor lock-in on the core management information functions of a medical system serving millions of people, then it hardly matters how much you trust the provider today. The problem is that <em>you&#8217;re stuck with them</em>. If Palantir were to transform from my best case scenario described above to the worst version of the worst conspiracy theory about them, then there would be nothing that the NHS could do about it.</p><p>This is actually something I&#8217;ve been thinking about for a lot of my life in the context of IT consulting in general. (And specifically, for most of that period, about SAP systems). Running a database isn&#8217;t a core competence for most companies. 
But <em>managing the freaking company</em> is a core competence, or at least it ought to be. And one of the big messages from management cybernetics is that the distinction between these two things is not as clear as you&#8217;d think. A company is an information processing system. The distinction between code and data is very context dependent, and so is that between the nuts and bolts of storing and retrieving bits and bytes, and the decisions made about what bits and bytes to store and what interpretation to put on them when they&#8217;re retrieved.</p><p>And so &#8230; there might be IT things which the NHS does which don&#8217;t have the health service equivalent of &#8220;<a href="https://backofmind.substack.com/p/the-brompton-ness-of-it-all">Brompton-ness</a>&#8221;, but fewer than you might think, and anything which Palantir might be interested in seems to me to be very much on the other side of the line. In general, although big organisations are aware of the risks of vendor lock-in, my guess would be that if the question ever becomes worthy of discussion, then it might be time to worry that you&#8217;ve already gone too far in putting a function that&#8217;s crucial to your company&#8217;s survival on the other side of a corporate boundary.</p><p>This is, of course, not specifically an IT problem. There are plenty of companies which literally outsource their strategic thinking to consultancies; as Sonny Boy Williamson said, &#8220;don&#8217;t start me talking&#8221;. 
But it feels like it&#8217;s easier to do this without it being obvious when you&#8217;re bringing The Computer into it.</p>]]></content:encoded></item><item><title><![CDATA[the stock market crash factory]]></title><description><![CDATA[my part in its downfall]]></description><link>https://backofmind.substack.com/p/the-stock-market-crash-factory</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-stock-market-crash-factory</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 03 Apr 2026 14:47:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>on holiday, so I&#8217;m digging in the bottom drawer for offcuts and works in progress &#8230; I don&#8217;t think this will show up in any future books, but if it does, try to look surprised.</p><div><hr></div><p>&#8220;Heads you win, tails I lose&#8221; is not a sustainable betting proposition, and anyone who offers it 
to you is probably trying to draw you into some con game or other. But what about &#8220;if the stock market goes up, you get the profit; if it goes down, you get your money back?&#8221;. Although it sounds like it has the same flavour as a one-way bet, this is actually a viable financial product. They teach you how to make it in business schools, and for a few years I worked in one of the world&#8217;s biggest factories for &#8220;guaranteed returns investments&#8221;. I&#8217;ll explain how, sparing you most of the maths.</p><p>Here&#8217;s one way to think about it. Imagine that you&#8217;re giving me a bunch of money to invest, the market is trading at 1000, and I&#8217;m promising the one-way bet described above. What do I do?</p><p>Well, a simple way to give you what I promised is to start off by putting all your money into the market, and hoping it goes up for at least the first day. If it does, then I just have to set up an alert on my computer that if the market ever goes back to 1000, I need to sell all the shares so I can give you your money back. While I&#8217;m holding the cash, I need to set up another alert that if the market starts going up above 1000, I can buy shares again so I can give you the market profits.</p><p>(It might seem like I am working for nothing here, but here&#8217;s the trick &#8211; shares pay dividends and the cash earns interest. I never said I was going to give you those<a href="#_ftn1">[1]</a>. In fact, in a competitive investment market, I might have to pass some of them on, but this is how the economics of the thing work. When I was at the factory, this was my job, keeping an eye on a group of companies to see whether there was a chance that they would pay unexpectedly large or small dividends).</p><p>This simple way of doing things is not really how it&#8217;s done; in fact it&#8217;s more profitable for me to trade smaller amounts and take a bit of risk that I might have to make up a deficit. 
But there are a few important intuitions here. First, the risk of the stock market is insurable, and providing insurance against it can be good business. And second, most of this business of slicing and dicing investment risk to make it more attractive to savers &#8211; what they call &#8220;financial engineering&#8221; &#8211; is really a matter of paying someone else to carry out a trading strategy on your behalf. When someone puts their money into a guaranteed returns product, imagine them as a 1920s Yankee aristocrat, on the telephone to a sharp-suited stockbroker, going &#8220;and I say, Gatsby, if it goes below 102, sell it all!&#8221;.</p><p>This is the importance of the word &#8220;services&#8221; in the industry classification &#8220;financial services&#8221; &#8211; everything that exists in the world of finance can usually be boiled down to a contract for someone to execute financial actions on your behalf. And like any other industry, these services have gradually become automated. Finance has always been one of the earliest adopters of information and communications technology, simply because it&#8217;s the industry in which the advantages of faster information and more accurate decision making can be turned into cash in a really quick and obvious way.</p><p>In the 1980s, the job of Mr Gatsby the stockbroker that we were talking about two paragraphs ago was automated, at scale. Mainframe computers were very good at keeping lists of someone&#8217;s investments, taking in data feeds from the stock exchange and spitting out lists of trades which needed to be carried out, in order to provide a kind of &#8220;portfolio insurance&#8221; to make sure that the value of those investments would never fall below a given threshold. 
And there were lots of investors who found that kind of insurance really valuable; for example, pension funds which needed to be sure that they had enough assets to pay benefits.</p><p>And so a cottage industry developed &#8211; the first iteration of the guaranteed returns product was even called &#8220;portfolio insurance&#8221; and mainly marketed to big institutional investors with exactly this sort of downside risk aversion. It was quite adventurously priced, but you can&#8217;t put a price on the kind of certainty that it claimed it delivered. Until the Great Crash of 1987.</p><p>It&#8217;s actually surprisingly hard to spot this event on a long term chart of share prices, which mainly just shows a jagged but inexorable upward progress. But the Crash of &#8217;87 is still regarded as one of the most important events in financial history, and there are still plenty of academic papers written about this single data point every year. There are a couple of reasons for this. Conveniently for researchers, it&#8217;s a relatively self-contained event. There was no big event in the economy which triggered it; it had few consequences, and it was all over in a few months. There&#8217;s no need to trace complicated chains of cause and effect going back and forward for years; you can be pretty sure that everything you need to know about the Crash of &#8217;87 is located within the specific market it happened in.</p><p>And the conclusion of that research seems to be that portfolio insurance was to blame simply because it had become quite a big industry during the 1980s. If one person decides that they want to sell and cut their losses if the market reaches 1000, then it&#8217;s likely to be easy to do; someone else will be in the market that day who is minded to buy shares at the current price. 
But equally obviously, the whole market can&#8217;t decide to cut their losses; who would they sell to?</p><div><hr></div><p><a href="#_ftnref1">[1]</a> This is why the product isn&#8217;t all that popular in the USA and UK; it&#8217;s generally overpriced, in the sense that the dividends are more than fair payment for taking the risk and incurring the expense of making the trades. 
Even in Europe, where they love guaranteed returns products, market competition means that these days, they tend to pay at least a bit of interest on the cash.</p>]]></content:encoded></item><item><title><![CDATA[what's mine is mine and also debatable]]></title><description><![CDATA[me versus Friedman, again]]></description><link>https://backofmind.substack.com/p/whats-mine-is-mine-and-also-debatable</link><guid isPermaLink="false">https://backofmind.substack.com/p/whats-mine-is-mine-and-also-debatable</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 01 Apr 2026 09:54:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Not all readers of this substack will know that a couple of years ago, one of my legs was amputated.</p><p>No really.  
Here&#8217;s a picture:</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!o2xi!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!o2xi!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 424w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 848w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!o2xi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg" width="355" height="473.25206043956047" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1941,&quot;width&quot;:1456,&quot;resizeWidth&quot;:355,&quot;bytes&quot;:6375319,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://backofmind.substack.com/i/192828382?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!o2xi!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 424w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 848w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!o2xi!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3e51beda-1d75-47cf-9312-1c27afa981ed_3072x4096.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" 
width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>It is Caramel the rabbit again, and he is my rabbit, so the leg which was amputated is my leg, you see.  This is, of course, something which might become a bad joke, with a bit more effort and a few workshop stages.  But I was reminded last week that something not much less ridiculous  is close to business orthodoxy.  
It was a post on this very platform, from David Friedman defending the legacy of his father Milton.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0fja!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0fja!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0fja!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0fja!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0fja!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0fja!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg" width="1220" height="1762" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1762,&quot;width&quot;:1220,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:642609,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://backofmind.substack.com/i/192828382?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0fja!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 424w, https://substackcdn.com/image/fetch/$s_!0fja!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 848w, https://substackcdn.com/image/fetch/$s_!0fja!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!0fja!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02cb934-b19d-424d-a821-02e1a8f5e027_1220x1762.jpeg 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>But it&#8217;s wrong.  If a company owns the money, and you own the company, that&#8217;s not the same thing as you owning the money.  Any more than a rabbit having a fluffy tail and you owning the rabbit means that you have a fluffy tail.  Ownership isn&#8217;t a transitive property across entities which are capable of owning things.</p><p>This matters a lot, because what we are talking about here is &#8220;legal personality&#8221;, which is one of the big benefits of forming a company.  (The other is limited liability, which also comes into play &#8211; very few people who talk about company profits as &#8220;the stockholders&#8217; money&#8221; would also talk about &#8220;the stockholders&#8217; debts&#8221;).  </p><p>A company is a separate entity from its owners, which can sue and be sued. 
One of the things they teach you early on in business school is that of these two, the right to be sued is much more valuable than the right to sue other people.  Because it can be sued, the company can enter into binding contracts in its own name.  And it can own things, also in its own name.</p><p>Which isn&#8217;t a trivial point at all.  A partnership, for example, doesn&#8217;t have legal personality.  When a partnership owns cash or assets, it really is &#8220;the partners&#8217; money&#8221;.  And this is a bloody important thing to remember if you ever find yourself working for one.  Every single penny you get paid is a penny less that the partners own, so if you&#8217;re ever dealing with a partnership, get it in writing.</p><p>It is not the stockholders&#8217; money.  The company&#8217;s money is the company&#8217;s money, and if the stockholders don&#8217;t think that their ability to choose the board and approve dividends and major issues by vote is enough protection, they are welcome to choose another form of organisation.  </p><p>Companies were not allowed to exist in Britain for over a century after the South Sea Bubble (except by specific royal charter), and these were companies with unlimited liability.  When you&#8217;re given the privilege of creating an entity with legal personality, the least you can do in my opinion is to take that seriously.</p><p>Of course (I am revisiting here a theme from the chapter on Milton Friedman in The Unaccountability Machine, as readers will know), the modern neoliberal interpretation is meant to be a moral argument rather than a legal one.  The company&#8217;s money isn&#8217;t legally the property of the stockholders, but the management should act as if it was, because &#8230; reasons?</p><p>Employment, like ownership, isn&#8217;t transitive.  The stockholders own the company and the managers work for the company, but this doesn&#8217;t mean that the management work for the stockholders.  
Again, if you want a fiduciary relationship in which someone is obliged to look out for your interests ahead of their own, there are plenty of ways for you to do this, but buying shares in a company isn&#8217;t one.</p><p>Every time I write about this, I&#8217;m impressed with what a sneaky trick it is.  The creation of the modern limited liability company was, in itself, a huge gift to investors.  In return for which, they have only intermittently kept up their side of the bargain, in terms of supplying plentiful investment capital for the general development of society.  And now they want to revisit the terms of that deal, making them significantly more favourable to themselves?  Imposing a whole load more obligations on people with a purely contractual relationship to the company?  </p><p>And to imply that this is the natural order of things, and that anything else would be like stealing from them? While not in any sense proposing to take on any more duties or risks themselves?  All you can or should say to this kind of argument is &#8220;nah, come off it&#8221;<br></p>]]></content:encoded></item><item><title><![CDATA[the club med theory]]></title><description><![CDATA[excerpts from a forthcoming]]></description><link>https://backofmind.substack.com/p/the-club-med-theory</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-club-med-theory</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 27 Mar 2026 12:34:09 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I posted about this during the week, but the more I think about it, the more I think that considering the economic basis of all-inclusive holidays might be a good way in to what I&#8217;m currently writing about 
organisational decision making &#8230;</p><p>The modern all-inclusive holiday resort was invented in 1950, in Majorca, by a Belgian former Olympic swimming champion, G&#233;rard Blitz. He partnered with a tent manufacturer called Gilbert Trigano and built a campsite near the beach which he called &#8220;Club M&#233;diterran&#233;e&#8221;. Rationing had just been lifted in France, and the club was an immediate success; they had to turn away more than ten thousand customers in the first summer of operation.</p><p>Club Med has developed quite a lot through the years; it is no longer a mutual organisation owned by its &#8220;gentils membres&#8221; and today it&#8217;s a subsidiary of a Chinese conglomerate which operates resorts all over the world. But the initial insight of G&#233;rard Blitz is still the basis of the entire industry: that what people seek from a holiday is not luxury or material comfort, but happiness. The point of an all-inclusive holiday is not really the potential to consume unlimited amounts of food and drink; it&#8217;s the relief from participation in the everyday economy.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">management science is a branch of philosophy, roughly twice a week</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>In the original concept, the idea behind removing cash from the holiday camp was that it would blur social distinctions. 
Giving everyone the same access to food and drink would remove some of the visible markers of class and therefore some of the potential sources of anxiety. (In the first Majorca camps, the gentils membres were encouraged by the gentils organisateurs to sing songs around the campfire together; it was more like a children&#8217;s summer camp than the modern all-inclusive).</p><p>But the concept has caught on and persisted into a less class-conscious world because Blitz&#8217;s true genius was to understand that being in a market economy is stressful partly because of the cognitive demand that it places on you. Every transaction is a decision, and decisions cost energy. Removing the price mechanism, for a while, can be intensely relaxing.</p><p>This is quite difficult for people to understand if they have an economics degree. It might be argued the opposite way; that when you remove the price mechanism, you are destroying information and likely making things worse. If the restaurants in a resort are not competing for business and the staff are not motivated by tips, then standards are surely likely to slip; there is no incentive to provide good service.</p><p>But you can see what is wrong with this simply by looking at what happens, particularly in beach resorts where the all-inclusive model and the market model co-exist. The staff at the swim-up bar don&#8217;t necessarily have any reason to hustle; they might even be told by the management to slow things down, in order to keep costs (and the behaviour of the guests) under control. Meanwhile, beach vendors, masseurs and the sellers of excursions are in a situation of pure market competition. They live or die by their ability to persuade people to part with money in exchange for goods and services.</p><p>Which of these two categories of people are generally beloved by the guests, and which are considered pests? To ask the question is to answer it. There are a few things going on. 
One important point is that it&#8217;s not necessarily the case that the best way to sell something is to provide the customer with an enjoyable experience. Another strategy, as I learned when I was a stockbroker, is to pester them mercilessly until they pay you to go away.</p><p>A second issue is that people don&#8217;t necessarily consider the cumulative effect of the system which they are part of. If you were lying on the beach and one person came round once a day to offer you the opportunity to buy sunglasses, that might be quite nice. But it&#8217;s not an equilibrium in an unregulated market; there is no way to price the inconvenience of being woken from your nap, and so offers are made well beyond the point of diminishing returns.</p><p>Which is related to the real point, one that economics sometimes has a tough time dealing with: choice is a mental effort. When economists model choice as having a cost at all, it&#8217;s usually in the sense that resources have to be spent on gathering information. The idea that dealing with the information itself might be unpleasant is much harder to model.</p><p>One of the few economists to take seriously the idea that mental torpor can be a pleasant state was JK Galbraith, whose contribution to a collection of beach-reading essays in 1963 said that &#8220;Total physical and mental inertia are highly agreeable, much more so than we allow ourselves to imagine. A beach not only permits such inertia but enforces it, thus neatly eliminating all problems of guilt.&#8221; Unfortunately, he never really extended this insight into a more general economic theory, but if he had done, he might have realised how important it was.</p><p>The psychological phenomenon which formed the basis of G&#233;rard Blitz&#8217;s empire is called &#8220;cognitive load&#8221;. It&#8217;s a description of the amount of effort needed to deal with the big problem of life and economics, the conversion of information into decisions. 
The opposite of being in Club Med is the all too typical situation of a civil servant or middle manager, that of being cognitively overloaded. Either you have too many decisions to make, or too much information to process, or all too often both. As well as being quite viscerally unpleasant, this is a dangerous situation to be in. People burn out, and while they are doing so, they often make bad decisions.</p><p>In fact, making bad decisions is intrinsic to the management of cognitive load, because the only way that you can bring an excess of information into line with your capacity to process it is to throw some information away. This in turn promotes risk-averse decision making; since you don&#8217;t know how important the information you are ignoring might be, it makes sense to assume the worst. It also promotes a bias against action, because in an environment of uncertainty it makes sense to preserve options rather than doing anything that is difficult to reverse. All the things which we considered in the previous chapter under the description of &#8220;career risk&#8221; can also be seen as coping strategies for the management of cognitive load. That is why they appear at multiple levels of organisation; whole departments and corporations can act like they fear career risk, even though a department can&#8217;t have a career.</p><p>This also casts a pessimistic light on the possibility of a solution. If one thinks of career risk as a problem based in a gap between individual and group incentives, then it feels like there might be some way to realign those incentives to reward the right kind of decision making behaviour. If the problem is one of cognitive load, then clever engineering solutions are less available. 
Only the brute force approach of increasing the information processing capacity is going to work to bring the decision making system back into balance.</p>]]></content:encoded></item><item><title><![CDATA[better than the other guy would have been]]></title><description><![CDATA[notes on a useless quantity]]></description><link>https://backofmind.substack.com/p/better-than-the-other-guy-would-have</link><guid isPermaLink="false">https://backofmind.substack.com/p/better-than-the-other-guy-would-have</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 25 Mar 2026 16:51:29 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Apologies to people who came in search of the whimsical and abstract &#8211; normal cybernetics and weird management science will be resumed on Friday. But I think I need to write an explainer, on something which keeps circulating in British (to a lesser extent American) debate and which, although I think originally well-meaning, keeps causing avoidable error.</p><p>I am not sure of the precise genealogy, and I don&#8217;t think it&#8217;s worth the effort to look up, but my guess is that at some point in the past, someone with basically good intentions realised that immigrants to industrialised countries tend to arrive as adults (with their education and infancy already paid for) and leave before they retire. Consequently, they live here for their top taxpaying years and not for the most expensive ones in terms of public services. 
And so, we started getting figures quoted for the &#8220;<a href="https://migrationobservatory.ox.ac.uk/resources/briefings/the-fiscal-impact-of-immigration-in-the-uk/">net fiscal contribution</a>&#8221; of immigration.</p><p>The trouble is that this is a much less straightforward quantity than you&#8217;d expect, and when people of less pure faith started manipulating it to produce the &#8220;<a href="https://cps.org.uk/wp-content/uploads/2025/09/Here-to-stay.pdf">net fiscal contribution</a>&#8221; of different subgroups that they didn&#8217;t like, it was difficult to explain what they were doing wrong without admitting that the bad move was the first one in the game. &#8220;<a href="https://home-affairs.ec.europa.eu/whats-new/publications/net-fiscal-position-migrants-europe-trends-and-insights_en">Net fiscal contribution</a>&#8221; bundles up different aspects of the tax and benefit system.</p><p>By which I mean, the system does two things. It raises money (in aggregate) in order to pay for things (in aggregate), and it redistributes. If it only did the first, the net fiscal contribution of any group of people you cared to define would be a straightforwardly meaningful number. But it doesn&#8217;t; it also does the second.</p><p>Which means that the progressivity of the tax system and the redistributivity of benefits and of government expenditure get convolved with the thing you&#8217;re ostensibly trying to measure. If you import a lot of immigrants to do low-paid jobs, they are going to be at the bottom of the income distribution, and so they are going to be recipients from the redistributive bit even while they are contributors to the money-raising bit. 
(Sometimes it helps to consider daft corner cases &#8211; so when Israel was in Egypt&#8217;s land, oppressed so much they could not stand, the slaves who built the pyramids were net recipients from rather than contributors to Pharaoh&#8217;s fisc).</p><p>So if you were going to do the analysis, the relevant question is not &#8220;does this foreign-born person receive more in benefits and spending than he or she pays in taxes?<a href="#_ftn1">[1]</a>&#8221;. It is &#8220;Is their net position more negative or less positive than a domestically-born person at the same point in the income distribution?&#8221;. That&#8217;s the question which will let you know whether not having that person would make the fiscal balance better or worse.</p><p>Or at least, it would let you do that if you could magically exchange two people like this without any other effects on the economy. This could happen by chance, if net migration was exactly zero and the population characteristics of immigrants and emigrants were exactly the same. But in general, it doesn&#8217;t; there is net immigration (for the moment) which expands the workforce, and emigrants from industrial economies tend to be better-paid than immigrants to them.</p><p>In a situation like this, the &#8220;net fiscal contribution&#8221; has a big negative bias. First, it counts the redistribution effect as an immigration effect. And second, it only gives immigration credit for the taxes paid directly by immigrants themselves, not the taxes that are collected elsewhere because of their presence. 
(Most obviously, taxes on profits for companies they work for, but also consider a 55 year old lawyer who doesn&#8217;t leave the workforce because he or she is able to hire the services of a care worker).</p><p>This negative bias can outweigh the considerable fiscal benefit of the age structure of immigrant populations, and so if you want to make a case against immigration then you start going on about &#8220;high skilled&#8221; immigrants (who are higher up in the income distribution and so on the other side of a progressive tax regime). If you are a real asshole, you can then notice that some countries are more likely to have people migrate to the UK lower down the income distribution, and you can achieve the goal you really wanted, which is to dress up tax analysis as racial biology.</p><p>I don&#8217;t think that there is any good way to do this calculation; &#8220;net fiscal contribution&#8221; analyses come in very different degrees of good faith, but even with the absolute best will in the world, it&#8217;s broken out of the box. Consider yerselves splained!</p><div><hr></div><p><a href="#_ftnref1">[1]</a> Including specific taxes like the &#8220;NHS contribution&#8221; and the absurdly inflated fees that the UK charges for visa renewals. I am not sure how it deals with the fees paid by international students, but as one of the mottoes of this blog has it, &#8220;if something&#8217;s not worth doing, it&#8217;s not worth doing properly&#8221;.</p>]]></content:encoded></item><item><title><![CDATA[everything is a nail, or at least it ought to be]]></title><description><![CDATA[&#8220;the irrational decision&#8221; by Ben Recht]]></description><link>https://backofmind.substack.com/p/everything-is-a-nail-or-at-least</link><guid isPermaLink="false">https://backofmind.substack.com/p/everything-is-a-nail-or-at-least</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 20 Mar 2026 13:12:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The hammer is, when you think about it, a really great invention. It doesn&#8217;t get the same credit as fire and the wheel, but it must have been revolutionary in its time. Without a rigid object to swing, you could starve to death in a coconut grove, but as soon as primitive man picked up a rock, he was in business.</p><p>The proverb &#8220;if the only tool you have is a hammer, everything looks like a nail&#8221; ought to be seen in this context. If you really are at the stage of development where the only tool you have is a hammer, then it strikes me that it&#8217;s incredibly sensible to go around looking at your various problems, and seeing if any of them could be improved by a bit of hammering.</p><p>Not only that! 
It would actually make a lot of sense for the unknown genius responsible for this great invention to spend a bit of time thinking about whether problems can be <em>redesigned</em>, or reconfigured so as to be more amenable to hammerlike solutions. If you have suddenly gained access to a lot of cheap nails and hammers, then the wood-glue-and-dovetail-joint furniture company are likely to regret having relied so heavily on that proverb to dismiss you.</p><p>In the context of &#8220;<a href="https://press.princeton.edu/books/hardcover/9780691272443/the-irrational-decision?srsltid=AfmBOoqkxGIVQcpHkRggUg-9n9dyDhGpMzNPAGO9T9nxNr_3r6VLargP">The Irrational Decision</a>&#8221;, which is the book I&#8217;m reviewing here (sorry for the extended cold open), the &#8220;hammer&#8221; in question is the mathematics of (mostly linear) optimisation, and the subject of the book is all the ways in which, over the last century or so, people have not only used it to solve problems, but reshaped their problems to make better use of it.</p><p>The most important example of this being the incredibly productive feedback loop between &#8220;optimisation algorithms are really demanding in terms of computer processing&#8221; and &#8220;optimisation algorithms are really useful for designing better and faster computers&#8221;. This was one of those blinding &#8220;obvious when you think about it&#8221; moments for me, and I think it explains a lot of modern AI culture.</p><p>When people write that all the problems of AI will be solved by the AI, or that the Singularity will naturally be achieved when the AI learns how to make the AI, there&#8217;s a strong temptation to smile politely and edge toward the door, as one would with the ordinary kind of lunatic. 
But while sidling, it&#8217;s worth remembering that singularitarianism didn&#8217;t come out of nowhere &#8211; it&#8217;s in many ways a perfectly understandable extrapolation from the way in which successive generations of computer chips and optimisation strategies have built off each other to get us to the place we are now.</p><p>In fact, it&#8217;s a real tribute to Ben&#8217;s character and intellectual honesty that he didn&#8217;t write the much easier and more profitable book which was possible here, one which took his descriptions of the development of optimisation algorithms and computing, and extrapolated them in exactly this way. Instead, he starts asking the more interesting questions &#8211; the ones based around the same kind of things we see in Thi Nguyen&#8217;s &#8220;<a href="https://www.theguardian.com/books/2026/jan/06/the-score-by-c-thi-nguyen-review-a-brilliant-warning-about-the-gamification-of-everyday-life">The Score</a>&#8221;, in &#8220;Seeing Like a State&#8221; and indeed on this substack sometimes, the question of &#8220;what do we lose, when we adjust a problem so as to be manageable at scale?&#8221;.</p><p>As he puts it (following <a href="https://www.argmin.net/p/meehls-philosophical-probability">Paul Meehl</a>), algorithmic decision making is <em>always</em> going to have the evidence on its side. Because once you have put the problem in terms of the kinds of things which can be measured and defined a specific success metric - once there is any standard of evidence with which to judge the results - then &#8220;optimisation&#8221; means what it says. Anything you do differently from the output of an optimiser is &#8230; suboptimal.</p><p>But this often means that all the work is done in deciding what to measure and what the optimand should be, what counts as evidence and what as a test. 
Not only is that process a great way to put your thumb on the scale without leaving fingerprints<a href="#_ftn1">[1]</a>, a lot of the time things get measured because they are convenient to measure, rather than for any particularly principled reason. As I&#8217;ve constantly said in an econometric context, the easiest way to find a valid instrument for an unobservable quantity is simply to lower your standards.</p><p>And so it ends up with a distinctly better than average &#8220;last chapter&#8221;, addressing the open question of &#8220;how do we really want to make decisions, then?&#8221;. I have my own views on the industrialisation of decision making, which I think are in line with Ben&#8217;s, so I&#8217;m unusually sympathetic to the project. But even if you&#8217;re not a fan of Michael Polanyi or participatory decision making, I think you&#8217;ll still enjoy the journey, which as well as a lot of interesting history includes enough back-of-an-envelope descriptions of important maths to make you feel a lot cleverer while you&#8217;re reading it. There&#8217;s also a bunch of other stuff I could write about, including what I think is a quite important discussion of the role and significance of randomised controlled trials (which he argues are basically a regulatory practice rather than a scientific one). But I have promised myself that I will no longer procrastinate book reviews until I can say everything I want to, and so here this one stops.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><div><hr></div><p><a href="#_ftnref1">[1]</a> Yes yes thumbprints</p>]]></content:encoded></item><item><title><![CDATA[the naked stress 
test]]></title><description><![CDATA[banking at the edge of sanity]]></description><link>https://backofmind.substack.com/p/the-naked-stress-test</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-naked-stress-test</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 18 Mar 2026 13:48:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The purpose of this substack is at least partly to deal with ideas that I can&#8217;t get out of my mind, and one such is the European Central Bank&#8217;s exercise this year for a &#8220;geopolitical risk reverse stress test&#8221;. I <a href="https://www.ft.com/content/4f19b3e9-a41a-4168-a01b-62140ab9b789">wrote it up for Alphaville</a>, describing the mechanics as &#8220;an exercise in collective dystopian fiction writing&#8221;, and that&#8217;s basically what it is. Every bank has to start from the end of the scenario, in which they&#8217;ve made a large loss (three percentage points of their capital ratio &#8211; so for BNP Paribas this would be roughly twenty billion euros). Then they have to tell the story about how they got there, and it has to be one in which geopolitical risk was the big driver.</p><p>I like this approach! I think it&#8217;s much more productive than the normal kind of stress test, because it actually focuses management attention on the kind of things they need to be looking out for, rather than wasting ungodly amounts of time and effort on arithmetic exercises which always seem to boil down to &#8220;is the capital base adequate for the size and type of business&#8221;. 
But it is increasingly on my mind, because I am beginning to <a href="https://www.newyorker.com/business/currency/why-the-big-banks-cant-imagine-their-own-demises">doubt whether it&#8217;s psychologically possible</a>.</p><p>Consider this. It seems really quite unlikely to me that BNP Paribas is going to experience a loss of even two billion euros in a way that&#8217;s directly attributable to the current war with Iran. So, arithmetically, they need to be thinking about a geopolitical crisis that&#8217;s ten times as severe as the closure of the Strait of Hormuz. What does that even mean? How do you open up a spreadsheet and start typing in it if that&#8217;s your job? How do you walk into a conference room, point to a flipchart with &#8220;~10x Trump&#8221; written on it and say &#8220;aucunes id&#233;es, mecs?&#8221;</p><p>This is a real problem with scenario planning of all sorts; it is just intrinsically difficult to take it seriously enough. In any kind of simulation exercise, you have one big difference from the real thing, which is simply that you know you are in a simulation exercise, and that you aren&#8217;t actually facing the end of the world. As boxers will tell you, there&#8217;s all the difference in the world between sparring and fighting, the main difference being that in sparring, when you can see that someone is having trouble, you ease up rather than going harder.</p><p>And anyone who has ever played around with a brokerage or spread betting account knows that paper trading is surprisingly uninformative about how you do with real money. The big reason being that jeopardy affects your decision making process. Usually in the direction of making it worse. For a few decades now, I have been objecting to one common practice of regulators, which is to allow banks to include the effects of &#8220;mitigating actions&#8221; in their stress test scenarios. 
The historical record shows that it is at least as common for management teams to take <em>exacerbating</em> actions when big crises hit.</p><p>People do know these things. I happen to know that when some big organisations carry out their periodic cyber risk planning and simulation exercises, it is not uncommon for them to bring in consultants who have worked in reality television, and who are experts on creating an atmosphere of stress, conflict and poor decision making<a href="#_ftn1">[1]</a>.</p><p>Which is a good start. Although the everyday business of risk management is all about consistency and meticulousness, it&#8217;s important to make sure that in planning for and managing the &#8220;non-everyday&#8221; kinds of risks (which are really the only ones that matter), some sense of chaos and weirdness is maintained. As well as quants and accountants, risk management should have dramaturges and clowns. At the very least, we should take measures to ensure that everybody in these discussions is out of their comfort zone. Make it a requirement that chapter three of every risk report has to be sung out loud, or require that all data analysis has to be carried out in the nude, or get the supervisor to fire flares at the windows or something.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> There are a <strong>lot</strong> of tricks of the trade; the big manual containing them is part of what you buy when you licence a format. I once went to a talk with someone who had worked on &#8220;Celebrity Masterchef Australia&#8221; or some such, who said that at the very first meeting, she had realised to her dismay that all the contestants seemed to know each other and be friends in a way that wouldn&#8217;t make good telly. &#8220;Right&#8221;, she said. &#8220;Let&#8217;s get started. Please could you line up, left to right, in order of how famous you are&#8221;.</p>]]></content:encoded></item><item><title><![CDATA[new new rules for the new new economy]]></title><description><![CDATA[drunken stumbles toward an economics of AI]]></description><link>https://backofmind.substack.com/p/new-new-rules-for-the-new-new-economy</link><guid isPermaLink="false">https://backofmind.substack.com/p/new-new-rules-for-the-new-new-economy</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 13 Mar 2026 15:18:11 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>As promised on Wednesday, here are some notes in the direction of what I think is the most important point in my &#8220;<a 
href="https://backofmind.substack.com/p/toward-a-sensible-ai-skepticism">toward a sensible AI scepticism</a>&#8221; post from last year:</p><blockquote><p><em>There&#8217;s also a very important role for scepticism that AI is in some way or other outside the price mechanism or the normal priorities of political economy. This is particularly obvious when someone suggests we should forget about some obviously crucial issue because the AGI will solve it for us, but it&#8217;s also in my view perfectly sensible to be sceptical about future economic benefits, whether they will in fact justify current venture capital investments and whether projects which aren&#8217;t economically viable without subsidies and exemptions from environmental or social regulation should be made so because they&#8217;re AI.</em></p></blockquote><p>I don&#8217;t think it&#8217;s either possible or worthwhile to launch a huge project trying to put numbers on things by going through SEC filings and the like. For one thing, the really important quantities aren&#8217;t going to be in the accounts; if they were, then you have the problem that accounting standards don&#8217;t always match up to business reality; and if you solve that, then congratulations, you took a snapshot of something that&#8217;s changing rapidly.</p><p>But I do think it&#8217;s worth spending a short while thinking about the <em>kinds</em> of numbers that you would want to know, putting order-of-magnitude bounds on them and comparing them to other industries. Basically trying to do the analytical job of asking &#8220;what sort of a business is this? Is it like a gold mine, or like an airline? How do the costs and revenues scale with demand? 
In what conditions does it do well or badly?&#8221; The <em>structure</em> of a model is more important than the numbers plugged in.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Dan Davies - "Back of Mind" is a reader-supported publication. it will probably move on to other subjects for a while, having done rather a lot on AI recently, sorry</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>I think, along these lines, that there are two big questions to ask &#8211; what do the marginal cost economics of AI look like, and what is the equilibrium capex? I&#8217;ll take the second one first.</p><p>Over in one of <a href="https://www.efinancialcareers.com/news/goldman-sachs-tmt-bankers-restructuring">my other secret identities</a>, I&#8217;ve been covering this as a banking sector personnel issue. A number of investment banks have reorganised their tech teams to reflect the kinds of financial needs that different clients have. Goldman Sachs, for example, now has a head (well, two <a href="https://www.ft.com/content/2ba20e5d-c8ea-4b7c-8768-88b3cc18615d">co-heads</a>) of &#8220;Global Internet and Media&#8221; and of &#8220;Global Technology Infrastructure&#8221;.</p><p>Why? Well, the economics of AI seems to be the economics of datacentres. 
And a datacentre is a big capital asset which needs a lot of power and cooling, not a <a href="https://backofmind.substack.com/p/just-a-few-little-satanic-mills">weightless</a> creature of pure mathematics. (In Henry Farrell&#8217;s great phrase, &#8220;when software eats the world, what comes out the other end?&#8221;). Big sheds with expensive machines in them are the sort of thing that you historically finance with debt rather than equity, and they tend to need a hell of a lot of capital to be raised rather than a few million dollars of VC.</p><p>This isn&#8217;t entirely new; the period that we remember as the &#8220;dot com bubble&#8221; was actually at least half a &#8220;telecoms bubble&#8221;, in which investors&#8217; money was financing not just web applications, but also people to dig up roads and put fibre-optic cables down.</p><p>But it strikes me as important that, unlike fibre optic cable, data centres have an economically important depreciation life. The longest-lived piece of capex is probably the shed itself. It is hard to get a straight answer about how long the GPU chips last (because the accounting depreciation is going to be mainly driven by obsolescence and the replacement cycle), but the best estimates I can find suggest that it&#8217;s under a decade best case, and potentially as short as five years if you really thrash them by doing training work. (Training an LLM is a lot more computation-intensive, and therefore power and heat intensive, than inference, so it physically degrades the chips faster). And the cooling system has literal moving parts.</p><p>That matters for the long-term economics. During the 00s, we talked quite a bit about &#8220;dark fiber&#8221;, in the sense of cable that had been laid well in excess of any reasonable estimate of the demand for bandwidth. 
Hand on heart, I never took this scepticism seriously; it seemed to me that it would all get used eventually, and that even if it wasn&#8217;t, the real expense in laying cable was digging the road up (or sailing the special boat across the Atlantic), so you might as well put in a big margin. We are still using the cable laid in the 00s today, and can expect to do so for decades to come. If datacentre capex is physically degraded within ten years, then it matters a lot more if there&#8217;s too much of it.</p><p>So much for capex. What about margins?</p><p>Here I am treading lightly, because it is difficult. Costs and pricing are expressed per &#8220;token&#8221;, but the published data immediately seems to admit that this is a bad choice of unit because it costs a lot more to output a token than to input one. It seems to me that the actual marginal quantity being produced and consumed is &#8220;processing power&#8221;, which is apparently measured in gigawatt hours these days. In any case, I think more than anything this vindicates my original decision not to get too precise. As my old dad used to say, if something isn&#8217;t worth doing, it&#8217;s not worth doing properly.</p><p>The fact that datacentre capacity is measured in gigawatts suggests that there is a marginal cost here which is unlike the &#8220;too cheap to meter&#8221; economics which underwrote the original &#8220;<a href="https://www.amazon.co.uk/Information-Rules-Strategic-Network-Economy/dp/087584863X">Information Economy</a>&#8221; of Shapiro and Varian. Messing around in pricing sheets and consultant reports, I get the understanding that Anthropic charges &#8220;a few dollars per million tokens&#8221; and that a Claude Code query typically uses a five-figure number of tokens. 
And so, ruthlessly ignoring the input versus output questions, I arrive at the belief that the cost to the buyer of asking an LLM to do a commercially meaningful task and getting a commercially useful result is in the order &#8220;a few cents, maybe as much as a dollar or two&#8221;.</p><p>There is a temptation to start guesstimating profit margins and trying to say that the marginal cost to produce LLM services is also therefore &#8220;a few cents&#8221;. But I am wary of doing so. On the one hand, the current pricing sheet might be considerably subsidised because of management and VCs assuming that the old Shapiro/Varian rules apply and that they need to establish a &#8220;<a href="https://backofmind.substack.com/p/stuck-in-the-moat">moat</a>&#8221; made out of &#8220;network effects&#8221; in order to lock in customers for future gouging.</p><p>On the other hand, to the extent that the price is related to the costs at all, it will have some relationship to overhead costs as well. (I&#8217;ll note in passing that the difference between the economic and accounting concepts of &#8220;marginal costs&#8221; is a whole nother rabbit hole here). As I mentioned above, training and inference seem to have different cost economics. Developing models consumes more power and runs down your GPUs a lot more expensively than using them.</p><p>Which kind of worries me a little. You might be tempted to say that &#8220;this is good, means that once the models are trained, which can be done a lot cheaper than current industry practice, look at <a href="https://www.theregister.com/2025/09/19/deepseek_cost_train/">DeepSeek</a>, we will be back to territory quite close to too-cheap-to-meter, this is web 1.0 economics really&#8221;. But &#8230; where is the equilibrium in which there is much less expenditure on model training?</p><p>I suspect it might not be there. There&#8217;s always going to be a temptation to upgrade the model and take market share. 
There&#8217;s a considerable risk, as I see it, that AI might have the lethal economics which characterises airlines and media &#8211; very low marginal costs, very high overheads, lots of expensive capex. In that sort of environment, people go bust a lot, because there always seems to be a big player who didn&#8217;t like their market share last year, competing against a big player who has ambitions to be the last one standing.</p><p>I haven&#8217;t got into stock market valuations here, but it seems to me that the path to profit is a bit more convoluted than people might think. And if the big players are using their own models to give them strategic advice, they might need to worry that the <a href="https://www.kcl.ac.uk/news/artificial-intelligence-under-nuclear-pressure-first-large-scale-kings-study-reveals-how-ai-models-reason-and-escalate-under-crisis">bias toward aggression</a> is just as disastrous in industrial economics as it is in any other kind of deterrence model.</p>]]></content:encoded></item><item><title><![CDATA[Heads up - kindle daily deal! 
(Delete if you already own "The Unaccountability Machine!")]]></title><description><![CDATA[Sorry to bother you in between proper posts, but it just struck me that not necessarily everyone has a copy of my last book, and the electronic version is currently on sale at an absolutely uneconomic price.]]></description><link>https://backofmind.substack.com/p/heads-up-kindle-daily-deal-delete</link><guid isPermaLink="false">https://backofmind.substack.com/p/heads-up-kindle-daily-deal-delete</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Thu, 12 Mar 2026 11:17:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sorry to bother you in between proper posts, but it just struck me that not necessarily everyone has a copy of my last book, and the electronic version is currently on sale at an absolutely uneconomic price. The UK and US links are below. 
Normal service will be resumed shortly, thanks very much everybody!</p><p>https://www.amazon.co.uk/Unaccountability-Machine-Systems-Terrible-Decisions-ebook/dp/B0CGFWBFD6?dplnkId=3012f34b-4269-4a12-b374-1ff5622b7045</p><p></p><p>https://www.amazon.com/Unaccountability-Machine-Systems-Terrible-Decisions-ebook/dp/B0CGFWBFD6?dplnkId=caddc29f-0b19-4809-b856-1db8914a28ce</p>]]></content:encoded></item><item><title><![CDATA[the misaligned organisation]]></title><description><![CDATA[continuing to worry at a fascinating bone]]></description><link>https://backofmind.substack.com/p/the-misaligned-organisation</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-misaligned-organisation</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 11 Mar 2026 15:23:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This week&#8217;s order will be reversed due to catching up - a bit of a philosophical joke post today, and something more substantial on the economics of AI on Friday. This is catching up with an issue I discussed on the social media with a couple of friends yesterday, but which I think has wider interest &#8230;</p><p>Now the <a href="https://www.nytimes.com/2026/03/10/opinion/ai-chatbots-virtue-vice.html?unlocked_article_code=1.SFA.LrMB.CaBlbdR7TOHa&amp;smid=nytcore-ios-share">New York Times</a> has caught up with the idea of &#8220;<a href="https://backofmind.substack.com/p/everything-i-dislike-is-indefinably">emergent misalignment</a>&#8221; that we were talking about a few weeks ago. (Capsule summary &#8211; if you take a general purpose LLM, and then specifically train it on examples of badly written or insecure computer code, it doesn&#8217;t just learn bad programming habits. 
It also starts to give bad medical advice, to give bad responses to ethical questions and to admire Hitler).</p><p>I think the op-ed does quite a good job of taking this seriously as a phenomenon; that there is seemingly some kind of &#8220;shape&#8221; to the vector space of tokens, and that the unimaginably vast dataset of content scraped from the Web has a sort of principal component that can be interpreted as &#8220;good/bad&#8221;. I am not sure about all the virtue ethics stuff (as <a href="https://bsky.app/profile/crookedfootball.bsky.social/post/3mgpt44wiwc2r">Chris points out</a>, the whole point of virtue ethics is that morality can&#8217;t be reduced to an algorithm, and as <a href="https://bsky.app/profile/bweatherson.bsky.social/post/3mgph3p4cgc2v">Brian says</a>, &#8220;I have days when I make lots of coding errors, but I don&#8217;t think I feel more Nazi on those days&#8221;).</p><p>But this shape to the data is not in any way meaningless &#8211; as I said in the last post, although I think everyone had kind of guessed that the &#8220;anti-woke&#8221; vector points in the direction of &#8220;Nazi&#8221; rather than the direction of &#8220;free speech absolutist&#8221;, it&#8217;s quite interesting to know that this is literally mathematically true. And although I&#8217;m not personally convinced by the idea I raised in my last post, that if you discovered that one of your views tends to cluster with the &#8220;bad&#8221; group you should reconsider it, I think it&#8217;s a serious challenge; maybe you&#8217;re just a unique and heterodox thinker, but maybe it&#8217;s just a prejudice and the thing about that distinction is that you&#8217;re probably not well placed to make the judgement call.</p><p>What&#8217;s on my mind after reading it again though, is that this is an empirical fact about non-human data processing systems implemented as neural networks. 
Is it a fact about the neural network algorithm specifically, or is it something that is generally true of things which make decisions by processing data? Specifically, since I&#8217;ve argued in print that organisations, corporations and governments can be seen as &#8220;artificial intelligences&#8221;, in the sense that they&#8217;re non human decision making systems, do they have this property of emergent misalignment?</p><p>I think you could make a reasonable empirical argument that they do. The things which make organisations dumber and worse at operational and technical functions (lack of resources, poor internal communication, low morale) also do seem to make them more callous and unethical. The example at the top of my mind is the Home Office, but I&#8217;m sure there are others. (JK Galbraith once told a colleague who had been offered a job at the State Department that &#8220;you will find that State is the kind of organisation which, although it does small things badly, does big things badly too&#8221;.)</p><p>And although it very much feels like an excuse (&#8220;lack of resources&#8221; is a terrible accountability sink), I think it could even be argued that there is a causal link between organisations being bad at doing things, and being bad in the sense of doing bad things. One way to describe the kind of thing I talk about a lot in &#8220;The Unaccountability Machine&#8221; is that breaking links of accountability and creating policies which have inhumane effects when applied to real world cases, are all strategies by which overloaded administrators and systems try to manage their stress. The cognitive dissonance caused by being a good person in a bad system is immense, and one way to reduce it is to stop being such a good person. 
As someone at Fox News said in the aftermath of January 6<sup>th</sup> 2021, &#8220;bad ratings make good journalists do bad things&#8221;.</p><p>Writing this down, I think it&#8217;s unconvincing to say that there is some general law of virtue, connecting competence and morality in the way that the New York Times author seems to be hinting at. I&#8217;ve sketched out a causal mechanism whereby the two might be linked in organisations, but it&#8217;s not one which could work for the LLM case; the neural network isn&#8217;t under any more or less stress when it&#8217;s trained to write bad code.</p><p>So it might just be an empirical coincidence. Unless, I suppose, the corpus of training data was produced under such conditions as to import the relationship between information overload, unaccountability and general badness into the token space. Which I still think is a bit too speculative; what do you guys think? Anyway, happy Wednesday.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[the fudge must flow]]></title><description><![CDATA[tolerance for ambiguity in investment]]></description><link>https://backofmind.substack.com/p/the-fudge-must-flow</link><guid isPermaLink="false">https://backofmind.substack.com/p/the-fudge-must-flow</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 06 Mar 2026 15:28:49 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>And so, I&#8217;m travelling a bit so my thoughts are slightly scattered. But I want to respond to a few comments on <a href="https://backofmind.substack.com/p/what-is-real-and-what-is-fudge">last week&#8217;s post</a>, in which people did their level best to suggest ways that cost-benefit analysis and net present value modelling could compromise between the (presumably desirable) unbiased estimates and the (regrettably necessary) fudge factors. As long term readers will know, I do not necessarily agree that fudge factors are bad<a href="#_ftn1">[1]</a>. But that&#8217;s not the real source of my discomfort with the idea that the use of fudge factors can be tamed in this way.</p><p>The problem is, in my view, that <em>the distinction between an estimate and a fudge factor is itself a decision</em>. And because it&#8217;s a decision, it&#8217;s also subject to fudge factors. The creation of data is a process, in which all sorts of compromises always have to be made.</p><p>A couple of years ago, I did a <a href="https://backofmind.substack.com/p/how-to-get-fired-by-me?utm_source=publication-search">Friday joke post</a> grumbling about common things people say during modelling, like &#8220;this data is a bit misleading, but it&#8217;s all we&#8217;ve got&#8221; and &#8220;the estimation method is pretty fragile, but it&#8217;s better than nothing&#8221;. I claimed at the time that I would never countenance such practices.</p><p>But what if I was a little bit less pure and scrupulously ethical than I am? Ignoble thought, what if I were to <em>look at the results</em> and then decide whether I was going to go all How Very Dare You, or just &#8220;yeah, not the best but I&#8217;ll allow it&#8221;. 
By taking the &#8220;for my friends, the utmost of accommodations, for my enemies, the law&#8221; approach to data, I can put quite a substantial fudge factor into the model without ever leaving any fingerprints.</p><p>I will push this a bit further. Even in the absence of manipulation &#8211; in fact, let&#8217;s stipulate that there&#8217;s not even any of the subconscious finger-on-scales effect which motivated the invention of double blinding in medical trials &#8211; the boundary between estimate and fudge is not clear. If you don&#8217;t want to allow straightforwardly identified &#8220;fudge factor&#8221; lines in a model, or if you excessively stigmatise the fudge factor, then actually, your investment strategy is being determined by your tolerance for ambiguity. Which is to say, the easiest way to get rid of fudge factors is to be really loosey-goosey about what you are going to allow into the estimates. But of course, this is psychologically difficult to do.</p><p>And here&#8217;s a random semi-related punchline. In this series of posts, both I and everyone in the comments have, I think, been kind of implicitly assuming that the main use of fudge factors is to make projects look better than they otherwise would. But that&#8217;s not necessarily true at all. His Majesty&#8217;s Treasury, in some cases, applies a negative ten per cent &#8220;optimism bias removal&#8221; to investment analyses drawn up by spending departments. Call that what it is.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Dan Davies - "Back of Mind" is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div><hr></div><p><a href="#_ftnref1">[1]</a> Capsule summary for recent arrivals &#8211; fudge factors, as well as being a way in which bad managers can ignore reality in favour of arbitrary gut feelings, are often the only way that good managers can make a model take into account information which, although important, is not the sort of information which lends itself to being expressed in terms of the parameters of a spreadsheet model.</p>]]></content:encoded></item><item><title><![CDATA[a failure of sense making]]></title><description><![CDATA[new book alert]]></description><link>https://backofmind.substack.com/p/a-failure-of-sense-making</link><guid isPermaLink="false">https://backofmind.substack.com/p/a-failure-of-sense-making</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 04 Mar 2026 18:30:12 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!B1XV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Sorry for the non-appearance of Friday&#8217;s post last week &#8211; I got a stinking cold. I now have a bit of a backlog, because I want to respond to comments on that, but I also want to talk about actual uses of AI in the wild, and such like. 
But I&#8217;m delaying these because there&#8217;s a new book coming out which I saw an advance copy of and which is now available for pre-order, and it&#8217;s really good.</p><p><a href="https://www.amazon.com/Crisis-Engineering-Time-Tested-Turning-Clarity/dp/0306836866">https://www.amazon.com/Crisis-Engineering-Time-Tested-Turning-Clarity/dp/0306836866</a></p><p>It&#8217;s called &#8220;Crisis Engineering&#8221;, and it&#8217;s by Marina Nitze (who used to run the Department of Veterans&#8217; Affairs technology team that&#8217;s discussed in <a href="https://backofmind.substack.com/p/last-chapters-and-how-to-avoid-them">Recoding America</a>) and two of her colleagues. For the most part, it&#8217;s a practical handbook of &#8220;what you need to do in a crisis&#8221;, but for that reason it&#8217;s really a book about &#8220;what you need to know about crises in order to react well when you&#8217;re in one&#8221;.</p><p>For a while now, I&#8217;ve had this recurring comedy bit that the reason you ought to take Stafford Beer&#8217;s management cybernetics seriously is that when left to themselves, intelligent engineers faced with a management problem will almost always reinvent about fifty per cent of &#8220;Brain of the Firm&#8221; without ever having heard of it. I would say that &#8220;Crisis Engineering&#8221; is very much a book in that tradition.</p><p>The title of this post is taken from what I see as the central idea of the book &#8211; that a crisis is a &#8220;failure of sense-making&#8221;. That&#8217;s what distinguishes a <em>crisis</em> from things just generally sucking a bit. It&#8217;s the same thing that Stafford Beer was reaching for when he said that &#8220;What counts as a crisis is the expectation of loss of control; in other words, cybernetic breakdown in an institution.&#8221;</p><p>In other words, crisis is a state of exception. It&#8217;s defined as a situation in which doing the normal things will not produce the normal results. 
It&#8217;s a restructuring of the black box, if you will, the connection of inputs to outputs. The root of a crisis is often a past mistake of information architecture; something which you had &#8220;attenuated&#8221; in order to pay attention to the things which really matter, turns out to really, really matter. (The book has a really nice discussion of the Three Mile Island disaster, which can be traced to a stuck steam valve).</p><p>And so, the crucial step in crisis engineering is to re-establish a common view which corresponds to reality &#8211; to restore sense-making. Once that step has been taken, actually solving the problem becomes a tractable task, and without it nothing is going to work. There&#8217;s a lot of specific advice on how you can go about doing this, organising teams to do so, and so on, because it&#8217;s a practical handbook, but I think that&#8217;s the big philosophical point.</p><p>It tracks with my personal experience of financial crises. As I think I&#8217;ve mentioned before on this &#8216;stack, this was my niche when I was a banker. In normal conditions I was barely able to do the job, but when the world went mad there were few that could touch me. (The joke always was that, in the words of one market-maker I worked with &#8220;the thing I like about Dan is that he&#8217;s crap, but when it all goes to hell, he doesn&#8217;t get any worse&#8221;). But the two were linked. The reason that I prospered in crises was exactly that I spent all my time chasing up weird and interesting-looking trivia instead of concentrating on doing the job properly. Which meant that I was able to re-establish sense-making quicker.</p><p>Which means that I think the underlying moral (which I am going to write a lot more about, not least because I have a book contract to do so!) is that the crucial step in reacting to a crisis is that of <em>understanding that you are in a crisis</em>. 
The important property of the system is the ability to notice a discrepancy between the world and its mental model, and to take it seriously. Almost every post-mortem of an industrial tragedy seems to begin with something ignored which is retrospectively obvious.</p><p>Anyway, this marks something of a record for me, having not only written a book review without procrastination, but done so before it actually came out! It&#8217;s a very good book &#8211; my blurb quote is &#8220;This is the book I wish every single boss I ever worked for had on their desk&#8221;, but there are much better recommendations from much more prestigious people. I&#8217;ll leave you with Stafford Beer&#8217;s guide to managing a crisis. I&#8217;ve posted it before, but I&#8217;ll probably post it again because it&#8217;s so fantastic.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B1XV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B1XV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!B1XV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B1XV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B1XV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B1XV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg" width="1296" height="2048" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:2048,&quot;width&quot;:1296,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:421665,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://backofmind.substack.com/i/189906045?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!B1XV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B1XV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B1XV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B1XV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc02b75e6-ce90-4f7b-aeb2-30f909718bc8_1296x2048.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><p></p>]]></content:encoded></item><item><title><![CDATA[what is real and what is fudge]]></title><description><![CDATA[drawing a veil over the whole sordid business]]></description><link>https://backofmind.substack.com/p/what-is-real-and-what-is-fudge</link><guid isPermaLink="false">https://backofmind.substack.com/p/what-is-real-and-what-is-fudge</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 25 Feb 2026 14:17:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>While writing my current book, &#8220;The Problem Factory&#8221;, I&#8217;ve been drawn into an extended discussion of net present value analysis (basically cost/benefit analysis, in financial contexts and with a bit of drama about the cost of capital). I&#8217;ve discovered, in passing, that the <a href="https://brianalvey.com/2022/01/06/the-most-popular-software-for-writing-fiction-isnt-word-its-excel/">original author</a> of the joke that &#8220;more fiction is written in Microsoft Excel than Microsoft Word&#8221; is understandably salty about not getting full credit for it. 
And I&#8217;ve hit a quite interesting question, where I&#8217;m no longer sure of my own opinion.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Dan Davies - "Back of Mind" is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><p>The question being &#8211; there are going to be fudge factors<a href="#_ftn1">[1]</a> in every spreadsheet model. There have to be, otherwise you will systematically make wrong decisions. (You need them to take account of difficult-to-model but important things like strategic flexibility, optionality, brand values and so on). So, do you want the fudge factors to be fudging the normal model parameters (revenue, costs, growth etc)? Or are you going to add a special line for &#8220;Other Benefits and Costs&#8221; that is there explicitly as a fudge factor?</p><p>This decision can&#8217;t be avoided, because if you don&#8217;t do the second, you <em>will</em> get the first. A lot of the skill of a good analyst is being able to flex the model outcome by at least 20% without anyone being able to tell how you did it. 
This can&#8217;t be engineered out of the system, not least because it <em>shouldn&#8217;t</em> be engineered out of the system &#8211; as I <a href="https://backofmind.substack.com/p/how-and-why-to-lie-with-spreadsheets">argued a while ago</a>, fudging the model is the way in which good managers put back the information which bad accounting systems have taken out.</p><p>Which would militate in favour of &#8220;let&#8217;s have it out in the open, have a dedicated fudge factor in the model and keep all the actual parameters straight&#8221;. The idea here would be that transparency keeps the use of fudge factors under control &#8211; you can see whether it&#8217;s the fudge factor making the difference between a &#8220;yes&#8221; and a &#8220;no&#8221;, and you can argue about how big the fudge should be. I started off thinking that this was obviously the superior method; arguments about the model are always basically arguments about the project itself, so it makes sense to make that explicit rather than having everyone hide their cards behind model parameters.</p><p>I&#8217;m not as sure as I was after writing it down, though. Putting the fudges into a specific line is always going to stigmatise them. The whole <em>point</em> of the &#8220;other non-modellable costs and benefits&#8221; line is that it might make the difference, otherwise what&#8217;s the point of doing it? If you are including a separate line to incorporate information that&#8217;s not easily expressed in terms of the model parameters, then you have to accept that if this information is relevant to your decision, it has to be possible that it will change the answer.</p><p>And really, is it the worst thing in the world if someone puts a little bit of spin on the ball in making assumptions that are subject to huge uncertainty anyway? 
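To make the two styles concrete, here is a minimal sketch &#8211; every figure is invented, and Python is standing in for the spreadsheet &#8211; of a toy NPV appraisal flexed both ways:

```python
# Toy NPV appraisal -- every figure here is invented for illustration.

def npv(cashflows, rate):
    """Net present value of annual cashflows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

rate = 0.10
base = [-1000] + [250] * 5        # honest central-case forecast

# Style 1: hidden fudge -- quietly flex a "real" parameter.
optimistic = [-1000] + [280] * 5  # "revenue will grow a bit faster"

# Style 2: explicit fudge -- keep the parameters straight and add a
# visible "Other Benefits and Costs" line for the non-modellable stuff.
other_benefits = 120              # strategic flexibility, brand value etc.

print(round(npv(base, rate), 2))                   # -52.3  -> a "no"
print(round(npv(optimistic, rate), 2))             # 61.42  -> a quiet "yes"
print(round(npv(base, rate) + other_benefits, 2))  # 67.7   -> an arguable "yes"
```

In the second style the fudge is at least visible: anyone reading the model can see that it is the &#8220;other benefits&#8221; line, not the forecast, that turns a &#8220;no&#8221; into a &#8220;yes&#8221; &#8211; which is exactly the trade-off being discussed here.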
The good thing about having the fudge factors in the model parameters is that there is a sort of automatic cross-check on them &#8211; you can fudge, but only to the extent that your fudging still delivers a model that you can stand up and present as an unbiased forecast without embarrassing yourself. Conducting the debate about a decision in terms of technical disagreements over input values in an agreed model is often a very hypocritical way to go about a highly subjective and politicised process, but hypocrisy is not the worst sin. &#8220;Bringing disagreements out into the open&#8221; might create an adversarial context where one doesn&#8217;t need to exist. I&#8217;m not at all convinced by this argument either, but at least now I see why people go through what any objective observer would have to recognise as a charade.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> Of course, the definition of a &#8220;fudge factor&#8221; is itself imprecise. I&#8217;m using it to refer to any parameter of a model that is set based on subjective assessment, and which is either actually or potentially used in a goal-directed way to adjust the outcome. The idea being, I guess, that if you don&#8217;t need to look at the final cell of the model in order to decide what value to type into a cell, that cell isn&#8217;t a fudge factor. 
Everything in a model is <em>potentially</em> a fudge factor; whether something actually is one in a given case comes down to whether you&#8217;re using it fudgily.</p>]]></content:encoded></item><item><title><![CDATA[finally we have created the silver bullet]]></title><description><![CDATA[from Fred Brooks' classic essay "No Silver Bullet"]]></description><link>https://backofmind.substack.com/p/finally-we-have-created-the-silver</link><guid isPermaLink="false">https://backofmind.substack.com/p/finally-we-have-created-the-silver</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 20 Feb 2026 14:03:42 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I was going to do something else this Friday, but the relentless tide of &#8220;AI is coming for your job, AI is going to cause mass unemployment, what will you do when AI makes you obsolete&#8221; articles has provoked me sufficiently (I won&#8217;t link to them as there are so many and I&#8217;m not picking fights). 
Basically, as I said on social media, if your best idea for what AI can do in the workspace is &#8220;replace a hundred human beings with a server rack doing the same thing&#8221;, you&#8217;ve got no business calling yourself a techno-optimist<a href="#_ftn1">[1]</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p><p>In fact, I&#8217;m so angry I&#8217;m going to write a bullet point list because there are so many unconnected mistakes being made.</p><ul><li><p>Probably most importantly, <em>unemployment is not an equilibrium</em> (even Keynes ended up having to agree with Pigou on this). If there is thirty per cent of the workforce willing to work but unable to find a job, that means someone can employ them and get rich. If nobody can think of how to employ several millions of educated workers, then maybe ask the artificial intelligence if you think it&#8217;s so smart.</p></li></ul><p><strong>(Caveat)</strong>. As you can see from my mention of Keynes above, transitory or cyclical unemployment can last long enough to be unpleasant and have bad consequences. But this is not a new economic policy problem!</p><ul><li><p>Another point at the macro level &#8211; <em>investment is made in the anticipation of profit</em>. We can&#8217;t get to a situation where investment in technology puts 30% of the population out of work, simply because once it&#8217;s put 20% of the population out of work we are in a historic Great Depression and nobody is investing in <em>anything</em> any more.</p></li></ul><p><strong>(Non-caveat).</strong> &#8220;Oh but Danny silicon valley VCs don&#8217;t think that way&#8221;. I disagree. 
For one thing, yes they do, they just think that if they hyperscale they can deter others and develop a monopoly. (If anything this might work the other way; it is a bit rich for <em><a href="https://decrypt.co/357967/microsoft-ai-chief-two-years-ai-to-automate-white-collar-jobs">Microsoft</a>, </em>with its track record of using FUD and bullying to stop any new technology challenging its monopoly on selling software to middle managers, to say that AI will make middle managers obsolete). For another, &#8220;investment&#8221; isn&#8217;t just &#8220;overpaying for startup equity&#8221;. Datacentres have to be built, connected to the grid and cooled; real resources have to be diverted to investment rather than consumption, and this doesn&#8217;t happen when there&#8217;s no clear path to selling the output.</p><ul><li><p>I have argued in the past that people are overestimating the organisational-level benefits of AI because they are extrapolating from individual experiences, and speeding up production behind a bottleneck doesn&#8217;t increase output (although it might reduce it). But one thing I haven&#8217;t emphasised enough is that bottlenecks are not natural obstacles &#8211; they are, in most cases, <em>the consequence of increasing production until you hit a bottleneck</em>. If AI removes a bunch of bottlenecks, that won&#8217;t be used to produce the same output faster and cheaper, it will be used to produce a lot more output until a new bottleneck is reached and requires human intervention. (Weirdly, there was a two-week period after the announcement of DeepSeek when all the techbros were wailing at their share prices and shouting &#8220;it&#8217;s Jevons Paradox you idiots&#8221;, but this got forgotten really quickly).</p></li></ul><p>And competitive equilibrium is likely to mean that this will happen sooner rather than later. 
Like <a href="https://www.netinterest.co/p/excel-forever">Marc Rubinstein</a>, I&#8217;ve been really impressed at the ability of an LLM to make a spreadsheet financial model in a few minutes rather than taking a few hours. But &#8230; that just means that you spend a few more hours tweaking the model. Because if you don&#8217;t, then your competition will; what this means is that you can no longer sell a spreadsheet model that doesn&#8217;t have a lot of industry knowledge built in. Something which was always a bit commodified is now completely valueless without having at least as much human input into adding non-data insights to it. Again, people who spend a load of time in other contexts talking about building &#8220;moats&#8221; seem to think firms will forget about the importance of this when they get a bit of AI.</p><ul><li><p>Even the individual level anecdotes don&#8217;t, if you look at them carefully, support the labour-replacing predictions anything like as strongly as one might think. For example, take <a href="https://newsletter.mikekonczal.com/p/three-ways-terminal-ai-has-changed?utm_source=post-email-title&amp;publication_id=67575&amp;post_id=187880028&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=i90k&amp;triedRedirect=true&amp;utm_medium=email">Mike Konczal&#8217;s &#8220;Me And My AI&#8221; post</a>. He&#8217;s sped up his workflow, and used the extra productivity to start following up lots of little ideas that he otherwise wouldn&#8217;t have the time to do. But &#8230; either these ideas will be dead ends (in which case no harm done but no benefit either), or they will be productive new projects (in which case, that looks like it&#8217;s going to generate more work for Mike, not less). 
Seriously, read that post and ask yourself &#8211; does this look like a path which is going to lead to Mike making one of his colleagues redundant because he can do their work as well as his own, or a path that&#8217;s going to lead to him trying to hire another colleague to do his current work while he follows up his new projects?</p></li></ul><p>Which gets me to the crux; I gave this post that title intentionally, because what the AI-employment-doomers seem to actually believe is &#8220;at last, we have invented the mythical man-month, from Fred Brooks&#8217; famous essay The Mythical Man-Month&#8221;. Labour-time isn&#8217;t fungible. In most cases, sparing me half an hour on my job doesn&#8217;t mean that I can pick up half an hour of my desk-neighbour&#8217;s. (In fact, reorganising your processes to make something like this even slightly possible is an <a href="https://backofmind.substack.com/p/dancing-martial-arts-masters-of-the?utm_source=publication-search">incredibly difficult and often traumatic business</a>).</p><p>Time isn&#8217;t even necessarily fungible in my <em>own</em> job. As I <a href="https://backofmind.substack.com/p/how-to-make-your-organisation-dumber">mentioned a few posts ago</a>, I have now set up my workflow so that I can look up references to European banking regulation really quickly. It&#8217;s great, I would never go back. But what I seem to be finding is that apparently I used to multitask a little bit; while looking for references, I would be thinking about what the reference was needed for and what I was going to say about it once I found it.</p><p>Now, it is massively nicer to have the ref immediately and then have ten minutes thinking time, rather than CTRL-F&#8217;ing and blinding in frustration for ten minutes then going &#8220;yep that&#8217;s what I wanted&#8221;. But it&#8217;s still the same ten minutes. 
I wrote in the past about <a href="https://backofmind.substack.com/p/a-more-subtle-cost-disease?utm_source=publication-search">workplace leisure</a>, and the fact that most office time is always going to be wasted because of the nature of the process. It seems to me that the main effect of AI is likely to be that routine administrative tasks will become <em>less tedious</em> and white collar jobs <em>more pleasant,</em> rather than leading to any less demand. There&#8217;s techno-optimism for you!</p><p>Well, what that lacked in brevity it made up for in incoherence. Normal service will be resumed next week, if the good Lord spares me and the Singularity tarries in its coming. Have a good weekend folks.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> Also directed at &#8220;techno optimists&#8221; who think there is a birthrate crisis and the only solution is &#8220;tradwives and such like&#8221;. Catch yourself on, son, build a robot or something if you like robots so much.</p>]]></content:encoded></item><item><title><![CDATA[three times is political action]]></title><description><![CDATA[the levers not working]]></description><link>https://backofmind.substack.com/p/three-times-is-political-action</link><guid isPermaLink="false">https://backofmind.substack.com/p/three-times-is-political-action</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 18 Feb 2026 18:26:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>This is a bit more &#8220;ripped from the headlines&#8221; than the usual post on this substack, but it&#8217;s a puzzle of British politics which I think might have more general relevance. 
The question that is on my mind is something like &#8220;why are the government losing so many court cases?&#8221;. Or more particularly &#8220;why are the government taking so many silly Ls, by making rookie legal mistakes?&#8221;.</p><p>A few recent cases which for various reasons crossed my threshold in quick succession:</p><p>1) The <a href="https://www.bbc.co.uk/news/articles/cgl88wezkzpo">Woodland Park datacentre</a>. This was given planning permission by the relevant Secretary of State, overruling the local council and saying that there was no need for an environmental impact assessment. Of course, there was, and the government admitted this on the courthouse steps.</p><p>2) The proscription of <a href="https://theconversation.com/palestine-action-why-the-high-court-ruled-against-the-government-and-what-it-means-for-the-future-of-protest-275976">Palestine Action</a> under the terrorism legislation. The Home Office decided to say, in minutes and a press release, that one benefit of doing so was that it would make it much more operationally convenient for the police. In doing so, it ignored its own policy that terrorism legislation has to be used for terrorism reasons, not just to make things easier.</p><p>3) The <a href="https://www.localgovernmentlawyer.co.uk/governance/396-governance-news/99733-legal-advice-ahead-of-divisional-court-hearing-sees-government-ditch-plans-to-postpone-local-elections">local elections fiasco</a>. After having decided to postpone local elections in the areas where councils are going to be wound up and reorganised, the Secretary of State then took a look at his legal advice and realised that once more, it wouldn&#8217;t fly.</p><p>Once is bad luck, twice is coincidence, but the third time is enemy action, as the saying goes. Repeatedly, it seems that the government has got into the habit of taking decisions which it then doesn&#8217;t fancy defending. What is going on?</p><p>All I have are hypotheses. 
It is possible that what we&#8217;re seeing here is arrogance. The environment of &#8220;in front of intelligent neutral parties who can&#8217;t be intimidated, with severe penalties for lying&#8221; is often an uncongenial one in which to defend one&#8217;s decisions. The idea here would be that, having got where it is today by factional bullying and waiting for the intellectual and organisational collapse of the other side, the Labour Party is poorly equipped for doing things without its traditional weapons and advantages.</p><p>But I feel that this might be letting the civil servants and lawyers off the hook too easily. It might be that the professional advisors were bullied and overruled, but why has this started happening so much, so recently? Are Keir Starmer and his team really so much more intimidating than governments of the past?</p><p>Or, is there a problem of state capacity here? I think it&#8217;s also possible that we&#8217;re seeing a combination of factors: hollowing out and juniorisation in a civil and legal service that can&#8217;t pay competitive wages, plus bad habits learned during the pandemic, when it was possible to make calls on team spirit and the <a href="https://www.youtube.com/watch?v=5u8vd_YNbTw">greater good</a> and go well beyond your statutory powers.</p><p>In any case, our prime minister is apparently frustrated at the fact that the &#8220;<a href="https://www.civilserviceworld.com/news/article/starmer-frustrated-with-gap-between-pulling-lever-and-delivery">levers of power</a>&#8221; don&#8217;t work. That&#8217;s always been a bit of an odd metaphor in my view &#8211; there aren&#8217;t many machines that work by pulling levers; I think the image we&#8217;re meant to have is of a railway points-switching box. But it&#8217;s clearly one that&#8217;s now actively misleading. Pulling at levers and dialling things up and down is only a useful way of thinking about government within the normal zone of operation. 
We left that normal zone some time between 2008 and 2016.</p><p>So now, getting things done is more like herding (whatever your favourite difficult-to-herd animal species is). You need to either build support and consensus through politics, or you need to absolutely dot the i&#8217;s and cross the t&#8217;s with respect to the legal statutes giving you the power to act without that consensus. Our government doesn&#8217;t appear to have caught up to this reality, but I think it is a reality which has changed, rather than them just being no good at it. I might be wrong &#8211; I do not have strongly held opinions on this subject &#8211; but I do think I&#8217;ve identified something that&#8217;s going on here.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[how to make your organisation dumber]]></title><description><![CDATA[adding the term &#8220;adversarial context&#8221; to the lexicon]]></description><link>https://backofmind.substack.com/p/how-to-make-your-organisation-dumber</link><guid isPermaLink="false">https://backofmind.substack.com/p/how-to-make-your-organisation-dumber</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Sat, 14 Feb 2026 17:08:43 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Apologies for the non-arrival of yesterday&#8217;s post! 
I got caught between a couple of deadlines and also went down a slight rabbit hole trying to tweak my Google NotebookLM environment. (In case anyone&#8217;s interested, the notebook in question is one that I&#8217;ve uploaded all the important European banking statutes to, so I can use the Gemini AI as a natural language search, to tell me which Article I ought to be looking at, and which past regulatory guidance is relevant to the document I&#8217;m reading. It is actually very good, particularly when I get queries that need a quick response. But, it&#8217;s got one weird quirk &#8211; it tends to get the actual content right, but it is incredibly inaccurate with page numbers of citations. I am coming to the conclusion that this might be related to the &#8220;<a href="https://techcrunch.com/2024/08/27/why-ai-cant-spell-strawberry/">strawberry problem</a>&#8221;; the model uses tokens which are usually somewhat longer than individual words and characters, so it can&#8217;t reliably give you the right paragraph number for the same reason it finds it hard to tell you how many r&#8217;s there are in the word &#8220;strawberry&#8221;).</p><p>But in any case; while writing a script for a presentation, I realised that I&#8217;m increasingly using the phrase &#8220;adversarial context&#8221;, to describe a cybernetic phenomenon that seems to be quite important. Basically, Stafford Beer places huge importance on what he calls &#8220;translation and transduction&#8221;. This is the practice of dedicating resources to places where information has to be transmitted across organisational (or intra-organisational) boundaries. It&#8217;s part of the central problem of management cybernetics &#8211; making sure that information arrives <em>where</em> it can play a part in decisions, <em>in time</em> to be useful and <em>in a form</em> where it can be accepted as input by the decision maker.</p><p>In the general case, organisational boundaries are information-reducing filters. 
But increasingly, I&#8217;m thinking that Beer should have paid more attention to the case where effort is expended on doing the opposite of &#8220;translation and transduction&#8221;. Because I think this is actually quite common.</p><p>A lot of the time, organisations and people have opposing interests, but are meant to communicate information. When this happens, there&#8217;s an incentive to be strategic; to present the information which serves your interests the most, and suppress things which portray your case in a bad light.</p><p>It gets worse, because the existence of those incentives creates what you might think of as a &#8220;market for lemons&#8221; type problem. Everyone thinks that everyone else is doing this, so a) you&#8217;d be a fool not to, and b) you have to discount most of what those other bastards are saying. The adversarial or strategic context makes it difficult or impossible to communicate.</p><p>Potentially this might be quite hopeful, because it means that if you can restructure things to reduce the number of adversarial contexts, your organisation could get a lot smarter without any of the individual people in it getting any less dumb. It also suggests to me that structure and context might matter a lot more than the talent of the individuals when it comes to determining the decision-making ability of organisations.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://backofmind.substack.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://backofmind.substack.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[why this, why now, why not?]]></title><description><![CDATA[the digital euro and theories of action]]></description><link>https://backofmind.substack.com/p/why-this-why-now-why-not</link><guid 
isPermaLink="false">https://backofmind.substack.com/p/why-this-why-now-why-not</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 11 Feb 2026 15:19:07 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Since <a href="https://www.thebritishacademy.ac.uk/documents/6018/Global_DisOrder_-_The_US_Dollar_System_as_a_Source_of_International_Disorder.pdf">Henry&#8217;s and my piece</a> on the global dollar system was published (and my <a href="https://www.ft.com/content/5b9ae74b-4326-4d62-9a08-9049c15a5ef0">FT Alphaville summary</a>, which leaned a bit more on the Euro angle), I&#8217;ve been having some interesting conversations, which I think throw a bit of light on our current crisis. Basically, a lot of people still don&#8217;t understand why the ECB is keen on doing this itself. I think I do, for a number of reasons which go to the heart of much more general questions of state capacity. The Socratic dialogue in my head goes something like this:</p><p><em>If we grant that the Europeans don&#8217;t like being dependent on Visa and Mastercard, why don&#8217;t they just encourage the European banks to develop a local equivalent?</em> (This is a popular objection, which is actually the official position of several European bank trade associations and of the rapporteur on the bill for the European Parliament<a href="#_ftn1">[1]</a>). 
</p><p>To the extent that this isn&#8217;t a case of &#8220;<a href="https://stianstian.medium.com/bionic-duckweed-using-the-future-to-fight-the-present-3e471b642c28">bionic duckweed</a>&#8221; (a cynical objection to a current workable scheme in disguise as advocacy of a hypothetical but probably unachievable future perfect one), I think the obvious reason is that the European banking system has been trying to get their act together on this for ages and keeps coming up with half-ready, half-of-Europe proposals like <a href="https://en.wikipedia.org/wiki/Wero_(payment)">Wero</a>.</p><p>Which isn&#8217;t really to blame them &#8211; trying to dislodge the Visa/MC duopoly is difficult, for obvious reasons of network economics. The special power that the ECB has is to use its legal tender powers to overcome the network economics by mandating takeup. (As part of the current legislation, anywhere in Euroland which accepts digital payments at all will have to accept the digital euro on the same basis).</p><p><em>Well OK, I get that the co-ordination problem is a bit difficult, but that doesn&#8217;t mean it has to be a central bank thing &#8211; couldn&#8217;t you just pass the legislation to mandate it and have it owned by a private sector consortium?</em></p><p>Two reasons why that isn&#8217;t as good a solution. First, if the digital euro and its legal tender status are the basis of the requirement, then any new developments in payments technology can be kept up with just by changing the functionality of the digital euro. If you pass a specific law mandating a particular set of private sector payment rails, then you are going to need to keep amending that law, which at best introduces a load of inertia into the system, and at worst provides opportunities for the whole thing to be torn apart if the political consensus breaks down.</p><p>And more dramatically, what I&#8217;m now going to start calling the &#8220;Washington Post Problem&#8221;. 
If something is in the private sector it can be bought. Anything which can be sold, one day will be sold, and possibly to someone who doesn&#8217;t run it properly. A digital euro that&#8217;s a statutory function of the European Central Bank is the best guarantee of strategic autonomy, precisely because it&#8217;s a digital euro that will always be under the control of the European Central Bank.</p><p><em>But all those things could still be done without having to have everything on the books of the central bank! This is a new thing to do which hasn&#8217;t been tried before! Why does a set of independent payment rails need the central bank to effectively be taking retail deposits? This isn&#8217;t their core competency!</em></p><p>And here we have it &#8211; the thing I regard as the crisis of the age. The answer is, frankly, that the European Central Bank still has a bit of <a href="https://backofmind.substack.com/p/mana-mojo-management">mojo</a> left. It doesn&#8217;t regard &#8220;doing something that&#8217;s new&#8221; as wholly impossible and outside its capacity.</p><p>If you want to do something, &#8220;doing it&#8221; is the <em>simple</em>, straightforward way to get it done. Tendering for outside contractors, drawing up a service level agreement and trying to anticipate all the contractual contingencies &#8230; that&#8217;s the triple-cushion-in-off-the-blue strategy. Just as a plan for building houses which begins with &#8220;put a fence around the site and order bricks&#8221; is simpler than one which begins with &#8220;redefine the duties of local authorities to consider economic growth in planning applications&#8221;.</p><p>The outsourcing and contracting approach is one that was forced on the public sector, first by ideology and subsequently by necessity. It&#8217;s what you do if you don&#8217;t respect your staff&#8217;s ability to deliver, or if you don&#8217;t have the budget to make large capital expenditures. 
An organisation which isn&#8217;t in that position doesn&#8217;t need to make the compromise or take the risk. The hollowing out of state capacity is bad, but the fact that we&#8217;re on the cusp of forgetting that any other state of affairs is even possible is genuinely worrying.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> There&#8217;s a bit of cross-cutting between political and national factions on the relevant committee; some people are taking positions which are out of line with their ideological groupings, but which make sense when you realise that they are either Germans (who are worried about the effect on the deposit base of small savings banks) or Italians (whose very card-based financial system is a big payer of merchant fees).</p>]]></content:encoded></item><item><title><![CDATA[snobby about excel]]></title><description><![CDATA[AI and the end user effect]]></description><link>https://backofmind.substack.com/p/snobby-about-excel</link><guid isPermaLink="false">https://backofmind.substack.com/p/snobby-about-excel</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Fri, 06 Feb 2026 15:44:34 
GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I actually think there is a serious potential problem here; I wrote a bit about it for my professional clients this week, and have pitched it to a few newspapers, but I&#8217;ll outline it here as a Friday post. It&#8217;s a train of thought which began by noticing that more and more of my friends are getting evangelistic about Claude Code, and that it might be time for an update of last year&#8217;s &#8220;<a href="https://backofmind.substack.com/p/toward-a-sensible-ai-skepticism">towards a sensible AI-skepticism</a>&#8221; post.</p><p>The thing with Claude Code (and to a significantly lesser extent, Copilot) is that I&#8217;ve been on the lookout for a &#8220;killer app&#8221; of LLMs, in the original sense of &#8220;something like spreadsheets, which people will make capital investments and change their own workflow in order to use&#8221;. And I think it&#8217;s now hard to sustain scepticism as to whether this will happen; early adopters really are spending money and using LLM coding tools to produce apps for themselves.</p><p>But the &#8220;like spreadsheets&#8221; element has me thinking. Over the years, I&#8217;ve often found it amusing to tease the Dilbert types among my friends by defending Microsoft Excel, the programming language<a href="#_ftn1">[1]</a> of the common man. However, computer types don&#8217;t just dislike Excel out of pure snobbishness.</p><p>In the language of IT professionals, spreadsheets are known as &#8220;end-user computing&#8221; (EUC). And EUC is a problem as well as a solution. 
A great deal of corporate information technology work is trying to satisfy the twin goals of &#8220;a central and consistent source of data which is secure and accessible across the organisation&#8221;, versus &#8220;it&#8217;s a hell of a lot quicker and easier for me to just open up Excel than to schedule a meeting with the SAP team&#8221;.</p><p>I&#8217;m most familiar with this problem in financial contexts; I have <a href="https://backofmind.substack.com/p/being-legible-to-oneself">joked</a> in the <a href="https://www.ft.com/content/be2dff47-ece7-4837-be65-d6266ace0656">past</a> that &#8220;[some material percentage] of the job of risk management is persuading people to email you spreadsheets on time&#8221;. And it&#8217;s obvious that a big bank is not in an ideal situation if large and complex risk positions are being tracked in a spreadsheet on someone&#8217;s desktop. But it shows up in all sorts of other areas; you can have the best data security policy in the world, but marketing departments are free spirits who cannot be tied down, and who will often email a few megabytes of non-anonymised customer data to a new agency that they want to try out.</p><p>At present, EUC is to some extent self-limiting; there is a threshold of size and complexity beyond which it is totally unmanageable to use Excel, so you end up biting the bullet and calling the central IT guys. If Claude Code and its like become a generally used &#8220;super-Excel&#8221;, though, that might have quite unpredictable results. 
It&#8217;s a productivity boost at some points, but we might be forced to reconsider the aphorism that &#8220;speeding up output behind a bottleneck cannot increase overall productivity, although it can reduce it&#8221;.</p><p>I guess that the prediction problem then switches to something like &#8211; if the IT world of the future involves something like &#8220;trying to stuff 200 end user apps into a trenchcoat so they can pretend to be a system&#8221;, can other LLMs help with that? And the answer is &#8230; <a href="https://medium.com/@ade/maybe-fe59934aed7f">maybe</a>?</p><p>The sentence from last year&#8217;s post which I think has held up the best is that &#8220;There&#8217;s also a very important role for scepticism that AI is in some way or other outside the price mechanism or the normal priorities of political economy.&#8221; I am having a tough time following the debate over resource use and cost of LLM use, but it does seem to me that there&#8217;s a constraint, and it&#8217;s not clear that Moore&#8217;s Law-type progress is sweeping that constraint away in the way one might have hoped. The interesting question for me at the moment is whether AI can, at reasonable expense, clear up its own messes.</p><div><hr></div><p><a href="#_ftnref1">[1]</a> Yes that&#8217;s what it is, it&#8217;s even <a href="https://www.infoq.com/articles/excel-lambda-turing-complete/">Turing-complete</a> these days, deal with it.</p>]]></content:encoded></item><item><title><![CDATA[this is your organisation on drugs]]></title><description><![CDATA[accounting hallucinations revisited]]></description><link>https://backofmind.substack.com/p/this-is-your-organisation-on-drugs</link><guid isPermaLink="false">https://backofmind.substack.com/p/this-is-your-organisation-on-drugs</guid><dc:creator><![CDATA[Dan Davies]]></dc:creator><pubDate>Wed, 04 Feb 2026 17:00:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VgE3!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3bf0676e-0319-46c8-a072-f5a70a3aad70_71x71.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Continuing to worry away at this &#8230; <a href="https://backofmind.substack.com/p/notes-on-the-industrialisation-of?utm_source=publication-search">Previously</a> on backofmind dot substack dot com:</p><blockquote><p>The first long-standing critique of industrialisation has been pretty easy to dismiss. It&#8217;s the noble but sentimental tradition of John Ruskin and William Morris, to the effect that the design tradeoff of the standardised industrial product is bad for us, or at least that it tends to be made badly. 
We have ugly and cheap things in our houses, when we could have beautiful handmade things.</p><p>Except we couldn&#8217;t, of course. And the same thing might be true of decision making in the industrial world. We all want to be treated as individuals and to have an accountable human being that we can speak to, but this might be as unrealistic a luxury demand as a wish to have hand-made cutlery, hand-thrown plates and hand-blown stemware on our tables.</p><p>But let&#8217;s turn that point around. Mass-produced consumer goods have a quality that we can measure and compare to the artisanal versions. What about mass-produced decisions? When the compromises on quality are being made within the cognitive process itself, what basis do we have to know whether the tradeoff was a good one? How do you know when you are making worse decisions?</p></blockquote><p>I&#8217;m increasingly worried about this. It&#8217;s a specific case of a general phenomenon &#8211; it&#8217;s difficult to tell how drunk you&#8217;re getting, it&#8217;s difficult to understand that you&#8217;re hallucinating, Dunning-Kruger syndrome is a thing, et cetera.</p><p>It&#8217;s particularly difficult when you have no effective means of feedback. I&#8217;ve suggested in the past that &#8220;being so successful that nobody can gainsay your creative choices&#8221; is often at the root of bad art which gets blamed on the long-term effects of cocaine. And the terrible cognitive consequences of being surrounded by sycophants are well known to political scientists these days.</p><p>What interests me is that as a society, we know that this is a very serious problem, and that quite draconian restrictions on individual liberty can be justified in cases where we think something has affected someone&#8217;s ability to accurately judge their mental state. 
Intoxicants are heavily regulated, the mentally ill can be imprisoned indefinitely and the suspicion of coercive control will justify state involvement in areas where it would never otherwise touch. (I would note that it still surprises me that there is not yet any enthusiasm to regulate chatbots, despite a mounting pile of cases which would have brought any pharmaceutical trial with those side-effects to a juddering halt).</p><p>Is there an equivalent for organisations? Which is to say &#8211; if we take seriously the idea of companies as &#8220;<a href="https://www.antipope.org/charlie/blog-static/2018/01/dude-you-broke-the-future.html">very old, very slow AIs</a>&#8221;, what recognition in law is there of any equivalent problem of their getting into states where their ability to self-regulate is impaired? I think there is.</p><p>Accountancy is one of the most important control systems of a modern organisation. Which means that interfering with the accounting system of a company is the corporate equivalent of administering hallucinogenic drugs to it. And &#8211; cross check &#8211; false accounting is indeed very strongly illegal. In general, financial numbers are not subject to anything like the same free speech protections as other kinds of statements. This is partly what I <a href="https://www.amazon.co.uk/Lying-Money-Legendary-Frauds-Workings/dp/1781259666">once called</a> a &#8220;market crime&#8221; &#8211; an internal rule of a particular economic entity that ends up rising to the status of criminal law simply because the thing in question is so important. But I think it&#8217;s also describable as an underlying principle of justice; if you take the organisation seriously as an entity, then it&#8217;s got a right not to have its cognitive system messed around with.</p><p>Or at least, not messed around with in this particular way. 
Accounting is probably the most important information system in modern organisations, and accounting is heavily regulated with quite strict professional standards. But it&#8217;s by no means the only kind of information that companies rely on, and most of the other information systems are more or less completely unregulated. Famously, management consultancy is a profession with no professional standards body, and so is economics. Information technology has only very rudimentary professional standards, with no statutory regulation.</p><p>And, as I&#8217;ve voluminously written in the past, compliance with the standards is hardly any guarantee of anything; accounting fraud can be divided into the categories of &#8220;things you have to hide from the auditors&#8221; and &#8220;things the auditors will help you with&#8221;. Even before we begin to introduce generative transformer algorithms into general management, we&#8217;re starting from a pretty bad place in terms of thinking about how we might stop our systems from hallucinating, or what kinds of safeguards we might put in place.</p><p>The difference between industrial production of razor blades and industrial production of decisions is that when you get a faulty razor blade, you notice.</p><p>(Envoi: I think this means that I really ought to have another think about my <a href="https://www.ft.com/content/c41b372c-946d-47ea-9fbd-bd11a19d6828">past optimism</a> for the use of AI to replace and restructure management accounting)</p>]]></content:encoded></item></channel></rss>