This is going to be a short post - we’ll reverse the usual order and do the more substantive one on Friday - because I have a few things to promote…
A book! “The Unaccountability Machine” is now officially available in North America. Look out for me promoting it on a podcast near you some time pretty soon, I hope. It is also available in paperback in the UK.
A webinar! The fantastic James Cham of Bloomberg Beta has organised an Event, at which I will be talking with him, Henry Farrell and Marion Fourcade, about weaponised interdependence, the Ordinal Society, the Unaccountability Machine and, in general, how we need to look at networks, control systems and the management of information in order to understand even the plumbing of the new world order, let alone its architecture. It’s happening on Friday, at 1330 in California, 1630 on the East Coast of America and 2130 in Britain (therefore 2230 in Europe and presumably the middle of the night in Asia, sorry about that). You can request a signup on the form here.
A working paper! “The Problem Factory” is now available in its full edited glory on the Niskanen Center website. I would recommend downloading the pdf version which really looks incredibly sweet. Thanks to Steven Teles, Henry Farrell and David Dagan for letting me do it, and for allowing me to realise a long-held dream of acting like I was a 1970s LSE professor and doing economics by drawing diagrams with arrows and boxes and pictures of a building with a sawtooth roof.
The working paper also gives my take on the Bat Shed, which brings me on to the actual subject of today’s post. Last week on backofmind dot substack dot com:
The point of Abundance as a mindset has to be to organise things better, to remove bad regulations and to restrict the use of “stakes not odds” reasoning to contexts where it’s actually necessary, rather than incentivised by particular financial and institutional structures. (That’s what my paper’s about!).
It was pointed out to me by a reader that the use of “stakes not odds” here is a bit unclear. It’s usually used in the context of American political coverage, as an admonition to talk about policy rather than polling. And because of that context, it matters that “odds” is a bit ambiguous - does it mean “implied probabilities” or “rewards for victory”? The two are by no means necessarily the same; I think the journalistic context usually means the first, and I kind of meant the second. Sorry about that - but the concept itself is worth expanding.
As a motivating example, let’s say that I were to be offered a really really lucrative speaking gig in the USA, but I was suddenly given to wonder whether my social media history was sufficiently irritating to put me at risk of being stopped at the border and thrown into a horrible prison. In making my mind up whether to take the flight, you can see that the upside is going to play almost no part in my decision process. Not only that, but the precise level of the probability of my getting successfully through the immigration hall is going to have surprisingly little importance to me.
This sort of decision seems to have a two-stage process: I take the trip only if both the expected value is positive and the probability of a disastrous outcome is below some (assumed low) threshold value. I personally probably wouldn’t take that sort of risk at a 1% chance, and I wouldn’t play Russian roulette even with a gun that had 999 empty chambers. Not for any money. But I get up in the morning and regularly cross the road, and I go bodyboarding in bad weather - there are contexts in which I’m clearly willing to take small existential risks.
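If you wanted to write the rule down, it would look something like this little Python sketch (the threshold number is invented for illustration; it isn’t a claim about my actual risk tolerance):

```python
# A minimal sketch of the two-stage decision rule. The ruin threshold
# is an invented illustration, not a calibrated number.

def take_the_trip(expected_value: float,
                  p_disaster: float,
                  ruin_threshold: float = 0.001) -> bool:
    # Stage one: the ruin check gets a veto before any upside is considered.
    if p_disaster >= ruin_threshold:
        return False
    # Stage two: only now do the odds and payoffs get a say.
    return expected_value > 0

# The lucrative speaking gig: huge upside, 1% chance of disaster - declined.
print(take_the_trip(expected_value=50_000, p_disaster=0.01))   # False
# Crossing the road: trivial upside, vanishingly small disaster risk - fine.
print(take_the_trip(expected_value=1.0, p_disaster=1e-8))      # True
```

The point of the structure is that the upside never gets a chance to argue with the veto, which is exactly what the speaking-gig example feels like from the inside.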
It’s clearly something close to the Precautionary Principle, and the intuition is the same; as Taleb regularly emphasises, it matters a great deal in risk-taking whether the nature of the risk is such that if you lose, you are still able to continue playing the game. I don’t think it’s quite the same, though, because whether the “risk of ruin” rule is engaged or not doesn’t seem to depend only on the actual stakes. I think it’s not well understood, including by me, but it seems to have something to do with the nature of the uncertainty and our ability to manage it and, importantly, to take interim decisions and quit in the face of new information.
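Taleb’s point can be made with a toy simulation (my own construction, not his): a bet that is EV-positive in every single round, plus a small chance of ruin that ends the game, looks great on average and ends terribly for almost every individual who keeps playing.

```python
import random

def play(rounds: int, p_ruin: float = 0.01) -> float:
    """One player's wealth path: a 5% gain each round, but a 1% chance
    per round of total ruin, which ends the game for good."""
    wealth = 100.0
    for _ in range(rounds):
        if random.random() < p_ruin:
            return 0.0        # ruined: no more rounds for you
        wealth *= 1.05        # per-round expected growth 0.99 * 1.05 > 1
    return wealth

random.seed(0)
survivors = sum(play(500) > 0 for _ in range(10_000))
print(f"players still in the game after 500 rounds: {survivors}/10000")
# survival probability is 0.99 ** 500, about 0.7% - nearly everyone is
# ruined, despite every single round having positive expected value
```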
In my career as an analyst, I noticed that investors absolutely hated to own securities that had legal or regulatory risk. That kind of risk has a lot of nasty properties - it can be big, and it’s downside only. But what made it really intolerable seemed to be that it was more or less unanalysable - there was no probability measure or any basis to calculate a fair price or expectation, just a future date on which you might find out if you had won or lost. It’s this property of the regulatory system, I think, that makes it more of a burden on commerce than it needs to be.
A few years ago I had a conversation with Zvi Mowshowitz that I think bears on this problem. He mentioned that there were bets he’d take with money already committed, but that he wouldn’t put up a single extra dollar for, since that would make him a mark. I think the implied principle is that in some situations, you need both an inside-view EV calculation and an outside-view assessment of whether your EV estimate itself is downstream of an adversary's strategy.
Consider Alice evaluating an investment opportunity showing a 20% expected return. The crucial question isn't just "What's the probability of success?" but "Why am I seeing this opportunity at all?"
If Alice decides to invest even a small amount, this doesn't *cause* her to become a target. Rather, it provides evidence to herself that she's already in the reference class of "people whose decision processes make them attractive targets."
This creates a decision loop:
1. Alice calculates positive expected value using available information
2. Alice considers investing based on this calculation
3. Alice's willingness to invest becomes evidence that her information environment has been selected/manipulated
4. This evidence should update her EV calculation downward
5. The updated calculation often converges toward not investing
For a large class of seemingly positive-EV opportunities, this meta-analysis can lead, apparently quite rationally, to categorical avoidance; the sketch below runs the loop to its fixed point.
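Here is a toy model of the loop in Python - entirely my own construction, with made-up numbers and a made-up functional form for how willingness to invest translates into evidence of manipulation:

```python
# A toy model of the Alice loop: willingness to invest is itself
# evidence of a selected/manipulated information environment, which
# discounts the naive EV, which feeds back into willingness.

def adjusted_ev(naive_ev: float,
                p_manipulated: float,
                loss_if_manipulated: float = 1.0,
                iterations: int = 20) -> float:
    ev = naive_ev
    for _ in range(iterations):
        # Step 3: eagerness to invest at the current EV is evidence that
        # the opportunity was selected for people like Alice.
        willingness = max(0.0, min(1.0, ev))
        p_adversary = p_manipulated * (0.5 + 0.5 * willingness)
        # Step 4: discount the naive EV by the adversarial scenario.
        ev = (1 - p_adversary) * naive_ev - p_adversary * loss_if_manipulated
    return ev

# Alice's 20% headline return, with a modest prior that the deal flow
# she sees has been adversarially selected:
print(adjusted_ev(naive_ev=0.20, p_manipulated=0.4))
# settles at about -0.04: the loop converges on "don't invest"
```

The exact numbers are arbitrary; the point is that the feedback from step 3 to step 4 can flip the sign of an EV that looked comfortably positive at step 1.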
The commercial success of Quakers with their fixed-price policy illustrates this principle beautifully. By refusing to haggle on principle, they allowed buyers to interpret stated prices nonadversarially, which allowed more people to transact in a non-paranoid way.
Investors hating securities with legal or regulatory risk reflects this same dynamic. The issue isn't just calculating the odds of regulatory action, but recognizing you're in an environment where your perception of those odds may be someone else’s optimization target. The categorical avoidance of such securities is meta-rational, not risk-averse.
This principle has dangerous corollaries, though. While contrarian investing can work for someone as thoughtful as Peter Thiel, "reversed stupidity" more often turns you into a differently controllable "rebel" who's still a target for extraction. It can lead to destructive behaviour that profits no one - like raising tariffs just because economists advise against it, or (as I witnessed with a friend) peeing on a couch during a psychotic break merely because everyone wants you not to. The meta-level reasoning that begins as rational defence can deteriorate into reflexive opposition.
At a systemic level, this explains why this principle often leads to defensive sclerosis in institutions and markets, and depressive catatonia in individuals. People become increasingly unwilling to take any action that might signal they're in an exploitable reference class, until desperation finally drives them to fight for bad reasons. The result is long periods of calcification punctuated by irrational overreactions.
It bothered me that the sawtooth roofs of the two factories point in opposite directions. But here the shtick is to take metaphors seriously/too far, so what could it mean?
First possibility is that the decision-making process in the middle of the diagram is located at the South Pole. Both factory builders have correctly located the roof windows to point away from the sun, so the factories get natural light but not direct sunlight. This is physically implausible but metaphorically promising if we think about what it might mean to pull a decision out of your South Pole.
Second possibility is that the builder of one of the factories did not understand what they were doing and imitated the form of the other without understanding its purpose or mechanism of action. This also seems metaphorically promising. It’s obvious to suspect the problem factory of this, but as someone whose career has been more in the solution factory, I would be nervously checking whether we do in fact get direct sun in here.