everything is a nail, or at least it ought to be
“the irrational decision” by Ben Recht
The hammer is, when you think about it, a really great invention. It doesn’t get the same credit as fire and the wheel, but it must have been revolutionary in its time. Without a rigid object to swing, you could starve to death in a coconut grove, but as soon as primitive man picked up a rock, he was in business.
The proverb “if the only tool you have is a hammer, everything looks like a nail” ought to be seen in this context. If you really are at the stage of development where the only tool you have is a hammer, then it strikes me that it’s incredibly sensible to go around looking at your various problems, and seeing if any of them could be improved by a bit of hammering.
Not only that! It would actually make a lot of sense for the unknown genius responsible for this great invention to spend a bit of time thinking about whether problems can be redesigned or reconfigured so as to be more amenable to hammer-like solutions. If you have suddenly gained access to a lot of cheap nails and hammers, then the wood-glue-and-dovetail-joint furniture company are likely to regret having relied so heavily on that proverb to dismiss you.
In the context of “The Irrational Decision”, which is the book I’m reviewing here (sorry for the extended cold open), the “hammer” in question is the mathematics of (mostly linear) optimisation, and the subject of the book is all the ways in which, over the last century or so, people have not only used it to solve problems, but reshaped their problems to make better use of it.
The most important example of this is the incredibly productive feedback loop between “optimisation algorithms are really demanding in terms of computer processing” and “optimisation algorithms are really useful for designing better and faster computers”. This was one of those blinding “obvious when you think about it” moments for me, and I think it explains a lot of modern AI culture.
When people write that all the problems of AI will be solved by the AI, or that the Singularity will naturally be achieved when the AI learns how to make the AI, there’s a strong temptation to smile politely and edge toward the door, as one would with the ordinary kind of lunatic. But while sidling, it’s worth remembering that singularitarianism didn’t come out of nowhere – it’s in many ways a perfectly understandable extrapolation from the way in which successive generations of computer chips and optimisation strategies have built off each other to get us to the place we are now.
In fact, it’s a real tribute to Ben’s character and intellectual honesty that he didn’t write the much easier and more profitable book which was possible here, one which took his descriptions of the development of optimisation algorithms and computing, and extrapolated them in exactly this way. Instead, he starts asking the more interesting questions – the ones based around the same kinds of things we see in Thi Nguyen’s “The Score”, in “Seeing Like a State” and indeed on this substack sometimes: the question of “what do we lose when we adjust a problem so as to be manageable at scale?”.
As he puts it (following Paul Meehl), algorithmic decision making is always going to have the evidence on its side. Because once you have put the problem in terms of the kinds of things which can be measured, and defined a specific success metric – once there is any standard of evidence with which to judge the results – then “optimisation” means what it says. Anything you do differently from the output of an optimiser is … suboptimal.
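(To make “means what it says” concrete, here’s a minimal sketch of my own – it isn’t an example from the book, and every number in it is made up – a toy two-product plan written as a linear programme and handed to scipy’s off-the-shelf solver. Once the metric and the constraints have been written down, any feasible plan you might prefer on other grounds is, by construction, no better on that metric.)

```python
# A toy illustration (mine, not the book's): once "success" is written down as a
# metric plus constraints, the optimiser's answer is optimal by construction
# against that metric. All numbers below are made up.
import numpy as np
from scipy.optimize import linprog

# Made-up two-product plan: maximise 3*x0 + 5*x1 subject to two resource budgets.
c = np.array([-3.0, -5.0])              # linprog minimises, so negate the objective
A_ub = np.array([[1.0, 2.0],            # machine hours used per unit of each product
                 [3.0, 1.0]])           # labour hours used per unit of each product
b_ub = np.array([14.0, 18.0])           # hours available

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("optimiser's plan:", np.round(res.x, 2), "metric:", round(-res.fun, 2))

# A feasible plan you might prefer for reasons the metric doesn't capture...
alternative = np.array([2.0, 4.0])
assert np.all(A_ub @ alternative <= b_ub)
# ...is, by definition, no better on the metric that was chosen.
print("alternative plan's metric:", float(-(c @ alternative)))
```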
But this often means that all the work is done in deciding what to measure and what the optimand should be, what counts as evidence and what counts as a test. Not only is that process a great way to put your thumb on the scale without leaving fingerprints[1], but a lot of the time things get measured because they are convenient to measure, rather than for any particularly principled reason. As I’ve constantly said in econometric contexts, the easiest way to find a valid instrument for an unobservable quantity is simply to lower your standards.
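(A companion sketch, again with my own made-up numbers rather than anything from the book: keep exactly the same feasible set as above and swap the optimand, and the “optimal” decision moves around. Which is another way of saying that the real work happened when somebody decided what to measure.)

```python
# Same made-up feasible set as the sketch above; only the optimand changes.
# Each objective is the sort of thing somebody might find convenient to measure.
import numpy as np
from scipy.optimize import linprog

A_ub = np.array([[1.0, 2.0],
                 [3.0, 1.0]])
b_ub = np.array([14.0, 18.0])
bounds = [(0, None), (0, None)]

objectives = {
    "margin, weighted 3:5": np.array([-3.0, -5.0]),  # maximise 3*x0 + 5*x1
    "margin, weighted 4:1": np.array([-4.0, -1.0]),  # maximise 4*x0 + 1*x1
    "labour hours used":    np.array([ 3.0,  1.0]),  # minimise 3*x0 + 1*x1
}

for name, c in objectives.items():
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    print(f"optimand = {name:22s} -> 'optimal' plan {np.round(res.x, 2)}")
```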
And so it ends up with a distinctly better-than-average “last chapter”, addressing the open question of “how do we really want to make decisions, then?”.
I have my own views on the industrialisation of decision making, which I think are in line with Ben’s, so I’m unusually sympathetic to the project.
But even if you’re not a fan of Michael Polanyi or participatory decision making, I think you’ll still enjoy the journey, which as well as a lot of interesting history includes enough back-of-an-envelope descriptions of important maths to make you feel a lot cleverer while you’re reading it.
There’s also a bunch of other stuff I could write about, including what I think is a quite important discussion of the role and significance of randomised controlled trials (which he argues are basically a regulatory practice rather than a scientific one).
But I have promised myself that I will no longer procrastinate book reviews until I can say everything I want to, and so here this one stops.
[1] Yes yes thumbprints
