38 Comments
Jim Grafton:

I’m not sure the industrialisation of decision-making is the core problem. At modern scale and complexity it’s basically unavoidable — and will only intensify as AI takes on more of the decision workload.

The real issue is that our regulation layers haven’t kept pace.

In viable systems you need:

- fast, local feedback when a decision harms flow
- escalation pathways when context is missing
- sensing of systemic side-effects
- the ability to adapt the rules themselves

When those regulators are weak, you get:

- decisions that are defensible locally but damaging globally
- moral capacity outsourced to checklists
- fragmentation of context
- delayed awareness of harm

Industrialisation isn’t inherently harmful — it just demands stronger feedback loops than artisanal decision craft ever required.

If we’re struggling to sense quality now, imagine what happens when AI is producing thousands of acceptable-looking decisions per minute, with no established governance for judgement drift or emergent bias.

The danger isn’t mass production. It’s inadequate regulation of the quality of what’s being mass-produced.

The industrial revolution broke craft.

The AI revolution might break judgement — unless we update the feedback loops.

Dan Davies:

yes, that's roughly where I ended up in "The Unaccountability Machine", although I didn't really have the nerve or talent to express it clearly. With the new book, though, I'm beginning to worry that there are intrinsic problems that can't be solved, and that we might actually have to restructure things into smaller decision-making units.

Jim Grafton:

It’s interesting how often that pattern emerges: systems scale by accumulating layers of coordination, control, and exception-handling until the complexity becomes self-defeating. The logic that created the system can’t simplify it, so the only answer feels like more layers.

By the time fragility is visible, the redesign looks “complex” simply because simplicity is unfamiliar.

One of the recurring lessons is that viable systems don’t just grow — they periodically re-partition. Smaller, semi-autonomous decision units with clear purpose and feedback loops tend to remain coherent because they can still hold enough context to make good decisions.

What feels radical is often just a return to the level at which judgement is still possible.

Maybe the real frontier isn’t industrialising decision-making further, but industrialising the conditions under which local judgement remains viable.

Scaling complexity is easy.

Scaling coherence is the hard part.

Indy Neogy:

This is similar to the thought that occurred to me: a critical set of changes in my lifetime has been the decline of institutions (e.g. unions, media) that had some regulatory functions.

Marcelo Rinesi:

There's an argument to be made that our most salient issues today aren't due to the standardization of decision-making in bulk but to the de-standardization of the most influential decisions, partly because of the increasing influence of the hyper-wealthy and partly because of the lower emphasis on the sort of explicit expertise that works, in a way, as a standardization of decision-making. This is very obvious in the US right now in the idiosyncratically artisanal decisions at the Presidential and cabinet level; terse shadow docket resolutions from SCOTUS and Kavanaugh stops can be seen as other examples of traditionally/nominally standardized decisions that have become more arbitrary/artisanal/ad hoc.

I'm not saying what you're describing isn't a true issue --perhaps it's a bit like the dark matter of our current societal systems, not directly visible but structurally decisive-- but both in the news and in my day-to-day experience with tech companies I find myself wishing for more (and, then, better, but let's begin with more) standardization, not less.

Chris Bertram:

I don't know whether things are worse, in the round, as the result of the bureaucratisation/algorithmisation of decision-making, but I recognise these pressures in areas like asylum and welfare, where those in need of assistance or protection are subjected to a fixed set of criteria that mark them as deserving or not. We've outsourced our general duties of assistance and rescue to bureaucracies, partly for efficiency reasons, but if we were encountering a needy person one-to-one we'd not be operating a checklist.

Once the checklist is in place, it also seems to infect people's individual capacities for response, so that those who don't fit get stigmatized as bogus asylum-seekers or welfare cheats, just because their circumstances don't fit the categories well. To some extent, the harshness and inflexibility of bureaucratic systems has been mitigated by giving front-line workers discretion, but as soon as there are moral panics about cheats and bogus claimants, you get political pressure to reduce that discretion, so that anyone who doesn't fit the criteria just gets rejected. (Similar checklisting is blunting moral capacities in the conduct of war too, as we've seen in recent conflicts.)

Sam.:

"To some extent, the harshness and inflexibility of bureaucratic systems has been mitigated by giving front-line workers discretion..."

Right, people complain about "bureaucracy" when it gets in their way of some good, then complain about "corruption" when someone else is receiving the good.

Jamie Heywood:

Dan. Great, thought-provoking piece. A counter-point: isn't the law also a 'standardisation of decision-making' that benefits a community by ensuring that its citizens don't suffer from the capricious whims of those in power? Societies have scaled over the last ten thousand years by depersonalising leadership, codifying laws and norms, and having them enforced by courts and moralising gods. How is what companies are doing not similar? The problem, in my view, is not the codification of decision-making principles per se (which personally I rather like, as it makes decisions transparent and open to debate), but rather the purpose behind these codified principles. A society's purpose is (or should be) decided by us; a company's purpose is decided by its shareholders.

Dan Davies:

good points, thanks - I'm trying to wrestle with this in the book: that there are some kinds of decision-making which can and should be standardised and some which can't or shouldn't, and that "crisis" is our term for the events which distinguish the two.

Simon:

I think this is right, and there's also a second dimension: which decisions should be pushed down into the lowest possible viable system, which decisions (for one reason or another) need to be pushed up to the appropriate level, and how we make that meta-decision.

BR:

I feel like there's another dimension to industrialized decision making that other commenters have touched on, which is that "bespoke" decisions are subject to all sorts of unpleasant "isms": nepotism, racism, favoritism, so on. So a move towards systematic decisions is frequently a desired change to remove those problems.

Of course systematic decision making tends to result in perverse or unintended outcomes. These _also_ lead to dissatisfaction, and tend to push in the opposite direction. Thus we end up with a slider running from what I will shorthand as "bespoke/corrupt" over to "uniform/perverse".

If you push that all the way to the right, you get a system implemented in software that is perfectly uniform and doesn't even have the capacity to recognize the perversity of the outcomes. In theory these systems can avoid all those "isms"... unless they were encoded into the design of the system, in which case you can't *escape* from them.

Of course a good decision system will have escape hatches to deal with the perversity. I think that amounts to "appealing to a higher level of System", perhaps? Though I also think the word "good" is load-bearing - there are quite a few poorly-constructed systems that have accountability sinks instead of escape hatches. And you also have to guard the escape hatches, lest people who got a valid but unpleasant result from the system overwhelm it.

Doug Clow:

>Are we getting worse decisions?

I think we absolutely are, in the sense that it is now a very, very common experience to go through customer-support hell trying to fix an administrative mistake, or to get something done that the system didn't quite handle right in normal operation.

These problems did occur in the past - the 20th century nightmare of bureaucracy, as caricatured in Terry Gilliam's film Brazil.

But I am pretty confident we are getting a lot more of it in the 21st century. So much more of our lives is so much more complex and requires large systems of organisation and decision-making.

Most of this is hidden most of the time. I am very open to the argument that the average quality of decision has gone up with the industrialisation and then digitalisation of decision-making. But there are just so many more decisions being made, about so much more complex things, that the number of egregiously bad decisions people encounter is much higher.

As an intuition pump, have a look at the discussions of the classic tech interview question "What happens when you type a URL into the address bar of a browser and hit return?". There are a *boatload* of highly-automated decisions.

Doug Clow:

On another point, I think the Arts and Crafts movement argument that you should pay more for higher quality stuff produced by empowered artisans was defeated by industrial capitalism producing stuff that was not just much cheaper (which was probably enough to win on its own), but actually better quality on many/most axes that people cared about. I am developing a theory of William Morris as an early forerunner of authentic smol beans uwu influencers who pretend to be artisanal producers by holding their lav mics in their fingers but are extremely effective users of modern mass production technologies. We all know what a William Morris pattern looks like precisely because it has been endlessly and cheaply reproduced by purely mechanical means.

TW:

There is a perfectly doable way for everyone reading this to have hand-thrown dishes, hand-blown glasses, and hand-forged utensils at every meal.

Have very little else.

The bad decisions accumulate as the result, the offgassing if you will, of the increased complexity... sure... but that complexity is largely based on having more things. Ideas, models, and associations too, of course, not just refrigerators and cars and wireless hearing aids. It's relatively easy for an individual to defect, brandishing a copy of Marie Kondo like a sword before heading off to the Zen monastery, but it's much harder to do this as a society. It's not that it's not possible so much as not desirable.

Cleaning hours shot up after the introduction of the vacuum. Previously cleaning was top-to-bottom once a year, not a few hours a week. "Well, social standards changed," we say, and leave it there. But actually it's that *house interiors started to look vacuumed*. "House interior = vacuumed" became like "neck, spots = giraffe." Culture comes behind in these situations, running to catch up with a just-so explanation clutched in its hand.

Peter Thom:

The problem with ceding management decisions to AI is that large language models act on the basis of a record of history, and the record of history we have is flawed. For instance, in the U.S. there is no widespread writing that recognizes that the post-slavery South was a racial authoritarian polity and remained so until the VRA passed in August 1965. Hence, given that lack of realistic history of the Southern authoritarian system, it's not surprising that a large faction of the Republican Party is now trying to federalize a similar authoritarian system. Without the taint that realistic history could and should have thrown on that regional authoritarianism, it will not be surprising if AI makes decisions that support such authoritarian styles of management, just as that faction of the Republican Party has. We should remember that a bot trained to post on social media quickly began posting racist language, because that is what it was trained on. AI will definitely lead to quicker decisions but will not definitely lead to better management decisions.

Indy Neogy:

Adding on to my reply to Jim (and echoing some of what he said) - the salient thing is that the easiest way to make decision-making cheaper in the short term is cutting back on feedback and regulatory functions. (Of course, as we're noticing more and more in various arenas, not considering long-term costs can come back to bite.)

One thing I think we need to examine very carefully is the post-Hayekian assumption that feedback and regulation are waste which needs to be cut and/or overhead which would terribly cripple the growth of our economy/society.

(Echoes perhaps of "why can't we have clean water/little snails as well as new houses? that would be real abundance").

I'm reminded, as an example, that the number of UK civil servants was fairly constant from 1900, then got a huge boost for WW2, and has followed a bumpy path of decline since - but of course both the population and the complexity of our society have kept going up since 1945 - so there's something to think about there.

Indy Neogy:

Also important to note, since we're dealing in industrial manufacturing metaphors: most of the advances in manufacturing post-WW2 have involved developing new and more sophisticated feedback and regulatory systems.

Inga Simonenko:

The real failure mode isn’t that the system is cold. It’s that the system becomes unchallengeable — and exceptions get treated as defects.

Matt Woodward:

The simple explanation here would be that we have leaders of the same intrinsic quality making on-average worse decisions, because the world is more complex and thus the decisions are harder to get right.

Ian M:

Personal perception: a huge deterioration in decision-making over the last 20+ years. Organisations choose managers who won't make decisions, and also extract from them the authority to make decisions.

Paul:

Adam Smith already noted the deleterious effect of division of labor on workers:

"In the progress of the division of labour, the employment of the far greater part of those who live by labour, that is, of the great body of the people, comes to be confined to a few very simple operations; frequently to one or two. But the understandings of the greater part of men are necessarily formed by their ordinary employments. The man whose whole life is spent in performing a few simple operations, of which the effects, too, are perhaps always the same, or very nearly the same, has no occasion to exert his understanding, or to exercise his invention, in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become. The torpor of his mind renders him not only incapable of relishing or bearing a part in any rational conversation, but of conceiving any generous, noble, or tender sentiment, and consequently of forming any just judgment concerning many even of the ordinary duties of private life. [...] His dexterity at his own particular trade seems, in this manner, to be acquired at the expense of his intellectual, social, and martial virtues. But in every improved and civilized society, this is the state into which the labouring poor, that is, the great body of the people, must necessarily fall, unless government takes some pains to prevent it." WN Book V, Chapter I, Part III

Veit Braun:

I think there are a few things to consider here, and one is perhaps the difference between commodities, which can easily be divided materially and organisationally, and services, which we struggle to industrialise, among other reasons, because they are difficult to split up and decontextualise. (Or rather, some contexts are difficult to transform in a way that lets them handle one standardised solution — think haircutting.)

You can distinguish different classes of decisions and their problems. One is the reduction of a problem to a single axis of variety (e.g., two identical products sold at different prices), in which identifying the best possible world becomes a mere calculation. This is what economics is all about. Then there are decisions in which the options differ in more than one dimension but the outcomes are either identical or inconsequential. We can also think of a class that combines both (simple variety, no consequences), which is probably the least interesting.

Finally we're left with a decision that cannot be turned into a calculation and whose consequences are nonidentical and very difficult to reverse. Such a 'hard' decision requires both a lot of information to represent the axes to consider and an instance capable of prioritizing among the various inputs, paths of action and outcomes on varying levels of detail or abstraction. This is what Dirk Baecker means when he says that the decision is really a third instance and cannot be reduced to the available options. What a lot of people are looking for in AI is such a third instance, which cannot just plow through myriads of data but also offers a heuristic for counting, weighing and judging.

From what I know about AI (mostly Stephen Wolfram's introductory text), LLMs a) lack a more abstract level of representation of the world on top of their concrete one and b) do not come with intentions or goals that can serve as a third pole for discriminating between otherwise incalculable options. That means they are very good at reproducing 'hard' decisions that have been made in the past but probably not able to create new ones by alternating between an abstract and a concrete plane. And I would think that this is precisely what is lacking today in politics (call it a compass or call it ideology) and may be the actual core that remains if you wash away all the 'soft' decisions with AI and economics.

Piers Brown:

Now I want you to connect this to your recent thoughts about the loss / export of process knowledge as a result of off-shoring. Rather than the dichotomy between artisanal decision making (arbitrary, corruptible) and standardized decision making (inhumane, malforming), what might a knowledgeable process-rich decision making system look like? (Is a functioning precedent-based legal system an example?)

Rainbow Roxy:

Excellent analysis. Do these systems also generate new unforeseen complexities?