I perhaps shouldn’t have concentrated so much on fraud and adversarial situations in the post about industrialised decision-making. It’s my comfort zone because I did so much work on it a few years ago, but I was wrong to imply that the problem in industrialising decision-making is that someone can intentionally generate cases in which your information set is incomplete or wrong.
It’s more that when you’re making decisions about people (several comments came back referring to welfare bureaucracies or immigration and asylum systems), you will always find yourself dealing with special cases. People are weird; they’re unique, and they will always find ways to be in a situation which doesn’t match up to the set of data points that you decided to capture before you met them.
Sometimes you can handle this with a sort of “divide and conquer” algorithm; industrialise the process of decision-making, but not the process of final decision-making. If you have a way of identifying special cases, then you can select them out and decide them artisanally, freeing up resources by automating the easy cases.
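As a minimal sketch of what that triage might look like (every field name, rule and threshold below is a hypothetical illustration, not any real system’s logic):

```python
from dataclasses import dataclass, field

@dataclass
class Case:
    applicant_id: str
    income: float
    documents_complete: bool
    intake_flags: list[str] = field(default_factory=list)  # oddities noticed at intake

def triage(case: Case) -> str:
    """Decide the easy cases mechanically; select the special ones out."""
    # Anything that doesn't match the data points we decided to capture
    # gets routed to an artisanal, human decision.
    if case.intake_flags or not case.documents_complete:
        return "refer_to_human"
    # Where there is no real decision to make, decide mechanically.
    return "approve" if case.income < 20_000 else "reject"

print(triage(Case("A123", 15_000, True)))             # approve
print(triage(Case("A124", 15_000, True, ["carer"])))  # refer_to_human
```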
To an extent, this is of course good and necessary to do. I would even argue that in this case, the entire process is still artisanal – you have industrialised and standardised the accidents and inconveniences of decision-making, speeding up the process in cases where there was no real decision to make. It does have drawbacks, though.
First, in genuinely adversarial cases, there is the “shotgun, then rifle” method described in the previous post – if someone can find a way of making their case look like an easy one, then they can cause problems. And second, introducing the category of “special cases” is a great way to introduce bias. As I argued a lot during the exam results fiascos of 2020 and 2021, an unbiased decision process with an appeals procedure equals a process that’s biased in favour of people who are good at appealing.
But there’s a more fundamental problem. There’s a lot of information that is very difficult to represent in the right way to be part of an industrialised decision process. Consider, as a simple but non-toy example, the funding of a youth club.
It’s pretty well acknowledged that this is a problem which a lot of government decision-making systems handle badly. The return on investment of spending on youth clubs is really high, but it shows up in terms of crime and employment outcomes that can be very far removed in time and space from the spending decision. You can commission social science research to demonstrate that, as a general proposition, shutting down youth clubs is going to cost a lot more money than it saves, but unless this is built into the accounting system, it’s very hard to translate this research into action – the accounting system, as noted in one of the first posts, is a mental prison from which it is extraordinarily hard to escape.
And how, for God’s sake, would you go about including something like this in the accounting system? Everything you add to it to help it handle difficult cases is going to make it less useful for its primary function of summarising transactions. A set of accounts in which everything’s an “investment in the future” is a system in which nothing is. An accounting system which reports outcomes and benefits from ten years’ time is one which is delivering the information ten years too late. It’s not that the benefits of youth provision are “unquantifiable” – they can be quantified, in terms that are not too difficult to translate into dollars and cents, with acceptable error bars. It’s just that even when they’re quantified, they’re not the same type of number.
So in order to let this information play a role in the decision-making, it needs to be translated. There’s a known bad way to translate it, which is to use some kind of net present value analysis to make the numbers literally comparable; to create an “expected value of crimes prevented/life chances enhanced” and compare that to the stream of costs to give an overall net cost or benefit. This is a bad way, because Goodhart’s Law starts to apply – you are making a measure into part of your control system, with the result that it will cease to be a good measure. Firstly, it will most likely be straightforwardly gamed by the people whose income depends on the funding. And secondly, the true Goodhart’s Law problem will apply – you will start to get different kinds of youth clubs, optimised for whatever metrics go into the spreadsheet, and these will be materially different from the ones on which the research was done.
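To see what that translation looks like mechanically, here is a minimal sketch in Python; the discount rate, the cost stream and the modelled benefit stream are all invented for illustration, not taken from any real appraisal.

```python
# The "known bad way", sketched: collapse a stream of projected future
# benefits into a single net present value that can sit next to costs.
# All figures and the discount rate are hypothetical.

def net_present_value(cash_flows: list[float], discount_rate: float) -> float:
    """Discount a stream of annual cash flows back to today's money."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cash_flows))

horizon = 15
# Running costs now; modelled crime/employment benefits arriving years later.
costs = [-40_000 if year < 10 else 0 for year in range(horizon)]
benefits = [25_000 if year >= 5 else 0 for year in range(horizon)]
net_flows = [c + b for c, b in zip(costs, benefits)]

npv = net_present_value(net_flows, discount_rate=0.05)
print(f"Net present value: {npv:,.0f}")
```

The moment a number like this becomes the thing that gets funded, it also becomes the thing that gets produced.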
The issue here is that the information about a particular youth club is complex in the information-theoretic sense – it is difficult to compress, and it resists being turned into a single number in this way. So the translation has to be carried out by something which has enough bandwidth to respect that complexity, and to incorporate it into a decision-making system alongside the lower-complexity information in the accounting system. At present, the only such “thing” is a human decision maker. It’s interesting to think about how we might reorganise our systems and governance if we started to believe that there were other kinds of system with the ability to process information about something as complicated as a youth club.
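To make the compression point concrete, here is a toy sketch in Python (every feature and weight in it is made up): two clubs that differ on every dimension collapse to exactly the same score, because the projection down to one number discards the information that distinguished them.

```python
from math import isclose

# Hypothetical features and weights -- the point is the lossy projection,
# not the particular numbers.
weights = {"attendance": 0.01, "staff_trust": 2.0,
           "local_need": 1.5, "years_established": 0.05}

def score(club: dict[str, float]) -> float:
    """Collapse many dimensions into a single number: the lossy step."""
    return sum(weights[k] * v for k, v in club.items())

club_a = {"attendance": 80, "staff_trust": 0.9,
          "local_need": 0.8, "years_established": 12}
club_b = {"attendance": 240, "staff_trust": 0.5,
          "local_need": 0.6, "years_established": 2}

# Both come out at 4.4; the score cannot tell these clubs apart.
assert isclose(score(club_a), score(club_b))
```

The human decision maker is, in effect, the component with enough bandwidth to read the whole dictionary rather than the single score.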