the fudge must flow
tolerance for ambiguity in investment
And so, I’m travelling a bit, so my thoughts are slightly scattered. But I want to respond to a few comments on last week’s post, in which people did their level best to suggest ways that cost-benefit analysis and net present value modelling could compromise between the (presumably desirable) unbiased estimates and the (regrettably necessary) fudge factors. As long-term readers will know, I do not necessarily agree that fudge factors are bad[1]. But that’s not the real source of my discomfort with the idea that the use of fudge factors can be tamed in this way.
The problem is, in my view, that the distinction between an estimate and a fudge factor is itself a decision. And because it’s a decision, it’s also subject to fudge factors. The creation of data is a process, in which all sorts of compromises always have to be made.
A couple of years ago, I did a Friday joke post grumbling about common things people say during modelling, like “this data is a bit misleading, but it’s all we’ve got” and “the estimation method is pretty fragile, but it’s better than nothing”. I claimed at the time that I would never countenance such practices.
But what if I were a little bit less pure and scrupulously ethical than I am? Ignoble thought: what if I were to look at the results and then decide whether I was going to go all How Very Dare You, or just “yeah, not the best but I’ll allow it”? By taking the “for my friends, the utmost of accommodations; for my enemies, the law” approach to data, I can put quite a substantial fudge factor into the model without ever leaving any fingerprints.
I will push this a bit further. Even in the absence of manipulation – in fact, let’s stipulate that there’s not even any of the subconscious finger-on-scales effect which motivated the invention of double blinding in medical trials – the boundary between estimate and fudge is not clear. If you don’t want to allow straightforwardly identified “fudge factor” lines in a model, or if you excessively stigmatise the fudge factor, then actually, your investment strategy is being determined by your tolerance for ambiguity. Which is to say, the easiest way to get rid of fudge factors is to be really loosey-goosey about what you are going to allow into the estimates. But of course, this is psychologically difficult to do.
And here’s a random semi-related punchline. In this series of posts, both I and everyone in the comments have, I think, been kind of implicitly assuming that the main use of fudge factors is to make projects look better than they otherwise would. But that’s not necessarily true at all. His Majesty’s Treasury, in some cases, applies a negative ten per cent “optimism bias removal” to investment analyses drawn up by spending departments. Call that what it is.
[1] Capsule summary for recent arrivals – fudge factors, as well as being a way in which bad managers can ignore reality in favour of arbitrary gut feelings, are often the only way that good managers can make a model take into account important information which does not lend itself to being expressed in terms of the parameters of a spreadsheet model.

If I were a spending department, I would simply inflate my figures to take account of the fact that the Treasury is known to apply a 10 per cent reduction.
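(As a side note, the arithmetic of this dodge is slightly fiddlier than it looks: to survive a flat 10 per cent haircut you need to gross up by 1/0.9, roughly 11.1 per cent, because adding 10 per cent and then losing 10 per cent leaves you about 1 per cent short. A minimal sketch, with illustrative numbers of my own rather than anything from the post:)

```python
def gross_up(target: float, haircut: float = 0.10) -> float:
    """Figure a department should submit so that `target` survives
    a known flat percentage haircut applied by the Treasury."""
    return target / (1 - haircut)

# Illustrative example: a department that actually wants 100 of funding.
submitted = gross_up(100.0)               # ~111.11, not 110
after_haircut = submitted * (1 - 0.10)    # exactly 100.0

# The naive approach falls short:
naive = 100.0 * 1.10                      # 110
naive_after = naive * (1 - 0.10)          # 99.0 -- one per cent missing
```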
On further reflection, I think the key question is whether the purpose of the model is predictive (trying to predict the future as well as possible) or political (trying to produce an objective-looking rationale for a decision you’ve already made). The former type of model should have the fudges clear; the latter needs them obfuscated. You only run into ambiguity if you’re trying to do both things in the same model, but I would argue you *really* don’t want to be doing that, because you run the risk of losing track of what’s fudge and what’s ground truth, and if you start accidentally fudging your fudges, the feedback loop could get ugly.
Writing that out, I can see the objection: “if we keep two separate models, the people we’re showing the politically-fudged model to might find out, and that would have adverse consequences”. That’s a real concern, but my feeling is that in the long run it’s a more desirable failure case than “we lost track of reality”.