things which are negative but should not be negative
when "it's better than nothing" is no longer true
It seems that the famous “post-it note” (the one I pretend to have stuck to my computer telling me not to use the Substack for carrying on social media arguments) has fallen off again. So I will admit that this week’s comments on abstract issues of data and forecasting are motivated by the following points of contemporary relevance:
In the UK, public sector investment spending is all to hell
This is largely because of the stop-go effect of constantly changing plans and cancelling things because of panicking about the deficit
This problem has got worse, not better, since we introduced an independent fiscal forecasting agency to make long-term projections of the public finances.
Nonetheless, everyone (except me) still seems to think that it’s a good idea to have these independent forecasts, and that it would be an even better idea if they had more, not less, significance in the overall fiscal policy framework. The idea appears to be that the forecasts aren’t very good, but they’re all we’ve got.
I would turn that round, and say they may be all we’ve got, but they’re not good.
Anyone who has done hard time in forecasting will be aware that it’s all too easy to get a model that provably and visibly subtracts value. You can get something called “negative r-squared”, which is quite dramatic; it means that your model errors are larger than the volatility of the actual data. This is depressingly common in itself, but even if you get over that low bar, you might not be outperforming other benchmarks like “the naive forecast” (assume that tomorrow will be the same as today, a benchmark which weather forecasting only started to systematically outperform surprisingly recently).
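To see what that means on toy numbers (entirely invented, not fiscal data): a model whose squared errors exceed the data’s own variation around its mean scores a negative r-squared, while the humble naive forecast can quietly beat it.

```python
# Toy illustration of "negative r-squared" -- all numbers invented.
actual = [1.0, 2.0, 3.0, 4.0, 5.0]
model = [5.0, 1.0, 4.0, 2.0, 3.0]  # a hypothetical, visibly bad model

def r_squared(actual, forecast):
    # 1 - SSE/SST: goes negative when the model's squared errors
    # exceed the data's own variation around its mean.
    mean = sum(actual) / len(actual)
    sse = sum((a - f) ** 2 for a, f in zip(actual, forecast))
    sst = sum((a - mean) ** 2 for a in actual)
    return 1 - sse / sst

r_squared(actual, model)  # -1.6: errors bigger than the data's own volatility

# The naive benchmark: assume tomorrow will be the same as today.
naive = actual[:-1]
r_squared(actual[1:], naive)  # 0.2: the naive forecast beats the model here
```

The point of the benchmark is that r-squared only compares you to “always predict the average”; a forecasting operation can clear that bar and still add nothing over “same as yesterday”.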
Fiscal deficits are intrinsically difficult to forecast because they’re the difference of two large numbers (income and expenditure). And forecast errors are really bad if you’re using the forecast as a control variable - UK Chancellors can often find themselves going from “black hole” to “windfall” in the space of a few months, with swings of tens of billions. That’s as much as 5% of total expenditure, but because of the way the planning system works, the volatility is absorbed by capital investment spending rather than by firing public employees, so the swings in the capital budget are much greater.
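The difference-of-two-large-numbers problem is easy to show on invented round numbers (roughly the right order of magnitude for the UK, but purely illustrative):

```python
# Purely illustrative figures, in billions -- not actual UK forecasts.
revenue_forecast = 1_000.0
spending_forecast = 1_050.0
deficit_forecast = spending_forecast - revenue_forecast  # 50

# Suppose each side of the ledger is forecast to within a modest 2%.
rel_err = 0.02

# Worst case: revenue comes in 2% low while spending comes in 2% high.
worst_deficit = spending_forecast * (1 + rel_err) - revenue_forecast * (1 - rel_err)

swing = worst_deficit - deficit_forecast  # a 41bn swing...
swing / deficit_forecast                  # ...82% of the forecast deficit
```

A 2% error on each of the big numbers is respectable forecasting performance; it still nearly doubles the small number you actually care about.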
Norbert Wiener first noticed this sort of problem when trying to design automated gun sights in the Second World War. If you tried to use feedback from the sight to the servomotors, it would tend to overshoot the target, then overcorrect the overshooting, then oscillate back again. A neurologist friend encouraged him to see this as analogous to the “purpose tremor” which is a symptom of some kinds of cerebral injury. The British public investment system has a horrible purpose tremor.
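The oscillation can be reproduced with a one-line feedback rule (a cartoon of the servo problem, not Wiener’s actual mathematics): each period the controller corrects some fraction of the remaining gap, and if that fraction is too aggressive the system overshoots, overcorrects, and rings.

```python
def track(gain, target=100.0, steps=6, start=0.0):
    # Each period, close `gain` times the gap between position and target.
    x, path = start, []
    for _ in range(steps):
        x += gain * (target - x)
        path.append(round(x, 1))
    return path

track(0.5)  # climbs smoothly toward 100 without ever overshooting
track(1.8)  # [180.0, 36.0, 151.2, 59.0, 132.8, 73.8] -- overshoot, overcorrect, oscillate
```

With a gain above 2, the oscillation doesn’t even decay; it gets worse every period, which is roughly what a capital budget whipsawed by deficit forecasts feels like from the inside.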
In this sort of case, in my view, “independence” of forecasts ceases to be a virtue. It is certainly possible for Chancellors to make misleading or self-serving projections of the effect of their budgets, but we have to ask - what are we in the game for? To produce really good deficit forecasts, or to build infrastructure? Even a very biased and unworkable plan has the advantage that it is a plan, rather than a forecast residual.
To continue the argument from social media… surely the problem is the fiscal rules, not the forecasts or who does them? The reason decisions have got worse is not the OBR appearing, but much harder economic conditions plus increasingly dumb fiscal rules (and bad ideology).
Your position seems to be that there is a virtue to allowing essentially doctored forecasts because they might allow governments to pretend they’re following rules when they’re not, and thus stick to a coherent plan? But surely that just makes the whole thing murkier and risks another Truss-style problem?
Something about man, and the Sabbath?