A short one for Friday, one that's been on my mind for a while. Let's take a simple formula for estimating a risk exposure:
Probability of Loss × Severity of Loss = Expected Loss
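To put purely illustrative numbers on it (these figures are invented, not taken from anywhere in the post):

```python
# Invented numbers: a 2% chance of losing £10m on a single exposure.
probability_of_loss = 0.02
severity_of_loss = 10_000_000   # in £

expected_loss = probability_of_loss * severity_of_loss
print(f"Expected loss: £{expected_loss:,.0f}")   # Expected loss: £200,000
```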
Now let’s get a little bit more philosophical. Assume that you’ve made the best possible estimates you can, on the basis of all the data that you have, of the probability and severity of loss. But you’re not satisfied…
… because you remember that Donald Rumsfeld quote, and you think to yourself that there are “novel risks” out there. Things like climate risk, geopolitical risk, cyber risk and such like, which aren’t in your data set (or at least, aren’t in it with sufficient weight) because they haven’t happened before (or at least, haven’t happened as often as there’s reason to think they will happen in the future).
This kind of thing is the problem that's been on my mind forever: the extent to which things don't follow actuarially well-behaved processes, the future doesn't always resemble the past, and the distribution of the outcomes of a random variable over time doesn't have to be the same as the probability distribution at any one moment. So what do you do?
Obviously, you apply a margin of safety. But how? There are at least three possibilities (there's a toy numerical sketch after the list):
1. Bump up your estimates of both probability and severity of loss
2. Bump up your estimate of one of the two parameters but not the other
3. Leave the parameters (which are, to reiterate, your best estimates given the data) the same, but bump up the final number
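Here's what the three options look like in a toy sketch; the inputs and the 20% "bump" are mine, purely for illustration:

```python
# Toy comparison of the three margin-of-safety options (all figures invented).
p, s = 0.02, 10_000_000        # best-estimate probability and severity of loss
bump = 1.20                    # an illustrative 20% margin of safety

best_estimate = p * s                # roughly 200,000

option_1 = (p * bump) * (s * bump)   # bump both parameters       -> roughly 288,000
option_2 = (p * bump) * s            # bump one parameter only    -> roughly 240,000
option_3 = best_estimate * bump      # bump only the final number -> roughly 240,000
```

In a purely multiplicative formula like this, options 2 and 3 happen to produce the same number; the difference is where the adjustment lives in the model rather than the size of the answer, which is what the staging question below turns on.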
It’s on my mind because it’s a concrete example of the offhand remark I made on Wednesday about analytic philosophy potentially having a lot to contribute to accountancy. My intuition is that option 3 above is the one that does the least damage, and that the correct place to put margins of safety is right at the end of the process. But this is not the accounting standard! IFRS 9 (for it is she) requires that “model overlays” of this sort be carried out in such a way as to affect the “staging” of a loan portfolio. (It would be tedious to explain in detail what staging is, but basically, it matters for accounting purposes whether something has experienced a “significant increase” in the probability of loss in the last year.)
The European bank supervisors wrote about this last year, reminding me that my intuition is wrong, but also reminding me that even though option 3 is not allowed under the relevant standard, it is still the most common business practice for the banks they regulate. There is no real conclusion to this post; I just thought you might find it interesting.
Option three at least seems to be the most honest: the point is (isn't it?) that you're pretty sure that the actual expected loss figure is too low, so it should be higher; you don't have a good reason to fudge the inputs in any specific way, so any fudging of the inputs you do is in service of making the output higher. So why not just make the output higher directly?
Isn't there a sensitivity element to this that suggests 1 and 2 are the right place? If there's a parameter with a lot of uncertainty (of whatever sort), and that has a nonlinear effect on the magnitude of the loss, you're going to want to dick about with that to get a sense of how big a mistake it could be.
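That nonlinearity point is easy to illustrate with a made-up severity function (quadratic in some uncertain exposure parameter, which is my assumption, not the commenter's): wiggling the uncertain input tells you how badly wrong the answer could be, in a way that a flat bump on the output can't.

```python
# Hypothetical: severity depends quadratically on an uncertain exposure parameter,
# so modest input errors are amplified in the expected loss.
def expected_loss(p, exposure):
    severity = 50 * exposure ** 2   # invented nonlinear severity function
    return p * severity

p = 0.02
for exposure in (400, 440, 480):    # best estimate, +10%, +20%
    print(exposure, round(expected_loss(p, exposure)))
# 400 160000
# 440 193600   <- a 10% input error becomes a ~21% error in the loss
# 480 230400   <- a 20% input error becomes a ~44% error in the loss
```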