And so we say goodbye, in the United Kingdom anyway, to the principle that bankers’ bonuses should be capped at no more than twice their basic salary. A bittersweet moment for me, as the idea that this might be a useful thing to do was at least partly one of my own contributions to the world of regulation.
I’ve been apologising for it to colleagues for a couple of decades, but way back in 1997, I wrote this thing for the Bank of England Financial Stability Review (later renamed the Financial Stability Report, but at the time the Inflation Report team didn’t want anyone stepping on their brand).
It makes for quite hard reading in retrospect, because it’s really quite a bad article, and I can’t understand why. But any of my readers with a background in finance and economics will enjoy laughing at 25-year-old me for drawing indifference curves with “risk” on one axis and “return” on the other, and for saying that binary options are “very difficult to price” (which I really don’t understand – they’re not, and I knew this at the time). And suchlike. I will simply say that lots of other people reviewed it and they didn’t spot this crap either.
But the key argument made in the piece is one that has popped up from time to time in all sorts of policy documents and speeches, in exactly the way you’d expect a “stylised fact” to do. It runs, sort of:
1. Bankers get paid big bonuses when a trade goes right, but don’t share so much in the losses when it goes wrong
2. This means that their payoff structure is asymmetric
3. Because of this, they have an incentive to increase risk, so as to maximise the value of this asymmetry
As Chart 4 of the piece shows, the structure can be shown to be analogous to a financial option, and it can be proved mathematically that the value of an option is strictly increasing in the volatility of the thing it’s an option on. (It can also be proved that in most relevant cases, although the sign has to be strictly positive, the actual sensitivity is just not big enough to produce anything like the behavioural effects attributed to it – something I only realised about a decade later, by which time I was no longer in the regulation business.)
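The option analogy is easy to make concrete. If you model a trader’s P&L as a normal random variable, a bonus paid only on positive P&L is a call option struck at zero, and its expected value has a Bachelier-style closed form whose volatility sensitivity is always positive. A minimal sketch (the numbers – expected P&L of 5, a 20% bonus share – are made up purely for illustration, not from the original article):

```python
import math

def expected_bonus(mu, sigma, share=0.2):
    """Expected value of a bonus paid as a share of positive P&L only.

    P&L is modelled as Normal(mu, sigma^2); the bonus share*max(P, 0) is
    then a call option struck at zero, with a Bachelier-style closed form:
    E[max(P, 0)] = mu*Phi(mu/sigma) + sigma*phi(mu/sigma).
    """
    d = mu / sigma
    phi = math.exp(-d * d / 2) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1 + math.erf(d / math.sqrt(2)))         # standard normal cdf
    return share * (mu * Phi + sigma * phi)

# Illustrative numbers only: expected P&L of 5, trader keeps 20% of gains.
low_vol = expected_bonus(mu=5, sigma=10)
high_vol = expected_bonus(mu=5, sigma=20)
print(low_vol, high_vol)  # the same bonus contract is worth more at higher volatility
```

The vega here works out to `share * phi(mu/sigma)`, which is strictly positive – consistent with the sign claim in the piece – but also bounded by `share * 0.4` or so, which is one way of seeing why the sensitivity can be real without being behaviourally enormous.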
If you think about things this way, the rationale is clear – if you cap the bonus so that the upside is limited in the same way as the downside, there’s much less incentive to take risks. In fact, if the cap is placed sufficiently aggressively, you can change the payoff diagram to one that has negative “vega” (sensitivity of the value of the option to volatility), and make the bankers behave more responsibly.
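In the same toy model, a capped bonus is a call spread: long a call struck at zero, short a call struck at the cap. A sketch of the negative-vega point (again, all numbers are hypothetical, and the “cap below twice expected P&L” condition is just what this particular normal model implies, not a claim from the original article):

```python
import math

def call_value(mu, sigma, strike):
    """E[max(P - strike, 0)] for P ~ Normal(mu, sigma^2), Bachelier form."""
    d = (mu - strike) / sigma
    phi = math.exp(-d * d / 2) / math.sqrt(2 * math.pi)  # standard normal pdf
    Phi = 0.5 * (1 + math.erf(d / math.sqrt(2)))         # standard normal cdf
    return (mu - strike) * Phi + sigma * phi

def capped_bonus(mu, sigma, cap, share=0.2):
    """A capped bonus, share*min(max(P, 0), cap), is a call spread:
    long a call struck at 0, short a call struck at the cap."""
    return share * (call_value(mu, sigma, 0) - call_value(mu, sigma, cap))

# Illustrative numbers: expected P&L of 5, cap of 6 on the trader's slice.
# In this model the cap flips the sign of vega once it sits below twice
# the expected P&L -- an "aggressive" cap in the sense of the text.
print(capped_bonus(5, 10, cap=6))  # value at sigma = 10
print(capped_bonus(5, 12, cap=6))  # smaller value at sigma = 12: negative vega
```

A loose cap (say, 20x expected P&L) leaves the payoff behaving like the uncapped option, with positive vega; only an aggressive one reverses the incentive, which is the sense in which the cap has to be “placed sufficiently aggressively”.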
The trouble is … this is, like so many attempts to apply economic modelling to complicated management problems, methodologically tonto when it comes to the unstated assumptions. What we have here is a theory of a banker who goes out and takes risks at his or her own choice, in which the only means of controlling those risks is fiddling around with the bonus system to make it “incentive compatible”. I was writing based on the literature of the “principal-agent problem”, beloved of undergrad syllabuses, in which the question is how to align the incentives of an employer and employee when the employee’s effort and output are not directly observable.
But this is meant to be a bank! If you’re a bank, and you’ve got an employee who is taking risks that you can’t measure, observe or monitor, then that’s your problem right there! In any real-world case, the better approach to controlling risk-taking is to set limits, report frequently and close down any positions that are getting too large or too risky. Real-world risk management problems are usually of the form “this stuff is held in six separate SQL databases, plus four or five spreadsheets on people’s desktops, and it takes the product controller most of an afternoon to get it added up; we’ve asked CapGemini to help install a unified system but they’re quoting silly money”. And in the real world, banks don’t usually go bust because someone took a calculated gamble, intentionally taking a big risk. They go bust because someone took an absolutely huge risk without realising they were doing so, usually because of a bad accounting system that encouraged them to treat a probability close to zero as if it was actually zero.
It’s a big old blind spot in economics that it tends to treat information in a really unsophisticated way – most of the time, you’re either waving the problem away by assuming perfect information, or waving it away in a different direction by assuming that monitoring is impossible and that incentives are all you have to design with. As a matter of intellectual history, I would guess this is because the big debate over information in economics was the Socialist Calculation Debate of the 1920s, and so it was all regarded as settled ground by the time that actual information theory was invented by Wiener and Shannon in the 1940s.
And, of course, institutionally, my article was written entirely about Nick Leeson and the collapse of Barings Bank in 1995, the regulatory epicentre of which was in the office next door to the one I wrote it in. It’s at least arguable that something like the “trader’s option” payoff structure was at least part of what motivated Leeson, and Jérôme Kerviel later. But the problem with rogue traders in general is not really one of individual incentives; it’s the systems that fail to regulate them.
The comp question is not one of risk, I think. The issue is what kind of people do you want to attract to banking: entrepreneurs or bureaucrats? Bonuses attract entrepreneurs; salaries attract bureaucrats.
Banking used to be a fairly bureaucratic business, or at least Walter Bagehot thought so. The old joke was “3-6-3”: take money in at 3%, lend it out at 6%, and hit the links by 3pm. Traders are not bureaucrats. Central banking is still a bureaucratic business, and has a very muted bonus structure. Bank ops are bureaucratic.
Do you want to trust the payment system to entrepreneurial types? This is the main argument for financial segmentation, and weakened connectivity between the entrepreneurial bits of financial services and the plumbing. Financial regulators don't think at this level, because their economists have taught them not to look beyond their precious capital rules.
Levine was quite good on this development. The PRA explicitly made the point that having a large fraction of banker compensation be discretionary is inherently risk-controlling for banks, because banker compensation is their main expense, and a discretionary expense is one that can be cut in a bad year. The proposition that delaying this compensation will make it harder to game the system seems at least superficially plausible?
Anyway, here is how Levine put it in his summary graf:
"You can see why, after 2008, politicians and regulators wanted banks to take less risk. But discouraging bankers from being risk-takers has some bad consequences. Banks are fundamentally risky: They are leveraged businesses, they borrow short to lend long, there is always some risk of blowing up. Silicon Valley Bank failed this year because it bought too many Treasury bonds; surely it didn’t intend that to be a big risk. If you get rid of all the risk-takers at a bank, then there will be no one left who is good at taking risks, but the bank will still be taking risks. You have to strike the right balance. Maybe the new regime does that: Paying big bonuses attracts risk takers, but paying big deferred bonuses attracts thoughtful, long-term risk-takers."