Sorry that the normal Wednesday post didn’t arrive – it’s been an absurdly busy week one way and another. FT readers got a double dose of me with this interview in the Robert Armstrong newsletter (which I’m really pleased with, particularly the subtle and sensitive edit to my stream of consciousness garbage which turns it into a very good summary of what the book’s about), and this writeup in the Tim Harford column!
Anyway, readers may or may not know that my day job is consultancy in the foothills of financial regulation, and I thought this document was more interesting than anyone had any right to expect from an update on a bank supervisor’s guidance on cybersecurity.
The thing that caught my eye was that the main problem identified by the Swiss bank supervisor had surprisingly little to do with any technical aspects of cybersecurity; it’s simply that when asked, lots of banks were not able to provide a comprehensive inventory of all the different systems (particularly those of outsourced service providers) which were holding critical customer data.
I’m told by people in the field that this is an endemic problem – the marketing department wants to try out a new agency for a social media campaign and bam, there goes a gigabyte of client data over the wires, to sit on servers you have no control over and no assurance will be protected with the same care you take yourself. Everyone is meant not to do this, but it’s awfully hard to prevent them, or to force them to update the log of who has what data privileges. One of the key benefits of occasional “red team” or “penetration testing” exercises is that they often have the effect of improving central management’s visibility of what their system actually looks like.
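Purely to make the idea concrete, here’s a miniature sketch of what the sort of inventory the supervisor is asking for might look like. The field names and the “staleness” rule are my own invention for illustration, not anything taken from the supervisor’s guidance or a real bank’s register.

```python
# Hypothetical sketch of a "who holds what data" inventory entry.
# Field names and the review rule are invented for illustration only.
from dataclasses import dataclass
from datetime import date


@dataclass
class DataSharingRecord:
    system: str                 # internal system or external service
    provider: str               # who operates it ("in-house" or a vendor)
    outsourced: bool            # does the data leave our own infrastructure?
    data_categories: list       # e.g. ["client identities", "account balances"]
    last_reviewed: date | None  # when anyone last checked this entry


def stale_entries(inventory, as_of, max_age_days=365):
    """Return records that nobody has reviewed within max_age_days."""
    return [
        r for r in inventory
        if r.last_reviewed is None
        or (as_of - r.last_reviewed).days > max_age_days
    ]


inventory = [
    DataSharingRecord("CRM", "in-house", False,
                      ["client identities"], date(2024, 1, 10)),
    DataSharingRecord("social media agency upload", "NewAgencyCo", True,
                      ["client identities", "marketing preferences"], None),
]
print(stale_entries(inventory, as_of=date(2025, 1, 1)))
```

The hard part, of course, is not the data structure; it’s getting the marketing department to tell you that the second row exists.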
It reminds me of a general problem, which is really dealt with by Stafford Beer only in passing. One of the key edicts of management cybernetics is that “it is not necessary to look inside the black box in order to understand its behaviour”. But this is only true a) if you’ve made the right decisions about how to divide things up into black boxes and b) if you’re prepared to accept that the behaviour of the black box might often be really quite illogical or pathological. Most of the time, corporate decision-making systems are what Beer called “muddy boxes” – the extent to which you can understand the internal workings depends on how much effort you are prepared to put in. And in my view, it’s often important to spend at least some time wiping the glass on the muddy box, to be sure that you have an accurate picture of the true inputs and outputs.
To give a more concrete example of this abstract idea, I was talking to someone from the Bank of England last week about some research they were doing on banks’ risk-taking incentives. Over the course of the conversation, it became really clear that they were thinking about a model in which the bank was a black box which took information inputs in the form of risk and return, then made decisions about its output based on some internal objective function. It’s a sensible way to model things from an economic perspective, but as a description of risk management in the real world it seemed very much at odds with my understanding of reality.
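For what it’s worth, the model my interlocutor seemed to have in mind looks roughly like the toy below: a menu of risk/return pairs goes in, an internal objective function picks one out. The numbers and the quadratic risk penalty are purely my own illustration of the textbook mean-variance view, not anything from the actual research.

```python
# Toy version of the "bank as black box" model: the bank observes a menu of
# (expected return, risk) pairs and picks the one that maximises an internal
# objective function. Numbers and functional form are illustrative only.

RISK_AVERSION = 2.0  # assumed parameter of the bank's objective


def objective(expected_return, risk, risk_aversion=RISK_AVERSION):
    """Textbook mean-variance style utility: return minus a risk penalty."""
    return expected_return - risk_aversion * risk ** 2


# A menu of hypothetical positions the bank could put on.
menu = [
    {"name": "govt bonds",      "expected_return": 0.02, "risk": 0.01},
    {"name": "corporate loans", "expected_return": 0.06, "risk": 0.12},
    {"name": "complex derivs",  "expected_return": 0.15, "risk": 0.35},
]

best = max(menu, key=lambda t: objective(t["expected_return"], t["risk"]))
print("black-box bank chooses:", best["name"])
```

The whole point of the next couple of paragraphs is that, inside a real bank, nobody has a clean menu like that to feed into the function in the first place.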
At least twenty per cent of the job of risk management is persuading people to email you spreadsheets on time. Bringing together the sort of consolidated picture of risk and return that’s assumed in my friend’s research is difficult, and according to the supervisors hardly any bank in Europe is really capable of doing it to the standard they consider acceptable.
Risk / return tradeoffs are not really a matter of “let’s buy a load of complex derivatives, if we make a profit we get a huge bonus and if we make a loss the state will bail us out”. They’re much more like “this looks safe but I’m not sure I trust the data vendor, we ought to have some hedges here but Bertrand is on holiday so he hasn’t updated the database and half the values in this spreadsheet seem to be hard-coded, the client is breathing down our necks so I’m going to say let’s do the trade and we’ll try to sell it on”.
The only people who seem to think about this at all systematically are the military strategy types, where “the fog of war” is acknowledged to be a limitation on the possibility of central control. Which seems crazy, because lack of knowledge of the positions of one’s own assets is surely one of the most ubiquitous problems in management and society.
This definitely reminds me of all the risk management processes I've seen or participated in.
Of course, the meta-problem is also one that IT largely claims to have solved, and I think you can argue that the gap between that claim and reality creates some of our current problems.
I'm not a banker, so let's talk about a widget maker. Back in the day, if you were a widget maker with a satellite factory in... Chile, you basically interacted via phone and telex. You knew you couldn't really know what was going on, so (a) you had to trust the people out there (a long-standing problem), but also (b) you were virtually forced by circumstance to delegate lots of things to them, because you knew you couldn't know enough.
The promise of IT, be it SAP, spreadsheets or whatever, is that you could have all the information flow up and centralise – and, along the way, get rid of a slice of lower management. I'd suggest that the reality is that it doesn't quite work – but while people are happy to say "it doesn't work, we need a better IT system", they are much less happy about the idea of "it doesn't work, maybe we need a middle manager post closer to the action, with enough power delegated to them to be useful".
Even 20% emailing spreadsheets seems too high. I haven't worked much in a big bureaucracy since retiring from the World Bank in 2011, so this is not my area, but why isn't zero percent the optimal number, given that people can work from shared websites or network drives? Emailing spreadsheets seems to create a seriously non-linear problem of version control.