This week I find myself thinking about “cultures of regulation”, and what they imply for organisation. For another thing I’m doing, I’ve had to go back and review a few things I did about anti-money laundering regulation, and was reminded that this is often a uniquely dysfunctional part of banking, because it’s a place where two very alien cultures bump up against each other. And it also gives me another chance to work on my skills in terms of applying cybernetics to real-world policy issues.
What I mean by this is that bank compliance departments work on a “regulatory” model – their job is to be clear about the rules and to set up systems (System 2, if you bought my book or Stafford Beer’s) to make sure that they are complied with. Systems of this kind are meant to “just work” as much as possible – reports are generated automatically, escalations are triggered automatically, and checks are either literally ticked off by machine, or done by hand in something that’s visibly an industrial process. They are meant to reduce the amount of variety being handled by the rest of the system by closing off actions that will cause problems or get in the way of things.
This is a problem, because the anti-money laundering authorities work on a “law enforcement” model – they try to find the people doing the crimes and then to put them in jail. Their whole job is to seek out the unknown, and so to increase the variety of the system, not to reduce it.
And this results in the perennial quest of industry lobbyists for a “safe harbour”. Compliance officers do not like to hear things like “if you can’t do the time, don’t do the crime” or “ask yourself, do I feel lucky, well do ya, punk”. They want a definite list of required due diligence, such that they can be sure that if the checklist is followed, nobody will go to jail. Which is the thing that the law enforcers can’t give them. Partly because as soon as you set up a standard of what must be checked, you are implicitly creating a negative template of things which will never be checked. But mainly because the whole concept of a safe harbour is alien to the culture.
At the time of writing, these two houses alike in dignity have reached a sort of compromise, the nature of that compromise being that for their part, the compliance officers will absolutely spam the Suspicious Activity Report system with anything whatsoever that might possibly look bad if it turned out that a crime was being committed, while the law enforcers will cycle through stages of commissioning a new machine learning system, trying to persuade the banks not to carry out “defensive reporting”, being told that the only thing that would stop them is a legal safe harbour, rinse and repeat.
I do not have a solution to this problem!
So, I think I’m going to need to do the mathematician’s trick, and try to solve a general class of problems which has this as one of its special cases. The general problem here is of “regulation by enforcement” (to adopt a crypto industry slogan, but one which frankly has quite a lot of validity as a description of the SEC’s approach). It’s often very inefficient – it gives rise to a lot of problems similar to the spamming of suspicious transaction reporting – for companies to be uncertain about the precise nature of the rules, but it’s also often undesirable to create safe harbours. And some level of regulation by enforcement is inevitable.
One of the central issues of Malcolm Sparrow’s writing on “regulatory craft” is that the structured use of discretion in enforcement is an intrinsic part of regulation which can’t be removed – either you have a rulebook which sets out general principles (and therefore there is discretion in their application to specific cases), or you have a rulebook which is full of detailed requirements for nearly every conceivable case (in which case, given finite resources for enforcement, you have discretion about which bits of the rulebook to lean on). Or indeed, both.
For yet another thing I’m doing at present, I’m trying to make the case that many questions of “culture” can be reframed as questions of information design, and I think that the regulation/law enforcement divide is one of them. But this post is already too long, so I’ll return on Friday.
This is a topic that fascinates me. I have an initial question: what are you going to assume, or how are you going to think, about information flows to the regulator / enforcer function? In the financial-crimes area existing enforcers have excruciatingly imperfect information about the success of their efforts (e.g., how much money laundering is actually going on in their jurisdiction, which is the thing you would want to minimize from an overall design perspective) and tend to focus on metrics like number and size of enforcement actions and fines to show success. To the extent they get feedback from the political system it tends to be very crude (e.g., "show that we're tougher on big banks" or "do less to antagonize the crypto industry"). To assume that problem away risks creating a fantasy, but taking it head-on seems like a real problem for applying your methods. I'd be happy to be proven wrong though!
This was a huge issue relevant to tax evasion/avoidance/minimisation/planning (note the euphemism cycle). Tax lawyers and accountants wanted certainty, which meant allowing dodges until they were detected, then patching them. This was facilitated by courts and particularly the Chief Justice in the 1970s, Garfield Barwick, whose literalist rulings made taxpaying optional for those with good lawyers.
Eventually the state responded with a series of measures which roughly matched “Do you feel lucky”:
* A general anti-avoidance provision (this existed, but had been read down to insignificance by the Barwick court).
* Changes to the Acts Interpretation Act, explicitly telling the courts to follow the announced intention of laws, rather than their own literal interpretation.
* Retrospective legislation: when a scheme is discovered, it's declared illegal retrospectively.
Lawyers hated all of these, but the net effect has almost certainly been to increase certainty about what is legal and what is not.