I've been working recently on a project with Henry Farrell about possible futures and reactions to the USA’s weaponisation of the dollar system, which I'll point to when it gets published somewhere. And also, in my day job, on the development of the digital euro. And so I've been thinking, in more general terms, about the way in which we've decided (not always with very much actual consideration) to turn the payment system into a tool of the state.
People like David Graeber would obviously argue that payments are intrinsically tools of government, that debt-money historically precedes commodity-money and that simply being involved in a nexus of transactions makes you more governable, and maybe that’s right. But I’m talking about something more direct and immediate: the development of the actual network into something the government can use, rather than a service it provides.
(It helps, IMO, to keep at the front of your mind that when we talk about “the payment system” in a modern economy, we’re just talking about a telecoms network. It happens to be a very fast, very secure, very reliable network, which is unusually picky about who it allows to connect to it and how. But the reason that it can be used as a tool of surveillance, and the reason that the threat to deprive someone of access to it can be used as a tool of coercion, is that unlike cash or commodity-money, book entry money is intrinsically a business of information processing.)
What interests me is that fear and distrust of this aspect has a history that goes right back into the David Graeber timescale. The passage in the Book of Revelation which brought the number 666 to heavy metal immortality is possibly the earliest recorded mention of payment systems regulation:
“And he causeth all, both small and great, rich and poor, free and bond to receive a mark in their right hand, or in their foreheads; and that no man might buy or sell, save that he had the mark, or the name of the beast, or the number of his name”.
It makes KYC guidelines look positively libertarian, to be honest. It also throws quite a light on the debate over Central Bank Digital Currencies – I used to keep trying to suggest to people at trade conferences that they were going to get a hell of a backlash if they kept telling objectors “it’s nothing to be scared of, we’re just going to introduce a system where there’s a special number that you need to have in order to be able to buy and sell things”. Possibly understandably given my reputation and past behaviour, they mainly assumed I was joking.
I think there is a buried intuition about fairness – not a very strong one, or it would have been noticed earlier, but something that’s potentially politically significant – that these big networks which everyone has to participate in are a sort of common space, and people don’t like being reminded of the fact that they’re actually a big information processing system (and therefore a system of power to exclude and surveil, thank you Professor Foucault) which somebody is in charge of.
The fact that the intuition is weak and easy to miss has led, to my mind, to some utterly incoherent policymaking. On the one hand, social media networks can have their CEO literally intervening personally overnight to alter the algorithm and edit what’s shown to users, while still benefiting from an exemption which holds them to be wholly neutral platforms with minimal responsibility for the uses made of them. On the other hand, Deutsche Bank got fined $150m by the New York State Department of Financial Services[1] because they should have known that “payments directly to women with Eastern European surnames” were part of Jeffrey Epstein’s crimes.
From my point of view, I think that neutral spaces have the advantage that they respect the principle of variety. If there’s something the state wants to do, then it’s more economical in terms of information processing capacity to match a specific resource to that problem, rather than trying to incorporate it into a system which is mainly for something else. (I think this principle extends to a lot of consumer boycotts that I don’t take part in; it’s exhausting and wasteful of effort to spend your whole day trying to find out what other activities organisations you interact with might be doing, and dangerous to delegate this monitoring to someone else).
To be clear, that’s not an absolute principle by any means – it’s more of a tactical heuristic, like the principles of taxation which really just amount to “this is probably the best way to get the maximum number of feathers from the goose with the least amount of hissing”. The initiation of a bank account is a good opportunity to do some checks on whether someone is the sort of person who ought to have one, and since payments systems are always monitoring fraud and network abuse, it’s not too big an ask for them to add a few more patterns of potential suspicious activity. But it is something to think about in designing systems. Whenever there’s an opportunity to allow people to treat something as a simple black box and save their valuable bandwidth for something else, there needs to be a reason not to take it.
[1] Perhaps not technically completely true; although the NYDFS consent order does contain that phrase, this was a period during which Deutsche was committing so many offences that it was impractical to deal with each one individually, so this penalty also covered its relationship with Danske Bank Estonia and one other money laundering operation.
This feels somewhat similar to the idea of "separation of concerns" in software engineering: the notion that each component should have one job, and we should tackle complex problems by composing such components rather than by complicating them (literally binding them together, if you dig into the etymology of "complicate"*).
Once you've complicated something with something else, it becomes a lot harder to hold that thing accountable for what you might assume to be its primary responsibility, because it now has this other responsibility and might end up trading these off against each other in surprising ways. A lot of "computer says no" scenarios are precisely that: some apparently simple request is denied because that request has become complicated with a bunch of other concerns that you might not even know exist, and which you would find hard to reason about even if you did.
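To make that concrete, here's a minimal sketch of my own (not anything from the post or the talk; all the names and the toy payment-check domain are invented for illustration). The "complicated" version buries several concerns in one opaque yes/no; the decomposed version gives each concern its own function and composes them, so a "computer says no" comes with the name of the check that failed.

```python
# Each concern is its own function with one job and a uniform
# (user, amount) signature so the checks compose cleanly.
def kyc_clear(user, amount):
    """Identity concern only; the amount is irrelevant here."""
    return not user.get("kyc_pending", False)

def within_limit(user, amount):
    """Credit concern only."""
    return amount <= user.get("limit", 0)

def check_payment(user, amount, checks):
    """Run single-purpose checks and return the names of any that fail.
    An empty list means the payment is allowed; a non-empty list is a
    'no' that can actually be explained and held accountable."""
    return [check.__name__ for check in checks if not check(user, amount)]

alice = {"kyc_pending": False, "limit": 100}
print(check_payment(alice, 50, [kyc_clear, within_limit]))   # []
print(check_payment(alice, 500, [kyc_clear, within_limit]))  # ['within_limit']
```

The point of the composition is exactly the accountability the comment describes: any new concern has to arrive as a named, visible check rather than being quietly entangled with the others.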
* I got this from Rich Hickey - this talk is an hour long but the etymological bit is in the first 10 minutes or so: https://www.youtube.com/watch?v=SxdOUGdseq4
As soon as there's an editorial selection procedure (automating such a procedure doesn't make it an "algorithm") neutrality is a nonsense. We almost reached this recognition with the mainstream media, then with X and FB, and now we are having to relearn it with LLMs, where people imagine that "ask ChatGPT" gives you an objective answer. Grok and DeepSeek will disabuse them of this sooner or later.
The only neutral way of doing this is to let people select who to follow and present a sequential feed.
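In code terms, that "neutral" feed is just a time-ordered merge of the accounts the user chose to follow, with no ranking step anywhere. A minimal sketch under my own assumptions (posts are `(timestamp, text)` pairs, each account's list already in time order):

```python
import heapq

def sequential_feed(followed_accounts):
    """Merge followed accounts' posts into one strictly chronological feed.

    Each account is a list of (timestamp, text) pairs sorted by timestamp.
    heapq.merge does a lazy k-way merge of already-sorted inputs, so the
    only 'editorial' input is the follow list itself.
    """
    merged = heapq.merge(*followed_accounts, key=lambda post: post[0])
    return [text for _, text in merged]

alice = [(1, "alice: hello"), (4, "alice: again")]
bob = [(2, "bob: hi"), (3, "bob: more")]
print(sequential_feed([alice, bob]))
# ['alice: hello', 'bob: hi', 'bob: more', 'alice: again']
```

Anything beyond this (deduplication, demotion, "relevance") reintroduces a selection procedure, which is the comment's point.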