Reader feedback on the recent post about artificial intelligence in art ended up in a discussion of information bottlenecks in economics, in which Steven Clarke and Jason Smith pointed me in the direction of a general issue in the theory of neural networks called the “Bottleneck Principle”, and speculated about whether the price system was an information bottleneck. I’m going to talk at ninety degrees to that to begin with, because my personal hobby horses are in a different direction, then come back to thinking about markets as a bottleneck in the technical sense.
In a simple sense, of course the price system is a bottleneck. It’s a way of reducing the complexity of the world so as to model it. The trouble comes in when you start mistaking the map for the territory.
Most of the time, people don’t find it difficult to brush off the kind of undergraduate logic which claims that all actions are “really” motivated by self-interest; people might not be able to untangle what’s specifically wrong with that argument, but they quickly intuit that it’s some sort of semantic word game not worth getting involved with. And broadly, they are right.
For some reason, though, people have a much tougher time with exactly the same argument when it is applied to companies: the claim that firms are only ever “really” motivated by profits. And it’s worse in terms of creating misunderstandings, because while everyone knows that “self-interest” is a complicated, individual and highly contestable concept, people who have never had to crack the spine on a set of accounts often think “profits” are a straightforward objective fact.
Consider the famous Blinder study on pricing behaviour. Among many other fascinating findings (including that expectations of future inflation seemingly play a very limited role), this survey asked firms why they didn’t change their prices more often. The single most frequently given reply (see p12) was “It would antagonize or cause difficulties for our customers”. In a very revealing footnote on p17, the authors say “One reader asked why we included ‘antagonizing customers’ on the list at all. The answer is simple: in pretesting, respondents kept bringing it up”.
As that page of the study makes clear, it’s quite difficult to fit this answer into the normal modelling framework – Blinder and his co-authors treated it like it was an adjustment cost, but noted that this is a bit of a fudge. The ‘cost’ of losing future orders because you annoyed customers with a price change isn’t really like a menu cost or an administrative friction.
It is possible to translate this answer into a framework so as to make “it would antagonize or cause difficulties for our customers” into a complicated statement about conditional forecasts of future prices and quantities, but as soon as you start to do so, you surely begin to realise that you’re engaged in the same kind of intellectual parlour game as a bright undergraduate student who has set out to prove that humanitarian aid workers in epidemics are as selfish as the rest of us.
You can represent it that way with a lot of effort, but the information isn’t really about that; that’s not how it enters into the decision making process. Business managers know that certain things annoy the clients, and they know that you’re best off not annoying the customers, and they even mentally associate these facts with their sales forecasting. But they don’t reduce everything to a set of conditional distributions over profits and maximise – that’s not an accurate description of the process.
What I’m saying here is that the representation in terms of prices and quantities is an information bottleneck; it’s chosen by some kinds of modellers for exactly that reason, in order to make modelling possible at all. This only becomes a problem when people forget that there’s an information-reducing filter sitting in front of their model, and try to use simple statements about prices and quantities to overrule decisions made by people using the information they actually have.
I think this is an important step in understanding what went wrong during the period of “financialisation”; a lot of information was intentionally thrown away and replaced with price-and-quantity mechanisms, because it was felt that the cost of employing a bureaucrat to deal with the full information set was wasted. But on the other hand, I promised above that I’d come back to the technical sense of the information bottleneck, and there’s another thing to understand there too.
In neural network computing, it seems that it’s often quite important to include a stage that acts as a bottleneck; it makes the results better and more robust. In so far as I can understand the literature, the idea is that a neural net with sufficiently many layers and parameters will always try to fit the entire dataset, conforming exactly to its “shape”. If you include a bottleneck stage to force it to throw some information away, then it has to make an effort to capture the actual structure of the dataset and fit to the underlying process that is (noisily) generating the output. Effectively, unless you force the model to make a choice, it has no reason to treat signal any differently from noise.
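The intuition can be shown without a neural network at all, using the simplest possible stand-in for “capacity”: polynomial degree. This is my illustration, not anything from the literature the post alludes to; a degree-9 polynomial through ten noisy points plays the role of the unconstrained net that fits the entire dataset, noise and all, while a degree-1 fit plays the role of the bottleneck, forced to throw information away and so obliged to capture the underlying (linear) process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Underlying process: a straight line, observed with noise.
def sample(x):
    return 2.0 * x + 1.0 + rng.normal(0.0, 0.3, size=x.shape)

x_train = np.linspace(0.0, 1.0, 10)
y_train = sample(x_train)

# Capacity to spare: a degree-9 polynomial through 10 points
# reproduces the training data exactly -- noise included.
wide = np.polyfit(x_train, y_train, 9)

# The "bottleneck": a degree-1 fit cannot memorise the noise,
# so it has to settle for the structure of the generating process.
narrow = np.polyfit(x_train, y_train, 1)

# Fresh draws from the same process, between the training points.
x_test = (x_train[:-1] + x_train[1:]) / 2.0
y_test = sample(x_test)

err_wide = np.mean((np.polyval(wide, x_test) - y_test) ** 2)
err_narrow = np.mean((np.polyval(narrow, x_test) - y_test) ** 2)

print(f"unconstrained model, MSE on fresh data: {err_wide:.3f}")
print(f"bottlenecked model, MSE on fresh data:  {err_narrow:.3f}")
```

The unconstrained fit scores perfectly on the data it has seen and badly on fresh data, because it had no reason to treat signal any differently from noise; the constrained one was forced to make a choice, and the choice generalises.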
Which is what the price mechanism does too; it forces decisions. Although “this is going to annoy the customers” and “that sensor could lead to a catastrophic stall” aren’t really information about sales and revenues, the fact that they could be translated into those terms is what gives them a causal role in decision making.