on being a black box
coarse graining and career risk
busy this week, so this is an excerpt from a work in progress. I was inspired to pick this particular excerpt while reading Henry Farrell and Cosma Shalizi’s essay on AI as social technology. A hell of a lot of what I’ve been thinking and writing about for the last couple of years has stemmed from the incredible seminar that Henry and Cosma invited me to, at which I spent quite a while wondering at the fact that the data scientists, political scientists, sociologists and philosophers all had different words for the central concept that I thought was called a “system of accounts”.
Among other things, one of the key subjects of my current book is the concept of “career risk”. It’s one of those things that everyone instinctively knows, but which doesn’t seem to have a rigorous theory behind it. So the only thing to do is what I always do, which is to theorise it myself, at which point Sod’s Law begins to operate and I suddenly discover all the famous people who have been writing about it since 1947.
I think career risk does explain a lot, both in the sense that a lot of things in the modern world happen because of it, and in the sense that it’s a very useful generalisable concept. Henry and Cosma talk about “coarse-graining” as an essential feature of bureaucracy, and below, I argue that one inevitable consequence of coarse-graining in organisations is that it provides the ground for career risk (and therefore, for certain kinds of friction and entropy). Obviously, for the same reason I was thinking about accounting systems while everyone in Santa Fe was talking about algorithms and hierarchies, I use the term “black box” instead of “coarse grain”, but we’re all looking at different parts of the same elephant.
Black box actors are the condition for the existence of career risk
It seems, then, that our theory of career risk also ought to be built on the understanding that it is a problem of expertise. It arises when there are people who make decisions because they have a special role in the information processing system. This special role is that of being a “black box”.
I’m using the term “black box” here in a bit of a specialised way. There’s a bunch of theory in the background, for which I’m sorry. I learned it from Stafford Beer’s books on management cybernetics, as a way of modelling complex systems. When you’re doing that, there is an obvious need to reduce that complexity to a sensible level in order to explain what is going on. One way to reduce complexity is to take some part of the system, draw a box around it and then shade that box in to be completely black[1], visually declaring that you are no longer going to worry about the internal structure of that part of the system, but rather just treat it as something with inputs and outputs. The information going in can be seen, as can the decisions going out, but the process by which one is converted into the other is opaque. Not necessarily because it absolutely has to be mysterious, but rather because doing it any other way would be unrealistically expensive in terms of time and effort.
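If it helps to see the idea in its most literal form, here is a toy sketch in code (my illustration, nothing more: the inspector and its decision rule are invented, and stand in for any delegated expert):

```python
# Toy illustration of a black-boxed actor. The class name and the
# decision rule are hypothetical; only the shape of the thing matters.

class BuildingInspector:
    """From the organisation's point of view, only the interface exists."""

    def decide(self, report: dict) -> str:
        # Everything inside this method is the shaded-in part of the
        # diagram: reasoning the rest of the system has agreed not to
        # look at, let alone second-guess.
        score = report.get("defects", 0) - report.get("remediations", 0)
        return "fail" if score > 2 else "pass"

# All the wider system ever observes is the input-output mapping:
inspector = BuildingInspector()
print(inspector.decide({"defects": 5, "remediations": 1}))  # -> fail
```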
And although it’s best understood as an analytical technique for thinking and writing about organisations, the principle of the black box rings fairly true at the level of ground truth as well. It’s a large part of what it means to delegate responsibility for something – to do so without also agreeing to treat the person or group that you’ve delegated to as a black box is the sin of “micromanagement[2]”. And there are people with particular decision-making qualifications or legal certification – like social workers, or building inspectors – who are intrinsically black-boxed. For these kinds of workers, the nature of their position gives them the authority not to be second-guessed.
I worry that I might just have made it sound quite fun to be a black box. To some extent, it is. It’s one of the nice things about being a professional, that your judgement and opinions are respected in the way that those of hourly employees often aren’t. But there’s always a double edge. The same black-box property which stops you from being second-guessed or overruled means that nobody is interested in your explanations for your decisions; it is definitional of being a black box that you are going to be judged by results[3].
And here we reach the payoff of our project of theorising career risk as a problem of information. If you are going to be judged by the results, and the results are uncertain (as they obviously are, in this naughty world we live in), then you are personally exposed to the risk. Furthermore, your risk exposure is exactly like that of the fund manager we were talking about at the beginning of this chapter; all your eggs are in one basket. This might be one decision among hundreds for your organisation, but it’s a much bigger part of your personal portfolio.
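To put a rough number on that intuition (my illustration, not anything from the chapter): if the organisation’s portfolio is N comparable, roughly independent decisions, the spread of its average outcome shrinks like 1/√N, while you carry the full spread of your single decision:

$$\operatorname{sd}\!\left(\frac{1}{N}\sum_{i=1}^{N} X_i\right) = \frac{\sigma}{\sqrt{N}}, \qquad \operatorname{sd}(X_{\text{yours}}) = \sigma.$$

For N in the hundreds, that’s a factor of ten or twenty between the organisation’s exposure to your call and your own.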
[1] I have actually done this once or twice, on a whiteboard when drawing management structure diagrams. It was unbelievably satisfying.
[2] In the jargon, there are also “muddy boxes”, which are treated as black most of the time, but which can be periodically opened up and examined when there is need to do so.
[3] If you’ll forgive even more jargon, the same management cybernetics books call this the problem of the “resource bargain”. Gaining the status of a black box is a kind of bargain, in which you gain autonomy in exchange for accepting accountability.

Does “muddy boxes” have a negative connotation in these sources? It sounds like it fits a pattern that’s generally considered a best practice in commercial software development.
You give a team a project, typically one that will last from a few weeks to a few months. You treat the team as a black box while the project is ongoing; they don’t need to get higher approval for the steps they take along the way. At the project’s conclusion, whether success or failure, you crack the box open and examine the decisions made along the way. You might call this a retrospective, after-action report, postmortem, murder board, etc.
That lets the team move quickly in the moment, while preserving the ability to justify, after the fact, actions that were positive expected value but didn’t pan out. Sometimes these are "blameless postmortems", with deliberate friction between the retrospective analysis and personal performance reviews.
This has always felt in practice like a really good model; is there a hidden downside I’m not considering?
Coarse graining in physics is a sort of recursive process: when you look into a coarse grain there are smaller coarse grains, and when you zoom out there are bigger ones. This recursive process leads to a mathematical group, the renormalization group.
Wikipedia first line: "In theoretical physics, the renormalization group (RG) is a formal apparatus that allows systematic investigation of the changes of a physical system as viewed at different scales."
In physics, people first started using renormalization in quantum electrodynamics -- Feynman, Schwinger and Tomonaga were the first to do it successfully -- because some of the finer grains produced infinities, and they had to absorb those infinities into the observed properties of the coarse grain and not worry about them except insofar as they produced little changes in the coarse grain. I think they didn't extend the single renormalization into a group. Later Leo Kadanoff and then Ken Wilson extended that idea into solid state physics, and now it's become a standard way of modeling and theorizing. It led to great advances in solid state physics in examining phase transitions between different states.
Kadanoff had introduced the idea of "block spins" for aggregating arrays of spinning (magnetic) molecules.
"The blocking idea is a way to define the components of the theory at large distances as aggregates of components at shorter distances." -- Wikipedia.
Doing all this recursively produces a group. It's led to great advances in quantum field theory and solid state physics.
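A textbook worked example of that recursion, added here for concreteness: for a one-dimensional Ising chain with reduced coupling K = J/k_BT, summing out every other spin leaves an identical chain with a new coupling K':

$$\sum_{s_2=\pm 1} e^{K s_1 s_2 + K s_2 s_3} = 2\cosh\!\big(K(s_1+s_3)\big) \;\propto\; e^{K' s_1 s_3}, \qquad K' = \tfrac{1}{2}\ln\cosh(2K).$$

Iterating K → K' → K'' ... is the recursive step, and composing these maps is exactly the (semi)group structure; the fixed points of the flow are where the phase transitions live. (In one dimension K always shrinks under this map, which is why the 1D Ising model has no finite-temperature transition.)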