(an occasional series on details and insights in the works of Stafford Beer. Previous episodes, one, two, and one that I did before coming up with the series title)
As promised on Wednesday, a little bit of tidying up of the official record. In The Unaccountability Machine, I wrote:
Stafford Beer used to say that the way in which computers were used in the 1970s was as if companies had recruited the greatest geniuses of humanity, before setting them to work memorising the phone book to save a few seconds turning pages.
On podcasts, I’ve got even more loosey-goosey with the quote, and a heavily embellished version talking about Einstein, Mozart, etc. has appeared on this blog more than once. I am really in no position to criticise LLMs for hallucinating things, and don’t want to become the source for an apocryphal saying, so here is the correct version. (Or at least one correct version; I think he said something similar in a few other printed works that aren’t online). It’s from “Fanfare for Effective Freedom”, a lecture that Beer gave in 1973 and which is referenced in the second edition of Brain of the Firm. I’ll quote the whole paragraph in which it appears:
What is the alternative to these inherited systems of lagged quantized reporting on what has happened and lagged, quantized response to projected change? The answer from the mid-sixties onward has been and remains real time control. We have the technology to do it. This concept was fundamental to the plan we drew up for Chile in late 1971. We would abandon the hare-and-tortoise race to make relevant statistics overtake the lag in data capture and analysis, and implant a real-time nervous system in the economy instead. We would forget about the bureaucratic planning systems that talk in terms of months and years, norms and targets, and implant a continuously adaptive decision-taking, in which human foresight would be permanently stretched as far in any context as this real-time input of information could take it. Above all, we would use our cybernetic understanding of filtration to deploy computers properly as quasi-intelligent machines instead of using them as giant data banks of dead information. That use of computers taken on its own as it usually is, in my opinion, represents the biggest waste of a magnificent invention that mankind has ever perpetrated. It is like seeking out the greatest human intellects of the day, asking them to memorise the telephone book, and then telling them to man ‘Directory Enquiries’ at the telephone exchange.
As you can see, Beer is talking about the use of computers in management, but specifically about using them to handle big data (by the standards of the time). He goes on to deal with a number of objections to this idea, which I think have a surprisingly contemporary feel, and which go towards answering the question “ok, what should we be doing to reorganise our systems to cope with new technology?”:
First Objection: The boss will be overwhelmed with data. Answer: Not so. This is what happens now, as any manager who has had a foot-high file of computer read-out slapped in front of him can attest. The idea is to create a capability in the computer to recognise what is important, and to present only that very little information - as you shall see.
Second Objection: The management machine will over-react to such speedy signals, which may not be representative. Answer: Not so. This also happens now, as shown embryonically in Figure 1. The objection disregards cybernetic knowledge of filtration, and damping servo-mechanics. [DD - Figure 1 is a chart showing the effect of the reporting lag on policy, not dissimilar from my own “cursed credibility curve”]
Third Objection: Such a system would be too vulnerable to corrupt inputs. Answer: Not so, again. Present inputs are corrupt and go undetected because they are aggregated and because the time has passed when they could be spotted. Clever computer programmes can make all sorts of checks on a real-time input to see if it is plausible.
Fourth Objection: ‘Intelligent’ computer programmes to do all this are still in the science-fiction stage. Answer: This is woolly thinking. People do not really think out what is involved because they conceive of the computer as a fast adding machine processing a databank – instead of seeing in the computer, quite correctly, the logical engine that Leibniz first conceived. The computer can do anything that we can precisely specify: and that includes testing hypotheses by calculating probabilities – as again you shall see.
Fifth Objection: Even so, such programmes would take hundreds of man-years to write and be debugged. Answer: I am sorry, but they did not. That is because the people involved in both London and Santiago were first rate programmers who understood what they were doing. Let me be brutal about this: how many managers are aware of the research done into the relative effectiveness of programmers? They should be. The best are anything from ten to twenty times as good as the worst; and when it comes to cybernetic programming, only the very best can even understand what is going on.
Sixth Objection: A real-time system with on-line inputs? It is Big Brother; it is 1984 already. Answer: Stop panicking and work out the notion of autonomy. I have still more to say about this later. All technology can be, and usually is, abused. When people turn their backs on the problem, crying touch-me-not, the abuse is the worse.
Seventh Objection: Only the United States has the money and the knowledge to do this kind of thing: let them get on with it. Answer: “I find that slightly boring”. This objection was voiced to me in one of the highest-level scientific committees in this land. The answer came from the Chairman and I was glad not to be in his withering line of fire at the time. But he did not prevail, and neither did I.
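The filtration idea behind the first three objections translates quite naturally into modern terms. Here is a toy sketch of it - not Beer's actual Cyberstride programs, whose statistical machinery was more sophisticated; the thresholds, names and smoothing scheme below are entirely my own illustrative assumptions: quarantine implausible inputs, damp the signal with exponential smoothing so the machine doesn't over-react, and pass only genuine exceptions up to the boss.

```python
# Toy sketch (not Beer's actual Cyberstride suite) of real-time filtration:
# reject implausible inputs, damp the signal, surface only exceptions.
# All thresholds and parameter names here are illustrative assumptions.

def filter_stream(readings, alpha=0.3, plaus_band=(0.0, 100.0), sigma_k=3.0):
    """Return only the readings a manager should actually see."""
    mean, var, alerts = None, 0.0, []
    for t, x in enumerate(readings):
        lo, hi = plaus_band
        if not lo <= x <= hi:            # Third Objection: corrupt-input check
            alerts.append((t, x, "implausible"))
            continue                     # quarantined; doesn't move the filter
        if mean is None:
            mean = x                     # initialise the damped level
            continue
        resid = x - mean
        sd = var ** 0.5
        if sd > 0 and abs(resid) > sigma_k * sd:
            alerts.append((t, x, "exception"))   # First Objection: only this surfaces
        var = (1 - alpha) * var + alpha * resid * resid  # damped variance
        mean += alpha * resid            # Second Objection: smoothed, servo-like update
    return alerts

# A steady series containing one corrupt spike and one genuine shift:
print(filter_stream([50, 51, 49, 50, 51, 50, 250, 51, 90, 50]))
# -> [(6, 250, 'implausible'), (8, 90, 'exception')]
```

The point of the exercise is the foot-high read-out problem: ten inputs go in, two lines come out, and the corrupt reading is flagged without ever contaminating the filter's state.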
I think I have a few readers at the Tony Blair Institute - if you can get any of these into one of the boss’s speeches, I will buy you a pint.
"That is because the people involved ... were first rate programmers who understood what they were doing."
Well that's right, but it's the clause at the end that is doing the work. A first rate programmer isn't just somebody with good knowledge of the mechanics of programming. Yes, it is helpful to understand technicalities - lambdas, closures, whatever - but the main point is to understand the business problem to be solved, and how to cast that into software. And the problem that most organizations struggle with is that they consider software development to be low-value and want to put their people who understand the problems in high-value roles. Then the software people don't understand what the "business" people want, and the business people don't know how (and aren't motivated) to give the software people the understanding they need.
I suppose the question I would have for Beer is “Why do you suppose this has not happened in the past 50 years? Surely someone would write a computer program to run a small business that would be hyper-efficient by the standards of the day and proceed to take over the world. Why didn’t that happen?”
Now, I think the best rejoinder is “Amazon pretty much did”, but I note that it isn’t quite the way Beer is imagining things.