To précis for newer subscribers, I recently advanced the following sketch of an argument, claiming to “kind-of-sort-of believe it”:
1. In order to learn the lessons from history, you have to correctly choose which historical example is relevant to the current situation.
2. You then have to correctly generalise from the historical example to extract the underlying principles which form the lesson from history.
3. And then you have to correctly apply those principles to the current situation.
4. All of which seems much more difficult than just knuckling down and solving your own problem, without messing around hoping that reading stories about the past will help.
Smart readers spotted immediately that I was being dumb and annoying on purpose; it took me a bit longer to realise. In fact, step 4 is wrong; it is not more difficult, and learning the lessons of history could itself be an excellent way of knuckling down and solving your own problems.
The trick here is to realise that “recognising that an analogy is no good” is a relatively quick cognitive operation.
Any kind of problem-solving is based on making mental models of the problem: abstracting away some of the detail while hoping that you’re capturing enough of the causal structure that a solution which works “in the model” will also work in real life. Usually, disanalogies are quick to spot; if there’s a good reason why the mental model won’t translate, it tends to be glaring.
Creating a mental model from scratch is a very expensive cognitive operation, though.
So, if you have a supply of pre-existing mental models, it might be a very good strategy just to start going through them one by one, effectively running your thumb through the book going “nope, nope, nope, maybe … nope, nope, nope … nah, doesn’t work … maybe … nope, nope … hang on, this might work”. Rather than taking on the expensive task of making a model that you’re certain will work because you’ve constructed it that way, you’re making multiple cheap attempts.
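To make the shape of that strategy concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration: models and problems are represented as feature sets, and the veto is a one-line subset check, standing in for whatever “does this analogy translate?” actually involves.

```python
def obviously_disanalogous(model: set, problem: set) -> bool:
    # Cheap veto: if the model lacks a feature the problem hinges on,
    # the disanalogy is glaring and we move straight on.
    return not problem <= model

def riffle(library: list, problem: set):
    # Run a thumb through the catalogue: "nope, nope, maybe ... hang on."
    for model in library:
        if not obviously_disanalogous(model, problem):
            return model  # this might work
    return None  # nothing survived; fall back to the expensive from-scratch build

# Toy usage: the second model survives the veto, the first doesn't.
library = [{"siege", "attrition"}, {"bubble", "credit", "panic"}]
print(riffle(library, {"credit", "panic"}))
```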
But where might you get a large supply of ready-made mental models to go through in this way? Yes, obviously, you’re way ahead, nice one, readers. Just filling your head up with stories about how things could happen means that you’ve got a catalogue to rifle through, while the constraint that they’re stories about things that actually did happen once should exercise some kind of rudimentary quality control on the library of candidate solutions.
This isn’t a perfect problem-solving method and it’s easy to see why it won’t always work. But it’s a clear counterexample to my joke above. The process of abstracting the general principles from historical events happens in advance, during the cognitive equivalent of off-peak electricity demand, when you don’t have an immediate problem to solve. Then the process of correctly applying the principles is just a matter of spotting the analogies that don’t work and discarding them, which, as noted above, is a quick operation.
There are some big ideas from cybernetics here. “Regulation by veto” is one of the most important ways in which complex systems achieve stability: different components of the system send the message “I can live with this” or “I can’t live with this”, and a higher-level system jumbles up the connections until every component has achieved stability or the whole thing has failed. (Darwinian evolution is, at a certain level of representation, a special case of regulation by veto.)
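As a toy sketch of that mechanism (all the names are mine, and the “configurations” are just integers): the higher-level system knows nothing about why any component objects, it simply keeps jumbling until the vetoes stop or it runs out of tries.

```python
import random

def regulate_by_veto(components, configurations, max_tries=10_000):
    # components: predicates that return True for "I can live with this"
    # and False for a veto.
    for _ in range(max_tries):
        candidate = random.choice(configurations)  # jumble up the connections
        if all(can_live_with(candidate) for can_live_with in components):
            return candidate  # every component has achieved stability
    return None  # the whole thing has failed

# Toy usage: two components vetoing on entirely different grounds.
components = [lambda c: c % 3 == 0, lambda c: c > 50]
print(regulate_by_veto(components, list(range(100))))  # e.g. 51, 54, 87
```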
It’s also an example of “variety amplification” (where the word “variety” is, in context, synonymous with the quantity that gets named “information” in information theory). Jimi Hendrix’s guitar amp didn’t literally take the signal from his pickups and make it bigger – the laws of thermodynamics kind of rule that out! What it did was take that small electric current and use it to modulate a much larger source of power from somewhere else.
Similarly, the regulation-by-veto trick works by taking a very large source of environmental variability, and then shaping it with a filter that’s easier to apply. In this way, it creates the functional illusion that a relatively simple regulatory system can control a much more complicated entity. Or alternatively, someone who is personally quite dumb can still make good decisions as long as they’re capable of learning from their mistakes.
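In code, that point looks something like the toy below (with the “environment” played by a random number generator): the regulator contributes only a tiny filter, but the behaviour it selects out of the torrent can be as varied as the torrent itself.

```python
import random

def amplify_variety(environment, acceptable, n=5):
    # environment: an iterator supplying a torrent of raw variety.
    # acceptable: the cheap filter, the only "signal" the regulator adds;
    # it modulates the much larger source of variability, amp-style.
    out = []
    while len(out) < n:
        candidate = next(environment)
        if acceptable(candidate):
            out.append(candidate)
    return out

# Toy usage: shape an endless stream of random 16-bit numbers with a
# filter far simpler than the output it produces.
noise = iter(lambda: random.getrandbits(16), None)
print(amplify_variety(noise, lambda x: x % 1000 < 3))
```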