I am wondering if there's a link to be made between cybernetics (especially nested cybernetic systems) and anarchism? Anarchism is about situating control at appropriate levels, which may or may not be at the very bottom layer; if we were to design an anarchist approach to climate change, it might look very much like a set of cybernetic systems. The two needn't be in conflict, and a cybernetic 'state' might look very much like an anarchist one?
I think it would be stretching the meaning of anarchism, but definitely have a look at "Designing Freedom", the collection of Beer's CBC lectures.
But Hayek **was** a kindred spirit. But he was also a right-wing nutjob. And the right-wing nutjobbery caused him to get stuck in a local minimum with respect to the issues he really wanted to address... Yours, Brad
Editors didn't let you use "right-wing nutjob" in STU?
The quote from Hayek’s “Theory of complex phenomena” that I reproduced in https://realizable.substack.com/p/metaphors-we-predict-and-control shows exactly in what way he was a kindred spirit of Scott.
is this also called potholing? "local minimum" "the parish that is the capital"
Since my brain is poisoned by quantitative stats, my read of the techne-metis split is slightly different from yours. Techne delineates the sorts of decisions that can be fully automated because they concern concretely measurable outcomes, discrete decisions, and clear rules. Metis is more heuristic, more dependent on local context, less repeatable.
So while I agree you can aid the metis with better information signalling, you still need an agile, trustworthy actor with some powerful levers at the decision-making center. For this reason, I always read Scott as arguing that metis isn't scalable. I think that's a compelling argument! What if there are things that a bureaucratic state simply rules out?
+1 for this. Bang on.
Right, I’ve also been interpreting Scott’s distinction between techne and metis along the lines of Joseph Weizenbaum’s distinction between decision-making (quantitative, rule-governed) and judgment (qualitative, value-laden).
Off-topic:
Hi Dan—I recently started watching the derivative US spy thriller The Bourne Legacy and decided that the plot was driven, scene-to-scene, by attenuation, red-handle channels, and other concepts from The Unaccountability Machine. Went back to the start of the series (The Bourne Identity) and the parallels hold up. Then thought of the '70s "paranoia" features: Three Days of the Condor, The Parallax View, even All the President's Men—and decided you might have inadvertently reviewed a movie genre. Thanks for a wonderful book—I hope to craft a laudatory twitter thread on it sometime.
A plug for a book (maybe I'll make that a couple of books) that I read long before I read Scott (of whom I'm a big fan): Stewart Brand's How Buildings Learn, which is all about how buildings adapt to use via user feedback and which draws on some of the same sources as Scott, particularly Jane Jacobs. And then, via Brand, the quirky but consistently interesting Christopher Alexander, A Pattern Language. Some buildings are more adaptable than others, which are locked into the architect's original plan and then fail miserably because they can't change. (Maybe there's a metaphor for the US polity and its constitutions there too ....)
How Buildings Learn is enormously entertaining and, IMHO, somewhat slippery as an original contribution. About a third of the book concerns my current neighborhood in California, and other chapters seem to exist because they describe where Stewart lived two decades previously or had a friend who was willing to put him up for two weeks. The references (notably to shearing layers) and photography-rich discussion are a delight.
+1 for Brand and Jacobs. Both fabulous. Also, in a different lane perhaps, Iain McGilchrist: The Master and His Emissary has a lot to say about the limits of robomorphism.
I think Scott would say that when the variable in need of control has agency, and therefore you are dealing with the agency problem (that is, the system is *human*), then simple reliance on better design will not fix the problem. Yes, rockets got safer. Air safety measured by deaths per million miles has steadily improved since the 60s, but where the failures are the product of deliberate autonomous human action, the error rate stays much more constant. So, financial crashes: there is no evidence at all that system design (viz., regulation) is making things any better. Indeed, many would say that regulation is only making it worse. At the end of the day, the fundamental metaphor of cybernetics is the Turing machine. It may be universal, but it is a very bad metaphor for a human complex adaptive system.
"intellectual carcinization" this happens in various "epistemic enclaves" as J. N. Nielsen puts it, ( see https://geopolicraticus.substack.com/p/epistemic-enclaves) which often have an inability or refusal to talk outside the parish of their capital. One example is the re-invention of multiple inheritance as used more recently in OO programming, or anything that uses a hierarchy which attempts to class/box systemically and then use those classes in the real world, and the real world don't play fair. Australian Archivists invented the Australian Series archiving methodology, inpart to cope with the fact that Government departments changed names and responsibilities at the drop of a hat, but often what they did in terms of function, did not change, so they invented something called "functional provenance" which could move around independently of the nominal hierarchy. This freaked out the European archival tradition, who were big on 'respect du fonds' and "original order", and couldn't cope with the idea of anything have multiple provenance, even if this took place only in the metadata --- on cards--- so they ignored the Australian antipodean colonial innovation. Some decades later it is re-invented in programming, and like the boomers who think they invented sex because no one talked about it before them, or at least in front of them, programmers think they invented it.
So I'm not a Scott fan - and I basically put this down to my roots on one side of the family being from a small village in Asia. Techne has lots of failings, but Scott very, very often steps around the fact that techne also tries to address some real problems. They are just not problems a professor from Yale has to worry about on his travels through places like that.
> In particular, big corporations and states need to have information channels going from the bottom to the top – the “red handle signals”, as I call them in my book, that can bypass the normal hierarchy and get information to the decision making centre, in time and in a form which can be understood.
This makes me think of Ashby's 'Law of Requisite Variety', "for a system to be stable, the number of states that its control mechanism is capable of attaining (its variety) must be greater than or equal to the number of states in the system being controlled" (https://www.edge.org/response-detail/27150)
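To put the counting form of that law in concrete terms, here is a minimal sketch of my own (not from the post or the linked Edge piece), assuming the usual statement that a regulator can reduce outcome variety at best by a factor equal to its own variety:

```python
# A minimal illustrative sketch (my assumption, not from the original post) of
# the counting form of Ashby's Law of Requisite Variety: outcome variety can be
# driven no lower than disturbance variety divided by regulator variety.
from math import ceil

def residual_outcome_variety(disturbance_variety: int, regulator_variety: int) -> int:
    """Best-case number of distinct outcomes the regulator must still tolerate.

    Even with a perfect mapping from disturbances to responses, each regulator
    state can absorb at most one group of disturbances, so at least
    ceil(disturbances / responses) distinct outcomes remain.
    """
    return ceil(disturbance_variety / regulator_variety)

if __name__ == "__main__":
    for d, r in [(10, 10), (10, 5), (100, 3)]:
        print(f"disturbances={d:3d}, regulator states={r:3d} -> "
              f"at least {residual_outcome_variety(d, r)} outcome states survive")
```

Only in the first case, where the regulator matches the disturbances state for state, can the outcome be held to a single value, which seems to be the point of the "red handle" channels: the decision-making centre needs enough variety-carrying capacity to match what the bottom of the organisation is throwing at it.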
Re: Greek words: even as a former student of theology, I never really grokked techne vs metis, so I'm with you on this one for sure.
Re: the one child policy. I'm confused by this example as it doesn't seem material to cybernetics. It seems like the guy had the information and he had it in a usable form -- he just used it badly, based on the typical overconfidence of an engineer confronting a problem not amenable to engineering. What am I missing? Can you give an example of the kind of information that a coherent management system could have provided to that engineer, that would have helped him make a less terrible decision?
I probably need to read the damn book already.
I think that probably needs to be another post, maybe on Friday - I reached the word limit before I could really say how I thought the one child policy fits into this framework.
Navel-gazing process question: Do you rate limit yourself to a fixed word count/post?
I try to keep it to 800 words, which is one page of Word at my default settings. Don't know why, just set it as a rule for myself early on because otherwise I tend to ramble like hell.
Like a true cyberneticist, I'm all for arbitrary constraints on process.
It is also not clear that the 1-child policy was *cybernetically* effective. It had bad consequences, sure: the skewing of sex ratios towards males, for example. But did it accomplish its goals, considered as a control mechanism? If you were to look at a graph of Chinese birth rates against time without labels on the X-axis, you might have difficulty picking out the point where the policy was implemented, because the birth rate rose for about 7 years afterwards. The estimate of 400 million births prevented comes from the CCP itself, which has just as strong a tendency to exaggerate its own effectiveness as any other cybernetic organization.
it was just a disaster area from all perspectives, more to come on Friday
I have been watching YouTube videos on the Japanese side of WW2. Interesting how many realized that Midway signaled the end of the war, but the mindset of the Japanese was to assemble their navy for the one big decisive battle. When that fleet was defeated, they collected the remains for the next big decisive battle. It took two bombs to change that mindset. A perfect example of how big projects fail.
Now we have strayed very far from the original intent of the post, but the "two big bombs" were actually more of an initial volley in the Cold War than an end to WWII. The Japanese were ready to surrender, but the Soviets had not yet invaded Manchuria (as had been planned, including a date, at Yalta I believe it was). The atomic bombings were a signal to the Soviet Union, not Japan. Perhaps that signal could be said to have some cybernetic angle though.
And just to expound on a personal hobby horse of mine, the "big project" of building the bombs was not even the real big project of the war. Developing long-range bombers cost 10x as much as developing the bomb. It is not any harder to measure passenger miles traveled than kilotons, but people are more interested in one than the other. This is perhaps relevant to cybernetics in that it's not just about the ability to measure, but also the ability to care about the number.
firehose not firehouse