I've always appreciated this phrase (the purpose of a system is what it does) as a form of encouragement to engage with what is actually happening, not what we planned to happen or what we think should happen, etc. Today reading this I thought of the Maya Angelou quote "When a person shows you who they are, believe them." So maybe, "When a system shows you what it does, believe it"? Which maybe doesn't help with the tendency to attribute human motivations to systems, but it captures something important for me.
So to use the terminology from the Unaccountability Machine then, am I right in thinking that POSIWID is ultimately just a way of saying “your analysis will be useless unless you know the System 5 of what you are analysing?”
that's an interesting way to put it, let me think about that
I always thought of POSIWID as a bullshit detector, or a razor like Occam's: when the creators' stated purpose of a system consistently diverges from what the system actually does, POSIWID means it's easier to analyze the system using the latter as a starting point, and to take a more skeptical look at the creators' intentions.
I reckon most of the problem is that “purpose” implies a desire on the part of the system for the outcome. I agree on your point that systems don’t have desires, or purposes, and this is just anthropomorphising, but it’s still what gets people’s backs up. If you’re sitting in a pub with a particularly inept covers band playing, it’s jolly hard to suggest that the purpose of this system is to convince people that pub covers bands in general are rubbish, or that Wednesdays ought to be spent at home. The band think they’re doing OK, even if they aren’t. The pub think that having music is a good thing. If the effect of the system is to convince you that three random locals can’t be AC/DC, definitely yes. But “purpose” implies that _something_ is trying to make this happen, even if subconsciously. (Or it means that the purpose of a biscuit factory is to make biscuits, which is just a tautology, of course.) It feels rather like the people wielding POSIWID like a club are slightly smugly implying that there’s a secret desire to make the bad thing happen and only they are perceptive enough to see through the bullshit to spot this. As you say… this could probably have been entirely avoided if the P in POSIWID was some word which didn’t imply desire for the outcome, whether that’s on the part of the people running things or of the wider system this one is embedded in. That is: talking about POSIWID feels a little bit like systems analysis, but quite a lot more like the speaker pointing out how clever they are to have identified this particular thing that the system does, which nobody else was smart enough to have noticed. The purpose of people who say POSIWID is what they do, I suppose. Which is a shame, since the underlying truth of the thing is useful.
«“purpose” implies a desire on the part of the system for the outcome. I agree on your point that systems don’t have desires, or purposes, and this is just anthropomorphising [...] But “purpose” implies that _something_ is trying to make this happen»
Of course “_something_ is trying to make this happen” because most systems don't just spring into existence on their own, they are designed and implemented, and even if they do spring into existence spontaneously they need funding and maintenance to continue working, and therefore are "steered" by explicit individual or collective decisions, otherwise they disappear.
The systems that do not disappear must have some deliberate management and steering, or must benefit someone who matters so that they are kept operating; systems that damage someone who matters get terminated.
That they are usually funded and shaped to achieve a certain purpose does not mean that they are the fruit of a "conspiracy": plenty of systems just happen, and the people who matter then prune those not in their interests and nurture those that work in their interests.
I wonder if the problem here is the word "purpose" and you could make it work with the word "function." I suppose it is too late to try changing it. It is hard (even when it would be analytically helpful) to get people to use the word "purpose" without thinking about an individual human mind and assigning moral values to the decisions made by that mind. So then you get sidetracked into arguments that are really about whether a system that is functioning in an undesirable way is wrong or bad or evil when it would be more useful to figure out whether and how the system can be made to function in a more desirable way.
I don't really understand why Stafford Beer (who was certainly not averse to coining neologisms) didn't just make up a word like "homeotelos" or something. Functionalist explanations always have this problem, I am told.
What it's allowed to do, in the same way, every time. Which may be a more convenient handle for metaphor--it summons up the idea of an employee in a role, for instance, which may be helpful when considering these issues. I find one problem in cybernetics and management theory in general is that metaphors and analogies often get short shrift in some misguided attempt at "objectivity." Unfortunately, both of these are fundamental to human understanding and to language, even leaving aside the question of whether understanding and language are the same. This "metaphor shortage" may also explain the ongoing struggle so many have with the concepts of "strategy," "management," etc.
«concluded that “the purpose of the [British] rail network is to disincentivise people from making train journeys”. Is he right?»
Quite right indeed: if it were otherwise it would be fixed; it is entirely possible, and not even very expensive, to have a rail network that incentivises making train journeys.
It is just politics: in the 1970s a recently formed right-wing think-tank did an electoral study that changed politics. They proved that people who own a car, own a house, and have a share-based personal pension account vote for the right more than people who travel by public transport, rent a house, and have a defined-benefit group pension, *even at the same level of income and status*.
This meant that upper and upper-middle class people who used public transport, rented accommodation, and had defined benefit pensions voted more to the left than the others, which was irrelevant because there were few of them; but, most importantly, it meant that lower-middle and working class people who owned a car, owned a house, and had a share-based personal pension account voted more for the right than the others, and this mattered a great deal because there were many of them.
As a result of that study the Conservative/LibDem and New Labour governments since the late 1970s have worked hard to undermine public transport, rented accommodation, and defined benefit pensions. These have been "crapified" (a technical term) deliberately to discourage people from using them.
For an example of how that study has influenced actual policy:
http://www.theguardian.com/politics/2016/sep/03/nick-clegg-did-not-cater-tories-brazen-ruthlessness
«Is it true that when Clegg suggested there needed to be more social housing, Cameron told him it only turned people away from the Tories? “It would have been in a Quad meeting [the committee of Cameron, George Osborne, Clegg and Danny Alexander], so either Cameron or Osborne. One of them – I honestly can’t remember whom – looked genuinely nonplussed and said, ‘I don’t understand why you keep going on about the need for more social housing – it just creates Labour voters.’ They genuinely saw housing as a Petri dish for voters. It was unbelievable.”»
It's fine for cybernetics to have a heuristic for dealing with the kind of systems it deals with. The problem arises when it is stated in such a way as to make a general claim about systems per se.
Cybernetics seems to have the idea that the system is like a brain in that it is constantly adapting to new circumstances and setting itself new tasks. But many systems are more like tools, just performing the same task over and over, more or less reliably.
Often such a system fails to achieve something reliably because the task is hard, as opposed to because some other system is interfering. Consider a Covid test. Its purpose is to distinguish between those who do or don't have Covid, and this is as true of early, unreliable tests as it is of later, reliable ones. The same principle applies to a bureaucratic system such as an NHS cancer screening programme, or indeed the criminal justice system.
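To make that reliability point concrete, here is a minimal sketch (all figures invented, nothing from the post): an "early" and a "later" test share the same purpose of sorting people into infected and not infected; they just perform it with different error rates, which shows up in how much a positive result actually tells you.

```python
# Toy comparison of two tests with the same purpose but different reliability.
# Sensitivity, specificity and prevalence figures are made up for illustration.

def positive_predictive_value(sensitivity, specificity, prevalence=0.05):
    """P(actually infected | test says positive), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

for name, sens, spec in [("early test", 0.70, 0.90), ("later test", 0.95, 0.99)]:
    ppv = positive_predictive_value(sens, spec)
    print(f"{name}: a positive result means infection with probability {ppv:.0%}")
```

Both tests are "for" the same thing; the later one is simply better at it, which is the sense in which hardness of the task, not interference by another system, explains the unreliability.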
What I like about POSIWID is that it doesn't let people wave off ongoing side-effects/externalities as "not the system's fault".
How about "The System has Been Designed to Do What It Does"?
In my view POSIWID is straightforward and thought-provoking.
The reason cybernetics does not get off the ground is that it is almost impossible to shape into a trite axiom that consultants can sell to clients. Every attempt to do so, including your own, runs into the variety attenuation buffer.
Another fun and insightful post. It strikes me that the title is a great description of democratic political systems. What disturbs these systems is people who see the system as a lever to get what they want and only what they want. Right now that variable is threatening the existence of these systems.
The "purpose of the system is what it does” might be put another way, but it is a lovely insight.
"What it does” (vs. what it claims to aspire to, what others say it was designed to do) - i.e. trust your observations.
"What it does” may emerge from the interplay of multiple actors with a variety of goals. There needn’t be someone who has the observed purpose in mind for it to be a useful construct, nonetheless.
"Purpose” - what it reliably achieves, even under disturbances and changing circumstances. E.g. the purpose of the cooling system in my flat is to keep the room temperature close to what I set on the thermostat. In a hot summer, that may also result in high electricity costs, a side effect and certainly an impact, but not its purpose.
For instance, in Beer’s The Heart of Enterprise, he talks of the practice of the National Railways to chop off the less profitable portions of routes. Since there will always be less profitable portions, such an approach will slowly consume the entire railway. This approach could emerge from the interplay of profitability pressures, tunnel vision, and lack of systemic vision, and wouldn’t require a System Five anywhere specifically designed to do this. But “consuming itself” could be stable for some years, even under changing economic conditions and managers, so could reasonably be seen as the purpose, based on POSIWID.
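And a toy version of the route-chopping rule (segment count and profit figures made up, not Beer's own numbers): because some segment is always the least profitable, repeating the rule eventually consumes the whole network, with no System Five anywhere deciding that this should happen.

```python
# Iteratively cutting the least profitable segment until nothing is left.
import random

random.seed(0)
# ten hypothetical route segments with made-up annual profit figures
network = {f"segment_{i}": random.uniform(-1.0, 3.0) for i in range(10)}

year = 0
while network:
    worst = min(network, key=network.get)   # there is always a least profitable segment
    del network[worst]                      # policy: chop it off
    year += 1
    print(f"year {year}: cut {worst}, {len(network)} segments remain")

print("the rule has consumed the whole network")
```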
What if people take your presumably fine book about frauds and use it for creating them?
“the purpose of a screwdriver is to drive screws”
"“a variable needs to be taken into account if a change in that variable has the potential to affect the system’s purpose”, which raises the question “how do you define that purpose”."
I think what confuses most people here, including me (a non-computational person), is that there's no person involved. A screwdriver exists to function on certain screws, as directed by a person!
This would be so much more understandable if "purpose" was substituted with "effect". Or otherwise clearly made distinct from "intended purpose", which the word "purpose" is almost always used to mean in the colloquial sense.
Is the purpose of an old, barely maintained system to be dysfunctional compared to what it was when it was new? Or has it, and the systems around it, just changed sufficiently that it doesn't function well anymore?
Is the purpose of COBOL to keep PDP-10s in operation?
"Given past decisions about investment and current decisions about subsidy, its does indeed, to a large extent have to disincentivise people from making the train journeys they might prefer to make if the constraints were different. And that is a steady state for the rail network"
But you just wrote that this is not a steady state as it was different in the past, and presumably will be different in the future. It's just the current state of things.
I honestly think this discussion would never have existed if there weren't the philosophical questions about meaning and purpose to life. Cause and effect are enough, you don't need to add a purpose on to the end as well. The only purpose is from sentient or sapient creatures. Systems are just better or worse means to an end. If something isn't *comparatively* adequate to a certain task, but something else is *comparatively* adequate to that task, then we use the other thing. The purpose of a car is not to keep people from using a horse (as anyone who rides horses for pleasure knows), or even worse to keep people from pulling along a sled by their own power, and thus the purpose of a railway system cannot be to keep people from taking a train trip.
I think it helps a lot to look back at the diagram in the previous post and to bear in mind that the crucial point being made is that the purpose exists as a negotiation between levels of systems. There's a stated purpose, set by a higher level of recursion - but it has to be translated into instructions that are valid and legible to the system under analysis, and it has to be made compatible with the system-under-analysis's own priorities of surviving and managing its environment. So it's not so much "the purpose of a railway system is to keep people from taking a train trip" as "the railway system has to live with constraints and purposes coming from different levels of organisation and its environment, and the consequence of this is that it acts to ration demand - which means, in some cases, stopping people from taking a train trip".
I get it. I haven't read the previous post yet.
Another objection to this "purpose" is that it basically ignores the plural purposes of persons, making a monolith out of disparateness. But I guess this is a general objection to systems thinking in general.
So it's back to Aristotle then?