Agree. I wrote a book Models.Behaving.Badly about different ways of knowing the world and somewhere in there I wrote "There are no “raw” data. Choosing what data to collect takes insight; making good sense of the data collected requires the classic methods. We still need a model, a theory, or intuition to find a cause."
it was a great book and very influential on me!
All agreed, but is the work that's being rewarded the econometric work, or the Thought Leadership of making up a big thesis that people want to believe, one that says big things about world history and how things ought to be, and appropriating it for the field of Econ?
My sense is the econometrics is kind of a ritual, followed because serious papers are supposed to have that sort of stuff, "in science we test hypotheses empirically!" and the like, but no one is basing their evaluation of Acemoglu on the data; it's all on the strength of priors, anecdote, authority, and rhetorical skill.
yes to a large extent this is what the prize was awarded for; finding a form of words (and numbers) which was effective in persuading economists to take institutions seriously. which is a big achievement, although as with many recent prizes, it says something really bad about the state of economics that this was necessary at all, let alone one of the biggest achievements of the last few decades.
Spot on. A ritual, cargo cult mathematics.
I'm not an economist, I'm an amateur spreadsheetist, and this sounds very like the way I decide what hotel to stay in - set strict parameters, rank based on reviews, distance and price, tweak the strict parameters if I'm not keen on the answer...
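For what it's worth, that procedure fits in a few lines of code. This is a toy sketch: the hotels, scores, and weights below are all invented for illustration.

```python
# Toy illustration of the "tweak until you like the answer" ranking procedure.
# All hotel names, scores and weights are made up for the example.
hotels = {
    "Hotel A": {"review": 8.1, "distance_km": 0.5, "price": 180},
    "Hotel B": {"review": 9.0, "distance_km": 2.0, "price": 130},
    "Hotel C": {"review": 7.5, "distance_km": 0.2, "price": 210},
}

def score(h, w_review, w_distance, w_price):
    # Higher is better: reviews count positively, distance and price negatively.
    return w_review * h["review"] - w_distance * h["distance_km"] - w_price * h["price"] / 100

def rank(weights):
    return sorted(hotels, key=lambda name: score(hotels[name], *weights), reverse=True)

print(rank((1.0, 1.0, 1.0)))  # one set of weights gives one winner...
print(rank((1.0, 4.0, 0.5)))  # ...tweak them and the "objective" winner changes
```

The "right" answer is, of course, whatever the last set of weights you settled on says it is.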
So yes, this is the "indexing problem" isn't it? as discussed by Robert Paul Wolff here
https://robertpaulwolff.blogspot.com/2013/01/the-indexing-problem-part-one.html
and in two subsequent parts, the second of which takes aim at the Hay system of comparative job evaluation, supposedly "objective" but actually incorporating the value choices you made in the first place. Devastating, I think, not just in these contexts but also for the widely espoused project of pluralistic consequentialism, where it is assumed that maximizing a good composed of heterogeneous elements can be done without making some very questionable assumptions about the value of those elements.
Snap! I made much the same point before reading this.
Heterogeneous elements are just one part of the problem. The complete project requires several parts:
1. Treating ordinals in a single dimension as quantities - is rank 2 four times better than rank 8?
2. Treating primary data as integers but derived data like averages as real numbers - could there be a 3.5, and if not, why not?
3. Aggregating incommensurate quantities - where is the Pareto-efficient frontier of a space whose dimensions are meters and kilograms?
4. Choice of a metric in the aggregate space - usually the taxicab metric.
None of these parts is sound individually, let alone jointly.
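A toy sketch of points 3 and 4, with invented figures: simply adding incommensurate quantities together (the taxicab aggregate) produces an ordering that depends entirely on the units you happened to choose.

```python
# Toy illustration: aggregating incommensurate quantities (metres and kilograms).
# The two "countries" and all figures are invented for the example.
countries = {
    "X": {"height_m": 1.8, "mass_kg": 60.0},
    "Y": {"height_m": 1.6, "mass_kg": 80.0},
}

def taxicab_total(c, height_unit=1.0, mass_unit=1.0):
    # Taxicab "aggregate": just add the two numbers, as composite indices
    # effectively do. The result depends entirely on the units chosen.
    return c["height_m"] / height_unit + c["mass_kg"] / mass_unit

# Measured in metres and kilograms, Y comes out ahead (about 61.8 vs 81.6)...
print(taxicab_total(countries["X"]), taxicab_total(countries["Y"]))
# ...measured in centimetres and tonnes, X comes out ahead instead.
print(taxicab_total(countries["X"], 0.01, 1000),
      taxicab_total(countries["Y"], 0.01, 1000))
```

The ranking flips under a pure change of units, which conveys no information at all, so the aggregate cannot be measuring anything about the underlying things.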
all of these things are big problems but as I say - even if you solved all of them (which would be good going as some index number problems are literal impossibility theorems) you would still be faced with the fact that these are just pretend numbers that somebody made up
Oh yes - garbage "data" is problem zero. But the Scientism usually takes the form of applying sound procedures to garbage inputs. This is more like the Science People - the procedures share some superficial similarities to real analysis but not the substance (I recall that you once described a beer bottle bearing golden blobs in imitation of awards). Quant Suff.!
A further problem is that these numbers aren't just a best guess - they are political exercises tweaked to produce the right answers. Free market thinktanks tend to get upset when Scandinavian social democracies score too highly on rankings of economic freedom, for example (though some are willing to bite the bullet and celebrate, say, Sweden as a capitalist success story).
I once designed a format for briefings about financial stability risks, and made the terrible mistake of making the labels in the summary at the top numbers rather than "low", "medium", "high", "very high" (four years of university maths had drilled into me that the number symbols were just labels). A year later I was horrified to find some research colleagues trying to do econometrics on these things...
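A toy illustration of why this is horrifying, with made-up numbers: any order-preserving relabelling of ordinal labels carries exactly the same information, but the "statistics" come out completely differently.

```python
# Toy sketch of why econometrics on ordinal risk labels is meaningless:
# any order-preserving relabelling changes the "statistics" completely.
# The labels and values here are invented for illustration.
risk_labels = [1, 2, 2, 3, 4]          # 1=low ... 4=very high
relabelled  = [1, 10, 10, 50, 400]     # same ordering, different symbols

mean = lambda xs: sum(xs) / len(xs)
print(mean(risk_labels))  # 2.4
print(mean(relabelled))   # 94.2 - same ordinal information, wildly different "mean"
```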
I currently do produce a similar index, and have so far had the strength of character to refuse all academic requests for the data set, because I simply don't want it on my conscience.
Actually, estimates, even if fuzzy, are often more valuable than nothing, but what is really important is the expected *variance*.
Without variance (and a brief mention of the expected distribution shape), any stochastic measure is not very meaningful. BTW, I reckon that looking at expected variance (and distribution shape) settles the philosophical arguments about probabilities of single events and the endless frequentist/bayesian/subjectivist debate (along with the distinction between stochastic measures and operations research, which is very important).
Important note: if there are no good arguments to support an expected variance (and distribution shape) then estimates seem to me to be acts of faith ("just pretend").
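As a toy sketch of the point (all numbers invented): two estimates with the same expected value but different variances support very different conclusions, so the point estimate alone tells you little.

```python
# Toy sketch: two estimates with the same expected value but different
# variances imply very different risk. Numbers are invented for illustration.
from statistics import NormalDist

same_mean = 100.0
narrow = NormalDist(mu=same_mean, sigma=5.0)
wide = NormalDist(mu=same_mean, sigma=50.0)

# Probability the true value falls below 80 (a hypothetical "bad" threshold):
print(round(narrow.cdf(80), 4))  # essentially zero
print(round(wide.cdf(80), 4))    # roughly one chance in three
```

And if there is no argument for any particular sigma, neither probability means anything, which is the "act of faith" point above.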
And boy does it annoy other social scientists, especially when, viewed from one direction, Virginia is an extractive society based on slavery, while at the same time being an inclusive society based on property rights, franchise, legislative assembly, etc.
In a former job, we used to be funded to make indices of things like “the most start-up friendly cities in Europe”; a fun part of the art was constructing the index in such a way that the result would be unsurprising enough to be plausible but surprising enough to be interesting. This process was fun and demanding, but kind of orthogonal to truth-seeking.
Yes, but it is made up stuff all the way down.
This also applies to every self-assessment device. The questionnaire asks: are you x or y? You say x. Later it reports to you that you are x. You go, "Wow, it really knows me! This tool is so accurate!"
But there is nothing objective underpinning the assessment. It is a mirror of your own beliefs. If you had answered y, it would have reported you were y. So much of what gets praised in the assessment field is merely mirror imaging and confirmation bias.