“The Fastest Animals In The World, as certified by the Academy of Moles:
Moles
Badgers
Rabbits”
Related to that, an insect has been flying around the wearable beehive that I laughingly call my “bonnet”, and I have decided this is the week to pin the buzzy little sucker down. It’s on the subject of “Superforecasters”, the people who systematically win forecasting contests and seem to have a specific box of tricks that you can copy.
For the most part, I’ll agree that the generic lessons from Superforecaster websites are valuable. Remember base rates, compare notes with others who have different biases, shade your estimates in the direction of the median and look for a variety of evidence rather than fixating on getting specific technicalities exactly right. I personally would set a lot less store by quantifying things in terms of numbers and talking with religious fervour about how you’re a Bayesian, but some people are more impressed by arithmetic than I am, and chacun à son goût.
But there’s also a trick here. Superforecasters tend to judge their competitions by scoring measures which give credit for assigning a high probability to something that happens and a low probability to something which doesn’t, with various bells and whistles to reward earlier forecasts and so on. When they are advertising themselves or reviewing their performance in blog posts, they will do something similar, saying that “40% of the time I gave something a 40% probability, it happened”, and so on.
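To put some flesh on that, here is a rough sketch (in Python, with entirely invented numbers; the actual contests use fancier versions of this) of the kind of arithmetic involved: a Brier-style accuracy score, plus the “40% of the time I said 40%” calibration check.

```python
# A rough, illustrative sketch of the scoring arithmetic described above.
# The Brier score is one common rule of this kind; real contests add
# bells and whistles (time-weighting of earlier forecasts etc.) that
# aren't modelled here. All numbers below are invented.

def brier_score(forecasts, outcomes):
    """Mean squared gap between forecast probability and the outcome (0 or 1).
    Lower is better; always saying 50% gets you 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_table(forecasts, outcomes):
    """Bucket forecasts by probability and report how often the event
    actually happened in each bucket, i.e. the '40% of the time I gave
    something a 40% probability, it happened' style of self-review."""
    buckets = {}
    for p, o in zip(forecasts, outcomes):
        key = min(int(p * 10), 9) / 10   # crude 10%-wide probability buckets
        buckets.setdefault(key, []).append(o)
    return {k: sum(v) / len(v) for k, v in sorted(buckets.items())}

probs    = [0.9, 0.8, 0.4, 0.4, 0.1, 0.2, 0.6, 0.95]  # forecast probabilities
happened = [1,   1,   0,   1,   0,   0,   1,   1]     # what actually occurred

print(brier_score(probs, happened))        # overall accuracy, lower = better
print(calibration_table(probs, happened))  # hit rate within each bucket
```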
That isn’t a bad method of judging a contest; I doubt one could run them any other way. But it introduces a very subtle but potentially huge bias.
Specifically, it means that there is a Rule Zero of being a superforecaster – before you start working on any of the other tips, you need to realise that the key principle is:
“Only make predictions about things with a definite result, where that result will happen in a time frame appropriate to the judging system”.
Even this Rule Zero can be defended; fans of forecasting competitions will say that it forces people who claim to be able to predict the future to make falsifiable claims; it’s bringing the idea of forecasting into the realm of science. In order to judge things, you have to make them observable and quantifiable. Imagine the opposite to Rule Zero - “only make predictions about things where the end result is arguable and debatable, without a specific time frame because it won’t have a definite observable event”. A charlatan’s charter, amirite?
But this is the sort of thing that you can imagine the Academy of Moles saying. A tunnel is measurable and objective; you can run a tape measure down to the end. Other animals can make whatever sort of claims they like about how fast they moved above the ground, but the moles are only able to go by the sorts of things that moles can observe. So the facts are, according to the MoleSpeed Measure (kilograms of earth shifted per hour), the fastest animals in the world are moles, then badgers, then rabbits.
The universe isn’t under any obligation to be conveniently measurable. In fact, most of the important things that we want to know about the future don’t have any definite finish date, and don’t have clear or objective criteria establishing their outcomes. You can say that you’re going to judge the Chinese Revolution of 1949 on the basis of GDP per capita after fifty years, but this is pretty obviously a subjective choice, and calling it a score and putting it into a table isn’t going to add much.
Does this matter? I think so. Even if we were able to correct what Taleb identifies as the biggest problem with Superforecasting (they judge themselves for the most part on binary yes/no successes, while in the real world what matters much more is the extent to which you’re right or wrong; consider a VC fund where most of the investments go bankrupt but one or two become Facebook), there is still another huge problem: any kind of prediction that can be judged at all is, by that very token, not the most important kind.
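To make that binary-versus-magnitude point with a toy example (my numbers, entirely made up): imagine a fund of ten investments where nine go to zero and one returns fifty times the stake.

```python
# Toy illustration (invented numbers) of the binary-vs-magnitude point.
# Saying "this startup will fail" about every investment is right nine
# times out of ten, and looks great on any yes/no scoring rule, but the
# fund's result is driven entirely by the one case that answer missed.

investments = [0] * 9 + [50]   # return multiple per investment: nine wipe-outs, one 50x

hit_rate = sum(1 for r in investments if r == 0) / len(investments)
fund_return = sum(investments) / len(investments)

print(f"'It will fail' is right {hit_rate:.0%} of the time")        # 90%
print(f"...yet the fund returns {fund_return:.1f}x money overall")  # 5.0x
```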
It would be a bad idea to allow people to gain credibility by running up the scores on a particular and unusually tractable set of problems, then treat them as if they knew the answer to everything we wanted to know. We’ve made that mistake once already, with economists.
Well said. What actually surprised me when I read the book/looked at the website was how fundamentally uninteresting most of the questions were. (I spent a good 10 years doing futures work that clients kept paying for, so I have Views.)
A quick example from the website today: "Will the World Health Organization (WHO) declare a Public Health Emergency of International Concern (PHEIC) regarding H5N1 avian influenza before 1 January 2025?"
Unsurprisingly it's 99% no from the forecasters at the moment.
I'm also not into Bayesianism and fake precision with percentage predictions - but as can be seen from this example, it's usually the dates that really drain out the meaning. (I can bore on about why, but it's probably obvious to most people, so I'll stop shaking my fist at passing clouds for now.)
"You can say that you’re going to judge the Chinese Revolution of 1949 on the basis of GDP per capita after fifty years"
Chou En-lai made the same general point about the French Revolution, IIRC