8 Comments

Well said. What actually surprised me when I read the book/looked at the website was how fundamentally uninteresting most of the questions were. (I spent a good 10 years doing futures work that clients kept paying for, so I have Views.)

Quick eg from the website today: "Will the World Health Organization (WHO) declare a Public Health Emergency of International Concern (PHEIC) regarding H5N1 avian influenza before 1 January 2025?"

Unsurprisingly it's 99% no from the forecasters at the moment.

I'm also not into Bayesianism and the fake precision of percentage predictions - but as can be seen from this example, it's usually the dates that really drain out the meaning. (I can bore on about why, but it's probably obvious to most people, so I'll stop shaking my fist at passing clouds for now.)


"You can say that you’re going to judge the Chinese Revolution of 1949 on the basis of GDP per capita after fifty years"

Chou En-lai made the same general point about the French Revolution, IIRC


Richard Feynman won the weekly WW2 forecasting contest at Los Alamos by predicting nothing newsworthy would be reported.


I think this is the central problem of "scientific" - aka "cybernetic" - management: to exercise systematic control with feedback over a large organization, you want to measure success and failure by specific numerical and time-bound criteria (ideally graduated numerical criteria), but most of the really important factors that determine success are fuzzy, difficult to measure, and not time-bound. Taken to an extreme, scientific management has perverse outcomes, whether you prefer to think of the problem as Goodhart's Law or McNamara's Fallacy.

We used to say that if a spreadsheet could do a manager's job, then why shouldn't shareholders hire a spreadsheet to replace management? It's a lot cheaper. The advent of more powerful AI raises the unsettling possibility that "spreadsheets", writ large, really could do a better job.

I guess I had imagined that a lot of your book would be consumed by this problem.

author

not necessarily explicitly but implicitly it's a lot of the whole thing. in Stafford Beer's system, it shows up as the question of balancing "here-and-now" with "there-and-then", with all kinds of reasonably forecastable problems of this sort counting as "here-and-now" because they are in the immediate information environments of the operations.

the essence of management is balancing these kinds of decisions with things that are *not* part of the information environment of the operations, and so consequently need a special intelligence function to go out looking for them.


So the payoff for correct prediction is 1, and incorrect 0, with some kind of time-to-event weighting?

Would be interesting to see performance where payoff is related to {proportion forecasting x} at time of forecast. So 80:20 yes:no forecast at t-100 would reward a ‘yes’ with (100-20 = 80). With each change attracting a frictional cost (in terms of reduced potential reward) of 1.
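Reading that scheme literally (the function name and the zero-payoff-when-wrong rule are my own assumptions, not part of the proposal), a minimal sketch:

```python
# One literal reading of the payoff scheme sketched above: a correct
# forecast pays 100 minus the percentage of forecasters who had picked
# the *other* outcome at the time the forecast was made, an incorrect
# forecast pays 0, and each revision costs a frictional 1.
# All names/numbers here are illustrative assumptions.

def payoff(correct: bool, pct_forecasting_other: float, n_changes: int) -> float:
    base = (100.0 - pct_forecasting_other) if correct else 0.0
    return base - n_changes  # frictional cost of 1 per change

# The worked example from the comment: crowd split 80:20 yes:no,
# forecaster says "yes", question resolves yes, no revisions.
# payoff(True, 20.0, 0) gives 100 - 20 = 80.
```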

author

"So the payoff for correct prediction is 1, and incorrect 0, with some kind of time-to-event weighting?"

Basically yes in spirit, but they use a mean-square measure: https://www.cultivatelabs.com/crowdsourced-forecasting-guide/what-is-a-brier-score-and-how-is-it-calculated

and then normalise it by the median on each day of the contest and calculate an average (which reduces the scores of late arrivals)
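A minimal sketch of that scoring scheme, with made-up forecasters and probabilities (the linked guide has the exact contest rules; the two-outcome Brier form and the subtract-the-daily-median normalisation below are my reading of it):

```python
from statistics import mean, median

def brier(p_yes: float, outcome_yes: bool) -> float:
    # Two-outcome ("mean-square") Brier score: 0 = perfect, 2 = maximally wrong.
    o = 1.0 if outcome_yes else 0.0
    return (p_yes - o) ** 2 + ((1.0 - p_yes) - (1.0 - o)) ** 2

# Hypothetical daily probabilities of "yes" for one question that resolved "no".
daily = {
    "alice": [0.30, 0.20, 0.10],
    "bob":   [0.60, 0.55, 0.50],
    "carol": [0.10, 0.05, 0.02],
}

raw = {name: [brier(p, outcome_yes=False) for p in ps]
       for name, ps in daily.items()}
n_days = len(next(iter(raw.values())))
day_medians = [median(raw[name][d] for name in raw) for d in range(n_days)]

# Relative score: own daily Brier minus that day's median, averaged over
# the whole contest. Negative = better than the median forecaster; averaging
# over every contest day is what dilutes the scores of late arrivals.
rel = {name: mean(scores[d] - day_medians[d] for d in range(n_days))
       for name, scores in raw.items()}
```

With these numbers carol (most confident in "no") scores below the median every day, bob scores above it, and alice sits at the median throughout.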


author

I think there are some measures like that, but iirc it doesn't make all that much difference to the results (in the competitions, in real life I think it does a lot!).
