29 Comments

You might really enjoy the section in McGilchrist's The Matter with Things about Franck Mourier, a horse trainer turned bookie who beat the professional oddsmakers. Yet he had no idea how to codify his expert intuition, and any attempt to do so degraded the signal.

author

I'll check that out, thanks very much. Having read a few books like "It Can Be Done" by Kevin Blake (and tried my hand at systematic betting a couple of times myself), I ended up concluding that a) you can adapt the same techniques and manners of thinking that I used as a stock analyst to make a reasonably reliable profit, but b) to do so is at least as much work as having an actual job, and intolerably boring to someone who is not "a bit funny about horses".

Aug 28 · Liked by Dan Davies

This is true of a lot of activities. ("It's a hard way to make an easy living.") There is a certain amount of money floating around in the backgammon world, but if you're smart enough and diligent enough to extract a decent living from it, you could almost certainly do better in some more traditional career.

[Of course the guys who used to earnestly explain this to me tended to (a) be world-class backgammon players, and (b) have (or have had) successful careers at Susquehanna.]


The entertaining book Everybody Lies tells the awesome story of how a man called Jeff Seder attempted to decode what characteristics determine a racehorse's success. He sorted through a virtually limitless number of data points to try to find what matters. He even developed a proprietary form of ultrasound to scan a horse's internal organs. Eventually he discovered that the size of the heart's left ventricle was a major advantage. His analysis uncovered a previously unheralded horse that an Egyptian beer magnate was trying to offload, but whose left ventricle was in the 99.61st percentile. Seder told him not to sell "Horse No. 85," and he ended up having to buy it back at auction. Horse No. 85 was renamed American Pharoah: the first horse in more than three decades to win the Triple Crown.


Causality beats prediction.


Miguel Indurain (although it was also the EPO era in cycling) famously had a larger heart (and a lower resting heart rate) than the rest of the peloton.

Aug 28 · Liked by Dan Davies

Agree with you. Most of the important things in the world are not susceptible to statistics, and most of the errors humans make are not statistical ones.

BTW :-) it's "parimutuel" https://en.wikipedia.org/wiki/Parimutuel_betting not "peri-mutual", unless you mean the Portuguese version they used in Portugal's African colonies: https://www.thespicehouse.com/products/peri-peri-mozambique-blend?srsltid=AfmBOopTwOkvGXuCNI_ZXyRtPRvXxgO1OJzk9JGwe5DveZVLjsIWLL-0

author

I'm leaving the typo because now every time I see it I daydream about betting screens in Nando's!

Aug 28 · Liked by Dan Davies

There is a tension between techne and metis inherent in any practical application of probability, adumbrated by R.W. Hamming in his book "The Art of Probability":

"In normal life we use the word 'probable' in many different ways [...]

"In science (and engineering) we also use probability in many ways [...]

"Thus there are many different kinds of probability, and any attempt to give only one model of probability will not meet today's needs, let alone tomorrow's [...]

For example, I would describe Silver's model of probability as what Gelman & Shalizi call "Bayesian modeling with frequentist checking". That is, he is willing to treat, say, the mass of Jupiter as a random variable - not a coherent concept in frequentist *philosophy* - while also believing that there is an objective fact of the matter about what that mass actually is. His epistemological ground truth is established empirically by frequencies, or deductively from symmetries (e.g. poker hand frequencies.) This is radically different from the epistemological Bayesianism of de Finetti or Ramsey, where probability has at best a secondary ontological status ("Probability does not exist" is how de Finetti's book famously begins.)

And there's nothing wrong with that! It's a perfectly sensible use of probability! But philosophically he is no Bayesian. And in my view, a man who doesn't know his own philosophy of probability is singularly ill-suited to write a book about the philosophy of probability.

Aug 28 · Liked by Dan Davies

FWIW, I don't think Silver has thought deeply about the philosophy of probability. In his book, he's pretty attached to the idea that his forecasts are handicapping for bettors. And that all forecasting should be used to maximize gambling odds. It's closer to a literal, naive reading of de Finetti and Ramsey than you might expect it to be.

And yet, you are right: Silver uses frequentist calibration as a way to create fake authority around his own accuracy. But he doesn't talk about this at all in the new book.
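
For concreteness, this kind of frequentist calibration check is easy to sketch (a toy example with synthetic forecasts; numpy only, all names invented):

```python
import numpy as np

def calibration_table(probs, outcomes, n_bins=5):
    """Bin forecast probabilities and compare each bin's mean
    forecast to the empirical frequency of the outcome."""
    probs = np.asarray(probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi) if hi < 1.0 else (probs >= lo)
        if mask.any():
            rows.append((probs[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows

# Synthetic, perfectly calibrated forecaster: events occur with
# exactly the stated probability.
rng = np.random.default_rng(0)
p = rng.uniform(0, 1, 100_000)
y = rng.uniform(0, 1, 100_000) < p
for mean_p, freq, n in calibration_table(p, y):
    print(f"forecast {mean_p:.2f}  observed {freq:.2f}  n={n}")
```

A real check would plug Silver-style forecasts and outcomes in place of the synthetic ones; a well-calibrated forecaster's bins sit close to the diagonal.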

Sep 3 · Liked by Dan Davies

I want to push back gently on the importance of having only one or two tracks. I worked off and on for a decade for a similar operation in Japan, where the JRA operates ten tracks. The other ingredients were there: a lot of racing, a relatively closed population of horses, and very deep betting pools (JRA's annual betting handle is comparable to Hong Kong's). Nobody made a billion dollars, but our principal made a very nice career out of it, and kept quite a few people employed for about fifteen years.

Of course 10 tracks is a lot fewer than 57. And I speculate that the JRA tracks are more similar to each other (or, let's say, less distinctive) than the UK tracks.

Aug 28 · Liked by Dan Davies

Modern neural network AIs really are all about sparse mixture-of-experts architectures, so the idea is to create a lot of separately trained higher-layer structures with decorrelated response functions. See the Google DeepMind paper on PEER: https://arxiv.org/pdf/2407.04153

author

at some level, I suppose, you can train your brain to pick horses so you can train a neural net to pick horses. But I suspect that it's not going to look much like Moneyball and it's not going to be based on the formbook databases because they're just too small relative to the dimensionality of the problem. If I was given the task of creating an LLM tipster and the budget to do so, I think I'd probably start by trying to train it on a lot of race videos, trying to get it to recognise winning horses by the way they ran and their physical characteristics rather than trying to infer this from results.

Aug 28 · Liked by Dan Davies

the basic principle of MoE training is that chunks of the model are trained on randomly selected subsets of the training data, and a router network is trained to select which chunk to use at inference time (in fact, which chunk on each layer). It's possible to pull out some of the chunks (aka experts) and use them independently, but interestingly they don't usually correspond to obvious divisions (eg Mandarin, Spanish, mathematics and logic, function calling and utilities, general knowledge), but... weird ones (eg texture). So, you know, getting an expert for "weird Irish Hunt tracks" is on the cards, and the DeepMind team has been pushing on this for years. Back in 2021 they showed that neural networks seem to scale better along the dimension of more variety than more size (the GLaM paper).
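
As a toy illustration of the routing idea (not PEER itself, and with all sizes and weights invented), a top-1 sparse MoE layer at inference time might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_experts, d_hidden = 8, 4, 16

# Router: a linear layer scoring each expert for a given input.
W_router = rng.normal(size=(d, n_experts))
# Experts: small, independently trained feed-forward "chunks".
experts = [
    (rng.normal(size=(d, d_hidden)), rng.normal(size=(d_hidden, d)))
    for _ in range(n_experts)
]

def moe_forward(x):
    """Top-1 sparse MoE layer: route x to the single best-scoring
    expert, so only 1/n_experts of the parameters do any work."""
    scores = x @ W_router
    k = int(np.argmax(scores))       # top-1 expert selection
    w_in, w_out = experts[k]
    h = np.maximum(x @ w_in, 0.0)    # ReLU hidden layer
    return h @ w_out, k

x = rng.normal(size=d)
y, chosen = moe_forward(x)
print("routed to expert", chosen, "output shape", y.shape)
```

A real model stacks many such layers, routes per layer (often top-k rather than top-1), and trains the router jointly with the experts.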

author

In principle I think you might be more likely to develop a "spotter of horses that Martin Pipe thinks have a chance at Ascot" or something like that, finding patterns over multiple races that end in a higher win chance. But in practice, there are something like 59 racecourses in the UK, multiplied by at least ten standard distances in four or five ground conditions, with wind speed, class of the opposition, plus jockeys make a difference, as does the draw ... I just feel that with about 10,000 horse races in a year, you are definitely not going to get a million usable data points.

Although I suppose you don't actually need a particularly good classifier; just one that's better than William Hill.
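
The back-of-envelope arithmetic here can be made concrete, using the rough figures from the comment (illustrative, not official counts):

```python
# Rough size of the condition space vs. available observations,
# using the figures from the comment above (illustrative only).
courses = 59
distances = 10
ground_conditions = 5
cells = courses * distances * ground_conditions   # course x distance x going
races_per_year = 10_000

print(f"{cells:,} course/distance/going cells")
print(f"~{races_per_year / cells:.1f} races per cell per year")
```

Roughly three races per cell per year, before jockeys, draw, class and wind split the data even thinner.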


This is very important! If you want to train a neural network to play Go (or backgammon), you can have as much data as you can afford compute power ("play another million games"). In horse racing, once you've fed All The Races into your system, you're done. You can't get any more data. And it's just not that much data.


On the other hand, it's quality data.

Aug 28 · Liked by Dan Davies

I don't know if you've ever looked into golf betting, but it seems like a relatively similar problem. You are essentially trying to identify each player's strengths and weaknesses, and then map them to the demands of each course. I know that people have successfully built betting models for golf, but I guess the individual shot data allows you more granularity than the results of a horse race do?

author

I think it's more that a) a much more consistent population of players competing against each other all the time, and b) although golf courses are very different, they're not *that* different to the extent that someone can win on one course and have no chance on another. I know that there are people with statistical systems for golf and that they make money, yeah.

Aug 28 · Liked by Dan Davies

This post is almost annoyingly insightful and may force me to incorporate it into my understanding of Scott.

Aug 28 · Liked by Dan Davies

You could just use all the data you have to train a neural net? Use dropout etc. to deal with the fact that you only have a small amount of data. Add the date as a feature as well, so the model can learn to weigh older data less heavily when predicting the future.
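
Dropout itself is simple to sketch (inverted-dropout convention, numpy only; nothing here is specific to horse racing):

```python
import numpy as np

def dropout(h, rate, rng, training=True):
    """Inverted dropout: zero each activation with probability
    `rate` during training, and rescale the survivors so the
    expected layer output is unchanged at inference time."""
    if not training or rate == 0.0:
        return h
    mask = rng.uniform(size=h.shape) >= rate
    return h * mask / (1.0 - rate)

rng = np.random.default_rng(42)
h = np.ones((1000, 32))
out = dropout(h, rate=0.5, rng=rng)
print("mean activation after dropout:", out.mean())  # close to 1.0
```

At inference (`training=False`) the layer is the identity, which is why the 1/(1-rate) rescaling is applied during training.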

author

you could definitely do that, and I'd imagine you'd get some performance improvement over a linear classifier because you have more structure to deal with interaction effects, but I wouldn't think it's really going to give an economically meaningful improvement, because the curse of dimensionality is a real problem - there just isn't very much data compared to the size of the search space.

Aug 28 · edited Aug 28 · Liked by Dan Davies

The absolute scarcity of the data matters, yes; but I don't think 'dimensionality' is a problem.

Just adding more 'features' to a problem doesn't make it harder for a neural net. (Nor does dimensionality impact Monte Carlo integration nearly as much as 'regular' deterministic integration techniques.)

Keep in mind that you only have to guess better than the humans, you don't have to guess well on an absolute level.

I wonder what extra data about the horses you could feed into your network. It would be absolutely crazy to find a way to just feed in a video of the horses during training (just like a trained human expert might look at a horse and make a judgment).
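
The Monte Carlo point can be checked directly: the error of an MC estimate scales like 1/sqrt(N) regardless of dimension. A quick numpy sketch on a toy integral whose exact value is 0.5 in any dimension:

```python
import numpy as np

def mc_estimate(dim, n_samples, seed=0):
    """Monte Carlo estimate of the integral of x_1 over the unit
    hypercube [0,1]^dim (exact value 0.5). The estimator's variance
    is the same for every dim, so the error depends only on
    n_samples, not on dimension."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(size=(n_samples, dim))
    return float(x[:, 0].mean())

for dim in (2, 200):
    est = mc_estimate(dim, 100_000)
    print(f"dim={dim:3d}  estimate={est:.4f}  (exact 0.5)")
```

A deterministic grid rule, by contrast, would need a number of points exponential in `dim` to achieve comparable accuracy.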

Aug 29 · Liked by Dan Davies

I am trained in statistical mechanics and have always been a bit sceptical of ML/AI. This is because regression (to which MC and ML in general are closely related) is very dependent on the actual realized paths. Indeed, there are many algorithms and techniques, applied in domains where you know a generating function, that basically refine a low-dimensional path by "smartly" iterating on realized outcomes. Now, "wins the race" is particularly low-dimensional, and thus any projection from the high-dimensional space of reality will necessarily be noisy, precisely because you need to align the low-dimensional projection in some way with the high-dimensional data. Thus the dimensionality absolutely matters, because you will need not just more observables but more observation series to construct a good alignment.

Tl;dr I think if you managed to build a horse betting NN you would never know if you got lucky or not. This is pretty much the point of everything written by Taleb.


I think somewhere Mickey Spillane has Mike Hammer say "there's a lot of experience goes into what people call intuition".

author

I must do this as a post, because there's a load of really interesting literature in this regard; a surprising amount of it is based on a particular case study of firefighters' ability to tell when a floor is about to collapse because it starts feeling "spongy". There is very strong agreement among experienced firefighters as to whether a floor has become spongy, inexperienced firefighters think all floors are spongy, and sponginess doesn't seem to match up particularly well with any physical quantity that can be easily measured in a burning building.


I think there’s another general principle at work there as well: the key to success at any (non-abstract) activity is understanding how the materials will behave. It’s how a plasterer judges plaster, a baker judges dough, grannies judge pasta, Japanese sword makers judge steel… and firefighters judge floors. That knowledge of the material comes from experience too - it feels right - but I think of intuition as something less readily consciously apprehended.


Ha.

Wrote code in Perl to do this about twenty years ago. Worked OK.


The great thing about a Systems Blog is that topics like philosophy of probability, neural network architectures, and horse race gambling all fit together just fine. And I do hope we get to hear more about the Dan Davies Horse Racing System!
