20 Comments
Dan Davies

I can't be bothered to do two updates, but on reading back, I don't understand why I said that in game two your expected wealth "goes to infinity really quickly". It's infinity from the first play and you still have infinite expected wealth all the way through even while your actual wealth is getting closer and closer to zero
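The gap Dan is describing is easy to see with a back-of-envelope calculation. A minimal sketch, assuming the post's parameters (start at $100, ×1.11 on heads, ×0.90 on tails), comparing expected wealth with the median outcome after many rounds:

```python
START = 100.0          # starting bankroll (from the post)
UP, DOWN = 1.11, 0.90  # Game 2 multipliers for heads / tails
N = 10_000             # illustrative number of rounds

# Expected wealth compounds the per-round mean multiplier 0.5*(1.11 + 0.90) = 1.005
mean_wealth = START * (0.5 * (UP + DOWN)) ** N

# The median path has N/2 heads and N/2 tails, so typical wealth is
# START * (UP * DOWN)^(N/2), and UP * DOWN = 0.999 < 1
median_wealth = START * (UP * DOWN) ** (N / 2)

print(f"mean ≈ ${mean_wealth:.3g}, median ≈ ${median_wealth:.2f}")
```

The mean explodes (around 10^23 dollars here) while the median path ends under a dollar, which is exactly the "expected wealth rising while actual wealth goes to zero" tension.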

Ben

Anthropic Opus 4.1 sticks to its guns, interestingly, and stays with game 1. https://claude.ai/share/6911f55a-45c4-4ef7-a4bb-7d5b53bbb472

Dan Davies

That is an interesting one because it states correctly that it's a random walk with unequal step sizes but doesn't seem to notice that there is no win criterion

Dan Davies

It is really interesting that they are so very convinced that the ruin probability is less than one. I think this is because most of the training data describes sensible problems in which there is a win criterion, but I have set this one up so that it goes on forever

Dan Davies

(arguably this is my fault because the original stipulation is ambiguous. The probability of ruin for an asymmetric random walk is asymptotically less than one as the win criterion tends to infinity; although my simple argument that "nonzero probability of loss means nonzero probability of a capital-destroying run of losses, which means certainty in infinite play" is, I think, correct, it's not the limit of the series, and it's properly debatable whether "infinitely repeated" is a sensible concept here, particularly since I told the computer that it would be just a "very large" number.)

dribrats

The probability of ruin really is less than one; Nick Banner is correct. There is a robust literature on asymmetric random walks; "return time" and "hitting time" are useful search terms.

I am more interested in a subtle defect in the design of Game 2: we are betting too much! The expected value of a single $10 bet is 50 cents; the Kelly fraction for this game is about .045 (f* = p − q/b = 0.5 − 0.5/1.1). The "full Kelly" betting strategy would be to risk roughly 5% of our bankroll (to gain 5.5%) each time. This makes a big difference to the expected log math; in particular, (.95)(1.055) > 1.

This means that there is a very important difference between Game 2 and anything that Taleb or Thorp (or Warren Buffett, or Bill Ziemba) would advocate. Many commentators have observed that "full Kelly" is more swing-y than most investors are prepared to tolerate; "half Kelly" is commonly proposed as a practical compromise. "Double Kelly" is totally not a thing, for reasons that the analysis of Game 2 makes clear. You really would go broke that way.
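The log arithmetic in this comment can be checked in a few lines. A sketch, assuming the commenter's framing: stake a fraction f of the bankroll, gaining 1.1·f on heads and losing f on tails:

```python
import math

def log_growth(f, b=1.1, p=0.5):
    """Expected log growth per flip: stake fraction f, win b*f with prob p, lose f otherwise."""
    return p * math.log(1 + b * f) + (1 - p) * math.log(1 - f)

kelly = 0.5 - 0.5 / 1.1   # f* = p - q/b = 1/22 ≈ 0.045

for label, f in [("full Kelly", kelly), ("half Kelly", kelly / 2),
                 ("double Kelly", 2 * kelly), ("Game 2 (10%)", 0.10)]:
    print(f"{label:12s} f = {f:.4f}  growth/flip = {log_growth(f):+.6f}")
```

Full and half Kelly give positive growth per flip; for this payoff double Kelly sits exactly at zero; and Game 2's 10% stake is beyond that, which is why its log growth is negative and "you really would go broke that way".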

Dan Davies

I see what you mean - I always thought of this as a genuine paradox, but have updated the footnote accordingly.

Nick Banner

GPT-5 is not convinced either one is a great option, but doesn't ask for clarification. I get:

Game 1.

Here’s why—very short:

• Game 2 (±% of wealth)

Each round multiplies wealth by 1.11 (heads) or 0.90 (tails).

Time-average (Kelly) growth = ½·ln(1.11) + ½·ln(0.90)

= ½·(0.104360… − 0.105360…) ≈ −0.000500… < 0.

So with many rounds and no ability to stop, your wealth almost surely drifts toward 0, even though the expected wealth rises (skewed distribution).

• Game 1 (± fixed dollars)

Per round expectation = ½·(+$11) + ½·(−$10) = +$0.50.

With many rounds, wealth has positive drift and (absent an absorbing “you must stop at $0” rule) there’s a strictly less-than-1 chance of ruin and a positive probability of unbounded growth; the expected wealth grows roughly linearly: $100 + 0.5·(rounds).

• Intuition:

Over long horizons you should maximize time-average growth (expected log wealth), not the one-step arithmetic mean. Game 2’s attractive +0.5% arithmetic mean per flip hides a negative log growth, which dooms you over many rounds. Game 1’s fixed edge compounds in your favor as your bankroll grows and the $10 loss becomes a smaller and smaller fraction of wealth.

So, if I must pick once and play a very large number of rounds, I choose Game 1.

>but in Game 1, you lose all of your money with probability 1 in finite time

Not quite—Game 1 does not lead to ruin with probability 1.

Treat wealth as a random walk with iid steps: +$11 (p=0.5) or −$10 (p=0.5). The mean step is +$0.50, i.e., positive drift. For such a walk with an absorbing barrier at 0 and no upper cap, the probability of ever hitting 0 is strictly less than 1.

A standard bound (Lundberg/Cramér) uses the exponent θ > 0 solving

E[e^(−θX)] = ½·e^(−11θ) + ½·e^(10θ) = 1.

This gives θ ≈ 0.009084. Starting from $100, the ruin probability satisfies

Pr(ruin) ≤ e^(−θ·100) ≈ e^(−0.9084) ≈ 0.40,

and the true value is very close to that (simulations land around ~0.39). So there’s a substantial chance of ruin—but definitely not probability 1—and a positive probability of unbounded growth. Meanwhile, expected wealth grows linearly at $0.50 per flip.

Contrast with Game 2 (±%): the time-average growth rate is

g = ½·ln(1.11) + ½·ln(0.90) ≈ −0.000500 < 0,

so wealth almost surely trends to 0 over many rounds.

So the choice still stands: Game 1.
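GPT-5's figures here are straightforward to check numerically. A sketch: bisect for the nontrivial root λ = e^(−θ) of ½·λ^11 + ½·λ^(−10) = 1 and evaluate the Lundberg bound λ^100 (the true ruin probability is a shade lower because the walk can jump past $0, which is consistent with the ~0.39 the model attributes to simulations):

```python
def ruin_bound(start=100, win=11, lose=10):
    """Lundberg/Cramér bound on the ruin probability for the fair-coin
    +$win / -$lose walk with an absorbing barrier at 0.
    Bisects for the root lam in (0, 1) of 0.5*lam**win + 0.5*lam**(-lose) = 1."""
    f = lambda lam: 0.5 * lam ** win + 0.5 * lam ** (-lose) - 1.0
    lo, hi = 0.5, 0.999   # f(lo) > 0 (the lam**-lose term blows up), f(hi) < 0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return lam, lam ** start

lam, bound = ruin_bound()
print(f"lambda ≈ {lam:.7f}, ruin bound from $100 ≈ {bound:.4f}")
```

This lands on λ ≈ 0.9909571 and a bound of about 0.403, matching the quoted θ ≈ 0.009084 (since λ = e^(−θ)).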

John Quiggin

My instant assumption was that with game 1 you would certainly be ruined eventually and I missed the point that there is an expected loss in game 2. Not sure what that says about my PhD.

Richard Kennaway

Your analysis of Game 1 is not correct. The probability of ever going bankrupt is strictly less than one. The game approximates Brownian motion with a positive drift, a positive starting value, and an absorbing barrier at zero. The probability of ever hitting the barrier is less than one for all such games. It is true that after any number of rounds, there is still a nonzero probability of eventually going bankrupt, but it decreases exponentially with your current bankroll. For the given parameters I estimate a probability of about 0.6 of surviving indefinitely.

Dan Davies

I've written a comment to that effect, although I think this means that in actually infinite time you are ruined, but this is more controversial as a view in philosophy of mathematics than I made it out to be

Richard Kennaway

There is nothing controversial here. Limits of infinite products are as well-understood as limits of infinite sums. In particular, an infinite product of the form (1-p1)(1-p2)(1-p3)... where each pi is between 0 and 1 converges to a positive number if and only if the sum of all the pi converges.
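The criterion is easy to see numerically. A sketch comparing ∏(1 − 1/n²), where Σ1/n² converges and the product telescopes to (N+1)/2N → ½, with ∏(1 − 1/n), where Σ1/n diverges and the partial product is exactly 1/N → 0:

```python
def partial_product(p, N):
    """Partial product of (1 - p(n)) for n = 2..N."""
    out = 1.0
    for n in range(2, N + 1):
        out *= 1.0 - p(n)
    return out

N = 100_000
conv = partial_product(lambda n: 1 / n ** 2, N)  # convergent sum -> positive limit (1/2)
div = partial_product(lambda n: 1 / n, N)        # divergent sum -> product collapses to 0

print(conv, div)
```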

Dan Davies

I've updated the footnote, but I do think this is a paradox - I have lots of problems with the way that probability theory helps itself to "almost surely" and limit arguments, but probably shouldn't have brought them in here!

Joe Jordan

I am not sure about this. An infinite number of coin flips will (eventually) have an infinite string of tails, so you will always lose (eventually). All this shows is that infinite series (or products) are not always helpful for reasoning.

Richard Kennaway

I am sure. An infinite series of coin flips will have strings of tails of every finite length, but with probability 1 it will not have an infinite string of tails. With Game 1, the longer you survive without going bankrupt, the greater your likely bankroll by that point, and the longer a string of losses you would have to have to lose everything. The probability of this happening decreases, the longer it goes without happening. It decreases fast enough that in the limit, the probability of it ever happening remains below 1.

This is all standard properties of Brownian motion.

John Harvey

Is this how economists think? Seems more like the mindset of gamblers, or drug addicts. "Number go up," to infinity? Do they no longer care about how well companies, and anybody or anything else are actually doing?

Myself, I did a random walk down Chat Street today and learned a few things.

Q. How much chat could a chatbot bot, if a chatbot could bot chat?

"Sorry, no relevant information was found in our search."

Q. How many ducks does it take to get all your ducks lined up in a row?

"Sorry, no relevant information was found in our search."

Q. Do causes create effects, or do effects create causes?

"Causes create effects..."

Q. Does a chatbot have Buddha nature?

"Since chatbots are artificial intelligence and lack consciousness or spiritual essence, they do not possess Buddha nature."

All good to know!

About numbers and obsession:

https://en.wikipedia.org/wiki/Pi_(film)

John Quiggin

As you hint, log utility provides a conclusive argument in favour of 2. For any finite T, wealth under 2 is strictly bounded away from zero. By contrast, for any T >= 10, option 1 gives you a positive probability of ruin, which makes expected utility infinitely negative.
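The "T >= 10" threshold is just the arithmetic of the bankroll. A sketch, assuming the post's parameters ($100 start, $10 lost per tail):

```python
import math

START, LOSS = 100, 10
rounds_to_ruin = math.ceil(START / LOSS)   # ten straight tails empties the bankroll
p_ruin_by_then = 0.5 ** rounds_to_ruin     # 2**-10, about 0.001

# Under log utility, any positive probability mass at W = 0 drags
# E[log W] to minus infinity, since log(0) = -inf.
print(rounds_to_ruin, p_ruin_by_then)
```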

Matthias Görgens

That's a pretty silly use of log utility.

Anyone playing this game would still have, e.g., their kidneys, so would never go to zero even if they went bust in this game.

Log utility is just an approximation, and we need to be careful when going to its extreme edges.

Just like assuming log normal distributions for returns from equity is usually a reasonable model, but ignores that stocks can occasionally go to zero with non-zero probability.

Sandy

Thought for 57s

What it knows and why

Setup.

Game 1 (additive): each round +$11 on Heads, −$10 on Tails (fair coin).

Expected increment per round: E[ΔW] = 0.5·(11) + 0.5·(−10) = +0.5 dollars. Linear positive drift.

Game 2 (multiplicative): each round ×1.11 on Heads, ×0.9 on Tails.

Expected log growth per round:

g = ½·ln(1.11) + ½·ln(0.9) = ln(√(1.11·0.9)) = ln(0.9994998749) ≈ −0.00050025 < 0.

Time-average growth factor e^g ≈ 0.9994999 < 1. Over many rounds, wealth tends to 0 almost surely.

Ruin probability in Game 1.

Let r(w) be the probability of eventual ruin starting from wealth w. The recursion for integer dollars with absorbing 0 is

r(w) = ½·r(w+11) + ½·r(w−10), with r(0) = 1 and r(w) → 0 as w → ∞.

The trial solution r(w) = λ^w gives ½·(λ^11 + λ^(−10)) = 1. Besides the trivial λ = 1, the root in (0,1) is λ ≈ 0.9909571.

Hence r(w) = λ^w, so r(100) ≈ (0.9909571)^100 ≈ 0.4032.

So there is a ~40.3% chance of eventual ruin and a ~59.7% chance of never hitting ruin (and drifting to +∞).

What it infers and why

Choose Game 1.

Game 2 has negative geometric (time-average) growth; with many rounds it almost surely decays to zero regardless of starting wealth.

Game 1 has positive expected additive drift and a strictly less than 1 ruin probability; with ~60% chance it survives indefinitely and wealth increases without bound.

What it suspects and why

If utility is logarithmic (Kelly criterion), Game 2 is strictly dominated (negative log-growth).

If the objective is maximizing expected terminal wealth over a fixed large horizon without hard ruin, Game 1 also dominates due to positive drift. With the “stop at ruin” rule, the survival probability (~60%) still makes Game 1 superior for virtually any reasonable objective (median, high quantiles, or expected log).

What it doesn't know

Your precise objective function (maximize expected terminal wealth, maximize probability of exceeding a threshold, minimize ruin, etc.).

Any constraints on maximum rounds or house limits.

Whether fractional dollars are allowed (assumed integer; result is insensitive here).

Decision: Game 1.

mattmcd

On your March 2024 post 'why I don't fear the robot overlords', Ken MacLeod's Fall Revolution series addresses this to an extent, particularly 'The Stone Canal'.
