
You find 2 dollars in your pocket and decide to go gambling. Fortunately, the game you're playing has very favourable odds: each time you play, you gain \$1 with probability 3/4 and lose \$1 with probability 1/4.

Suppose you continue playing so long as you have money in your pocket. If you lose your first two bets, you go broke and go home after only two rounds; but if you win forever, you'll play forever.

What's the probability you'll eventually go broke?

I think I am overthinking this:

To go broke, you have to end on two losses, each of which has $p(L) = 1/4$.

Before those two losses there must have been an equal number of wins and losses, so that the balance is back to 2 dollars just before the final two losses.

$P(Broke)=\left( \dfrac {1}{4}\right) ^{2}\sum ^{\infty }_{i=0}\left( \dfrac {1}{4}\right) ^{i}\left( \dfrac {3}{4}\right) ^{i}N_{i}$

But then each summand needs to be multiplied by $N_i$, the number of ways to arrange an equal number of wins and losses without going broke beforehand. So for $i = 1$, WL and LW are allowed; for $i = 2$, WWLL, WLWL, WLLW, LWWL and LWLW are allowed.

So my final calculation is $P(Broke) = \left( \dfrac {1}{4}\right) ^{2}\sum ^{\infty }_{i=0}\left( \dfrac {1}{4}\right) ^{i}\left( \dfrac {3}{4}\right) ^{i}\left( \dbinom{2i}{i}-\sum ^{i-1}_{k=1}\dbinom{2k}{k-1}\right) $

But this doesn't equal anything and seems rather complicated.
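
For reference, the counts $N_i$ can be checked by brute force; here is a quick sketch (the helper name `count_valid` is made up for this check):

```python
from itertools import product

def count_valid(i):
    """Count W/L sequences of length 2i (i wins, i losses) that never
    drop the balance to 0 when starting from 2 dollars."""
    count = 0
    for seq in product("WL", repeat=2 * i):
        if seq.count("W") != i:
            continue
        balance = 2
        ok = True
        for step in seq:
            balance += 1 if step == "W" else -1
            if balance == 0:  # went broke before the final two losses
                ok = False
                break
        if ok:
            count += 1
    return count

print([count_valid(i) for i in range(5)])  # [1, 2, 5, 14, 42]
```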

Tina
  • It seems actually similar to https://math.stackexchange.com/questions/153123/biased-random-walk-on-line (even if it goes against my other comment... seems I got it wrong [again!] in probability problems) – Xenos Jun 12 '18 at 08:45

5 Answers

---

Let $p(n)$ be the probability of going broke starting with $n$ dollars. On the one hand, we have
$$ p(n)=p(1)^n\tag1 $$
because moving from $\$n$ to $\$0$ is the same as moving from $\$k$ to $\$k-1$ a total of $n$ times, for $k=n,n-1,n-2,\dots,1$. On the other hand, conditioning on the first bet, we must have
$$ p(1)=\frac34 p(2)+\frac14.\tag2 $$
Substituting $(1)$ with $n=2$ into $(2)$, you get $p(1)=\frac34 p(1)^{2}+\frac14$, whose solutions are $p(1)=1/3$ and $p(1)=1$. If we had $p(1)=1$, then we would have $p(n)=1$ for all $n$, so that you would certainly go broke no matter how much money you start with, which is clearly incorrect.$^*$ Therefore,
$$ \bbox[3pt,border: 1.5pt black solid]{p(n)=(1/3)^n.} $$
$^*$A rigorous proof of this certainly exists, perhaps by applying Stirling's approximation to an exact combinatorial expression for $p(n)$, or with some central limit theorem argument.

Edit: For a more convincing argument, suppose that the process does not stop when you hit zero. If $p(1)=1$, you would certainly hit $\$0$, and from there certainly hit $\$(-1)$, and from there $\$(-2)$, and so on. This means you would drift arbitrarily far down. This contradicts the law of large numbers, which says $S_n/n$ converges to $1/2$ almost surely as $n\to\infty$, where $S_n$ is your winnings after $n$ plays.
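
As a quick sanity check, a minimal Monte Carlo sketch (which truncates each session at a large step cap, since winning runs can continue forever) agrees with $p(2)=(1/3)^2=1/9\approx 0.111$:

```python
import random

def goes_broke(start=2, p_win=0.75, max_steps=10_000):
    """Simulate one session; True if the balance hits $0 within max_steps."""
    money = start
    for _ in range(max_steps):
        money += 1 if random.random() < p_win else -1
        if money == 0:
            return True
    return False  # treat surviving max_steps bets as never going broke

trials = 100_000
estimate = sum(goes_broke() for _ in range(trials)) / trials
print(estimate)  # typically prints something close to 1/9 ≈ 0.111
```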

Mike Earnest
  • Saying we play forever and considering the odds, there will be a consecutive sequence of N losing bets for any N, including N = the current amount of dollars in my pocket (which is finite)? So I'll always end up broke and P(broke) = 1? Note that the game's odds would then change nothing (except if you win every time)... – Xenos Jun 11 '18 at 17:00
  • The claim that if P(A) converges to one, and P(B|A) converges to one, then P(B) converges to one, is not trivial, and I'm not even sure it's true. – Acccumulation Jun 11 '18 at 17:50
  • @Xenos This is not correct. In an infinite sequence of independent events with the same positive probability, some event will eventually occur. However, the event "lose N times in a row," where N is your current wealth, has a changing probability. What happens is that as time goes on, your wealth N generally grows, so P(N losses) goes to zero fast enough that the infinite product of $(1-P(N\text{ losses}))$ is nonzero. – Mike Earnest Jun 11 '18 at 18:04
  • I don't think I understand your argument in the edit. $p(1)=1$ does not mean "you always lose money", it means "you eventually reach zero". – gone Jun 11 '18 at 18:09
  • @Raeven0 You can think of $p(1)$ as the probability of (eventually) moving from \$1 to \$0. This is the same as the probability of moving from \$k to \$(k-1), for any $k$, by the symmetry of this process. I know that $p(1)$ was defined as the probability of reaching zero (starting from 1), but a consequence of this definition is that it is also the probability of losing a dollar starting from any amount. – Mike Earnest Jun 11 '18 at 18:14
  • @Raeven0 "solution cannot be p(1)=1" seems to be what I underlined in my comment (which leads to "no matter the odds, no matter the finite starting stake, you'll always end up losing someday"). I would like to see the rigorous proof of p(1) != 1 then. Btw, isn't "lose N times in a row" an "infinite" sequence of events?! But I get the counter-intuitive "you will always lose while the odds are in your favor" – Xenos Jun 12 '18 at 08:09
  • The more convincing argument for why $p(1) \neq 1$ is not so clear to me. Think of Brownian motion; it hits arbitrarily large negative numbers infinitely often, but that doesn't stop it from satisfying the law of large numbers (i.e. $\lim \frac{W_t}{t} = 0$). – Ant Jun 13 '18 at 06:45
---

As another possible method, consider a slightly modified game that ends when your money reaches $0$ or $N$ dollars (we can take the limit later). Let $P_i$ be the probability that you win (reach $N$ dollars) starting with $i$ dollars, let $p$ be the probability of winning a single game, and let $q = 1 - p$ be the probability of losing a single game. Conditioning on the first game, we observe the relation
$$ P_i = p P_{i+1} + q P_{i-1}. \tag{1} \label{1}$$

This is a linear difference equation, which can easily be solved by finding the roots of its characteristic equation (try $P_i = z^i$). The result is
$$P_i = \begin{cases} A + B \left(\frac{q}{p} \right)^i & \text{if } p \neq q, \\ A + B i & \text{if } p = q = 1/2. \end{cases}$$

Then, applying the boundary conditions $P_0 = 0$ and $P_N = 1$, we obtain
$$P_i = \begin{cases} \frac{1-\left(\frac{q}{p} \right)^i}{1- \left(\frac{q}{p} \right)^N} & \text{if } p \neq q, \\ \frac{i}{N} & \text{if } p = q = 1/2. \end{cases}$$

Taking the limit as $N \to \infty$ yields the desired result: here $q/p = 1/3$, so $P_2 \to 1 - (1/3)^2$ and the probability of going broke starting from $\$2$ is $(1/3)^2 = 1/9$.
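
As a numerical illustration (a sketch using exact rational arithmetic; the helper name `ruin_prob` is made up for this example), the ruin probability of the $(0,N)$ game converges quickly to $1/9$ as $N$ grows:

```python
from fractions import Fraction

def ruin_prob(i, N, p=Fraction(3, 4)):
    """Exact probability of hitting 0 before N, starting with i dollars.
    Assumes p != 1/2, the biased case of the formula above."""
    q = 1 - p
    r = q / p  # = 1/3 for p = 3/4
    win = (1 - r**i) / (1 - r**N)  # P_i: probability of reaching N first
    return 1 - win

for N in (5, 10, 20, 40):
    print(N, float(ruin_prob(2, N)))  # approaches 1/9 = 0.111...
```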

Gregory
  • I agree that it's plausible the $N \to \infty$ limit yields the desired result, but I want to emphasize that it has not yet been argued that the probability of winning the OP's game should be related to the limiting probability of winning the $(0,N)$ game. –  Jun 11 '18 at 23:35
  • OK, this is neat, and the first answer I have looked at. I see that you have reduced the problem to gambler's ruin (https://en.wikipedia.org/wiki/Gambler%27s_ruin). I have no experience solving linear difference equations; is the standard method what is suggested here (https://www.cl.cam.ac.uk/teaching/2003/Probability/prob07.pdf): 1) try $P_{i}=Az^{i}$, 2) take the i = 2 case, 3) solve, and form a linear combination as a guess, 4) substitute the guess into the original to check it is right? – Tina Jun 12 '18 at 08:22
  • @Hurkyl, is it not exactly the same problem, with N tending to infinity? – Tina Jun 12 '18 at 08:24
  • @Tinatim: You'll have to tell me what it means to play this game "with N tending to infinity", and why the probability of winning that game is the limit of the probability of winning the $(0,N)$ games. If you meant that the original problem is the limit as N tends to infinity... you'll have to explain what it means to take the limit of a sequence of games and why the probability of winning should be a continuous function of the game. –  Jun 12 '18 at 08:48
  • @Tinatim: ... for example, one way to make sense of a limiting idea here is that an event leading to winning the game in the OP has the property that, for every $N$, it starts with a sequence of events that describes a winning $(0,N)$-game. Why is the probability of winning the OP game the limit of the $(0,N)$-games? A limit-like argument can probably be made in this case, but the situation is not so clear that it goes without saying. I emphasize this precisely because people gloss over this sort of issue without even recognizing it can be a problem. –  Jun 12 '18 at 08:53
---

Unfortunately I don't have a very intuitive understanding of the problem, but I thought about it before, and this is what I came up with:

Let $S_n$ denote your money at step $n$, with $S_0=x$ the money you start with. Let $y$ be the (for now) finite amount of money the bank starts with. Let $p$ be the probability that you win $\$1$ and $1-p$ the probability that you lose $\$1$.

Notice that $\left(\frac{1-p}{p}\right)^{S_n}$ is a martingale: a single step gives $p \cdot r^{s+1} + (1-p)\cdot r^{s-1} = r^s$ for $r = \frac{1-p}{p}$, and hence $$\mathbb{E}\left[\left(\frac{1-p}{p}\right)^{S_n} \,\middle|\, S_k \right] = \left(\frac{1-p}{p}\right)^{S_k}$$ for $k\leq n$.

Let $\tau = \operatorname{inf}\{k: S_n\text{ is constant for all }n\geq k \}$, that is let $\tau$ be the time the game ends (you won all the money from your opponent or you lost all your money).

By the optional stopping theorem,

$$\mathbb{P}\left(S_\tau = 0 \right) \left(\frac{1-p}{p}\right)^{0} + \left(1- \mathbb{P}\left(S_\tau = 0 \right) \right)\left(\frac{1-p}{p}\right)^{x+y} = \mathbb{E}\left[\left(\frac{1-p}{p}\right)^{S_\tau}\right] = \mathbb{E}\left[\left(\frac{1-p}{p}\right)^{S_0}\right] = \left(\frac{1-p}{p}\right)^{x},$$ so $$\mathbb{P}\left(S_\tau = 0 \right) = \frac{\left(\frac{1-p}{p}\right)^{x} - \left(\frac{1-p}{p}\right)^{x+y}}{1 - \left(\frac{1-p}{p}\right)^{x+y}}.$$

For $(1-p) < p$ we see, by letting $y\rightarrow \infty$, that for a bank with unlimited resources we get $$\mathbb{P}\left(S_\tau = 0 \right) = \left(\frac{1-p}{p}\right)^{x}.$$ With $p = 3/4$ and $x = 2$, this is $(1/3)^2 = 1/9$.
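
The martingale property is easy to sanity-check numerically; here is a minimal sketch for $p = 3/4$, comparing the one-step conditional expectation of $\left(\frac{1-p}{p}\right)^{S_{n+1}}$ given $S_n = s$ against $\left(\frac{1-p}{p}\right)^{s}$:

```python
p = 0.75
r = (1 - p) / p  # the martingale base (1-p)/p = 1/3

for s in range(1, 6):
    one_step = p * r ** (s + 1) + (1 - p) * r ** (s - 1)  # E[r^{S_{n+1}} | S_n = s]
    print(s, one_step, r ** s)  # the last two columns agree (up to float error)
```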

cdwe
  • I agree that it's plausible the $y \to \infty$ limit yields the desired result, but I want to emphasize that it has not yet been argued that the probability of winning the OP's game should be related to the limiting probability of winning the bounded version of the game. –  Jun 11 '18 at 23:40
  • @Hurkyl That is true and I'm not sure how to do that. – cdwe Jun 11 '18 at 23:44
---

Let $p(n)$ be the probability of ever reaching $n-1$ coins (dollars) starting with $n$ coins. We have that

$$p(n) = 1/4 + 3/4 \cdot p(n+1) \cdot p(n)$$

This holds because we either reach $n-1$ coins directly by losing the first bet, or we win one coin and then must first return from $n+1$ to $n$ coins (probability $p(n+1)$) and then go from $n$ to $n-1$ coins (probability $p(n)$).

But we can see that $p(n+1) = p(n)$, because each is the probability of eventually losing one coin; it doesn't matter whether we start with $n$ or $n+1$ coins. Substituting, we have

$$p(n) = 1/4 + 3/4 \cdot p(n)^2$$

Solving this quadratic equation gives $p(n) = 1/3$ or $p(n) = 1$. We can ignore the case $p(n) = 1$, since the probability of going broke is not $1$ (see the other answers for rigorous arguments).

Finally, the probability of going broke is

$$p(2) \cdot p(1) = (1/3)^2 = 1/9.$$
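
One way to see why the root $1/3$, rather than $1$, is the right one: iterating the map $p \mapsto 1/4 + \frac{3}{4}p^2$ starting from $0$ gives, roughly speaking, the probabilities of going broke within an increasing bounded horizon, and these increase to the smaller fixed point. A minimal sketch:

```python
# Fixed-point iteration for p = 1/4 + (3/4) p^2, starting from p = 0.
# The iterates increase monotonically and converge to the smaller root, 1/3.
p = 0.0
for _ in range(200):
    p = 0.25 + 0.75 * p * p

print(p, p ** 2)  # ≈ 0.3333... and 0.1111... (the answer 1/9)
```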

---

It was still bugging me that the quadratic equation in the solution has 2 roots, and I wanted a rigorous proof that the probability of going broke is not 1, no matter how intuitive that seems.

I think I came up with a bound:

Suppose that we start with 1 dollar. To go broke we obviously need an odd number of bets; suppose it is $2n+1$. Going broke after $2n+1$ bets requires $n$ winning bets and $n+1$ losing bets, so counting all arrangements (as in the binomial distribution) gives

$$P_{\text{broke}}(2n+1) \le \binom{2n+1}{n} \left(\frac{3}{4}\right)^{n} \left(\frac{1}{4}\right)^{n+1}.$$

The inequality holds because sequences of bets in which you go broke before bet $2n+1$ are not valid (you cannot go into debt). In fact, for any $n > 0$ the inequality is strict, because such excluded sequences exist.

The total probability of going broke is bounded by the sum of the above over $n$ from $0$ to infinity.

It is easy to show that $\binom{2(n+1)+1}{n+1} = \binom{2n+1}{n} \cdot \frac{2(2n+3)}{n+2} < 4\binom{2n+1}{n}$, from which it follows by induction that $\binom{2n+1}{n} \le 4^n$ (with equality only at $n=0$). Thus we have

$$P_{\text{broke}}(2n+1) \le \frac{1}{4} \left(\frac{3}{4}\right)^{n},$$

with strict inequality for $n \ge 1$. The total probability of going broke is $P_{\text{broke}} = \sum_{n=0}^{\infty} P_{\text{broke}}(2n+1)$, so

$$P_{\text{broke}} < \frac{1}{4} \sum_{n=0}^{\infty} \left(\frac{3}{4}\right)^{n} = \frac{1}{4} \cdot \frac{1}{1 - \frac{3}{4}} = 1.$$

Thus $P_{\text{broke}} < 1$.

This is the best I could come up with so far; I'm sure a tighter bound can be proven for the given probabilities. Please post if you know one.
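
For comparison, the exact value (not just a bound) can be computed from the standard first-passage count: the number of $(2n+1)$-bet sequences starting from \$1 that first hit \$0 on the final bet is the Catalan number $\frac{1}{n+1}\binom{2n}{n}$. A short sketch summing the series:

```python
from math import comb

def broke_at(n):
    """Probability of first hitting $0 exactly on bet 2n+1, starting from $1."""
    catalan = comb(2 * n, n) // (n + 1)  # number of valid win/loss sequences
    return catalan * (3 / 4) ** n * (1 / 4) ** (n + 1)

total = sum(broke_at(n) for n in range(200))
print(total)  # ≈ 1/3, comfortably below the bound of 1
```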