
In the response to this question (question and response reproduced below), Did labels the probability that we eventually reach $0$ starting from $1$ as $r$. This is distinct from the probability that, given we are at $1$, the next step takes us to $0$, which is $p = \tfrac13$.

To my understanding, Did's logic is as follows: the probability that we eventually reach $0$ from $2$ equals the probability that we eventually reach $1$ from $2$, namely $r$, multiplied by the probability that we then reach $0$ from $1$, also $r$, giving $r^2$.
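As a concrete sanity check on this claim, here is a minimal Monte Carlo sketch (an illustration under stated assumptions, not part of the original thread): it estimates the probability of eventually hitting $0$ starting from $1$ and from $2$. Because the walk drifts upward, each run is truncated at a step cap, so the numbers are only approximations; the function names and parameters are chosen for illustration.

```python
import random

# Illustrative Monte Carlo sketch: estimate the probability of eventually
# reaching 0 for the walk that steps +1 with probability 2/3 and -1 with
# probability 1/3.  Runs are truncated at max_steps, so walks that drift
# upward forever are counted as "never reached 0" (a good approximation).

def hits_zero(start, p_up=2/3, max_steps=500):
    pos = start
    for _ in range(max_steps):
        if pos == 0:
            return True
        pos += 1 if random.random() < p_up else -1
    return pos == 0

def estimate(start, trials=20_000):
    return sum(hits_zero(start) for _ in range(trials)) / trials

if __name__ == "__main__":
    p1 = estimate(1)   # theory: r = 1/2
    p2 = estimate(2)   # theory: r^2 = 1/4
    print(f"from 1: {p1:.3f}   from 2: {p2:.3f}   p1^2: {p1**2:.3f}")
```

The estimate from $2$ should come out close to the square of the estimate from $1$ (roughly $0.25$ versus $0.5^2$), which is exactly the product Did's argument produces.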

My question is: Why do we multiply these probabilities? Normally we multiply probabilities when the events are independent.

Is the following argument for their independence correct?

  • Event 1: starting from $1$, we eventually reach $0$.
  • Event 2: starting from $2$, we eventually reach $1$.

Suppose event $1$ does not occur. This gives us no information about event $2$, since the two events start at different locations. Therefore, knowing that, starting from $1$, we never reach $0$ tells us nothing about whether, starting from $2$, we will ever reach $1$. As a result, the events are independent and we can multiply their probabilities.

A potential counterargument to independence: suppose we are looking at event $1$. One possibility is that we go from $1$ to $2$. Then we are essentially in the situation of event $2$, since we must get back to $1$ before we can reach $0$. In that case, wouldn't knowing the outcome of event $2$ give us some information about the outcome of event $1$?

Which thought process is correct?


Original Question: Let's say we start at point $1$. At each step you have, say, a $\tfrac23$ chance of increasing your position by $1$ and a $\tfrac13$ chance of decreasing it by $1$. The walk ends when you reach $0$.

The question: what is the probability that you will eventually reach $0$?

Partial Answer: One asks for the probability $r$, starting at $1$, of eventually reaching $0$. The dynamics are invariant under translation, hence $r$ is also the probability, starting at $2$, of eventually reaching $1$. Consider the first step of a random walk starting at $1$. Either the first step is to $0$, in which case one has hit $0$. Or the first step is to $2$; then, to hit $0$, one must first hit $1$ starting from $2$ and, after that, hit $0$ starting from $1$. This yields the equation $r=\frac13+\frac23r^2$, whose solutions are $r=1$ and $r=\frac12$.
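For completeness, the algebra behind the last sentence (spelled out here; it is not in the original answer):
$$r=\tfrac13+\tfrac23 r^2 \;\Longleftrightarrow\; 2r^2-3r+1=0 \;\Longleftrightarrow\; (2r-1)(r-1)=0,$$
so the solutions are $r=\tfrac12$ and $r=1$. Since the hitting probability is the smallest nonnegative solution of such an equation, the relevant value for this upward-drifting walk is $r=\tfrac12$.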

  • Do you know about the strong Markov property? (There are a lot of nice intuitive explanations, but the only formal explanation I can think of requires referencing the strong Markov property.) – Brian Moehring Mar 07 '24 at 23:03
  • When events are not independent we can still multiply conditional probabilities. If $A$ is the event that we are at 1 at some future time and at 0 some time later, and $B$ is the event that we are at 2 now and at 1 at some future time, then the probability we get to 0 from 2 is $P(A\cap B) = P(A\mid B) P(B).$ In this case $P(A\mid B) = P(B) = r.$ – David K Mar 07 '24 at 23:07
  • @BrianMoehring I don't, but would appreciate any intuitive explanation. – mk0219 Mar 09 '24 at 00:02
  • @DavidK Thank you for the reply. So you are saying that $A$ and $B$ are not independent? Do you mind explaining why $P(A\mid B) = P(B)$? – mk0219 Mar 09 '24 at 00:04
  • If $A$ is the event that we travel from $1$ to $0$ and $B$ the event we travel from $2$ to $1$ and we start at $2$, it's not possible for $A$ to occur without $B$. To be more thorough, we might define $C$ as the event that we start at $2$, and $P(B\mid C) = r$. The reason this is the same as $P(A\mid B)$ has to be deduced from the symmetry of the random walk: any sequence of left/right moves is equally likely starting at $1$ as at any other number. Now if we set $P(C)=1$ (we definitely start at $2$), then $P(A\cap B\mid C)$, which is the $r^2$ in Did's answer, is $P(A\mid B)P(B\mid C)$. – David K Mar 09 '24 at 04:06
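The identity $P(A\mid B)=r$ in the last comment can also be checked numerically. The sketch below (an illustration, not part of the thread; names and the step cap are arbitrary) starts every run at $2$, records whether it reaches $1$ (event $B$) and whether it subsequently reaches $0$ (event $A\cap B$), and reports the conditional frequency.

```python
import random

# Illustrative check of P(A | B) = r for the walk started at 2.
# B        = "the walk reaches 1 within the step cap"
# A and B  = "after first reaching 1, the walk goes on to reach 0"
# Runs are truncated at max_steps, so the frequencies are approximate.

def run_once(p_up=2/3, max_steps=500):
    pos, reached_one, reached_zero = 2, False, False
    for _ in range(max_steps):
        pos += 1 if random.random() < p_up else -1
        if pos == 1:
            reached_one = True
        if pos == 0:           # from 2, the walk must pass through 1 first
            reached_zero = True
            break
    return reached_one, reached_zero

def conditional_estimate(trials=20_000):
    b_count = ab_count = 0
    for _ in range(trials):
        hit_b, hit_ab = run_once()
        b_count += hit_b
        ab_count += hit_ab
    return ab_count / b_count  # estimate of P(A | B); theory gives r = 1/2

if __name__ == "__main__":
    print(f"P(A | B) ~ {conditional_estimate():.3f}  (compare r = 0.5)")
```

Conditioning on $B$ and still getting $r$ is the strong Markov property at work: once the walk first hits $1$, its future is a fresh copy of the walk started at $1$, independent of how it got there.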
