Asymmetric Random Walk on the Set of Integers

Homework Statement

Give the value of u_0.

Homework Equations

Let p > q > 0 with p + q = 1 and a = q/p < 1. Let X_n denote the random walk with transitions
$$X_{n+1} = \begin{cases} X_n + 1 & \text{with probability } p, \\ X_n - 1 & \text{with probability } q. \end{cases}$$
For i ≥ 0, we set u_i = P(X_n = 0 for some n ≥ 0 | X_0 = i).

The Attempt at a Solution

So, we have u_0 = P(X_n = 0 for some n ≥ 0 | X_0 = 0) = sum_{n=0}^{∞} (2n choose n)(pq)^n. Is this on the right track? If so, do I need to go further? How would I? Thanks.


I was going to try and point out something wrong, but I see you can edit the above message now. This is my first time. Hope to get some good feedback! :)

Ray Vickson
Homework Helper
Dearly Missed

In a discrete-time Markov chain with transition probability matrix P = (p_{ij}), the probability f_{ij}(n) of first reaching state j at time n, starting from state i at time 0, is given by
$$f_{ij}(1) = p_{ij} \text{ and }f_{ij}(n) = \sum_{k \neq j} p_{ik} f_{kj}(n-1), n \geq 2.$$
In words: in order to reach j for the first time in n>1 steps, we must reach a state k other than j in step 1, then reach j from k for the first time in (n-1) steps. This is just standard textbook stuff.
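As a minimal numerical sketch of this recursion, specialized to the walk above (the truncation levels N and M are my own choices, not part of the problem): with p_{i,i+1} = p and p_{i,i-1} = q, the partial sums of f_i(n) over n should approach u_i.

```python
# Numerical sketch of the first-passage recursion for the walk with
# p_{i,i+1} = p, p_{i,i-1} = q, target state j0 = 0:
#   f_i(1) = p_{i0};  f_i(n) = sum over k != 0 of p_{ik} * f_k(n-1).
p, q = 0.6, 0.4          # p > q, so a = q/p < 1
N = 1000                 # time-horizon truncation (arbitrary choice)
M = 1100                 # state-space truncation (large enough for N steps)

f_prev = [0.0] * (M + 2)
f_prev[1] = q            # f_i(1) = p_{i0}: only state 1 can hit 0 in one step
total = list(f_prev)     # running partial sums: total[i] approaches u_i

for n in range(2, N + 1):
    f_next = [0.0] * (M + 2)
    for i in range(1, M + 1):
        f_next[i] = p * f_prev[i + 1]       # step to i+1, then first-hit in n-1 steps
        if i >= 2:
            f_next[i] += q * f_prev[i - 1]  # k = i-1 is excluded when i = 1 (k would be 0)
        total[i] += f_next[i]
    f_prev = f_next

a = q / p
print(total[1], a)       # partial sum for u_1 vs. the value a = q/p
print(total[2], a ** 2)  # partial sum for u_2 vs. a^2
```

The tail of the first-passage distribution decays geometrically (at rate 4pq < 1 when p ≠ q), so the truncated sums are already very close to a and a^2.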

RGV

Thanks. I understand this. I guess the question was just really austerely written. Plus, I'm new to stochastic processes and it's been a while since I've taken probability. I'm wondering if you can walk me through a few more.

By conditioning according to the possible values of X_1 (first-step analysis), prove that

u_i = (p)(u_{i+1}) + (q)(u_{i-1})
and
u_{i+1} - u_i = a(u_i - u_{i-1}) = a^{i}(u_1 - u_0).

I see how these equations "work," but I'm having trouble getting started on a *proof.*

Next, deduce that

u_i - u_0 = ((1 - a^i)/(1 - a))(u_1 - u_0).

I don't know how to start this.

Next, using the fact that lim_{i→∞}u_i = 0, deduce that u_1 = a.

I don't know how to start this.

Finally, conclude that u_i = a^i.

Again, I don't know how to start this.

I don't want answers, just help in thinking through these. Thanks!

Ray Vickson
Homework Helper
Dearly Missed
It is very hard to offer help, since I have no idea of your background, the level of rigour you work at, or what you already know and are allowed to use. I will offer a few hints, however.

Given a Markov chain with transition probability matrix P, suppose we are interested in returns to a specific state j0, so let f_i(n) be what I previously called f_{i,j0}(n); that is, we always look at the final state j0, so we can drop it from the notation. (j0 = 0 in your case.) Let the event E_i(n) = {first visit to j0, starting from state i, occurs at step n}, which can be written as E_i(n) = union {X_0 = i, X_1 = k_1, X_2 = k_2, ..., X_{n-1} = k_{n-1}, X_n = j0}, where the union is over all k_1, k_2, ..., k_{n-1} that are not equal to j0. The probability is
f_i(n) = P(E_i(n) | X_0 = i) = sum P(i,k_1)*P(k_1,k_2)*...*P(k_{n-2},k_{n-1})*P(k_{n-1},j0),
where the sum is over all the relevant k_m. You can use the Markov property to demonstrate that
f_i(n) = sum{P(i,k)*f_k(n-1) : k ≠ j0}, as I wrote before, with f_i(1) = P(i,j0). Note that even though the state space may be infinite, all sums in the above are finite.

Now let v_i = sum{f_i(n): n=1..infinity}; this is the probability that the chain ever visits state j0, starting from state i. This is what you called u_i in your problem. Anyway, using the above equation for f_i(n) and summing from n = 2 to infinity we get
v_i - P(i,j0) = sum{P(i,k)*v_k : k ≠ j0}.

That gives your recursive equation for u_i: u_i = q*u_{i-1} + p*u_{i+1}, with u_1 = q + p*u_2. Now just try out the suggested form of solution.
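Trying out the suggested form can be done by hand, but a quick numerical check is also a useful sanity test (an illustrative sketch; the specific p, q values are arbitrary):

```python
# Check that u_i = a^i with a = q/p satisfies the recursion
# u_i = q*u_{i-1} + p*u_{i+1} and the boundary equation u_1 = q + p*u_2.
p, q = 0.7, 0.3
a = q / p

def u(i):
    return a ** i

for i in range(1, 20):
    # q*a^(i-1) + p*a^(i+1) = a^i * (q/a + p*a) = a^i * (p + q) = a^i
    assert abs(u(i) - (q * u(i - 1) + p * u(i + 1))) < 1e-12
# q + p*a^2 = q + q^2/p = q*(p + q)/p = q/p = a = u(1)
assert abs(u(1) - (q + p * u(2))) < 1e-12
print("u_i = a^i satisfies the recursion and the boundary equation")
```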

RGV

Starting from the recursion u_i = p*u_{i+1} + q*u_{i-1} and using p + q = 1:
(p + q)u_i = p*u_{i+1} + q*u_{i-1}
p(u_i - u_{i+1}) = q(u_{i-1} - u_i)
u_{i+1} - u_i = (q/p)(u_i - u_{i-1})
u_{i+1} - u_i = a(u_i - u_{i-1}),
and by mathematical induction it's easy to get:
u_{i+1} - u_i = a^i(u_1 - u_0).

But it's the second step that's still giving me trouble. What you've said isn't helping me. Sorry to be dense.

vela
Staff Emeritus
Homework Helper
Hint: u_3 - u_0 = (u_3 - u_2) + (u_2 - u_1) + (u_1 - u_0)
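Written out for general i, the hint telescopes into a finite geometric sum, using the relation u_{k+1} - u_k = a^k(u_1 - u_0) derived earlier:

$$u_i - u_0 = \sum_{k=0}^{i-1}(u_{k+1} - u_k) = \sum_{k=0}^{i-1} a^k (u_1 - u_0) = \frac{1 - a^i}{1 - a}(u_1 - u_0).$$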

Ray Vickson
Homework Helper
Dearly Missed
You are doing things the hard way. Let's start again: we have the recursions
u_i = q*u_{i-1} + p*u_{i+1}, i > 1
u_1 = q + p*u_2.
Also: we need 0 ≤ u_i ≤ 1, since u_i is a probability.

There is one more bit of input we need to supply, and it can be done either using intuition or using some previously proven properties of the system. Here is the issue: what do we expect to see if p > q? In this case the system tends to "drift to the right", since p > 1/2 means that the move i -> i+1 has a greater than 50% chance. We might expect, therefore, that there is a chance the system never hits 0, starting from i ≥ 1. Let us assume that, so u_{i_0} = f < 1 for some i_0 > 1. (That is where intuition comes into play.) In order to hit 0 from 2i_0, we must first hit i_0, then go from i_0 to 0. However, hitting i_0 from 2i_0 is the same as hitting 0 from i_0, so u_{2i_0} = u_{i_0}*u_{i_0} = f^2. In general, u_{ki_0} = f^k → 0 as k → ∞; since u_i is nonincreasing in i (to reach 0 from i+1 the walk must first pass through i), this forces u_i → 0 as i → ∞.

At this point we just have an algebra problem involving the solution of a recursion, together with some bounds and boundary conditions. Now there are two ways to go: (1) guess the form of solution; or (2) use a systematic approach via generating functions. Let's leave (2) for another time, and go with (1): we guess a solution of the form u_i = r^i. The recursion implies r^i = q*r^(i-1) + p*r^(i+1), or r = q + p*r^2. If p ≠ q this quadratic equation has two roots: r1 = 1 and r2 = q/p (remembering that q = 1-p). So, we have
u_i = A + B*(q/p)^i.
(i) If q > p, what must be the value of B? Why? Using both the recursion and the formula for u_1, what does that say about A?
(ii) If q < p the algebra alone does not supply enough information to give a unique solution, but we have already obtained the result u_i → 0 for large i. For u_i = A + B*(q/p)^i, what does this behavior say about A? Now, what is the value of B?
(iii) p = q = 1/2. This is a "boundary" case. Now we have a recursion in which r = 1 is a double root of the quadratic, and it is known that when a root r is double, both r^i and i*r^i satisfy the recursion. So, we have u_i = A + B*i. Using the conditions on u_i, you can easily determine A and B.
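For the p > q case, the conclusion u_i = a^i can also be checked by simulation. A rough Monte Carlo sketch (my own illustration; the escape threshold treats reaching a high level as "never returns", which is justified because the return probability from that level is a^60 ≈ 3e-11):

```python
# Monte Carlo estimate of u_i = P(ever hit 0 | X_0 = i) for p > q,
# compared against the predicted value a^i with a = q/p.
import random

random.seed(1)
p, q = 0.6, 0.4
a = q / p
ESCAPE = 60        # treat reaching this level as escape: a^60 is negligible
TRIALS = 5000

def hits_zero(i):
    x = i
    while 0 < x < ESCAPE:
        x += 1 if random.random() < p else -1
    return x == 0

for i in (1, 2, 3):
    est = sum(hits_zero(i) for _ in range(TRIALS)) / TRIALS
    print(f"u_{i}: simulated {est:.3f}, predicted {a ** i:.3f}")
```

With 5000 trials the standard error is under 0.01, so the simulated values should land close to a, a^2, and a^3.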

RGV
