Find Lambda for Simple Walk on Z (Markov Chain)

RoshanBBQ

Homework Statement


Find \lambda = \{\lambda_i\} with \lambda_0 = 1 if P(X_{n+1} = i+1 \mid X_n = i) = p, \; P(X_{n+1} = i-1 \mid X_n = i) = 1 - p, \; i \in Z, \; 0 < p < 1.


Homework Equations


\lambda P = \lambda


The Attempt at a Solution


I used my relevant equation to write out
(1-p)\lambda_0 + p\lambda_{-2} = \lambda_{-1},
and there are infinitely more such equations: just add or subtract k from all three indices on the lambdas. The problem is that every time you add an equation, you add an unknown. I'm a bit stuck here!
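For reference, writing \lambda P = \lambda out componentwise for this walk gives one such equation per state: for every i \in Z,
\lambda_i = p\lambda_{i-1} + (1-p)\lambda_{i+1},
which is the equation above with the indices shifted (the case written out is i = -1).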
 
RoshanBBQ said:

Homework Statement


Find \lambda = \{\lambda_i\} with \lambda_0 = 1 if P(X_{n+1} = i+1 \mid X_n = i) = p, \; P(X_{n+1} = i-1 \mid X_n = i) = 1 - p, \; i \in Z, \; 0 < p < 1.


Homework Equations


\lambda P = \lambda


The Attempt at a Solution


I used my relevant equation to write out
(1-p)\lambda_0 + p\lambda_{-2} = \lambda_{-1},
and there are infinitely more such equations: just add or subtract k from all three indices on the lambdas. The problem is that every time you add an equation, you add an unknown. I'm a bit stuck here!

Use the Balance equations, which follow from \lambda = \lambda P; see http://en.wikipedia.org/wiki/Balance_equation

Basically, you can split the state space into two parts U and V, then say \text{rate}(U \rightarrow V) = \text{rate}(V \rightarrow U), where
\text{rate}(U \rightarrow V) = \sum_{u \in U} \lambda_u P(u \rightarrow V), and where
P(u \rightarrow V) = \sum_{v \in V} p_{u,v}. Note: this is convenient because the only flows that matter are those between U and V; flows inside U (that is, from U to U) or inside V cancel out.

So, if we set U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \} we have \text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p). Thus, \lambda_1 = \rho \lambda_0, where
\rho = \frac{p}{1-p}. Continuing like that you can get all \lambda_n in terms of \lambda_0.
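As a purely illustrative sketch (a truncated walk with reflecting ends, illustrative p = 0.3, computed with NumPy), the cut-rate bookkeeping can be checked numerically on a small finite chain, where the two rates across any cut agree at stationarity:

Code:
import numpy as np

p = 0.3                      # up-step probability (illustrative value)
n = 6                        # states 0..n-1, truncated walk with reflecting ends
P = np.zeros((n, n))
for i in range(n):
    P[i, min(i + 1, n - 1)] += p       # step up (held at the top boundary)
    P[i, max(i - 1, 0)] += 1 - p       # step down (held at the bottom boundary)

# stationary distribution: solve pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.zeros(n + 1); b[-1] = 1.0
pi = np.linalg.lstsq(A, b, rcond=None)[0]

def rate(lam, P, U, V):
    """Probability flow from the set U into the set V under the measure lam."""
    return sum(lam[u] * P[u, v] for u in U for v in V)

U, V = range(0, 3), range(3, n)        # an arbitrary cut of the state space
print(rate(pi, P, U, V), rate(pi, P, V, U))   # the two printed rates agree

The same identity across a cut is what the argument above applies to the infinite walk.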

RGV
 
Ray Vickson said:
Use the Balance equations, which follow from \lambda = \lambda P; see http://en.wikipedia.org/wiki/Balance_equation

Basically, you can split the state space into two parts U and V, then say \text{rate}(U \rightarrow V) = \text{rate}(V \rightarrow U), where
\text{rate}(U \rightarrow V) = \sum_{u \in U} \lambda_u P(u \rightarrow V), and where
P(u \rightarrow V) = \sum_{v \in V} p_{u,v}. Note: this is convenient because the only flows that matter are those between U and V; flows inside U (that is, from U to U) or inside V cancel out.

So, if we set U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \} we have \text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p). Thus, \lambda_1 = \rho \lambda_0, where
\rho = \frac{p}{1-p}. Continuing like that you can get all \lambda_n in terms of \lambda_0.

RGV

Shouldn't
V = \{\pm1, \pm2, \pm3, ...\}
meaning
P(u \rightarrow V) = \sum_{v \in V} p_{u,v}
P(0 \rightarrow V) = \sum_{v \in V} p_{0,v}=p+1-p = 1
Which makes intuitive sense, I think. If you are in state 0, you MUST leave it since you will either walk up to 1 or down to -1. So this is saying the flow out of state 0 into any other state will happen with probability 1.

What do you think?

Continuing on, I then end up with the same equation I wrote in my attempt at the solution. I could be applying something incorrectly, though.
 
RoshanBBQ said:
Shouldn't
V = \{\pm1, \pm2, \pm3, ...\}
meaning
P(u \rightarrow V) = \sum_{v \in V} p_{u,v}
P(0 \rightarrow V) = \sum_{v \in V} p_{0,v}=p+1-p = 1
Which makes intuitive sense, I think. If you are in state 0, you MUST leave it since you will either walk up to 1 or down to -1. So this is saying the flow out of state 0 into any other state will happen with probability 1.

What do you think?

Continuing on, I then end up with the same equation I wrote in my attempt at the solution. I could be applying something incorrectly, though.

OK: I missed the fact that you were in Z, not N. You could take U = \{\ldots, -3, -2, -1, 0\} and V = \{1, 2, 3, \ldots\}. You would still find \lambda_1 = \rho \lambda_0, etc.
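Spelled out (a sketch of that continuation), the same cut placed between n and n+1, i.e. U = \{\ldots, n-1, n\} and V = \{n+1, n+2, \ldots\}, only sees the transitions n \rightarrow n+1 and n+1 \rightarrow n, so the balance condition reads
\lambda_n p = \lambda_{n+1}(1-p), \qquad \text{i.e. } \lambda_{n+1} = \rho \lambda_n \text{ with } \rho = \frac{p}{1-p},
and iterating in both directions from \lambda_0 = 1 gives \lambda_j = \rho^j \lambda_0 for every j \in Z.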

RGV
 
Ray Vickson said:
OK: I missed the fact that you were in Z, not N. You could take U = \{\ldots, -3, -2, -1, 0\} and V = \{1, 2, 3, \ldots\}. You would still find \lambda_1 = \rho \lambda_0, etc.

RGV

This one seems to have done the trick. I got

\lambda_j = \rho^j
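A quick sanity check, substituting this into the stationarity equation \lambda_j = p\lambda_{j-1} + (1-p)\lambda_{j+1}:
p\rho^{j-1} + (1-p)\rho^{j+1} = \rho^j\left(\frac{p}{\rho} + (1-p)\rho\right) = \rho^j\big((1-p) + p\big) = \rho^j.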

This stuff isn't covered in my book, and I can't find much about it on the internet. (Your wiki link doesn't really have much information for a newcomer either!) What are some names by which this technique goes?
 
RoshanBBQ said:
This one seems to have done the trick. I got

\lambda_j = \rho^j

This stuff isn't covered in my book, and I can't find much about it on the internet. (Your wiki link doesn't really have much information for a newcomer either!) What are some names by which this technique goes?

Of course, for state space Z you will find there is NO equilibrium distribution: the solution would have to have \lambda_j = \rho^j \lambda_0 \text{ for all } j \in Z, and the series \sum_{j=-\infty}^{\infty} \rho^j is divergent for any ρ. However, if the state space is N, then for ρ < 1 we have a well-defined equilibrium distribution. This makes good intuitive sense: if ρ > 1 there is a drift to the right in the long run, so the state X(t) → ∞ as t → ∞. If ρ < 1 there is a drift to the left, and in the case of Z that means X(t) → -∞ as t → ∞; but in the case of N the left-drift stops at 0, so between left-drift and right-reflection you end up with a well-defined equilibrium.
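In symbols (the behavior at 0 on N is an assumption here, say the walk holds at 0 with probability 1-p):
\sum_{j=-\infty}^{\infty} \rho^j = \sum_{j \ge 0} \rho^j + \sum_{j \ge 1} \rho^{-j} = \infty \quad \text{for every } \rho > 0,
whereas on N with \rho < 1 the geometric series converges, \sum_{j \ge 0} \rho^j = \frac{1}{1-\rho}, so the measure can be normalized to \pi_j = (1-\rho)\rho^j.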

As to the technique: you can find it in various sources under the general topic of "balance equations", etc. See, e.g., http://www.cs.wayne.edu/~hzhang/courses/7290/Lectures/14 - Review of Markov Chain Theory.pdf pages 21 ... or
http://www.netlab.tkk.fi/opetus/s38143/luennot/E_markov.pdf (which is for continuous-time chains, but the ideas are the same).

Note: there is a difference between "detailed balance" (which works for reversible chains) and "global balance", which works in general.

RGV
 
Ray Vickson, my book says
We say that the chain \{X_n ; n \in [0, \infty)\} is reversible if B = P, which holds if and only if the following detailed balance equation is satisfied: \pi_x P_{xy} = \pi_y P_{yx} for all x, y in the state space E.

It looks like your method is just the sum of a bunch of balance equations. How do I know, however, that this random walk on Z is a reversible Markov chain? In other words, how do I know I can use the balance equations? Further, the balance equations in my book and on Wikipedia are stated in terms of an invariant probability measure \pi, whereas my problem asks simply for the jth invariant measure. The difference is that the random walk on Z, with its infinite state space, intuitively cannot have a nonzero invariant probability measure. It can, however, have an invariant measure for P; all that means is that the sum of these measures diverges, so normalizing them does not give a finite, nonzero invariant probability measure.

Thank you for your time and help!
 
RoshanBBQ said:
Ray Vickson, my book says




It looks like your method is just the sum of a bunch of balance equations. How do I know, however, that this random walk on Z is a reversible Markov chain? In other words, how do I know I can use the balance equations? Further, the balance equations in my book and on Wikipedia are stated in terms of an invariant probability measure \pi, whereas my problem asks simply for the jth invariant measure. The difference is that the random walk on Z, with its infinite state space, intuitively cannot have a nonzero invariant probability measure. It can, however, have an invariant measure for P; all that means is that the sum of these measures diverges, so normalizing them does not give a finite invariant probability measure.

Thank you for your time and help!


I never said the chain was reversible.

Infinite chains are a bit tricky, but the fact is: if an invariant measure exists, it will satisfy the balance equations (NOT the reversibility conditions), so if the balance equations lead to a contradiction, that must mean there is no invariant measure.

RGV
 