Find Lambda for Simple Walk on Z (Markov Chain)


Homework Help Overview

The discussion revolves around finding an invariant measure \(\lambda = \{\lambda_i\}\) for a simple random walk on the integers \(\mathbb{Z}\), where each step moves to the adjacent state above with probability \(p\) and below with probability \(1-p\). The original poster presents the balance equation \(\lambda P = \lambda\) and expresses difficulty managing the infinitely many equations arising from the setup.

Discussion Character

  • Exploratory, Assumption checking, Mathematical reasoning

Approaches and Questions Raised

  • Participants discuss the formulation of balance equations and the implications of splitting the state space into subsets. There are inquiries about the nature of the random walk, particularly regarding reversibility and the existence of an invariant probability measure. Some participants express confusion about the infinite nature of the state space and its impact on the solution.

Discussion Status

The conversation is ongoing, with various interpretations of the balance equations being explored. Some participants have suggested specific forms for \(\lambda_j\) in terms of \(\lambda_0\) and have noted the divergence of certain series. There is no explicit consensus yet, but productive avenues for exploration have been identified.

Contextual Notes

Participants note that the random walk is defined on \(\mathbb{Z}\), which raises questions about the existence of a well-defined equilibrium distribution due to the infinite state space. The discussion also touches on the difference between detailed and global balance equations.

RoshanBBQ

Homework Statement


Find [tex]\lambda= \{\lambda_i\}[/tex] with [tex]\lambda_0 = 1[/tex] if [tex]P(X_{n+1}=i+1 \mid X_n=i) = p,\; P(X_{n+1} = i-1 \mid X_n = i) = 1 - p,\; i \in \mathbb{Z},\; 0<p<1[/tex]


Homework Equations


[tex]\lambda P = \lambda[/tex]


The Attempt at a Solution


I used my relevant equation to write out:
[tex](1-p)\lambda_0 + p\lambda_{-2} = \lambda_{-1}[/tex]
and there are infinitely many more such equations: just add or subtract k from all three indices on the lambdas. The problem is that every time you add an equation, you add an unknown. I'm a bit stuck here!
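The complaint above can be made concrete. The following is an editor's sketch (not part of the thread, with numpy and an arbitrary p): truncating [tex]\lambda P = \lambda[/tex] to finitely many states always leaves two more unknowns than equations, matching the fact that the underlying recursion is second order.

```python
import numpy as np

# Editor's sketch (not from the thread): the balance equations
#     lambda_j = p*lambda_{j-1} + (1-p)*lambda_{j+1},   j = -n, ..., n
# give 2n+1 equations in the 2n+3 unknowns lambda_{-n-1}, ..., lambda_{n+1},
# so any finite truncation is underdetermined by exactly two dimensions.
p, n = 0.3, 5
rows, cols = 2 * n + 1, 2 * n + 3

A = np.zeros((rows, cols))
for r, j in enumerate(range(-n, n + 1)):
    c = j + n + 1              # column index of lambda_j
    A[r, c - 1] += p           # p * lambda_{j-1}
    A[r, c] -= 1.0             # - lambda_j
    A[r, c + 1] += 1.0 - p     # (1-p) * lambda_{j+1}

rank = np.linalg.matrix_rank(A)
print(rows, cols, rank)        # 11 equations, 13 unknowns, rank 11
```

Each new row has a nonzero entry in a column no earlier row touches, so the rows are independent and the solution space is always two-dimensional, no matter how large n is.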
 
RoshanBBQ said:
The problem is every time you add an equation, you add an unknown. I'm a bit stuck here!

Use the balance equations, which follow from [itex]\lambda = \lambda P[/itex]; see http://en.wikipedia.org/wiki/Balance_equation

Basically, you can split the state space into two parts U and V, then set rate(U → V) = rate(V → U), where
[tex]\text{rate}(U \rightarrow V) = \sum_{u \in U} \lambda_u P(u \rightarrow V), \qquad P(u \rightarrow V) = \sum_{v \in V} p_{u,v}.[/tex]
Note: this is convenient because the only flows that matter are those between U and V; flows inside U (that is, from U to U) or inside V cancel out.

So, if we set [itex]U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \}[/itex] we have [itex]\text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p).[/itex] Thus, [itex]\lambda_1 = \rho \lambda_0,[/itex] where
[tex]\rho = \frac{p}{1-p}.[/tex] Continuing like that you can get all [itex]\lambda_n[/itex] in terms of [itex]\lambda_0.[/itex]

RGV
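The cut argument above can be checked numerically. This is an editor's sketch (not part of the thread; p = 0.3 is an arbitrary choice):

```python
import math

# Sanity check (editor's sketch, not from the thread): verify numerically that
# lambda_j = rho**j satisfies the global balance equation
#     lambda_j = p * lambda_{j-1} + (1 - p) * lambda_{j+1}
# and the cut balance lambda_0 * p = lambda_1 * (1 - p), for an arbitrary p.
p = 0.3
rho = p / (1 - p)

def lam(j):
    return rho ** j  # candidate invariant measure, normalized so lambda_0 = 1

for j in range(-10, 11):
    lhs = lam(j)
    rhs = p * lam(j - 1) + (1 - p) * lam(j + 1)
    assert math.isclose(lhs, rhs, rel_tol=1e-12), (j, lhs, rhs)

# Cut balance across the boundary between states 0 and 1:
assert math.isclose(lam(0) * p, lam(1) * (1 - p), rel_tol=1e-12)
print("balance equations hold for lambda_j = rho**j")
```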
 
Ray Vickson said:
So, if we set [itex]U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \}[/itex] we have [itex]\text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p).[/itex]

Shouldn't
[tex]V = \{\pm1, \pm2, \pm3, \ldots\}[/tex]
meaning
[tex]P(u \rightarrow V) = \sum_{v \in V} p_{u,v}[/tex]
[tex]P(0 \rightarrow V) = \sum_{v \in V} p_{0,v}=p+1-p = 1[/tex]
Which makes intuitive sense, I think. If you are in state 0, you MUST leave it since you will either walk up to 1 or down to -1. So this is saying the flow out of state 0 into any other state will happen with probability 1.

What do you think?

Continuing on, I then end up with the same equation I wrote in my attempt at the solution. I could be applying something incorrectly, though.
 
RoshanBBQ said:
Shouldn't
[tex]V = \{\pm1, \pm2, \pm3, \ldots\}[/tex]

OK: I missed the fact that you were in Z, not N. You could take U = {..., -3, -2, -1, 0} and V = {1, 2, 3, ...}. You would still find [itex]\lambda_1 = \rho \lambda_0[/itex], etc.

RGV
 
Ray Vickson said:
You could take U = {..., -3, -2, -1, 0} and V = {1, 2, 3, ...}.

This one seems to have done the trick. I got

[tex]\lambda_j = \rho^j[/tex]

This stuff isn't covered in my book, and I can't find anything on the internet about it (your wiki link doesn't really have much information for a newcomer either!). What are some names by which this technique goes?
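Filling in the induction behind this result (editor's note, consistent with the cut argument used in the thread): the cut between [itex]\{\ldots, j-1, j\}[/itex] and [itex]\{j+1, j+2, \ldots\}[/itex] gives, for every [itex]j \in Z[/itex],
[tex]\lambda_j\, p = \lambda_{j+1}(1-p) \quad\Longrightarrow\quad \lambda_{j+1} = \rho\, \lambda_j, \qquad \rho = \frac{p}{1-p},[/tex]
and iterating in both directions from [itex]\lambda_0 = 1[/itex] gives [itex]\lambda_j = \rho^j[/itex] for all [itex]j \in Z[/itex] (negative j included, via [itex]\lambda_{j-1} = \lambda_j / \rho[/itex]).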
 
RoshanBBQ said:
What are some names by which this technique goes?

Of course, for state space Z you will find there is NO equilibrium distribution: the solution would have to have [itex]\lambda_j = \rho^j \lambda_0 \text{ for all } j \in Z,[/itex] and the series [itex]\sum_{j=-\infty}^{\infty} \rho^j[/itex] is divergent for any ρ. However, if the state space is N, then for ρ < 1 we have a well-defined equilibrium distribution. This makes good intuitive sense: if ρ > 1 there is a drift to the right in the long run, so the state X(t) → ∞ as t → ∞. If ρ < 1 there is a drift to the left, and in the case of Z that means X(t) → -∞ as t → ∞; but in the case of N the left drift stops at 0, so between left drift and reflection at 0 you end up with a well-defined equilibrium.
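This dichotomy is easy to illustrate numerically (editor's sketch, not from the thread; ρ = 1/2 is an arbitrary choice):

```python
# Editor's sketch: partial sums of rho**j on N versus on Z.
rho = 0.5  # arbitrary rho in (0, 1)

# On N = {0, 1, 2, ...}: the geometric series converges to 1/(1 - rho),
# so pi_j = (1 - rho) * rho**j is a genuine equilibrium distribution.
total_N = sum(rho ** j for j in range(200))
assert abs(total_N - 1 / (1 - rho)) < 1e-9

# On Z: the terms rho**j blow up as j -> -infinity (or as j -> +infinity
# when rho > 1), so the sum diverges for every rho > 0.
total_Z = sum(rho ** j for j in range(-200, 201))
print(f"sum over N ~ {total_N:.6f}, symmetric partial sum over Z ~ {total_Z:.3e}")
```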

As to the technique: you can find it in various sources under the general topic of "balance equations". See, e.g., http://www.cs.wayne.edu/~hzhang/courses/7290/Lectures/14 - Review of Markov Chain Theory.pdf (pages 21 ff.) or
http://www.netlab.tkk.fi/opetus/s38143/luennot/E_markov.pdf (which is for continuous-time chains, but the ideas are the same).

Note: there is a difference between "detailed balance" (which works for reversible chains) and "global balance", which works in general.

RGV
 
Ray Vickson, my book says:
We say that the chain [itex]\{X_n;\, n \ge 0\}[/itex] is reversible if B = P, which holds if and only if the following detailed balance equation is satisfied: [itex]\pi_x P_{xy} = \pi_y P_{yx}[/itex] for all x, y in the state space E.

It looks like your method is just the sum of a bunch of balance equations. How do I know, however, that this random walk on Z is a reversible Markov chain? In other words, how do I know I can use the balance equations? Further, the balance equations in my book and on Wikipedia use invariant probability measures π, whereas my problem asks simply for the jth invariant measure. The difference is that the random walk on Z is recurrent, so intuitively it cannot have a nonzero invariant probability measure, because of its infinite state space. It can, however, have an invariant measure; all that means is that the sum of the measures diverges, so normalizing them does not give a finite, nonzero invariant probability measure.

Thank you for your time and help!
 
RoshanBBQ said:
How do I know, however, that this random walk on Z is a reversible Markov chain? In other words, how do I know I can use the balance equations?

I never said the chain was reversible.

Infinite chains are a bit tricky, but the fact is: if an invariant measure exists, it will satisfy the balance equations (NOT the reversibility conditions), so if the balance equations lead to a contradiction, that must mean there is no invariant measure.

RGV
 
