Find Lambda for Simple Walk on Z (Markov Chain)

In summary, the conversation discusses finding an invariant measure \lambda = \{\lambda_i\} with \lambda_0 = 1, given a set of equations and conditions. The technique used is known as balance equations: the state space is split into two parts and the rates of flow between them are equated. The resulting measure is \lambda_j = \rho^j with \rho = p/(1-p), but it yields an equilibrium distribution only for the state space N (with \rho < 1); on Z the measure cannot be normalized, so there is no equilibrium distribution. The technique can be found under the topic of "balance equations" in various sources.
  • #1
RoshanBBQ

Homework Statement


Find [tex]\lambda= \{\lambda_i\}[/tex] with [tex]\lambda_0 = 1[/tex] if [tex]P(X_{n+1}=i+1 \mid X_n=i) = p, \quad P(X_{n+1} = i-1 \mid X_n = i) = 1 - p, \quad i \in Z, \quad 0<p<1.[/tex]
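For concreteness, here is a minimal simulation sketch of this transition rule (the values p = 0.6 and 20 steps are arbitrary choices for illustration, not part of the problem):

[code]
import random

# Simulate the simple walk on Z: up with probability p, down with 1 - p.
# p = 0.6 and n_steps = 20 are illustrative choices only.
p = 0.6
n_steps = 20

x = 0  # start at state 0
path = [x]
for _ in range(n_steps):
    x += 1 if random.random() < p else -1
    path.append(x)
print(path)
[/code]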


Homework Equations


[tex]\lambda P = \lambda[/tex]


The Attempt at a Solution


I used my relevant equation to write out
[tex] (1-p)\lambda_0 + p\lambda_{-2} = \lambda_{-1},[/tex]
and there are infinitely many more such equations: shifting all three λ indices by any integer k gives another one. The problem is that every added equation also adds a new unknown. I'm a bit stuck here!
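Written out in general, the component form of the relevant equation [itex]\lambda P = \lambda[/itex] is (a routine rewriting, stated here for reference)
[tex] \lambda_j = p\,\lambda_{j-1} + (1-p)\,\lambda_{j+1}, \qquad j \in Z,[/tex]
and the equation displayed above is the case j = -1. Each state contributes one such equation, and each new equation brings in one new unknown, exactly as described.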
 
  • #2
Use the balance equations, which follow from [itex] \lambda = \lambda P[/itex]; see http://en.wikipedia.org/wiki/Balance_equation

Basically, you can split up the state space into two parts U and V, then say rate(U->V) = rate(V->U), where
[tex] \text{rate}(U \rightarrow V) = \sum_{u \in U} \lambda_u P(u \rightarrow V), [/tex] where
[tex] P(u \rightarrow V) = \sum_{v \in V} p_{u,v}.[/tex] Note: this is convenient, because the only flows that matter are between U and V; flows inside U (that is, from U to U) or inside V cancel out.
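Here is a small numerical sketch of these two formulas (the 4-state chain and its transition matrix below are made up purely for illustration):

[code]
import numpy as np

# Made-up 4-state birth-death chain, used only to illustrate the formulas.
P = np.array([
    [0.5, 0.5, 0.0, 0.0],
    [0.3, 0.0, 0.7, 0.0],
    [0.0, 0.4, 0.0, 0.6],
    [0.0, 0.0, 0.2, 0.8],
])

# Invariant measure: left eigenvector of P for eigenvalue 1,
# normalized so that lambda_0 = 1.
w, vecs = np.linalg.eig(P.T)
lam = np.real(vecs[:, np.argmin(np.abs(w - 1.0))])
lam = lam / lam[0]

def rate(A, B):
    """Probability flow from the set A to the set B under the measure lam."""
    return sum(lam[u] * P[u, v] for u in A for v in B)

U, V = [0, 1], [2, 3]
print(rate(U, V), rate(V, U))  # equal, since lam satisfies lam P = lam
[/code]

The internal U-to-U and V-to-V flows never enter the computation, which is exactly the cancellation noted above.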

So, if we set [itex] U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \}[/itex] we have [itex] \text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p).[/itex] Thus, [itex] \lambda_1 = \rho \lambda_0, [/itex] where
[tex] \rho = \frac{p}{1-p}.[/tex] Continuing like that you can get all [itex] \lambda_n [/itex] in terms of [itex] \lambda_0.[/itex]

RGV
 
  • #3
Shouldn't
[tex]V = \{\pm1, \pm2, \pm3, \ldots\}[/tex]
meaning
[tex] P(u \rightarrow V) = \sum_{v \in V} p_{u,v}[/tex]
[tex] P(0 \rightarrow V) = \sum_{v \in V} p_{0,v}=p+1-p = 1[/tex]
Which makes intuitive sense, I think. If you are in state 0, you MUST leave it, since you will either walk up to 1 or down to -1. So this is saying the flow out of state 0 into the set of all other states happens with probability 1.

What do you think?

Continuing on, I then end up with the same equation I wrote in my attempt at the solution. I could be applying something incorrectly, though.
 
  • #4
OK: I missed the fact that you were in Z, not N. You could take [itex]U = \{\ldots, -3, -2, -1, 0\}[/itex] and [itex]V = \{1, 2, 3, \ldots\}[/itex]. You would still find [itex]\lambda_1 = \rho \lambda_0[/itex], etc.
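Making the "etc." explicit (a routine generalization of this cut): cutting between j and j+1, i.e. taking [itex]U = \{\ldots, j-1, j\}[/itex] and [itex]V = \{j+1, j+2, \ldots\}[/itex], the only flows across the cut are the up-step from j and the down-step from j+1, so
[tex] \lambda_j \, p = \lambda_{j+1}(1-p), \qquad \text{i.e.} \qquad \lambda_{j+1} = \rho\, \lambda_j. [/tex]
Running this recursion in both directions from [itex]\lambda_0 = 1[/itex] gives [itex]\lambda_j = \rho^j[/itex] for all j in Z.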

RGV
 
  • #5
This one seems to have done the trick. I got
[tex]\lambda_j = \rho^j.[/tex]
This stuff isn't covered in my book, and I can't find much about it on the internet (your wiki link doesn't really have much information for a newcomer either!). What are some names by which this technique goes?
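As a quick numerical sanity check of this answer (the value p = 0.3 and the window of states are arbitrary illustrative choices), the measure [itex]\lambda_j = \rho^j[/itex] should satisfy [itex]\lambda_j = p\lambda_{j-1} + (1-p)\lambda_{j+1}[/itex] at every state:

[code]
# Check lambda_j = p*lambda_{j-1} + (1-p)*lambda_{j+1} for lambda_j = rho**j.
# p = 0.3 and the window [-10, 10] are illustrative choices only.
p = 0.3
rho = p / (1 - p)

def lam(j):
    return rho**j  # candidate invariant measure, lam(0) = 1

for j in range(-10, 11):
    lhs = lam(j)
    rhs = p * lam(j - 1) + (1 - p) * lam(j + 1)
    assert abs(lhs - rhs) <= 1e-9 * max(1.0, lhs)
print("balance holds on the window")
[/code]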
 
  • #6
Of course, for state space Z you will find there is NO equilibrium distribution: the solution would have to have [itex] \lambda_j = \rho^j \lambda_0 \text{ for all } j \in Z, [/itex] and the series [itex] \sum_{j=-\infty}^{\infty} \rho^j [/itex] is divergent for every ρ > 0. However, if the state space is N, then for ρ < 1 we have a well-defined equilibrium distribution. This makes good intuitive sense: if ρ > 1 there is a drift to the right in the long run, so the state X(t) → ∞ as t → ∞. If ρ < 1 there is a drift to the left, and in the case of Z that means X(t) → -∞ as t → ∞; but in the case of N the left-drift stops at 0, so between left-drift and right-reflection you end up with a well-defined equilibrium.
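Concretely (a routine expansion of the divergence claim): splitting the sum,
[tex] \sum_{j=-\infty}^{\infty} \rho^j = \sum_{j=1}^{\infty} \rho^{-j} + 1 + \sum_{j=1}^{\infty} \rho^{j}, [/tex]
and for every ρ > 0 at least one tail diverges: the left tail if ρ < 1, the right tail if ρ > 1, and both if ρ = 1 (every term equals 1). On the state space N with ρ < 1, by contrast,
[tex] \sum_{j=0}^{\infty} \rho^j = \frac{1}{1-\rho} < \infty, [/tex]
so the normalized equilibrium distribution is [itex]\pi_j = (1-\rho)\rho^j[/itex].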

As to the technique: you can find it in various sources under the general topic of "balance equations", etc. See, e.g., http://www.cs.wayne.edu/~hzhang/courses/7290/Lectures/14 - Review of Markov Chain Theory.pdf pages 21 ... or
http://www.netlab.tkk.fi/opetus/s38143/luennot/E_markov.pdf (which is for continuous-time chains, but the ideas are the same).

Note: there is a difference between "detailed balance" (which works for reversible chains) and "global balance", which works in general.
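For reference, the two conditions side by side (standard definitions, with π an invariant measure):
[tex] \text{detailed balance:} \quad \pi_x p_{x,y} = \pi_y p_{y,x} \ \ \text{for all } x, y, [/tex]
[tex] \text{global balance:} \quad \pi_y = \sum_x \pi_x p_{x,y} \ \ \text{for all } y. [/tex]
Summing the detailed-balance equations over x recovers global balance, but the converse fails in general.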

RGV
 
  • #7
Ray Vickson, my book says:

We say that the chain [itex]\{X_n;\ n \in [0,\infty)\}[/itex] is reversible if B = P, which holds if and only if the following detailed balance equation is satisfied: [itex]\pi_x P_{x,y} = \pi_y P_{y,x}[/itex] for all x, y in the state space E.

It looks like your method is just the sum of a bunch of balance equations. How do I know, however, that this random walk on Z is a reversible Markov chain? In other words, how do I know I can use the balance equations? Further, the balance equations in my book and on Wikipedia are stated for an invariant probability measure π, whereas my problem asks simply for the jth component of an invariant measure. The difference is that the random walk on Z is not positive recurrent, so it cannot have an invariant probability measure on its infinite state space. It can, however, have a jth invariant measure for P: all that means is that the sum of the measures diverges, so normalizing them cannot produce a finite, nonzero invariant probability measure.

Thank you for your time and help!
 
  • #8
I never said the chain was reversible.

Infinite chains are a bit tricky, but the fact is: if an invariant measure exists, it will satisfy the balance equations (NOT the reversibility conditions), so if the balance equations lead to a contradiction, that must mean there is no invariant measure.
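For completeness, here is why the cut balance follows from [itex]\lambda = \lambda P[/itex] alone, with no reversibility assumption (a standard derivation; all terms are nonnegative, so the interchanges of sums are safe). Summing the stationarity equations over u in U and splitting the sum over initial states into U and V,
[tex] \sum_{u \in U} \lambda_u = \sum_{u, u' \in U} \lambda_{u'}\, p_{u',u} + \sum_{v \in V} \lambda_v \sum_{u \in U} p_{v,u}. [/tex]
Since each row of P sums to 1, the left side equals
[tex] \sum_{u \in U} \lambda_u \Big( \sum_{u' \in U} p_{u,u'} + \sum_{v \in V} p_{u,v} \Big), [/tex]
and cancelling the common U-to-U double sum leaves
[tex] \sum_{u \in U} \lambda_u \sum_{v \in V} p_{u,v} = \sum_{v \in V} \lambda_v \sum_{u \in U} p_{v,u}, [/tex]
which is exactly rate(U → V) = rate(V → U).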

RGV
 

FAQ: Find Lambda for Simple Walk on Z (Markov Chain)

1. What is a Markov Chain?

A Markov chain is a mathematical model of a system that transitions from one state to another over time, where the state at any given time depends only on the previous state and not on the earlier history.

2. How does a Markov Chain work?

A Markov chain works by assigning probabilities to each state transition: from the current state, the next state is drawn according to the transition probabilities out of that state. In this thread's walk, the chain moves up with probability p and down with probability 1 - p at every step, and the process can be run indefinitely or stopped after a set number of transitions.

3. What is Lambda in a Markov Chain?

In this thread, Lambda (λ = {λ_j}) is an invariant measure of the chain: a nonnegative solution of λP = λ, normalized here so that λ_0 = 1. If the measure can be normalized to total mass 1, it is the equilibrium (stationary) distribution of the chain; as discussed above, on Z it cannot be.

4. How do you find Lambda for a Simple Walk on Z Markov Chain?

Write down the stationarity condition [itex]\lambda P = \lambda[/itex] and use balance across a cut of the state space: for the simple walk, cutting between states j and j+1 equates the flows across the cut and gives [itex]\lambda_j p = \lambda_{j+1}(1-p)[/itex], i.e. [itex]\lambda_{j+1} = \rho \lambda_j[/itex] with ρ = p/(1-p). Starting from λ_0 = 1, this yields λ_j = ρ^j for all j in Z.
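A minimal sketch of this computation (the function name is my own):

[code]
def invariant_measure(j, p):
    """Return lambda_j = rho**j with rho = p/(1-p), normalized so lambda_0 = 1."""
    rho = p / (1 - p)
    return rho**j

print(invariant_measure(3, 0.5))   # 1.0  (symmetric walk: rho = 1)
print(invariant_measure(3, 0.75))  # 27.0 (rho = 3)
[/code]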

5. What is the significance of finding Lambda for a Simple Walk on Z Markov Chain?

The invariant measure describes the long-run behavior of the chain. If it can be normalized (as on the state space N with ρ < 1), the chain has an equilibrium distribution giving the long-run fraction of time spent in each state. If it cannot (as on Z, where the sum of the ρ^j diverges), there is no equilibrium distribution, which here reflects the walk drifting off to ±∞. This kind of analysis is useful in applications such as queueing models and other random processes.
