
Find Lambda for Simple Walk on Z (Markov Chain)

  Mar 23, 2012 #1
    1. The problem statement, all variables and given/known data
    Find [tex]\lambda = \{\lambda_i\}[/tex] with [tex]\lambda_0 = 1[/tex] if [tex]P(X_{n+1}=i+1 \mid X_n=i) = p,\quad P(X_{n+1} = i-1 \mid X_n = i) = 1 - p,\quad i \in \mathbb{Z},\ 0<p<1[/tex]


    2. Relevant equations
    [tex]\lambda P = \lambda[/tex]


    3. The attempt at a solution
    I used my relevant equation to write out:
    [tex] (1-p)\lambda_0 + p\,\lambda_{-2} = \lambda_{-1}[/tex]
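    which is the [itex]j = -1[/itex] instance of the general pattern, one balance equation per state (spelled out here for clarity):
    [tex] \lambda_j = p\,\lambda_{j-1} + (1-p)\,\lambda_{j+1}, \qquad j \in \mathbb{Z}.[/tex]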
    There are infinitely many such equations, one for each state, and the problem is that every time you add an equation, you add an unknown. I'm a bit stuck here!
     
  Mar 23, 2012 #2

    Ray Vickson
    Science Advisor, Homework Helper

    Use the balance equations, which follow from [itex] \lambda = \lambda P[/itex]; see http://en.wikipedia.org/wiki/Balance_equation

    Basically, you can split the state space into two parts U and V, then set [itex]\text{rate}(U \to V) = \text{rate}(V \to U)[/itex], where
    [tex] \text{rate}(U \to V) = \sum_{u \in U} \lambda_u P(u \to V), \qquad P(u \to V) = \sum_{v \in V} p_{u,v}.[/tex]
    Note: this is convenient, because the only flows that matter are those between U and V; flows inside U (that is, from U to U) or inside V cancel out.

    So, if we set [itex] U = \{0\} \text{ and } V = \{ 1,2,3, \ldots \}[/itex] we have [itex] \text{rate}(U \rightarrow V) = \lambda_0 p \text{ and } \text{rate}(V \rightarrow U) = \lambda_1 (1-p).[/itex] Thus, [itex] \lambda_1 = \rho \lambda_0, [/itex] where
    [tex] \rho = \frac{p}{1-p}.[/tex] Continuing like that you can get all [itex] \lambda_n [/itex] in terms of [itex] \lambda_0.[/itex]
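    Spelling out the next step (still with the half-line state space [itex]\{0, 1, 2, \ldots\}[/itex] assumed in this post): take [itex] U = \{0,1\} \text{ and } V = \{2,3,\ldots\}[/itex]. The only boundary crossings are [itex]1 \to 2[/itex] and [itex]2 \to 1[/itex], so
    [tex] \lambda_1 p = \lambda_2 (1-p) \implies \lambda_2 = \rho \lambda_1 = \rho^2 \lambda_0, [/tex]
    and by induction [itex] \lambda_n = \rho^n \lambda_0 [/itex] for all [itex]n[/itex].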

    RGV
     
  Mar 23, 2012 #3
    Shouldn't
    [tex]V = \{\pm1, \pm2, \pm3, ...\}[/tex]
    meaning
    [tex] P(u \rightarrow V) = \sum_{v \in V} p_{u,v}[/tex]
    [tex] P(0 \rightarrow V) = \sum_{v \in V} p_{0,v} = p + (1-p) = 1[/tex]
    Which makes intuitive sense, I think. If you are in state 0, you MUST leave it, since you will either walk up to 1 or down to -1. So this says that the flow out of state 0 into the rest of the chain happens with probability 1.

    What do you think?

    Continuing on, I then end up with the same equation I wrote in my attempt at the solution. I could be applying something incorrectly, though.
     
    Last edited: Mar 23, 2012
  Mar 23, 2012 #4

    Ray Vickson
    Science Advisor, Homework Helper

    OK: I missed the fact that you were in [itex]\mathbb{Z}[/itex], not [itex]\mathbb{N}[/itex]. You could take [itex]U = \{\ldots, -3, -2, -1, 0\}[/itex] and [itex]V = \{1, 2, 3, \ldots\}[/itex]. You would still find [itex]\lambda_1 = \rho \lambda_0[/itex], etc.
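    Making the "etc." explicit: the same cut works at every level. For any [itex]k \in \mathbb{Z}[/itex], take [itex]U = \{\ldots, k-1, k\}[/itex] and [itex]V = \{k+1, k+2, \ldots\}[/itex]; the only crossings are [itex]k \to k+1[/itex] and [itex]k+1 \to k[/itex], so
    [tex] \lambda_k p = \lambda_{k+1}(1-p) \implies \lambda_{k+1} = \rho\, \lambda_k \quad \text{for all } k \in \mathbb{Z},[/tex]
    and iterating from [itex]\lambda_0 = 1[/itex] gives [itex]\lambda_j = \rho^j[/itex] for every [itex]j[/itex], negative indices included.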

    RGV
     
  Mar 23, 2012 #5
    This one seems to have done the trick. I got

    [tex]\lambda_j = \rho^j[/tex]

    This stuff isn't covered in my book, and I can't find much about it on the internet (your wiki link doesn't really have much information for a newcomer either!). What are some names by which this technique goes?
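    As a quick sanity check (a minimal numerical sketch; the choice p = 0.7 is arbitrary), the measure [itex]\lambda_j = \rho^j[/itex] can be verified componentwise against the balance equations [itex]\lambda_j = p\lambda_{j-1} + (1-p)\lambda_{j+1}[/itex]:

    [code]
    # Check that lambda_j = rho**j satisfies the balance equations
    # lambda_j = p*lambda_{j-1} + (1-p)*lambda_{j+1} for the walk on Z.
    p = 0.7                 # arbitrary; any 0 < p < 1 works
    rho = p / (1 - p)

    def lam(j):
        """Candidate invariant measure, normalized so lambda_0 = 1."""
        return rho ** j

    for j in range(-10, 11):
        lhs = lam(j)
        rhs = p * lam(j - 1) + (1 - p) * lam(j + 1)
        assert abs(lhs - rhs) <= 1e-9 * lhs, f"balance fails at j={j}"

    print("lambda_j = rho**j satisfies lambda P = lambda for j = -10..10")
    [/code]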
     
  Mar 24, 2012 #6

    Ray Vickson
    Science Advisor, Homework Helper

    Of course, for state space [itex]\mathbb{Z}[/itex] you will find there is NO equilibrium distribution: the solution would have to have [itex] \lambda_j = \rho^j \lambda_0 \text{ for all } j \in \mathbb{Z}, [/itex] and the series [itex] \sum_{j=-\infty}^{\infty} \rho^j [/itex] is divergent for any ρ > 0. However, if the state space is [itex]\mathbb{N}[/itex], then for ρ < 1 we have a well-defined equilibrium distribution. This makes good intuitive sense: if ρ > 1 there is a drift to the right in the long run, so the state X(t) → ∞ as t → ∞. If ρ < 1 there is a drift to the left, and in the case of [itex]\mathbb{Z}[/itex] that means X(t) → -∞ as t → ∞; but in the case of [itex]\mathbb{N}[/itex] the left-drift stops at 0, so between left-drift and right-reflection you end up with a well-defined equilibrium.
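    To spell out the divergence, split the sum at zero:
    [tex] \sum_{j=-\infty}^{\infty} \rho^j = \sum_{j=0}^{\infty} \rho^j + \sum_{j=1}^{\infty} \rho^{-j}.[/tex]
    For ρ < 1 the second geometric series diverges, for ρ > 1 the first one does, and for ρ = 1 every term equals 1; so the total mass is infinite for every ρ > 0, and λ cannot be normalized into a probability distribution.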

    As to the technique: you can find it in various sources under the general topic of "balance equations", etc. See, e.g., http://www.cs.wayne.edu/~hzhang/courses/7290/Lectures/14 - Review of Markov Chain Theory.pdf pages 21 ... or
    http://www.netlab.tkk.fi/opetus/s38143/luennot/E_markov.pdf (which is for continuous-time chains, but the ideas are the same).

    Note: there is a difference between "detailed balance" (which works for reversible chains) and "global balance", which works in general.
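    (Worth noting, though it goes beyond what was said above: for a nearest-neighbour walk like this one, the cut equations happen to coincide with the detailed-balance conditions [itex]\lambda_j p = \lambda_{j+1}(1-p)[/itex], because the only way to cross the cut between [itex]j[/itex] and [itex]j+1[/itex] is through that single pair of transitions.)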

    RGV
     
  Mar 27, 2012 #7
    Ray Vickson, my book says


    It looks like your method is just the sum of a bunch of balance equations. How do I know, however, that this random walk on [itex]\mathbb{Z}[/itex] is a reversible Markov chain? In other words, how do I know I can use the balance equations? Further, the balance equations in my book and on Wikipedia are stated for an invariant probability measure π, whereas my problem asks only for the jth invariant measure. The difference: because of its infinite state space, this random walk on [itex]\mathbb{Z}[/itex] intuitively cannot have a nonzero invariant probability measure. It can still have an invariant measure [itex]\lambda_j[/itex], since all that means is that the total mass [itex]\sum_j \lambda_j[/itex] is allowed to be infinite, in which case normalizing does not produce a finite, nonzero invariant probability measure.

    Thank you for your time and help!
     
    Last edited: Mar 27, 2012
  Mar 27, 2012 #8

    Ray Vickson
    Science Advisor, Homework Helper


    I never said the chain was reversible.

    Infinite chains are a bit tricky, but the fact is: if an invariant measure exists, it will satisfy the balance equations (NOT the reversibility conditions), so if the balance equations lead to a contradiction, that must mean that there is no invariant measure.
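    (In this problem the balance equations do not lead to a contradiction: [itex]\lambda_j = \rho^j[/itex] is a perfectly good invariant measure on [itex]\mathbb{Z}[/itex]. What fails is normalizability, i.e. there is no invariant probability distribution.)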

    RGV
     