
Limit of a continuous time Markov chain

  Jan 2, 2016 #1
    1. The problem statement, all variables and given/known data
    Calculate the limit
    $$\lim_{s,t\to\infty} R_X(s, s+t) = \lim_{s,t\to\infty} E(X(s)X(s+t))$$

    for a continuous time Markov chain

    $$(X(t) ; t ≥ 0)$$

    with state space S and generator G given by

    $$S = \{0, 1\}$$

    $$ G=
    \begin{pmatrix}
    -\alpha & \alpha \\
    \beta & -\beta \\
    \end{pmatrix}
    $$

    respectively, where α, β > 0 are given constants.

    2. Relevant equations


    3. The attempt at a solution

    The chain is irreducible with stationary distribution $$\pi = \left( \frac{\beta}{\alpha + \beta}, \frac{\alpha}{\alpha + \beta} \right)$$

    (as this gives πG = 0).
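    A quick numerical sanity check of this step (my own addition, not part of the given solution), using numpy with arbitrarily chosen values of ##\alpha## and ##\beta##:

    ```python
    # Verify that pi = (beta, alpha) / (alpha + beta) satisfies pi G = 0
    # and sums to 1, for arbitrary positive alpha and beta.
    import numpy as np

    alpha, beta = 2.0, 3.0  # arbitrary constants, alpha, beta > 0
    G = np.array([[-alpha, alpha],
                  [beta,  -beta]])
    pi = np.array([beta, alpha]) / (alpha + beta)

    print(pi @ G)    # ~[0, 0]: pi G = 0, so pi is stationary
    print(pi.sum())  # 1.0: pi is a probability distribution
    ```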

    This I figured out on my own and understand, but the next part of the solution I don't get at all.
    If anyone can give me an explanation, some hints, or point me to an online or offline source
    that covers this, I would really appreciate it.

    Noting that X(s)X(t) = 1 when both X(s) and X(t) are 1, while X(s)X(t) = 0 otherwise,
    it follows that
    $$E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t) = (\mu^{(0)}P_{s})_{1} p_{11}(t) = \left((\mu^{(0)})_{0} p_{01}(s)+(\mu^{(0)})_{1} p_{11}(s)\right) p_{11}(t) $$
     
  Jan 2, 2016 #2

    Ray Vickson (Science Advisor, Homework Helper)

    If ##P(t) = \exp{( t G)}## (which can be obtained by solving the DE system ##P'(t) = G P(t)## or ##P'(t) = P(t) G##), then for any time ##\tau## the state probability distribution row vector is ##p(\tau) = p(0) P(\tau)##. Thus, ##p(t+s) = p(0) P(t+s)= p(0) P(s) P(t) = p(s) P(t)##, so ##p_1(t+s) = p_0(s) P_{01}(t) +p_1(s)P_{11}(t)##. This implies ##R_X(s,s+t) = p_0(s) p_1(s) P_{01}(t) + p_1(s)^2 P_{11}(t)##.
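    For anyone who wants to see these identities in action, here is a minimal numerical sketch (not part of Ray's post; it assumes scipy's matrix exponential, with ##\alpha##, ##\beta## and ##p(0)## chosen arbitrarily):

    ```python
    # Build P(t) = exp(t G) and check the identity p(t+s) = p(s) P(t).
    import numpy as np
    from scipy.linalg import expm

    alpha, beta = 2.0, 3.0  # arbitrary positive rates
    G = np.array([[-alpha, alpha],
                  [beta,  -beta]])

    def P(t):
        """Transition matrix P(t) = exp(t G)."""
        return expm(t * G)

    p0 = np.array([0.7, 0.3])  # arbitrary initial distribution p(0)
    s, t = 0.4, 1.1

    lhs = p0 @ P(s + t)           # p(s+t) computed directly
    rhs = (p0 @ P(s)) @ P(t)      # p(s) P(t)
    print(np.allclose(lhs, rhs))  # True
    ```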
     
  Jan 3, 2016 #3
    Thanks! I'm getting there slowly!

    I think I'm with you this far:

    "If ##P(t) = \exp{( t G)}## (which can be obtained by solving the DE system ##P'(t) = G P(t)## or ##P'(t) = P(t) G##), then for any time ##\tau## the state probability distribution row vector is ##p(\tau) = p(0) P(\tau)##. Thus, ##p(t+s) = p(0) P(t+s)= p(0) P(s) P(t) = p(s) P(t)##, so ##p_1(t+s) = p_0(s) P_{01}(t) +p_1(s)P_{11}(t)##. "

    But I don't understand how this implies ##(i)##:

    "##R_X(s,s+t) = p_0(s) p_1(s) P_{01}(t) + p_1(s)^2 P_{11}(t)##"

    And from the solution I don't understand this step ##(ii)##:

    $$E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t) $$

    Your explanation made me understand how this leads to

    $$= (\mu^{(0)}P_{s})_{1} p_{11}(t) $$

    and I'm fairly sure your explanation made me understand this step too; it's just the state probability distribution row vector multiplied by the column of ##P(s)## for state 1:

    $$= \left((\mu^{(0)})_{0}\, p_{01}(s)+(\mu^{(0)})_{1}\, p_{11}(s)\right) p_{11}(t) $$

    What I didn't post in my OP were the final steps of the solution:

    $$\to \left((\mu^{(0)})_{0}\, \pi_{1}+(\mu^{(0)})_{1}\, \pi_{1}\right) \pi_{1} = \pi_{1}^2 = \left(\frac{\alpha}{\alpha + \beta}\right)^2$$

    as s, t → ∞.

    Here I understand that because X is irreducible, the p's go to the stationary distribution ##\pi## in the limit.
    But what happens to the ##(\mu^{(0)})##'s? ##(iii)##

    So there are three things I'm still unsure about, ##(i)##-##(iii)##.
     
  Jan 3, 2016 #4

    Ray Vickson (Science Advisor, Homework Helper)

    The ##p(0)##s are irrelevant, because the chain is ergodic; that is, for any starting state-distribution vector ##p(0) = (p_0(0), p_1(0))## we have ##p(s) = p(0) \cdot P(s) \to (\pi_0,\pi_1)## for large ##s##. Your ##\mu## = my ##p##.
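    A tiny sketch of this ergodicity claim (my addition; the starting vectors are arbitrary):

    ```python
    # p(s) = p(0) exp(s G) approaches pi = (beta, alpha)/(alpha + beta)
    # regardless of the starting distribution p(0).
    import numpy as np
    from scipy.linalg import expm

    alpha, beta = 2.0, 3.0
    G = np.array([[-alpha, alpha],
                  [beta,  -beta]])
    pi = np.array([beta, alpha]) / (alpha + beta)

    for p0 in ([1.0, 0.0], [0.2, 0.8]):     # two different p(0)'s
        ps = np.array(p0) @ expm(10.0 * G)  # s = 10 is already "large" here
        print(ps, "vs", pi)                 # ps matches pi to many digits
    ```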
     
  Jan 3, 2016 #5
    Thanks. Now it's only these two left to understand :)

    ##E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t)##

    ##R_X(s,s+t) = p_0(s) p_1(s) P_{01}(t) + p_1(s)^2 P_{11}(t)##


    What are the definitions, theorems, and equations that lead to these expressions?
    E.g., how is ##E(X(s)X(t))## defined in this setting? It's one component of the probability distribution vector at time s times the 11 entry of the P matrix at time t... where does this come from? And the same question for ##R_X(s,s+t)##.
    I'm stuck and have spent hours searching online for sources covering this.
     
  Jan 3, 2016 #6

    pizzasky
    I think the first equality should be ##E(X(s)X(s+t))=(\mu^{(s)})_1 p_{11}(t)##.

    Note that the state space of the Markov chain is ##\{0,1\}##.

    By the definition of expectation, we have ##E(X(s)X(s+t))=\sum\limits_{i=0}^{1}\sum\limits_{j=0}^{1}ijP(X(s)=i, X(s+t)=j)=P(X(s)=1, X(s+t)=1)##.

    We know from conditional probability that ##P(X(s)=1, X(s+t)=1)=P(X(s)=1)P(X(s+t)=1\vert X(s)=1)=(\mu^{(s)})_1 p_{11}(t)##.

    Therefore, we have ##E(X(s)X(s+t))=(\mu^{(s)})_1 p_{11}(t)##.

    We then use the law of total probability to express ##(\mu^{(s)})_1## in terms of the initial probability distribution ##\mu^{(0)}## and the transition matrix ##p(s)##. We have ##(\mu^{(s)})_1=P(X(s)=1)=P(X(0)=0)P(X(s)=1\vert X(0)=0)+P(X(0)=1)P(X(s)=1\vert X(0)=1)=(\mu^{(0)})_0p_{01}(s)+(\mu^{(0)})_1 p_{11}(s)##.

    Putting this expression for ##(\mu^{(s)})_1## into ##E(X(s)X(s+t))=(\mu^{(s)})_1 p_{11}(t)##, we obtain ##E(X(s)X(s+t))=((\mu^{(0)})_0 p_{01}(s)+(\mu^{(0)})_1 p_{11}(s)) p_{11}(t)##.

    This latest expression for ##E(X(s)X(s+t))## allows us to evaluate the limit as ##s## and ##t## approach infinity. We have ##\lim\limits_{s,t\to\infty} R_X(s,s+t)=\lim\limits_{s,t\to\infty} E(X(s)X(s+t))=\lim\limits_{s,t\to\infty}((\mu^{(0)})_0 p_{01}(s)+(\mu^{(0)})_1 p_{11}(s)) p_{11}(t)=((\mu^{(0)})_0 \pi_1+(\mu^{(0)})_1 \pi_1) \pi_1=\pi_1^2##.

    We used the fact that ##(\mu^{(0)})_0+(\mu^{(0)})_1=1## to obtain the final equality.
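    To tie this together, a numerical check of the limit (my addition; ##\mu^{(0)}##, ##\alpha##, ##\beta##, ##s## and ##t## are all arbitrary choices):

    ```python
    # Check that ((mu0)_0 p01(s) + (mu0)_1 p11(s)) p11(t) is close to
    # pi_1^2 = (alpha/(alpha + beta))^2 for large s and t.
    import numpy as np
    from scipy.linalg import expm

    alpha, beta = 2.0, 3.0
    G = np.array([[-alpha, alpha],
                  [beta,  -beta]])

    s, t = 15.0, 15.0           # "large" s and t
    Ps, Pt = expm(s * G), expm(t * G)

    mu0 = np.array([0.6, 0.4])  # arbitrary initial distribution
    E = (mu0[0] * Ps[0, 1] + mu0[1] * Ps[1, 1]) * Pt[1, 1]

    pi1 = alpha / (alpha + beta)
    print(E, pi1 ** 2)          # the two numbers agree closely
    ```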
     
  Jan 3, 2016 #7

    Ray Vickson (Science Advisor, Homework Helper)

    You yourself had the solution at the very start: you said that ##X(s) X(s+t) = 0 ## unless both ##X(s)=1## and ##X(s+t) = 1##, in which case the product = 1. Thus, ##E X(s) X(s+t) = p_1(s) p_1(s+t)##. Now just express ##p_1(s+t)## in terms of ##p_0(s), p_1(s)## and the matrix ##P(t)##. If one of the other things you do not understand here is that ##P(s+t) = P(s) P(t) = P(t) P(s)##, then you really do not understand what Markov processes are all about. This is a fundamental property, found in any halfway decent textbook.
     
  Jan 4, 2016 #8
    Big thank you pizzasky!

    That I have understood from the very beginning. I think I was very clear about what I did not understand:
    ##E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t)## (That it really should have been ##E(X(s)X(t+s))## didn't make things easier.)

    And pizzasky's use of the definition, conditional probability, and the law of total probability was exactly what I needed,
    and that was a crystal-clear explanation.

    This is my first course in a very long time. I quit the university long before graduating because I got a good job, and now I am back 10 years later. I'm working hard to relearn the basics and how and where to apply them, but I'm not there yet, and that is the reason I couldn't connect ##E(X(s)X(t)) = (\mu^{(s)})_{1} p_{11}(t)## to the basic definition, conditional probability, and the law of total probability.

    That we have a crappy textbook does not make things easier. ##E(X(s)X(t))## for continuous Markov chains is not mentioned anywhere, and a lot of other things are also left out.

    So basically you are of course right. I am far from really understanding the theory of continuous Markov processes.
     
    Last edited: Jan 4, 2016
  Jan 4, 2016 #9

    Ray Vickson (Science Advisor, Homework Helper)

    My apologies: I made a stupid mistake that may have confused and misled you. Rather than what I did write, I should have written that ##E X(s) X(s+t) = P(X(s)=1 \: \& \: X(s+t) = 1)## ##= P(X(s) = 1) P(X(s+t) = 1|X(s) = 1) = p_1(s) P_{11}(t)##. From that, it follows that ##R_X(s,s+t) \to \pi_1^2## as ##s,t \to \infty##.

    Actually, I think you are doing pretty well, despite your apparently poor textbook. This is a hard subject, and I think you are doing better than most. Sure, you may lack some background and need a lot of problem-solving practice, but you are off to a good start. It is an easy subject to make mistakes in (look at my previous post!), but with practice it will become easier.

    I can certainly recommend you obtain a supplementary alternative text, many of which are available free online as PDF files. Look for documents with titles like "notes on probability" or "notes on Markov chains" or just plain "Markov chains", and you will soon find many excellent sources. If I were recommending one probability book, it would be "An Introduction to Probability Theory and Its Applications", by Feller, Wiley, primarily Volume I, but later also Volume II. It is an old book (dating from the 1970s), but is still one of the very best and by far my favorite. It is a true pleasure to read, but not "trivial". Try checking out a copy from a library, to see what I mean.
     