
Markov chain problem

  1. May 9, 2007 #1
    [SOLVED] Markov chain problem

    Hi, all! This is not a homework thing or something like that. I'm just curious.

    Suppose a Markov chain p(n+1)=Q*p(n), where Q=[q11, q12; q21, q22] is the transition probability matrix and p(n)=[p1(n);p2(n)] is the probability vector.

    Given Q and p(1), we can generate a realization from it, i.e., a sequence of ones and zeros.

    Now I want to do the inverse. Given a long enough sequence of ones and zeros, Q can be estimated by counting transitions (sketched in code below): the fraction of moves from zero to zero gives q11, zero to one gives q21, one to zero gives q12, and one to one gives q22, keeping the column convention p(n+1)=Q*p(n) from above. But how can I get the initial probability vector p(1) from this sequence? Maximum Likelihood Estimation, blah blah? I don't know about that. Please, someone give me a clue how I can do this. Or is it impossible from just one realization sequence?
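
    For concreteness, here's how the counting part could look in code: a minimal Python/NumPy sketch under the column convention above (estimate_Q is just a name I made up).

    [code]
    import numpy as np

    def estimate_Q(seq):
        """Estimate the column-stochastic Q, so that p(n+1) = Q @ p(n).

        Q[j, i] is the fraction of transitions out of state i that land in state j.
        """
        counts = np.zeros((2, 2))
        for i, j in zip(seq[:-1], seq[1:]):   # consecutive (from, to) pairs
            counts[j, i] += 1
        return counts / counts.sum(axis=0)    # normalize each column to sum to 1
    [/code]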

    Thanks in advance.
     
  3. May 9, 2007 #2

    EnumaElish (Science Advisor, Homework Helper)

    How is the realization r linked to p? If p1 > 0.5 then r = 1, otherwise r = 0?
     
  4. May 9, 2007 #3
    Oh, I forgot that. The realization is not deterministic but stochastic: each symbol is drawn at random according to the probability vector. If p1 = .4 and p2 = .6, then the probability that a zero comes out is 40 percent and the probability that a one comes out is 60 percent.
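
    In code, the generation might look like this: a minimal Python/NumPy sketch (simulate is an invented name, and I assume the column convention from my first post).

    [code]
    import numpy as np

    rng = np.random.default_rng(0)   # fixed seed, just for reproducibility

    def simulate(Q, p1, n):
        """Draw one length-n realization of the chain p(k+1) = Q @ p(k)."""
        Q = np.asarray(Q)
        seq = [rng.choice(2, p=np.asarray(p1))]         # first symbol drawn from p(1)
        for _ in range(n - 1):
            seq.append(rng.choice(2, p=Q[:, seq[-1]]))  # next symbol from the column of Q
        return seq

    print(simulate([[0.9, 0.2], [0.1, 0.8]], [0.4, 0.6], 20))
    [/code]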
     
  5. May 9, 2007 #4

    EnumaElish (Science Advisor, Homework Helper)

    So how do you construct your sequence of 1's and 0's? Toss a coin, with a 1 on one side and a 0 on the other?
     
  6. May 9, 2007 #5
    That works. You can use any sequence, as long as it is composed of zeros and ones.
     
  7. May 9, 2007 #6
    For example, suppose you are given this sequence: 010100101101001011010010111010.
    Then you can estimate Q as [3/14, 11/15; 11/14, 4/15].
    But I have no idea how to get p(1) from this.
    Is it even possible?
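
    As a check on the arithmetic, here's a small Python sketch using exact fractions that reproduces those numbers:

    [code]
    from fractions import Fraction

    seq = [int(c) for c in "010100101101001011010010111010"]

    counts = [[0, 0], [0, 0]]
    for i, j in zip(seq[:-1], seq[1:]):     # consecutive (from, to) pairs
        counts[j][i] += 1

    col0 = counts[0][0] + counts[1][0]      # 14 transitions leave state 0
    col1 = counts[0][1] + counts[1][1]      # 15 transitions leave state 1
    Q = [[Fraction(counts[0][0], col0), Fraction(counts[0][1], col1)],
         [Fraction(counts[1][0], col0), Fraction(counts[1][1], col1)]]
    print(Q)   # [[3/14, 11/15], [11/14, 4/15]], as above
    [/code]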
     
  8. May 9, 2007 #7

    EnumaElish (Science Advisor, Homework Helper)

    Two ideas:
    1. assume a uniform initial probability, p(1) = [0.5; 0.5];
    2. assume the chain is stationary, then solve p = Qp (see the sketch below).
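
    For idea 2, one way to get the stationary p numerically is as the eigenvector of Q for eigenvalue 1. A minimal Python/NumPy sketch (stationary is an invented name):

    [code]
    import numpy as np

    def stationary(Q):
        """Solve p = Q p: the eigenvector of Q for eigenvalue 1, scaled to sum to 1."""
        w, v = np.linalg.eig(np.asarray(Q, dtype=float))
        p = v[:, np.argmin(np.abs(w - 1.0))].real   # eigenvector nearest eigenvalue 1
        return p / p.sum()

    print(stationary([[3/14, 11/15], [11/14, 4/15]]))   # about [0.4828, 0.5172] = [14/29, 15/29]
    [/code]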
     
  9. May 9, 2007 #8

    NateTG (Science Advisor, Homework Helper)

    It's impossible to completely determine the probability matrix from samples, even if we stipulate that it's a Markov process, but it is possible to use statistics to find what the probabilities are likely to be.

    Any probability matrix where all entries are non-zero can produce any sequence. For example, a process with the probability matrix:
    [tex]\[ \left[ \begin{array}{cc}
    \frac{1}{2}& \frac{1}{2} \\
    \frac{1}{2} & \frac{1}{2} \\ \end{array} \right]\] [/tex]
    will produce any particular sequence of length [itex]n[/itex] with probability [itex]\frac{1}{2^n}[/itex].

    So, assuming a uniform initial distribution, it would generate 010100101101001011010010111010 about [itex]\frac{1}{2^{30}}[/itex] of the time.

    The probability matrix you produced
    [tex]\[ \left[ \begin{array}{cc}
    \frac{3}{14}& \frac{11}{15} \\
    \frac{11}{14} & \frac{4}{15} \\ \end{array} \right]\] [/tex]
    is just the one with the highest likelihood of producing the sequence you gave; but because the sample is relatively small, there really isn't a whole lot of confidence that it's correct.

    This 'reversing probability' stuff is a bit more advanced.
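
    To make the comparison concrete, here's a small Python/NumPy sketch (sequence_likelihood is an invented name) that evaluates the probability of that sequence under the fair-coin matrix and under the counted matrix:

    [code]
    import numpy as np

    def sequence_likelihood(seq, Q, p1):
        """Probability of observing seq given transition matrix Q and initial vector p1."""
        Q = np.asarray(Q)
        like = p1[seq[0]]                       # initial symbol drawn from p(1)
        for i, j in zip(seq[:-1], seq[1:]):
            like *= Q[j, i]                     # one factor per transition
        return like

    seq = [int(c) for c in "010100101101001011010010111010"]
    fair = [[0.5, 0.5], [0.5, 0.5]]
    counted = [[3/14, 11/15], [11/14, 4/15]]
    print(sequence_likelihood(seq, fair, [0.5, 0.5]))     # 1/2**30, about 9.3e-10
    print(sequence_likelihood(seq, counted, [0.5, 0.5]))  # about 5.8e-8, a much better fit
    [/code]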
     
  10. May 9, 2007 #9
    What about MAP (Maximum A Posteriori) estimation, [itex]\hat{x}=\arg\max_x p(x|y)[/itex], since we're interested in the random variable [itex]x[/itex] (here p(1)) given the data [itex]y[/itex] (here the realization sequence)? I feel like this will work. The posterior pdf [itex]p(x|y)[/itex] can be calculated using Bayes's theorem, [tex]p(x|y)=\frac{p(y|x)}{p(y)}p(x),[/tex] where [itex]p(x)[/itex] is the prior pdf, [itex]p(y)[/itex] is the marginal pdf, and [itex]p(y|x)[/itex] is proportional to the likelihood function.

    By the maximum entropy principle we can put [itex]\left[\begin{array}{c} .5\\ .5\\ \end{array}\right][/itex] in place of the prior [itex]p(x)[/itex], as EnumaElish did in his first idea. But I don't know how to calculate the rest. I'm searching some books for the calculation, but I haven't found it yet. I guess it comes down to MLE (Maximum Likelihood Estimation) after all, but I don't know how to do that either.
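
    Here's a rough numerical sketch of the MAP idea on a grid (Python/NumPy; all names are mine). With Q held fixed, only the first symbol of the sequence involves p(1), so the likelihood is linear in p1 and the grid search just picks an endpoint:

    [code]
    import numpy as np

    seq = [int(c) for c in "010100101101001011010010111010"]

    grid = np.linspace(0, 1, 101)                   # candidate values of p1 = P(first symbol is 0)
    likelihood = grid if seq[0] == 0 else 1 - grid  # transition factors don't involve p1 and cancel
    posterior = likelihood / likelihood.sum()       # uniform (max-entropy) prior, then normalize
    print(grid[np.argmax(posterior)])               # MAP estimate: 1.0 here, pinned by one symbol
    [/code]

    So it seems a single realization constrains p(1) only through its first symbol, which may be why this feels impossible to recover.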
     
    Last edited: May 9, 2007
  11. May 9, 2007 #10
    I don't like the stationarity assumption, because I want to handle even a sequence like this: 010100100100000000000000000000000110100101.
     
  12. May 9, 2007 #11

    EnumaElish (Science Advisor, Homework Helper)

    Isn't there a p that solves p = Qp for any arbitrary Q (hence any arbitrary sequence)?
     
  13. May 10, 2007 #12
    I made a mistake; you're right, EnumaElish. :wink: Any stochastic Q has at least one eigenvector with eigenvalue 1, so your stationarity assumption is fine. It seems like the best idea for my problem at the moment.
     