Markov chain problem

  • Thread starter celestra
  • #1
celestra

Main Question or Discussion Point

[SOLVED] Markov chain problem

Hi, all! This is not a homework thing or something like that. I'm just curious.

Suppose a two-state Markov chain p(n+1) = Q*p(n), where Q = [q11, q12; q21, q22] is the (column-stochastic) transition probability matrix and p(n) = [p1(n); p2(n)] is the probability vector.

Given Q and p(1), we can generate a realization of the chain, i.e., a sequence of ones and zeros.

Now I want to do the inverse. Given a long enough sequence of ones and zeros, Q can be estimated by counting transitions and normalizing: the fraction of zeros followed by another zero gives q11, zeros followed by a one gives q21, ones followed by a zero gives q12, and ones followed by another one gives q22. But how can I get the initial probability vector p(1) from this sequence? Maximum likelihood estimation, something like that? I don't know about that. Please, someone give me a clue how I can do this. Or is it impossible from only one realization sequence?

Thanks in advance.
 

Answers and Replies

  • #2
EnumaElish
How is the realization r linked to p? If p1 > 0.5 then r = 1, otherwise r = 0?
 
  • #3
celestra
Oh, I forgot that. The realization is not deterministic but stochastic, through random number generation according to the probability vector. If p1 = .4 and p2 = .6, then the probability that a zero comes out is 40 percent and the probability that a one comes out is 60 percent.
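Here is a minimal sketch of that generation (Python/numpy), assuming the usual convention that the first symbol is drawn from p(1) and each later symbol from the column of Q for the current state; the function name is just illustrative:

[code]
import numpy as np

def simulate_chain(Q, p1, n, seed=None):
    """Sample a 0/1 realization x(1)..x(n) of the two-state chain.

    Q is column-stochastic: Q[i, j] = P(next = i | current = j).
    p1 is the initial probability vector [P(0), P(1)].
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=int)
    x[0] = rng.choice(2, p=p1)                   # first symbol from p(1)
    for k in range(1, n):
        x[k] = rng.choice(2, p=Q[:, x[k - 1]])   # next symbol from Q's column
    return x

Q = np.array([[3/14, 11/15],
              [11/14, 4/15]])
print("".join(map(str, simulate_chain(Q, [0.4, 0.6], 30))))
[/code]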
 
  • #4
EnumaElish
So how do you construct your sequence of 1's and 0's? Toss a coin, with a 1 on one side and a 0 on the other?
 
  • #5
celestra
That would work. You can use any sequence, as long as it is composed of zeros and ones.
 
  • #6
celestra
For example, say you are given this sequence: 010100101101001011010010111010.
Then you can get Q as [3/14, 11/15; 11/14, 4/15].
But I have no idea how to get p(1) from this. Is it even possible?
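Here is a minimal sketch of that counting (Python/numpy), under the same column-stochastic convention; again the function name is just illustrative:

[code]
import numpy as np

def estimate_Q(x):
    """Estimate the column-stochastic transition matrix from a 0/1 sequence.

    counts[i, j] = number of j -> i transitions, so dividing each column
    by its sum gives Q[i, j], an estimate of P(next = i | current = j).
    """
    counts = np.zeros((2, 2))
    for prev, nxt in zip(x[:-1], x[1:]):
        counts[nxt, prev] += 1
    return counts / counts.sum(axis=0, keepdims=True)

seq = [int(c) for c in "010100101101001011010010111010"]
print(estimate_Q(seq))   # [[3/14, 11/15], [11/14, 4/15]]
[/code]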
 
  • #7
EnumaElish
Two ideas:
1. Assume a uniform initial probability, p(1) = [0.5; 0.5].
2. Assume the chain is stationary, then solve p = Qp (a sketch follows below).
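For idea 2, a minimal sketch of solving p = Qp, taking p as the eigenvector of Q for eigenvalue 1 (Python/numpy, using the matrix from post #6):

[code]
import numpy as np

# Transition matrix counted from the sequence in post #6.
Q = np.array([[3/14, 11/15],
              [11/14, 4/15]])

# p = Qp means p is an eigenvector of Q with eigenvalue 1
# (every stochastic matrix has one); normalize it to sum to 1.
w, v = np.linalg.eig(Q)
p = v[:, np.argmax(w.real)].real
p /= p.sum()
print(p)   # [14/29, 15/29], i.e. about [0.483, 0.517]; check: Q @ p equals p
[/code]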
 
  • #8
NateTG
It's impossible to completely determine the probability matrix from samples, even if we stipulate that it's a Markov process, but it is possible to use statistics to find what the probabilities are likely to be.

Any probability matrix where all entries are non-zero can produce any sequence. For example, a process with the probability matrix
[tex]\left[ \begin{array}{cc} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} \end{array} \right][/tex]
will produce any particular sequence of length [itex]n[/itex] with probability [itex]\frac{1}{2^n}[/itex].

So, assuming an even initial distribution, it would generate 010100101101001011010010111010 about [itex]\frac{1}{2^{30}}[/itex] of the time.

The probability matrix you produced,
[tex]\left[ \begin{array}{cc} \frac{3}{14} & \frac{11}{15} \\ \frac{11}{14} & \frac{4}{15} \end{array} \right][/tex]
is just the one that has the highest likelihood of producing the sequence you gave, but because the sample is relatively small, there really isn't a whole lot of confidence that it's correct.

This 'reversing probability' stuff is a bit more advanced.
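To make "likelihood of producing the sequence" concrete, here is a minimal scoring sketch (Python/numpy; the function name is just illustrative), using the same convention that the first symbol comes from the initial vector and each later one from Q's columns:

[code]
import numpy as np

def sequence_likelihood(x, Q, p1):
    """P(x | Q, p1): initial term times one Q factor per transition."""
    L = p1[x[0]]
    for prev, nxt in zip(x[:-1], x[1:]):
        L *= Q[nxt, prev]
    return L

seq = [int(c) for c in "010100101101001011010010111010"]
half = np.full((2, 2), 0.5)
print(sequence_likelihood(seq, half, [0.5, 0.5]))      # 1/2**30, as above
counted = np.array([[3/14, 11/15], [11/14, 4/15]])
print(sequence_likelihood(seq, counted, [0.5, 0.5]))   # much larger
[/code]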
 
  • #9
18
0
What about MAP (maximum a posteriori) estimation, [itex]\hat{x} = \arg\max p(x|y)[/itex], since we're interested in the random variable [itex]x[/itex] (p(1) in this case) given the data, i.e., the realization [itex]y[/itex] (the sequence)? I feel like this should work.

The a posteriori pdf [itex]p(x|y)[/itex] can be calculated using Bayes' theorem, [tex]p(x|y)=\frac{p(y|x)}{p(y)}p(x),[/tex] where [itex]p(x)[/itex] is the a priori pdf, [itex]p(y)[/itex] is the marginal pdf, and [itex]p(y|x)[/itex] is the likelihood function. We can put [tex]\left[\begin{array}{c} .5\\ .5 \end{array}\right][/tex] in place of the a priori pdf [itex]p(x)[/itex] by the maximum entropy principle, as EnumaElish did in his first idea. But I don't know how to calculate the rest. I've been looking for the calculation in some books, but I haven't found it yet. I guess it comes down to MLE (maximum likelihood estimation) after all, but I don't know how to do that either.
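For what it's worth, a minimal grid sketch of that posterior, assuming a uniform prior on p1 = P(first symbol is 0): the likelihood of the whole sequence factors into p(x(1)) times transition terms that do not involve p(1), so the posterior over p1 depends only on the first observed symbol.

[code]
import numpy as np

seq = [int(c) for c in "010100101101001011010010111010"]

# With a uniform prior, the posterior over p1 is proportional to p1
# when the first symbol is 0, and to (1 - p1) when it is 1; the
# Q-dependent transition factors cancel in the normalization.
grid = np.linspace(0, 1, 1001)
posterior = grid if seq[0] == 0 else 1 - grid
print(grid[np.argmax(posterior)])   # MAP estimate of p1 (here 1.0)
[/code]

If that's right, the single-realization posterior for p(1) just concentrates on whatever the first symbol happened to be, which is not very informative.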
 
  • #10
celestra
I don't like the stationarity assumption, because I want to be able to handle even a sequence like this: 010100100100000000000000000000000110100101.
 
  • #11
EnumaElish
Isn't there a p that solves p = Qp for any arbitrary Q (hence for an arbitrary sequence)?
 
  • #12
celestra
I made a mistake. You're right, EnumaElish. :wink: Any stochastic Q has at least one eigenvector with eigenvalue 1, so your stationarity assumption will be okay. It seems like the best approach for my problem at the moment.
 
