MHB Hidden Markov Model Calculation

betsyrocamora
Can someone help me with this question?

Let us define a hidden Markov model with two states A and B, such that

$P(X_0=A)=0.6,\quad P(X_0=B)=0.4,\quad P(X_1=A\ |\ X_0=A)=0.3,\quad P(X_1=B\ |\ X_0=B)=0.8$

What is the value of $P(X_3=A)$?
 
Hello betsyrocamora,

Welcome to MHB! (Wave)

Can you show what you have tried and where you are stuck so our helpers have a better idea how best to help you?
 
betsyrocamora said:
Can someone help me with this question?

Let us define a hidden Markov model with two states A and B, such that

$P(X_0=A)=0.6,\quad P(X_0=B)=0.4,\quad P(X_1=A\ |\ X_0=A)=0.3,\quad P(X_1=B\ |\ X_0=B)=0.8$

What is the value of $P(X_3=A)$?

Welcome to MHB, betsyrocamora! :)

This looks like a question intended to teach what a hidden Markov model actually is.
Do your notes perhaps contain a worked example?
Or perhaps an example for a Markov chain?
 
I like Serena said:
Welcome to MHB, betsyrocamora! :)

This looks like a question intended to teach what a hidden Markov model actually is.
Do your notes perhaps contain a worked example?
Or perhaps an example for a Markov chain?

No, I know what a hidden Markov model is, but with this one I am a little lost. I tried letting $j$ denote state A and used $p^{(3)}_{ij} := P(X_3=j\ |\ X_0=i)$, which satisfies the Chapman–Kolmogorov equations; after that I think the law of total probability applies, but I get lost there.
 
betsyrocamora said:
No, I know what a hidden Markov model is, but with this one I am a little lost. I tried letting $j$ denote state A and used $p^{(3)}_{ij} := P(X_3=j\ |\ X_0=i)$, which satisfies the Chapman–Kolmogorov equations; after that I think the law of total probability applies, but I get lost there.

It seems to me you're making it unnecessarily complex.

From your given data we can deduce that $P(X_1=B\ |\ X_0=A)=0.7$ and $P(X_1=A\ |\ X_0=B)=0.2$, since the transition probabilities out of each state must sum to 1.
You can write these numbers in a matrix
$$M = \begin{bmatrix} 0.3 & 0.2 \\ 0.7 & 0.8 \end{bmatrix}$$
Then, assuming the transition probabilities do not depend on $n$ (a time-homogeneous chain), we get:
$$\begin{bmatrix} P(X_n=A) \\ P(X_n=B) \end{bmatrix} = M^n \begin{bmatrix} P(X_0=A) \\ P(X_0=B) \end{bmatrix}$$
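As a quick numerical check of the matrix recurrence above, here is a small sketch (plain Python, no libraries): the state vector holds $[P(X_n=A),\ P(X_n=B)]$ and we apply the transition three times using the probabilities given in the question plus their complements.

```python
# Sketch: iterate v_{n+1} = M v_n three times, where the columns of
# M = [[0.3, 0.2], [0.7, 0.8]] are the transition probabilities
# out of A and out of B (each column sums to 1).

def step(v):
    """One transition of the chain: returns (P(A), P(B)) at the next step."""
    a, b = v
    return (0.3 * a + 0.2 * b, 0.7 * a + 0.8 * b)

v = (0.6, 0.4)        # initial distribution [P(X0=A), P(X0=B)]
for _ in range(3):    # advance to n = 3
    v = step(v)

print(round(v[0], 4))  # P(X3=A) → 0.2226
```

This agrees with doing the update by hand: $P(X_1=A)=0.26$, $P(X_2=A)=0.226$, $P(X_3=A)=0.2226$.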
 