Hidden Markov Models - Likelihood of the Observed Sequence

SamTaylor
Hi,

I am trying to teach myself Hidden Markov Models, using the text
"www.cs.sjsu.edu/~stamp/RUA/HMM.pdf" as material. The introduction with
the example was reasonable, but now I am having trouble understanding
some of the derivation.

I can follow the math and use the formulas to get results, but
I also want to understand the meaning behind them.

One question that arises with hidden Markov models is how to determine
the likelihood of the observed sequence O given the model,
as written below:

b = probability of an observation in a given state
a = probability of transitioning from one state to another
\pi = initial state probability
O = observation sequence
X = state sequence
\lambda = hidden Markov model

(Section 4, page 6 in the text)

P(O \mid X, \lambda) = b_{x_0}(O_0) \, b_{x_1}(O_1) \cdots b_{x_{T-1}}(O_{T-1})
P(X \mid \lambda) = \pi_{x_0} a_{x_0,x_1} a_{x_1,x_2} \cdots a_{x_{T-2},x_{T-1}}
P(O, X \mid \lambda) = \frac{P(O \cap X \cap \lambda)}{P(\lambda)}
P(O \mid X, \lambda) \, P(X \mid \lambda) = \frac{P(O \cap X \cap \lambda)}{P(X \cap \lambda)} \cdot \frac{P(X \cap \lambda)}{P(\lambda)} = \frac{P(O \cap X \cap \lambda)}{P(\lambda)}
P(O, X \mid \lambda) = P(O \mid X, \lambda) \, P(X \mid \lambda)
P(O \mid \lambda) = \sum\limits_{X} P(O, X \mid \lambda)
P(O \mid \lambda) = \sum\limits_{X} P(O \mid X, \lambda) \, P(X \mid \lambda)
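To make the sum concrete, here is a minimal brute-force sketch in Python. It is not from the linked text, and the numbers for A, B and \pi are made up; it just enumerates every state sequence X and applies the formulas above.

import itertools
import numpy as np

# Toy HMM with made-up parameters: N = 2 hidden states, M = 3 observation symbols.
A = np.array([[0.7, 0.3],          # A[i, j] = a_{i,j} = P(state j at t+1 | state i at t)
              [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5],     # B[i, k] = b_i(k) = P(observing symbol k | state i)
              [0.7, 0.2, 0.1]])
pi = np.array([0.6, 0.4])          # pi[i] = P(initial state is i)

O = [0, 1, 2]                      # an observation sequence of length T = 3
N, T = len(pi), len(O)

# P(O | lambda) = sum over all N^T state sequences X of P(O | X, lambda) * P(X | lambda)
likelihood = 0.0
for X in itertools.product(range(N), repeat=T):
    p_X = pi[X[0]] * np.prod([A[X[t - 1], X[t]] for t in range(1, T)])  # P(X | lambda)
    p_O_given_X = np.prod([B[X[t], O[t]] for t in range(T)])            # P(O | X, lambda)
    likelihood += p_O_given_X * p_X

print(likelihood)                  # P(O | lambda) for the toy model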

My question is: why do I get the likelihood of the observed sequence
by summing over all possible state sequences? Can someone please
explain it differently?
 
SamTaylor said:
My question is: why do I get the likelihood of the observed sequence
by summing over all possible state sequences? Can someone please
explain it differently?
explain it differently?

You could start by thinking of a simpler situation. Think of the Venn diagram approach to probability spaces. You have the whole space of outcomes ("points") S, which is a big circle. Inside S, there is an event (a set of points) defined by a smaller circle A. Create another set of events that partitions S into the subsets s_1, s_2, \dots, s_N. (You can imagine drawing an irregular grid on S and letting s_1, s_2, \dots be the "squares" of the grid.)

P(A) = \sum_{i=1}^N P(A \cap s_i) = \sum_{i=1}^N P(A \mid s_i) \, P(s_i)

The inconvenient thing about reading applied probability is understanding what spaces of outcomes are being discussed. Authors often don't make this clear. Usually several different spaces of outcomes are involved. Conditional probabilities in a given article refer to events in a different probability space than unconditional probabilities. In the above example, the probability P(A \mid s_i) is the probability of a set of outcomes, and this set contains the same outcomes ("points") as the set A \cap s_i, but we are considering this set of outcomes to be in a different probability space. The "whole space" for P(A \mid s_i) is the set s_i instead of S.
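To see the identity in action, here is a tiny Python check on a finite outcome space (all numbers invented): a uniform measure on ten points, a three-set partition, and an event A.

# Uniform probability measure on a ten-point outcome space S.
S = set(range(10))
partition = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8, 9}]   # s_1, s_2, s_3 partition S
A = {1, 4, 5, 9}                                   # some event A

def P(E):
    # Probability of an event E under the uniform measure on S.
    return len(E) / len(S)

direct = P(A)                                                      # P(A)
via_intersections = sum(P(A & s) for s in partition)               # sum of P(A ∩ s_i)
via_conditioning = sum(P(A & s) / P(s) * P(s) for s in partition)  # sum of P(A | s_i) P(s_i)
print(direct, via_intersections, via_conditioning)                 # all print 0.4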

My guess about the article you linked is that you should think of the whole space of outcomes as all possible sequences of information that tell both the sequence of states of the Markov process and the sequence of observations. So a "point" in the whole space is a sequence (i.e. a vector) with both kinds of information. When you draw a circle A that represents a particular sequence of observations, it contains more than one point, because it contains all the vectors that agree with the sequence of observations and have otherwise arbitrary sequences of states of the Markov process.
 
That's how I thought about it as well. I tried to think about it like
the law of total probability. When you look at the attached picture you can
see the events S_1, \dots, S_n which compose the universal set.
Like you said, one can define a smaller set A which comprises
outcomes in some or all of the events S_1 through S_n.
If the events S_i are mutually exclusive and don't have outcomes
in common, then the sets S_i \cap A have none in common either, and you get:
Pr(S_1 \cap A) + Pr(S_2 \cap A) + \cdots + Pr(S_n \cap A) = Pr(A)

I tried to connect this with the formula for the likelihood. I hoped that
this would explain why adding up over all X results in P(O \mid \lambda).
But as you mentioned above, one single event here represents a sequence
rather than a single point. In the example with total probability, all events
compose the universal set, which results in 1.

Since Pr(S_1) + Pr(S_2) + \cdots + Pr(S_n) = 1.

So in return it makes sense that Pr(S) gets eliminated.

But I can't associate this with the formula P(O \mid \lambda) = \sum_X P(O \mid X, \lambda) \, P(X \mid \lambda),
since there are multiple dependencies.
 

Attachments

  • venn_diagram.jpg
SamTaylor said:
But I can't associate this with the formula P(O \mid \lambda) = \sum_X P(O \mid X, \lambda) \, P(X \mid \lambda),
since there are multiple dependencies.

I don't understand what you mean by multiple dependencies. Since all the terms in the expression have the "\mid \lambda" in them, we can say that we are in the probability space defined by the subset \lambda and, with that understanding, leave out the \mid \lambda from the formula.

As I see the space, a point (an "outcome") in the space looks like (o_1,o_2,o_3,...,o_T, x_1,x_2,...,x_T) where the o_i give a sequence of observations and the x_i give a sequence of states.

Asking for the probability of an event O asks for the probability of the set of outcomes which have the form (o_1, o_2, \dots, o_T, *, *, \dots), where the o_i are the particular observations that define the event O and the '*' are any sequence of states. So it seems natural to me that you sum over all possible sequences of states.
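For what it's worth, here is a sketch of the forward (alpha-pass) recursion on the same made-up toy model as the brute-force sketch above; the linked text also covers this recursion. It performs exactly this "sum over all state sequences", but by pushing the sum inside the products it needs O(N^2 T) work instead of O(N^T).

import numpy as np

# Same made-up toy HMM as in the brute-force sketch above.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.7, 0.2, 0.1]])
pi = np.array([0.6, 0.4])
O = [0, 1, 2]

# alpha[i] at time t is P(O_0, ..., O_t, state i at time t | lambda).
alpha = pi * B[:, O[0]]                # initialization: pi_i * b_i(O_0)
for t in range(1, len(O)):
    alpha = (alpha @ A) * B[:, O[t]]   # sum over the previous state, then emit O_t

print(alpha.sum())                     # P(O | lambda); matches the brute-force sum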
 
Asking for the probability of an event O asks for the probability of the set of outcomes which have the form (o_1, o_2, \dots, o_T, *, *, \dots), where the o_i are the particular observations that define the event O and the '*' are any sequence of states. So it seems natural to me that you sum over all possible sequences of states.

Ok, that way it's pretty clear to me. I don't know why it didn't come to my mind that way right away. However, the \lambda is still a bit confusing to me in the derivation. Looking at the quote above, one could have written down the formula right away without the derivation.
 