HMM with continuous observation - PDFs to probabilities

rynlee
So I am working with a Hidden Markov Model with continuous observation, and something has been bothering me that I am hoping someone might be able to address.

Going from a discrete-observation HMM to a continuous-observation HMM is actually quite straightforward (see, for example, Rabiner's 1989 tutorial on HMMs). You just change b_i(k), the probability of observing symbol k in state i, to the PDF of a distribution (typically Gaussian), b_i(Ot), centered on some mean value.
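For concreteness, here is a minimal sketch in Python of what I mean by that substitution (my own illustration with made-up parameter values, not taken from Rabiner): the discrete emission lookup b_i(k) is replaced by evaluating a Gaussian density at the observed value Ot.

Code:
import numpy as np

# Discrete-observation HMM: B_discrete[i, k] is the probability of emitting
# symbol k in state i (rows sum to 1; the numbers are made up).
B_discrete = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.3, 0.6]])

def b_discrete(i, k):
    return B_discrete[i, k]          # a dimensionless probability

# Continuous-observation HMM: b_i(Ot) is a Gaussian *density* evaluated at Ot.
mu = np.array([0.0, 3.0])            # per-state means (illustrative)
sigma = np.array([1.0, 0.5])         # per-state standard deviations (illustrative)

def b_continuous(i, o_t):
    z = (o_t - mu[i]) / sigma[i]
    return np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma[i])   # units of 1/sigma_i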

Here's the thing: if you do that, then the forward and backward coefficients become products of PDFs instead of products of probabilities, which gives them increasingly higher-order density units. That is quite disconcerting. For example, if you use the Baum-Welch formalism to re-estimate the parameters defining the HMM by training against some data set, you end up setting π_i, the probability of being in state i at the first timestep, equal to a PDF (well, a product of two PDFs, again see Rabiner), which doesn't make any sense dimensionally...

Looking at several papers in the literature, however, that seems to be what people are doing; nobody comments on the shift from probabilities to probability densities. So I'm hoping someone can explain why this situation is OK, or what I am missing.

Thank you!
 
rynlee said:
... if you do that, then the forward and backward coefficients become products of PDFs, instead of products of probabilities, which gives them increasingly higher-order density units...
That doesn't quite sound right, can you give an example?
 
Sure. Suppose, for the sake of argument, that your Gaussian mixture model has only a single mixture component per state:

N(mu_i, sigma_i, Ot) = 1/(sqrt(2*pi)*sigma_i) * exp(-(Ot-mu_i)^2/(2*sigma_i^2) )

and
b_i(Ot) = N(mu_i, sigma_i, Ot)

for each state 1<= i <= N

Because of the leading term in the Gaussian, b_i(Ot) has units of 1/sigma, as we would expect for a probability density.

So when you calculate alpha inductively,

alpha_j(t) = sum[alpha_i(t-1) * a_ij, i=1:N] * b_j(Ot)

each successive alpha has units of (1/sigma)^t, as opposed to alpha being a unitless probability.

Now, the Baum-Welch algorithm shouldn't care whether alpha and beta carry units or not, since it only uses relative quantities, but it remains highly disconcerting, as the interpretation of the parameters breaks down.
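To make the recursion concrete, here is a minimal sketch in Python of the forward pass with Gaussian emissions (the function and variable names are my own; the per-step renormalization is the standard scaling from Rabiner's tutorial). Without scaling, alpha[t] picks up another factor of 1/sigma at every step; with scaling, each alpha[t] is renormalized to sum to 1, so only the relative values that the Baum-Welch ratios use are kept.

Code:
import numpy as np

def forward_gaussian(obs, pi, A, mu, sigma, scale=True):
    # Forward pass for an HMM with one Gaussian emission density per state.
    # obs: (T,) observations; pi: (N,) initial distribution; A: (N, N) transitions;
    # mu, sigma: (N,) per-state Gaussian parameters.
    T, N = len(obs), len(pi)
    alpha = np.zeros((T, N))
    log_likelihood = 0.0

    def b(o_t):
        # Vector of emission densities b_i(o_t); each entry has units of 1/sigma_i.
        z = (o_t - mu) / sigma
        return np.exp(-0.5 * z**2) / (np.sqrt(2.0 * np.pi) * sigma)

    alpha[0] = pi * b(obs[0])
    if scale:
        c = alpha[0].sum()                 # renormalize so alpha[0] sums to 1
        log_likelihood += np.log(c)
        alpha[0] /= c
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * b(obs[t])   # sum_i alpha_i(t-1) a_ij, times b_j(Ot)
        if scale:
            c = alpha[t].sum()
            log_likelihood += np.log(c)
            alpha[t] /= c
    # log_likelihood is the log of the *density* of the observation sequence,
    # not of a probability, which is exactly the unit issue discussed above.
    return alpha, log_likelihood

# Toy usage with two states:
pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
mu = np.array([0.0, 3.0])
sigma = np.array([1.0, 0.5])
alpha, ll = forward_gaussian(np.array([0.1, 2.8, 3.1]), pi, A, mu, sigma)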
 