MHB Are components in a Gaussian mixture independent?

ConfusedCat
Hello all,

I have used the Expectation-Maximization (EM) algorithm to approximate a probability density function (pdf) with a mixture of Gaussians.

I need to square the pdf of the mixture (it is a weighted sum of normal pdfs with non-identical parameters). Rather than square a sum of Gaussians directly, I thought that if the constituent normal pdfs of the mixture were independent, they could be combined into a single Gaussian, which I could then square.
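For what it's worth, the square of a mixture pdf can be expanded term by term without any independence assumption, because the product of two Gaussian pdfs is itself an unnormalized Gaussian pdf. A minimal numeric sketch (the weights, means, and standard deviations below are made-up placeholders, not the fitted parameters from the post):

```python
import numpy as np

def npdf(x, mu, sigma):
    """Normal pdf N(x; mu, sigma^2)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Hypothetical mixture fitted by EM: weights w_i, means mu_i, std devs s_i.
w  = np.array([0.3, 0.5, 0.2])
mu = np.array([-1.0, 0.5, 2.0])
s  = np.array([0.8, 1.2, 0.5])

x = np.linspace(-6.0, 8.0, 2001)

# Direct pointwise square of the mixture pdf.
mix = sum(wi * npdf(x, mi, si) for wi, mi, si in zip(w, mu, s))
direct = mix ** 2

# Expansion: (sum_i w_i phi_i)^2 = sum_{i,j} w_i w_j phi_i phi_j, where
# phi_i(x) phi_j(x) = S_ij * N(x; mu_ij, s_ij^2) with
#   s_ij^2 = s_i^2 s_j^2 / (s_i^2 + s_j^2)
#   mu_ij  = (mu_i s_j^2 + mu_j s_i^2) / (s_i^2 + s_j^2)
#   S_ij   = N(mu_i; mu_j, s_i^2 + s_j^2)   (a constant, independent of x)
expanded = np.zeros_like(x)
for i in range(len(w)):
    for j in range(len(w)):
        v = s[i] ** 2 + s[j] ** 2
        s_ij = np.sqrt(s[i] ** 2 * s[j] ** 2 / v)
        mu_ij = (mu[i] * s[j] ** 2 + mu[j] * s[i] ** 2) / v
        S_ij = npdf(mu[i], mu[j], np.sqrt(v))
        expanded += w[i] * w[j] * S_ij * npdf(x, mu_ij, s_ij)

print(np.max(np.abs(direct - expanded)))  # agrees to numerical precision
```

So the squared mixture is again a weighted sum of Gaussian-shaped terms (with cross-term weights w_i w_j S_ij), no single-Gaussian reduction needed.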

I don't know how to prove (or disprove) the independence of the components. For a joint distribution f(x,y), independence holds if f(x,y) = f(x)f(y). Here, I only have a series of real values for each of the normal components of the mixture. How do I find the joint distribution?
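On the empirical side, one crude diagnostic is to bin the two samples, estimate the joint distribution with a 2-D histogram, and compare it against the product of the binned marginals, since independence means f(x,y) = f(x)f(y). A rough sketch with synthetic data (the samples here are stand-ins for the component values, not real fitted output):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical samples: y_indep is independent of x,
# y_dep is deterministically tied to x (plus tiny noise).
x = rng.normal(size=n)
y_indep = rng.normal(size=n)
y_dep = x + 0.01 * rng.normal(size=n)

def max_joint_deviation(a, b, bins=10):
    """Max |P(joint bin) - P(a bin) * P(b bin)| over a 2-D histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    joint /= joint.sum()            # empirical joint probabilities
    pa = joint.sum(axis=1)          # marginal of a
    pb = joint.sum(axis=0)          # marginal of b
    return np.max(np.abs(joint - np.outer(pa, pb)))

print(max_joint_deviation(x, y_indep))  # small: consistent with independence
print(max_joint_deviation(x, y_dep))    # large: clear dependence
```

This only flags gross dependence; for an actual decision you would want a formal test (e.g. a chi-square test on the binned counts, or one of the approaches for continuous variables mentioned in the reply below).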

Any thoughts would be welcome.

Cheers
 

Hi ConfusedCat! Welcome to MHB! ;)

We would need a test for independence of two continuous variables.
For instance, this link gives a couple of possible approaches.
 
I like Serena said:
Hi ConfusedCat! Welcome to MHB! ;)

We would need to test for independence of 2 continuous variables.
For instance this link gives a couple of possible approaches.

Thank you for that link - it does look very useful.
 