How to derive the multivariate normal distribution

jone
If the covariance matrix \mathbf{\Sigma} of the multivariate normal distribution is invertible, one can derive the density function:

f(x_1,\dots,x_n) = f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\mathbf{\Sigma})}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})\right)

So, how do I derive the above?
 
Start with a normal distribution where all the variables are independent and then do a change of variables.
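The suggested route can be checked numerically. A minimal sketch (the matrix A, the mean mu, and the sample count are my own choices, not from the thread): draw i.i.d. standard normal vectors Y, transform them by X = AY + mu, and verify that the sample covariance of X approaches \Sigma = AA^T.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.0], [1.0, 3.0]])   # any invertible matrix
mu = np.array([1.0, -2.0])
Sigma = A @ A.T                           # covariance induced by X = A Y + mu

Y = rng.standard_normal((100_000, 2))     # rows are i.i.d. standard normal vectors
X = Y @ A.T + mu                          # apply x_i = A y_i + mu row by row

print(np.cov(X, rowvar=False))            # should be close to Sigma
print(X.mean(axis=0))                     # should be close to mu
```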
 
I was on that track before: make use of the CDF and then differentiate to get the PDF. This is how far I get. Let \mathbf{Y} be a vector of i.i.d. standard Gaussians. Then use the transformation

\mathbf{X} = \mathbf{A}\mathbf{Y} + \mathbf{\mu}

P(\mathbf{X} < \mathbf{x}) = P(\mathbf{A}\mathbf{Y} + \mathbf{\mu} < \mathbf{x}) = P(\mathbf{Y} < \mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))
Now I differentiate this to get the PDF

f_{\mathbf{X}}(\mathbf{x}) = f_{\mathbf{Y}}(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))\det(\mathbf{A}^{-1}) = f_{\mathbf{Y}}(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))\frac{1}{\det(\mathbf{A})} = \frac{1}{(2\pi)^{n/2}\det(\mathbf{A})}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^{T}(\mathbf{A}\mathbf{A}^T)^{-1}(\mathbf{x}-\mathbf{\mu})\right)

So \det(\mathbf{A}) pops out in the denominator instead of \det(\mathbf{A}\mathbf{A}^T) as it should be. Something is wrong in my differentiation here, but I can't figure it out.
 
jone said:
So \det(\mathbf{A}) pops out in the denominator instead of \det(\mathbf{A}\mathbf{A}^T) as it should be. Something is wrong in my differentiation here, but I can't figure it out.

Why do you think the denominator should be \det(\mathbf{A}\mathbf{A}^T)?

That would give you something analogous to the variance, while the denominator of the Gaussian density plays the role of the standard deviation.

You want:

\sqrt{|\mathbf{AA^T}|}=\sqrt{|\mathbf{A}|}\sqrt{|\mathbf{A^T}|}=|\mathbf{A}|
 
Ok, so now it works out. \mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T is the covariance matrix. Thank you for your help!
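The resolved derivation can be verified numerically. A small sketch (the matrix A, mean mu, and test point x are my own illustrative choices): evaluate the closed-form density with \Sigma = \mathbf{A}\mathbf{A}^T and compare it against the change-of-variables route f_{\mathbf{Y}}(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))/|\det\mathbf{A}|; the two should agree, and \sqrt{\det\mathbf{\Sigma}} = |\det\mathbf{A}|.

```python
import numpy as np

A = np.array([[2.0, 0.0], [1.0, 3.0]])
mu = np.array([1.0, -2.0])
Sigma = A @ A.T
n = len(mu)
x = np.array([0.5, 1.5])                  # arbitrary test point

# Closed-form multivariate normal density with covariance Sigma
d = x - mu
f_closed = np.exp(-0.5 * d @ np.linalg.inv(Sigma) @ d) / (
    (2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(Sigma)))

# Change-of-variables route: standard normal density at y = A^{-1}(x - mu),
# scaled by the Jacobian factor 1/|det A|
y = np.linalg.solve(A, d)
f_Y = np.exp(-0.5 * y @ y) / (2 * np.pi) ** (n / 2)
f_change = f_Y / abs(np.linalg.det(A))

print(f_closed, f_change)                 # the two values agree
```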
 
jone said:
Ok, so now it works out. \mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T is the covariance matrix. Thank you for your help!

Exactly! And you're welcome :)
 