How to derive the multivariate normal distribution

AI Thread Summary
The discussion focuses on deriving the density function of the multivariate normal distribution from its covariance matrix. It starts from a standard normal vector and applies a change of variables \(\mathbf{X} = \mathbf{A}\mathbf{Y} + \mathbf{\mu}\), where the transformation matrix \(\mathbf{A}\) is related to the covariance matrix by \(\mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T\). The original poster expected \(\det(\mathbf{AA^T})\) in the denominator rather than \(\det(\mathbf{A})\); the clarification is that the density contains \(\sqrt{\det(\mathbf{\Sigma})} = \sqrt{\det(\mathbf{AA^T})} = |\det(\mathbf{A})|\), so the two agree. The conversation concludes with the issue resolved.
If the covariance matrix \mathbf{\Sigma} of the multivariate normal distribution is invertible, one can derive the density function:

f(x_1,\dots,x_n) = f(\mathbf{x}) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\mathbf{\Sigma})}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^T\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})\right)

So, how do I derive the above?
 
Start with a normal distribution where all the variables are independent and then do a change of variables.
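
For reference, the density of a standard normal vector \mathbf{Y} = (Y_1,\dots,Y_n) with independent components is the product of the one-dimensional densities:

f_{\mathbf{Y}}(\mathbf{y}) = \prod_{i=1}^{n}\frac{1}{\sqrt{2\pi}}e^{-y_i^2/2} = \frac{1}{(2\pi)^{n/2}}\exp\left(-\frac{1}{2}\mathbf{y}^T\mathbf{y}\right)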
 
I was on that track before: make use of the CDF and then differentiate to get back the PDF. This is how far I get: let \mathbf{Y} be a vector of i.i.d. standard Gaussians. Then use the transformation

\mathbf{X} = \mathbf{A}\mathbf{Y} + \mathbf{\mu}

P(\mathbf{X} < \mathbf{x}) = P(\mathbf{A}\mathbf{Y} + \mathbf{\mu} < \mathbf{x}) = P(\mathbf{Y} < \mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))
Now I differentiate this to get the PDF

f_{\mathbf{X}}(\mathbf{x}) = f_{\mathbf{Y}}(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))\det(\mathbf{A}^{-1}) = f_{\mathbf{Y}}(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu}))\frac{1}{\det(\mathbf{A})} = \frac{1}{(2\pi)^{n/2}\det(\mathbf{A})}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^{T}(\mathbf{AA^T})^{-1}(\mathbf{x}-\mathbf{\mu})\right)

So \det(\mathbf{A}) pops out in the denominator, instead of \det(\mathbf{AA^T}) as it should be. Something is wrong in my differentiation here, but I can't figure out what.
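
For reference, the general change-of-variables formula being applied in the step above (with the absolute value of the Jacobian determinant) is, for invertible \mathbf{A},

f_{\mathbf{X}}(\mathbf{x}) = \frac{1}{|\det(\mathbf{A})|}f_{\mathbf{Y}}\left(\mathbf{A}^{-1}(\mathbf{x}-\mathbf{\mu})\right)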
 
jone said:
So \det(\mathbf{A}) pops out in the denominator, instead of \det(\mathbf{AA^T}) as it should be. Something is wrong in my differentiation here, but I can't figure out what.

Why do you think the denominator should be \det(\mathbf{AA^T})?

That would give you something analogous to the variance, while the denominator of the Gaussian density contains the standard deviation.
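
In one dimension this is the familiar fact that the normalizing constant contains \sigma = \sqrt{\sigma^2} rather than \sigma^2:

f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)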

You want:

\sqrt{|\mathbf{AA^T}|}=\sqrt{|\mathbf{A}|}\sqrt{|\mathbf{A^T}|}=|\mathbf{A}|
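
Combined with \mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T (noted in the next post), this means the \det(\mathbf{A}) obtained above is exactly \sqrt{\det(\mathbf{\Sigma})}, so the derived density agrees with the target formula:

\frac{1}{(2\pi)^{n/2}|\det(\mathbf{A})|}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^{T}(\mathbf{AA^T})^{-1}(\mathbf{x}-\mathbf{\mu})\right) = \frac{1}{(2\pi)^{n/2}\sqrt{\det(\mathbf{\Sigma})}}\exp\left(-\frac{1}{2}(\mathbf{x}-\mathbf{\mu})^{T}\mathbf{\Sigma}^{-1}(\mathbf{x}-\mathbf{\mu})\right)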
 
Ok, so now it works out. \mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T is the covariance matrix. Thank you for your help!
 
jone said:
Ok, so now it works out. \mathbf{\Sigma} = \mathbf{A}\mathbf{A}^T is the covariance matrix. Thank you for your help!

Exactly! And you're welcome :)
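
As a quick numerical sanity check of the resolved derivation, here is a minimal sketch using numpy and scipy; the matrix \mathbf{A}, mean \mathbf{\mu}, and evaluation point below are arbitrary example values, not taken from the thread:

import numpy as np
from scipy.stats import multivariate_normal

# Arbitrary example transformation A and mean mu (any invertible A works)
A = np.array([[2.0, 0.5],
              [0.3, 1.5]])
mu = np.array([1.0, -2.0])
Sigma = A @ A.T  # covariance of X = A Y + mu when Y is standard normal

x = np.array([0.7, -1.2])  # arbitrary evaluation point
n = len(mu)

# Density as derived in the thread: 1 / ((2*pi)^(n/2) * |det A|) * exp(-(1/2) * quadratic form)
quad = (x - mu) @ np.linalg.inv(Sigma) @ (x - mu)
f_derived = np.exp(-0.5 * quad) / ((2 * np.pi) ** (n / 2) * abs(np.linalg.det(A)))

# Reference density parameterized directly by Sigma
f_reference = multivariate_normal(mean=mu, cov=Sigma).pdf(x)

print(f_derived, f_reference)  # the two values agree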
 