MHB Proof that covariance matrix is positive semidefinite

AlanTuring
Hello,

I am having a hard time understanding the proof that a covariance matrix is positive semidefinite.

I found a number of different proofs on the web, but they are all far too complicated and/or not detailed enough for me.

[Attachment: covmatrix.jpg]

Such as in the last answer at this link:
probability - What is the proof that covariance matrices are always semi-definite? - Mathematics Stack Exchange

(In the last answer in particular, I don't understand how they pass from the expression
$$E\{u^{\mathsf{T}}(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^{\mathsf{T}}u\}$$
to
$$E\{s^2\}.$$
Where does this $s^2$ "magically" appear from?)

;)
 

Machupicchu said:
... Where does this $s^2$ "magically" appear from?
Hi Machupicchu, and welcome to MHB!

Three basic facts about vectors and matrices: (1) if $w$ is a column vector then $w^{\mathsf{T}}w \geqslant0$; (2) for matrices $A,B$ with product $AB$, the transpose of the product is the product of the transposes in reverse order, in other words $(AB)^{\mathsf{T}} = B^{\mathsf{T}}A^{\mathsf{T}}$; (3) taking the transpose twice gets you back where you started from, $(A^{\mathsf{T}})^{\mathsf{T}} = A$.
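These three facts are all you need below. Just as a sanity check (not part of the argument), here is a minimal Python/NumPy sketch that verifies each fact numerically on arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fact (1): w^T w >= 0 for any column vector w.
w = rng.normal(size=(5, 1))
print((w.T @ w).item() >= 0)              # True

# Fact (2): (AB)^T = B^T A^T.
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2))
print(np.allclose((A @ B).T, B.T @ A.T))  # True

# Fact (3): transposing twice gives back the original matrix.
print(np.allclose(A.T.T, A))              # True
```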

You want to show that $v^{\mathsf{T}}Cv\geqslant0$, where $$C = \frac1{n-1}\sum_{i=1}^n(\mathbf{x}_i - \mathbf{\mu}) (\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}.$$ Since $$v^{\mathsf{T}}Cv = \frac1{n-1}\sum_{i=1}^nv^{\mathsf{T}}(\mathbf{x}_i - \mathbf{\mu}) (\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}v,$$ it will be enough to show that $v^{\mathsf{T}}(\mathbf{x}_i - \mathbf{\mu}) (\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}v \geqslant 0$ for each $i$. But by those basic facts above, $v^{\mathsf{T}}(\mathbf{x}_i - \mathbf{\mu}) (\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}v = \bigl((\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}v\bigr)^{\mathsf{T}} \bigl((\mathbf{x}_i - \mathbf{\mu})^{\mathsf{T}}v\bigr) \geqslant0$. That also answers where the $s^2$ comes from in the Stack Exchange answer: $(\mathbf{x}-\bar{\mathbf{x}})^{\mathsf{T}}u$ is a scalar, call it $s$, so $u^{\mathsf{T}}(\mathbf{x}-\bar{\mathbf{x}})(\mathbf{x}-\bar{\mathbf{x}})^{\mathsf{T}}u = s^{\mathsf{T}}s = s^2 \geqslant 0$.
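If it helps to see this numerically as well, here is a small sketch (again Python/NumPy, with made-up random data, so an illustration rather than a proof) that builds a sample covariance matrix and checks that the quadratic form $v^{\mathsf{T}}Cv$ is never negative:

```python
import numpy as np

rng = np.random.default_rng(42)

# n observations of a d-dimensional random vector (placeholder data).
n, d = 100, 4
X = rng.normal(size=(n, d))

# Sample covariance matrix C = (1/(n-1)) * sum_i (x_i - mu)(x_i - mu)^T.
mu = X.mean(axis=0)
Xc = X - mu
C = (Xc.T @ Xc) / (n - 1)

# v^T C v should be >= 0 for every v (up to floating-point rounding).
for _ in range(1000):
    v = rng.normal(size=d)
    assert v @ C @ v >= -1e-12

# Equivalently: all eigenvalues of the symmetric matrix C are >= 0.
print(np.linalg.eigvalsh(C).min() >= -1e-12)  # True
```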
 