weetabixharry
I'm trying to understand what makes a valid covariance matrix valid. Wikipedia tells me all covariance matrices are positive semidefinite (and, in fact, positive definite unless one signal is an exact linear combination of the others). I don't have a very good idea of what this means in practice.
For example, let's assume I have a real-valued covariance matrix of the form:
\mathbf{R}=\left[
\begin{array}{ccc}
1 & 0.7 & x \\
0.7 & 1 & -0.5 \\
x & -0.5 & 1
\end{array}
\right]
where x is some real number. What range of values can x take?
I can sort of see that x is constrained by the other entries. For example, it can't have magnitude greater than 1, because the diagonals are all 1 (the 2×2 principal minor 1 - x² has to be non-negative). However, it is also constrained by the off-diagonals.
Of course, for my simple example, I can solve for the values of x that give a zero eigenvalue (equivalently, det(R) = 0) to get the range of valid values (roughly -0.968465844 to 0.268465844)... but this hasn't really given me any insight in a general sense.
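In case it helps, this is roughly how I computed those boundary values (a quick Python/NumPy sketch; the helper R(x) just builds the matrix above, and the quadratic comes from expanding det(R) along the first row):

```python
import numpy as np

# Expanding det(R) along the first row gives a quadratic in x:
#   det(R) = 0.26 - 0.7*x - x**2
# The other leading principal minors (1 and 1 - 0.7**2 = 0.51) are
# positive, so by Sylvester's criterion R is positive definite exactly
# when det(R) > 0, i.e. for x strictly between the roots of
#   x**2 + 0.7*x - 0.26 = 0.
print(np.sort(np.roots([1.0, 0.7, -0.26])))
# -> [-0.96846584  0.26846584]

# Sanity check: the smallest eigenvalue of R changes sign at the roots.
def R(x):
    return np.array([[1.0,  0.7,  x  ],
                     [0.7,  1.0, -0.5],
                     [x,   -0.5,  1.0]])

for x in (-0.97, -0.96, 0.26, 0.27):
    print(x, np.linalg.eigvalsh(R(x)).min())
```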
I feel like there might be a neat geometrical interpretation that would make this obvious.
Can anyone offer any insight?