What Are the Constraints of a Valid Covariance Matrix?


Discussion Overview

The discussion revolves around the constraints that define a valid covariance matrix, particularly focusing on the properties of positive semidefiniteness and the implications of specific values within a given covariance matrix. Participants explore theoretical aspects, mathematical reasoning, and potential interpretations related to covariance matrices derived from random variables and sample data.

Discussion Character

  • Exploratory
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant notes that all covariance matrices are positive semidefinite and discusses the implications of this property, particularly in relation to a specific example matrix with a variable x.
  • Another participant suggests using the expectation of squared combinations of random variables to derive bounds on the variable x, though they express uncertainty about the completeness of this approach.
  • A participant raises a concern about the ambiguity in the terminology of covariance matrices, distinguishing between those for random variables and those computed from samples, and questions the validity of treating them as equivalent.
  • Further elaboration on the bounds for x is provided, with a participant listing inequalities derived from the expectation of squared combinations, indicating a need for deeper understanding.
  • Another participant seeks clarification on the differences between covariance matrices for random variables and those computed from samples, specifically questioning the necessity of being Hermitian positive semidefinite.

Areas of Agreement / Disagreement

Participants express differing views on the interpretation and implications of covariance matrices, with no consensus reached on the distinctions between types of covariance matrices or the completeness of proposed methods for deriving constraints on x.

Contextual Notes

Participants mention various mathematical approaches and properties related to covariance matrices, but there are unresolved assumptions regarding the definitions and implications of these properties, particularly in relation to sample versus population covariance matrices.

weetabixharry
I'm trying to understand what makes a valid covariance matrix valid. Wikipedia tells me all covariance matrices are positive semidefinite (and, in fact, positive definite unless one signal is an exact linear combination of the others). I don't have a very good idea of what this means in practice.

For example, let's assume I have a real-valued covariance matrix of the form:

\mathbf{R}=\left[\begin{array}{ccc} 1 & 0.7 & x \\ 0.7 & 1 & -0.5 \\ x & -0.5 & 1 \end{array}\right]
where x is some real number. What range of values can x take?

I can sort of see that x is constrained by the other entries. For instance, it can't have magnitude greater than 1, because the diagonal entries (the variances) are all 1. However, it is also constrained by the off-diagonal entries.

Of course, for my simple example, I can solve for the values of x at which an eigenvalue becomes zero (equivalently, where det(R) = 0), which gives the range of valid values (roughly -0.968465844 to 0.268465844)... but this hasn't really given me any insight in a general sense.
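That eigenvalue check can be sketched numerically. Here is a minimal numpy sketch (the matrix and the determinant polynomial det(R) = 0.26 - 0.7x - x² come straight from expanding the example above; `is_valid_cov` is a name I'm making up for illustration):

```python
import numpy as np

def is_valid_cov(x, tol=1e-12):
    """True if R(x) is a valid covariance matrix, i.e. positive semidefinite."""
    R = np.array([[1.0, 0.7,  x],
                  [0.7, 1.0, -0.5],
                  [x,  -0.5,  1.0]])
    # eigvalsh is the right call for a symmetric matrix: real eigenvalues, ascending order.
    return np.linalg.eigvalsh(R)[0] >= -tol

# The boundary of the valid range is where det(R) = 0.26 - 0.7*x - x**2 hits zero,
# i.e. the roots of x^2 + 0.7x - 0.26:
lo, hi = sorted(np.roots([1.0, 0.7, -0.26]))
print(lo, hi)  # approximately -0.9685 and 0.2685
```

Sweeping `is_valid_cov` over x reproduces exactly the interval between those two roots.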

I feel like there might be a neat geometrical interpretation that would make this obvious.

Can anyone offer any insight?
 
I don't know if this is a complete answer. However, assume you have three random variables X, Y, Z, each with variance 1, cov(X,Y) = 0.7, cov(X,Z) = x, and cov(Y,Z) = -0.5. For simplicity, assume all means are 0.
Consider E((X±Y±Z)²) ≥ 0 for all possible sign combinations. This will give you four bounds on x. This may be the best possible, although I am not sure.
 
weetabixharry said:
I'm trying to understand what makes a valid covariance matrix valid.

The terminology "covariance matrix" is ambiguous. There is a covariance matrix for random variables and there is a covariance matrix computed from samples of random variables. I don't think it works to claim that the sample covariance matrix is just the covariance matrix of a population consisting of the sample because the usual way to compute the sample covariance involves using denominators of n-1 instead of n.
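The n-1 versus n point is easy to illustrate with numpy's `np.cov`, whose `ddof` argument selects the denominator (a sketch with made-up random data; note that since the two conventions differ only by the positive scalar n/(n-1), positive semidefiniteness holds for either):

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.standard_normal((100, 3))  # 100 observations of 3 variables
n = samples.shape[0]

S_unbiased = np.cov(samples, rowvar=False, ddof=1)  # n-1 denominator (usual sample covariance)
S_mle      = np.cov(samples, rowvar=False, ddof=0)  # n denominator

# The two conventions differ only by the positive scalar n/(n-1)...
print(np.allclose(S_unbiased * (n - 1) / n, S_mle))  # True
# ...so both are symmetric positive semidefinite:
print(np.linalg.eigvalsh(S_mle)[0] >= 0)             # True
```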
 
mathman said:
Consider E((X±Y±Z)²) ≥ 0 for all possible sign combinations. This will give you four bounds on x.
I'll have to give this some thought. It's not obvious to me how this works.
 
Stephen Tashi said:
There is a covariance matrix for random variables and there is a covariance matrix computed from samples of random variables.
What's the difference between these two? Do either/both have to be Hermitian positive semidefinite? That's the sort I'm interested in.
 
weetabixharry said:
I'll have to give this some thought. It's not obvious to me how this works.
1.5 + cov(X,Y) + cov(X,Z) + cov(Y,Z) ≥ 0
1.5 - cov(X,Y) + cov(X,Z) - cov(Y,Z) ≥ 0
1.5 + cov(X,Y) - cov(X,Z) - cov(Y,Z) ≥ 0
1.5 - cov(X,Y) - cov(X,Z) + cov(Y,Z) ≥ 0
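Plugging in cov(X,Y) = 0.7 and cov(Y,Z) = -0.5 from the example matrix, these four inequalities reduce to explicit bounds on x = cov(X,Z). A small sketch (each inequality is E((X ± Y ± Z)²)/2 ≥ 0, with X's sign fixed at +1 since flipping all three signs changes nothing):

```python
# Turn the four inequalities into bounds on x = cov(X,Z),
# using cov(X,Y) = 0.7 and cov(Y,Z) = -0.5 from the example matrix.
c_xy, c_yz = 0.7, -0.5

lower_bounds, upper_bounds = [], []
for s_y in (+1, -1):        # sign chosen for Y
    for s_z in (+1, -1):    # sign chosen for Z
        # E((X + s_y*Y + s_z*Z)^2)/2 = 1.5 + s_y*c_xy + s_z*x + s_y*s_z*c_yz >= 0
        const = 1.5 + s_y * c_xy + s_y * s_z * c_yz
        if s_z > 0:
            lower_bounds.append(-const)  # inequality reads x >= -const
        else:
            upper_bounds.append(const)   # inequality reads x <= const

print(max(lower_bounds), min(upper_bounds))  # roughly -1.3 and 0.3
```

This gives roughly -1.3 ≤ x ≤ 0.3, which is strictly looser than the exact range -0.968... to 0.268... from the eigenvalue calculation in the first post. So these four conditions are necessary but not sufficient, which is consistent with mathman's caveat that the approach may not be complete.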
 
