How Can I Derive a Covariance Matrix from Known Eigenvalues and an Eigenvector?

Summary:
To derive a covariance matrix from known eigenvalues and an eigenvector of a 2D Gaussian distribution, one can use the relationship between the eigenvectors and the covariance matrix. Given the direction of the first eigenvector and the variances along both eigenvectors, the covariance matrix can be written as P = VDV^T, where V is the matrix whose columns are the eigenvectors and D is a diagonal matrix containing the eigenvalues. Because a covariance matrix is symmetric, its eigenvectors are orthogonal, so the second eigenvector is determined by the first; this supplies the constraint that appears to be missing when only two quadratic-form equations are written for the three unknowns. The resulting matrix is symmetric and positive semi-definite by construction.
Lindley
I have a Gaussian distribution. I know the variances in the directions of the first and second eigenvectors (the directions of maximum and minimum radius of the corresponding ellipse at any fixed Mahalanobis distance), and the direction of the first eigenvector.

Is there a simple closed-form equation to derive the corresponding covariance matrix? It seems like there should be, since if H0 is the first eigenvector and H1 is the second, then

H0*P*H0^T = var0
H1*P*H1^T = var1

However, this is only two equations and there are 3 unknowns (Pxx, Pxy, and Pyy). Any help?
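
(A note on the count: symmetry of P supplies a third equation. Because the eigenvectors of a symmetric matrix are orthogonal, H1 is determined by H0, and, assuming H0 and H1 are unit row vectors,

H_0 P H_1^T = 0

since P H_1^T = var1 \, H_1^T and H_0 H_1^T = 0. Together with the two variance equations this gives three linear equations in the three unknowns Pxx, Pxy, Pyy, so the system is exactly determined.)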
 
I don't 100% understand your formulation... however, I read it as follows:
- the problem is a 2D Gaussian distribution

You want to find the 2x2 covariance matrix,
P = \begin{pmatrix} P_{xx} & P_{xy} \\ P_{yx} & P_{yy} \end{pmatrix}

which is a symmetric (P_{xy} = P_{yx}) positive semi-definite matrix, and you know the eigenvalues and an eigenvector of the matrix.

In fact, since it is symmetric, the eigenvectors will be orthogonal, so you know the directions of both eigenvectors (call them v_1, v_2).

Why not use the eigenvectors to create a diagonalising matrix? Call it V; then

V = [v_1, v_2]

V^T P V = D

where D is a diagonal matrix with the eigenvalues on the diagonal, and then

P = V D V^T
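
Expanding this out gives a closed form. A sketch, assuming unit-length eigenvectors with v_1 at angle \theta (so v_2 is v_1 rotated by 90°):

P = V D V^T = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} var_0 & 0 \\ 0 & var_1 \end{pmatrix} \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} var_0\cos^2\theta + var_1\sin^2\theta & (var_0 - var_1)\sin\theta\cos\theta \\ (var_0 - var_1)\sin\theta\cos\theta & var_0\sin^2\theta + var_1\cos^2\theta \end{pmatrix}

which pins down all three unknowns P_{xx}, P_{xy}, P_{yy}.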
 
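A quick numerical check of this construction, as a minimal NumPy sketch (theta, var0, var1 are made-up example values, not from the thread):

import numpy as np

# Example inputs (hypothetical values): direction of the first eigenvector
# and the variances along the first and second eigenvectors.
theta = 0.6
var0, var1 = 4.0, 1.0

# Unit eigenvectors; v2 is v1 rotated by 90 degrees, since the eigenvectors
# of a symmetric matrix are orthogonal.
v1 = np.array([np.cos(theta), np.sin(theta)])
v2 = np.array([-np.sin(theta), np.cos(theta)])

V = np.column_stack([v1, v2])  # columns are the eigenvectors
D = np.diag([var0, var1])      # eigenvalues on the diagonal

P = V @ D @ V.T                # the covariance matrix

# Sanity checks: P reproduces the stated variances along each eigenvector
# and has the given eigenvalues.
assert np.isclose(v1 @ P @ v1, var0)
assert np.isclose(v2 @ P @ v2, var1)
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), np.sort([var0, var1]))
print(P)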
