Eigenvalues of Gramian Matrix

In summary, the Gramian matrix of a single column vector, ##G = \mathbf x \mathbf x^T##, has exactly one non-zero eigenvalue, equal to the square of the vector's norm. This is easily proved by substituting a non-zero multiple of the vector as the eigenvector and solving for the eigenvalue, which is much simpler than finding the roots of the characteristic polynomial.
  • #1
If ##\mathbf x## is a column vector, then the matrix ##G = \mathbf x \mathbf x^T## is a Gramian matrix.
When I tried calculating the matrix ##G## and its eigenvalues for the cases ##\mathbf x = [x_1\ x_2]^T## and ##\mathbf x = [x_1\ x_2\ x_3]^T##
by actually working out the algebra, it turned out (if I didn't make any mistakes) that the eigenvalues are all zero except one, which is equal to ##x_1^2 + x_2^2## or ##x_1^2 + x_2^2 + x_3^2##, depending on the case.
Is it a standard result that a Gramian matrix has a single non-zero eigenvalue? If yes, is there a simpler proof?

Thank you.
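The observation can also be checked numerically. A minimal sketch with NumPy, using a hypothetical sample vector (not taken from the thread):

```python
# Numeric check: G = x x^T has a single non-zero eigenvalue equal
# to ||x||^2. The sample vector below is a hypothetical example.
import numpy as np

x = np.array([3.0, 4.0])                  # ||x||^2 = 9 + 16 = 25
G = np.outer(x, x)                        # the Gramian G = x x^T
eigvals = np.sort(np.linalg.eigvalsh(G))  # eigvalsh: G is symmetric

print(eigvals)  # one eigenvalue ~0, the other ~25 = x1^2 + x2^2
```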
 
  • #2
Yes that is a standard and pretty simple fact. How did you compute the eigenvalues? Did you compute characteristic polynomials and then find their roots? If so, then yes, there is a much simpler proof.
 
  • #3
Hawkeye18 said:
Yes that is a standard and pretty simple fact. How did you compute the eigenvalues? Did you compute characteristic polynomials and then find their roots? If so, then yes, there is a much simpler proof.
Yes, I solved for the roots of the characteristic equation, and it was a nasty business even for a 3x3 matrix. :D Would love to know the simpler method.
 
  • #4
Let ##\lambda \ne 0## be an eigenvalue, and ##\mathbf v\ne\mathbf 0## be the corresponding eigenvector. That means ##G\mathbf v =\lambda\mathbf v##. But $$G \mathbf v = \mathbf x (\mathbf x^T \mathbf v)$$ and ##(\mathbf x^T \mathbf v)## is a scalar. Therefore $$(\mathbf x^T \mathbf v) \mathbf x = \lambda \mathbf v,$$ so **the eigenvector ##\mathbf v## must be a non-zero multiple of ##\mathbf x##** (the "bold" part referred to below). Substituting ##a\mathbf x## (where ##a\ne0## is a scalar) in the above equation (the "underlined" part below), we get that indeed ##a\mathbf x## is an eigenvector corresponding to ##\lambda= \mathbf x^T\mathbf x##.
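As a concrete instance of this argument, take ##\mathbf x = (x_1, x_2)^T##; then

```latex
G = \mathbf x \mathbf x^T =
\begin{pmatrix} x_1^2 & x_1 x_2 \\ x_1 x_2 & x_2^2 \end{pmatrix},
\qquad
G \mathbf x =
\begin{pmatrix} x_1^3 + x_1 x_2^2 \\ x_1^2 x_2 + x_2^3 \end{pmatrix}
= (x_1^2 + x_2^2)\,\mathbf x,
```

so ##\mathbf x## itself is an eigenvector with eigenvalue ##\lambda = x_1^2 + x_2^2 = \mathbf x^T\mathbf x##, matching the hand computation in post #1.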
 
  • #5
Hawkeye18 said:
Let ##\lambda \ne 0## be an eigenvalue, and ##\mathbf v\ne\mathbf 0## be the corresponding eigenvector. That means ##G\mathbf v =\lambda\mathbf v##. But $$G \mathbf v = \mathbf x (\mathbf x^T \mathbf v)$$ and ##(\mathbf x^T \mathbf v)## is a scalar. Therefore $$(\mathbf x^T \mathbf v) \mathbf x = \lambda \mathbf v,$$ so **the eigenvector ##\mathbf v## must be a non-zero multiple of ##\mathbf x##**. Substituting ##a\mathbf x## (where ##a\ne0## is a scalar) in the above equation, we get that indeed ##a\mathbf x## is an eigenvector corresponding to ##\lambda= \mathbf x^T\mathbf x##.
I cannot see why the bold part should be true.
But doing the underlined part, i.e. substituting ##\mathbf v = a\mathbf x##, I can see that it gives a solution. Is it from this that you inferred that the bold part should hold?

Thank you for your help.
 
  • #6
I_am_learning said:
I cannot see why the bold part should be true.
But doing the underlined part, i.e. substituting ##\mathbf v = a\mathbf x##, I can see that it gives a solution. Is it from this that you inferred that the bold part should hold?

Thank you for your help.
No, the "bold" part is true independently of the "underlined" part, they both prove different parts of the statement.

For the "bold" part: we know that ##(\mathbf x^T\mathbf v)## is a number; let us call it ##\beta##. Then the equation can be rewritten as ##\beta\mathbf x = \lambda\mathbf v##, and solving it for ##\mathbf v## gives us ##\mathbf v = (\beta/\lambda) \mathbf x##.

Now, the constant ##\beta## depends on the unknown ##\mathbf v##, and we do not know what ##\lambda## is, so we cannot say from here that ##\mathbf v = (\beta/\lambda) \mathbf x= a\mathbf x## is an eigenvector. But what we can say is that if ##\mathbf v## is an eigenvector corresponding to a non-zero eigenvalue ##\lambda##, then it must be a non-zero multiple of ##\mathbf x##.

Substituting ##\mathbf v=a\mathbf x## we then get that it is indeed an eigenvector and find ##\lambda##. So the "underlined" part gives you that ##\mathbf v=a\mathbf x## is an eigenvector, together with the corresponding eigenvalue. The "bold" part shows that there are no other eigenvectors corresponding to a non-zero eigenvalue.
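Both halves of the argument can be seen numerically as well. A sketch with NumPy, using a hypothetical sample 3-vector:

```python
# Sketch of both parts of the argument for a sample 3-vector
# (hypothetical values, not from the thread):
#  - "underlined" part: the non-zero eigenvalue equals x^T x;
#  - "bold" part: every eigenvector with eigenvalue 0 is
#    orthogonal to x, hence not a multiple of x.
import numpy as np

x = np.array([1.0, 2.0, 2.0])         # x^T x = 1 + 4 + 4 = 9
G = np.outer(x, x)
eigvals, eigvecs = np.linalg.eigh(G)  # symmetric G -> eigh

for lam, v in zip(eigvals, eigvecs.T):
    if abs(lam) < 1e-9:
        # eigenvalue 0: eigenvector lies in the plane orthogonal to x
        assert abs(x @ v) < 1e-9
    else:
        # the single non-zero eigenvalue equals x^T x
        assert abs(lam - 9.0) < 1e-9
```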
 

1. What is the purpose of calculating eigenvalues of a Gramian matrix?

In control theory, the eigenvalues of a system's controllability and observability Gramians quantify how strongly each state direction is excited by the inputs or reflected in the outputs. They are therefore used to assess controllability and observability, and to guide model reduction and control design.

2. How do you calculate eigenvalues of a Gramian matrix?

The eigenvalues of a Gramian matrix can be calculated by solving the characteristic equation det(G - λI) = 0, where G is the Gramian matrix and λ is an eigenvalue. For the rank-one Gramian ##\mathbf x\mathbf x^T## discussed above, the substitution argument in post #4 is much quicker.
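For the ##2\times 2## case from post #1, the characteristic-polynomial route described here works out to

```latex
\det(G - \lambda I)
= (x_1^2 - \lambda)(x_2^2 - \lambda) - x_1^2 x_2^2
= \lambda^2 - (x_1^2 + x_2^2)\lambda
= \lambda\bigl(\lambda - (x_1^2 + x_2^2)\bigr),
```

with roots ##\lambda = 0## and ##\lambda = x_1^2 + x_2^2##. Already for the ##3\times 3## case the algebra is much messier, which is why the substitution argument is preferable.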

3. What do the eigenvalues of a Gramian matrix represent?

When the Gramian is formed as ##G = A^T A##, its eigenvalues are the squared singular values of ##A##. They describe how the system distributes energy across directions and identify its dominant modes.

4. Can the eigenvalues of a Gramian matrix be negative?

No, the eigenvalues of a Gramian matrix are always non-negative. This is because the Gramian matrix is a positive semi-definite matrix, meaning that all of its eigenvalues are non-negative.
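For the rank-one Gramian discussed in this thread, one line makes the non-negativity explicit: for any vector ##\mathbf v##,

```latex
\mathbf v^T G \mathbf v
= \mathbf v^T \mathbf x \, \mathbf x^T \mathbf v
= (\mathbf x^T \mathbf v)^2 \ge 0,
```

so ##G## is positive semi-definite and every eigenvalue is non-negative.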

5. How do the eigenvalues of a Gramian matrix relate to the system's state variables?

The eigenvalues of the controllability (or observability) Gramian measure how strongly each direction in the state space is driven by the inputs (or visible in the outputs). State directions with near-zero Gramian eigenvalues are nearly uncontrollable or unobservable, which is why these eigenvalues guide the truncation of state variables in model reduction.
