Clues for the eigenstuff proof

  1. May 15, 2005 #1

    quasar987


    In a physics book that I have, the author states a very important result without proving it. He does provide clues for the proof though! It goes like this...

    Consider the eigenvalue equation

    [tex](M^{-1}K-\omega ^2 I)\vec{v} = \vec{0}[/tex]

    where M is an n x n diagonal matrix whose elements are all positive (first clue) and K is a symmetric (second clue) n x n matrix. Then the matrix [itex]M^{-1}K[/itex] has exactly n linearly independent eigenvectors.

    Edit: The elements of the K matrix are all positive too!
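
    As a quick numerical sanity check of the claim (just a NumPy sketch with made-up numbers; the all-positive K is only to match the statement above):

[code]
import numpy as np

rng = np.random.default_rng(0)
n = 5

# M: diagonal with positive entries; K: symmetric with positive entries
M = np.diag(rng.uniform(0.5, 2.0, size=n))
B = rng.uniform(0.1, 1.0, size=(n, n))
K = B + B.T

w2, V = np.linalg.eig(np.linalg.inv(M) @ K)   # columns of V are eigenvectors
print(np.linalg.matrix_rank(V))               # prints n = 5: they are independent
[/code]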
     
    Last edited: May 15, 2005
  3. May 16, 2005 #2
    Well, it depends on over which field you ask this...

    For a counterexample: [tex] M=I_2\quad K=\left(\begin{array}{cc} 1 & 2\\2&1\end{array}\right)[/tex]

    Then you get the equation [tex] \omega^2=1\pm2[/tex], i.e. [itex]\omega^2 = 3[/itex] or [itex]\omega^2 = -1[/itex], so one of the [itex]\omega[/itex]'s is imaginary...

    In fact, a symmetric matrix is always orthogonally diagonalizable over R... but its eigenvalues can be negative (which is incompatible with your ansatz above, where [itex]\omega[/itex] is a real frequency).
     
  4. May 16, 2005 #3

    quasar987


    Suppose the eigenvalues [itex]\omega_i^2[/itex] are all positive. Could you please show me how this implies that [itex]M^{-1}K[/itex] is diagonalizable?
     
  5. May 16, 2005 #4
    It was a long proof... I don't remember it all, but I can give the most important steps:

    Let A be a symmetric n x n real matrix. Its characteristic polynomial has n complex roots, hence A can be triangularized over C (Theorem a, by induction on the eigenvalues). Since A is a symmetric matrix over R, it is a Hermitian matrix over C, so all the roots of A are real (on the diagonal of the triangular form, conj(d) = d, hence imag(d) = 0). Since all the roots are real, A has n real eigenvalues, hence it is triangularizable over R (same as Theorem a). Over R it is in fact unitarily (orthogonally) triangularizable into B (Schur's lemma), and since A is symmetric, so is B; but a triangular matrix which is symmetric is diagonal. Hence A is diagonalizable over R.

    Something like that.
     
  6. May 16, 2005 #5

    quasar987


    But wait, the product [itex]M^{-1}K[/itex] is not a symmetric matrix (unless the elements of [itex]M^{-1}[/itex] are all equal).
     
  7. May 16, 2005 #6
    quasar987:

    Not all elements of K have to be positive. The off-diagonal elements can be negative. However, for any displacement vector x,

    [tex]V(x) = \frac{1}{2} \ x^T K x \geq 0[/tex]

    because it represents the increase in potential energy from the minimum of the potential, which is at x = 0 by definition.
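
    For instance, the stiffness matrix of a two-mass, three-spring chain with unit springs has negative off-diagonal entries but a positive quadratic form. A quick NumPy sketch (toy numbers, just an illustration):

[code]
import numpy as np

k = 1.0
K = np.array([[2*k, -k],
              [-k, 2*k]])              # off-diagonal entries are negative

x = np.array([0.3, -0.7])              # some displacement from equilibrium
print(0.5 * x @ K @ x)                 # 0.79 > 0
print(np.linalg.eigvalsh(K))           # [1. 3.]: all eigenvalues positive
[/code]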
     
  8. May 16, 2005 #7

    quasar987


    Oops, that's true.
     
  9. May 18, 2005 #8
    Here's what you do. Let the diagonal elements of M be [itex]m_1, ... , m_n[/itex]. Construct the diagonal matrix P whose elements are [itex]p_i = 1 / \sqrt{m_i}[/itex]. This is why the elements of M have to be positive, so we can take their square roots. Notice that P, being diagonal, is symmetric: [itex]P = P^T[/itex]. P is also invertible--just take the reciprocal of the elements.

    We have our eigenvalue equation (rearranged a bit):

    [tex]K \vec{\xi} = \omega^2 M \vec{\xi}[/tex]

    now stick [itex]PP^{-1} = I[/itex] in both sides, and multiply both sides on the left by [itex]P = P^T[/itex]:

    [tex]P^{T} K P P^{-1} \vec{\xi} = \omega^2 P^{T} M P P^{-1} \vec{\xi}[/tex]

    Then we note that [itex]P^{T} K P = K^\prime[/itex] is symmetric since K is symmetric. And [itex]P^{T} M P = I[/itex], the identity matrix. If we set [itex]P^{-1}\vec{\xi} = \vec{e}[/itex], then we are left with the eigenvalue equation

    [tex]K' \ \vec{e} = \omega^2 \ \vec{e}[/tex]

    Since K' is symmetric, it has n orthonormal eigenvectors:

    [tex]K' \ \vec{e}_k = \omega_k^2 \ \vec{e}_k[/tex]

    [tex] (\vec{e}_j, \vec{e}_k) \ = \ \delta_{jk}[/tex]

    Now we can show that the n eigenvectors [itex] \vec{\xi}_j = P \vec{e}_j[/itex] of the original equation are independent. Suppose that

    [tex] c_1 \vec{\xi}_1 + c_2 \vec{\xi}_2 + \ ... \ + c_n \vec{\xi}_n \ = \ 0[/tex]

    Multiply through by [itex]P^{-1}[/itex] and use [itex]P^{-1} \vec{\xi}_j = \vec{e}_j[/itex]:

    [tex] c_1 \vec{e}_1 + c_2 \vec{e}_2 + \ ... \ + c_n \vec{e}_n \ = \ 0[/tex]

    The [itex]\vec{e}_i[/itex]'s are orthogonal, so they must be independent. So all the [itex]c_i[/itex]'s must be zero. That means the original eigenvectors are independent too.
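
    A rough NumPy sketch of the same construction, with random toy M and K (the variable names are made up, not from any book):

[code]
import numpy as np

rng = np.random.default_rng(1)
n = 4

m = rng.uniform(0.5, 3.0, size=n)          # positive "masses" m_1 ... m_n
M = np.diag(m)
A = rng.standard_normal((n, n))
K = A + A.T                                # a symmetric K (signs arbitrary)

P = np.diag(1.0 / np.sqrt(m))              # P = M^(-1/2); diagonal, so P = P^T
Kp = P @ K @ P                             # K' = P^T K P is symmetric

w2, E = np.linalg.eigh(Kp)                 # columns of E: orthonormal e_k
Xi = P @ E                                 # xi_k = P e_k

# check K xi_k = w_k^2 M xi_k, and that the xi_k are linearly independent
print(np.allclose(K @ Xi, (M @ Xi) * w2))  # True
print(np.linalg.matrix_rank(Xi))           # n = 4
[/code]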
     
    Last edited: May 18, 2005
  10. May 21, 2005 #9

    quasar987


    You're my hero! Most definitely.

    Did you figure all of this out or have you seen it in a book? If the latter, which one?
     
  11. May 21, 2005 #10
    It is a special case of a more general theorem that says if you have two real symmetric NxN matrices A and B, and B is positive definite, then the "eigenvalue equation"

    [tex]A \vec{\xi} = \lambda B \vec{\xi}[/tex]

    has N independent eigenvector solutions. In your problem, K played the role of A, and M played the role of B. A symmetric matrix B is called positive definite if all its eigenvalues are strictly positive. This condition on B is needed so that it can be transformed to the identity matrix with the following kind of transformation

    [tex]P^T B P = I[/tex]

    Note that in general P is not symmetric, as it happened to be for your case.

    I *think* the theorem is proved in most books on linear algebra, but I'm not sure. Sometimes they state the result in a different way. I got it from "Lectures on Linear Algebra" by Gelfand. He said it something like: Two (symmetric) quadratic forms A(x, x) and B(x, x), with B positive definite, can be brought simultaneously into diagonal form (a sum of squares) by a transformation of coordinates. The connection between quadratic forms and matrices is this: the quadratic form A(x,x) is associated with a symmetric matrix A by the relation

    [tex]A(x, x) = \sum_{i,j} a_{ij}x_i x_j = x^T A x[/tex]

    where x is a column vector.

    The theorem is also stated in a couple different ways in "A Survey of Modern Algebra" by Garrett Birkhoff and Saunders MacLane. They even mention that it is important in the theory of small vibrations.
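
    For what it's worth, SciPy's scipy.linalg.eigh handles exactly this generalized symmetric problem when B is positive definite. A small sketch with random toy matrices:

[code]
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
N = 4

A = rng.standard_normal((N, N)); A = A + A.T                   # real symmetric
C = rng.standard_normal((N, N)); B = C @ C.T + N * np.eye(N)   # symmetric positive definite

lam, Xi = eigh(A, B)                          # solves A xi = lam B xi

print(np.allclose(A @ Xi, (B @ Xi) * lam))    # True: N eigenvector solutions
print(np.allclose(Xi.T @ B @ Xi, np.eye(N)))  # True: B-orthonormal, hence independent
[/code]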
     
  12. May 21, 2005 #11

    quasar987


    Thanks for reminding me I have to take a look at what those quadratic forms are about sometime this summer. :tongue2:
     
  13. May 22, 2005 #12

    quasar987


    Ok, so now we know that the set of normal modes is an orthogonal basis of [itex]\mathbb{R}^n[/itex]. The main idea, however, was to prove that any solution to the set of equations of motion

    [tex]M\frac{d^2}{dt^2}\vec{X} = -K\vec{X}[/tex]

    could be written as a superposition of the normal modes:

    [tex]\vec{X} = \sum_{i=1}^n \vec{\xi}_i \cos(\omega_i t)[/tex]

    But [itex]\sum_{i=1}^n \vec{\xi}_i \cos(\omega_i t)[/itex] can't be the general solution because it contains only n arbitrary constants (the overall amplitudes of the [itex]\vec{\xi}_i[/itex]'s), while the general solution would require 2n. grr.
     
  14. May 22, 2005 #13
    Each mode will have a different phase in general. So the time evolution will be more like

    [tex]\vec{X} = \sum_{i=1}^n \vec{\xi}_i \cos(\omega_i t + \delta_i)[/tex]

    where the [itex]\delta_i[/itex]'s are phases. I'm assuming that you've lumped the amplitude of the i-th mode into the eigenvector [itex]\vec{\xi}_i[/itex]. Also, while the normal modes do form a basis, the [itex]\vec{\xi}_i[/itex]'s aren't necessarily orthogonal.
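
    A small numerical illustration of the counting, using a made-up two-mass system (masses 1 and 2, unit springs): the n amplitudes plus the n phases are the 2n arbitrary constants, and the superposition satisfies the equation of motion exactly:

[code]
import numpy as np

# toy system: masses 1 and 2, three unit springs
M = np.diag([1.0, 2.0])
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

w2, Xi = np.linalg.eig(np.linalg.inv(M) @ K)   # mode frequencies^2 and mode shapes
w = np.sqrt(w2)

amp   = np.array([0.8, 0.3])       # n = 2 amplitudes ...
delta = np.array([0.0, 1.2])       # ... plus n = 2 phases: 2n = 4 arbitrary constants

def X(t):                          # X(t) = sum_i amp_i xi_i cos(w_i t + delta_i)
    return Xi @ (amp * np.cos(w * t + delta))

t = 0.7                            # exact second derivative of X(t):
Xdd = Xi @ (-(w**2) * amp * np.cos(w * t + delta))
print(np.allclose(M @ Xdd, -K @ X(t)))   # True: M X'' = -K X holds
[/code]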
     
  15. May 23, 2005 #14

    quasar987


    My linear algebra is too rusty to even attempt a general proof, but it wouldn't surprise me if they were orthogonal, because the [itex]\hat{e}_i[/itex]'s of Post #8 are, and between the [itex]\hat{e}_i[/itex]'s and the [itex]\vec{\xi}_i[/itex]'s there is only a linear transformation. It seems natural that a linear transformation preserves orthogonality, since it preserves linear independence.

    Also, we did a lot of problems on normal modes in my wave course that had as a subquestion "verify that the normal modes are orthogonal".. and they always were.


    As for,

    [tex]\vec{X} = \sum_{i=1}^n \vec{\xi}_i \cos(\omega_i t + \delta_i)[/tex]

    thanks for pointing that out. I discarded this option too quickly.
     
  16. May 23, 2005 #15
    A linear transformation does not in general preserve orthogonality. Even if the transformation is nonsingular, and thus preserves independence, it usually transforms a set of orthogonal vectors into a set of mutually inclined (non-orthogonal) vectors.

    Try it for a two degree-of-freedom system that is not completely symmetrical. For instance, two masses and three springs, where the two outer springs are attached to immovable walls. Let all the springs have the same stiffness, say k = 1, but let one mass be twice the other. I think that if you find the two eigenvectors of this system, their dot product will not be zero. But they will be independent.
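
    A quick NumPy check of that suggestion (k = 1 springs, masses 1 and 2; just a sketch). The last line also checks a general fact for these problems, not mentioned above: the modes are orthogonal with respect to the mass matrix M.

[code]
import numpy as np

# the suggested system: k = 1 springs, masses 1 and 2
M = np.diag([1.0, 2.0])
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

w2, Xi = np.linalg.eig(np.linalg.inv(M) @ K)
v1, v2 = Xi[:, 0], Xi[:, 1]

print(v1 @ v2)                    # nonzero: NOT orthogonal in the ordinary dot product
print(np.linalg.matrix_rank(Xi))  # 2: still linearly independent
print(v1 @ M @ v2)                # ~0: orthogonal in the M-weighted inner product
[/code]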
     