How do inverse matrices affect eigenvalues?

The discussion focuses on proving that the eigenvalues of an invertible matrix are the reciprocals of the original eigenvalues. A participant attempts various methods, including solving for eigenvalues and using unitary transformations, but finds them overly complex. Another contributor suggests a simpler approach by applying the definition of eigenvalues and eigenvectors directly to the inverse matrix equation. They clarify that while the last step in the initial attempt is incorrect, the overall reasoning is sound. This exchange highlights the importance of understanding eigenvalue relationships in linear algebra.
Old Guy

Homework Statement


Given a matrix with eigenvalues $\lambda_{i}$, show that if the inverse of the matrix exists, its eigenvalues are $\frac{1}{\lambda_{i}}$.


Homework Equations





The Attempt at a Solution

This shouldn't be so hard. I've come up with a few trivial examples, but I would like a general proof in 3 (or more) dimensions. I've tried solving for the eigenvalues, getting the eigenvectors, and trying a unitary transformation. I've tried substituting the product of the matrix with its inverse into the characteristic equation and playing around with that before actually calculating the determinant. I've tried other symbolic brute-force attempts, but they get very complicated very quickly. I've researched a bunch of determinant identities to no avail. I feel there must be some key relation that I'm just missing. Any help would be appreciated.
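Alongside the trivial examples mentioned above, here is a minimal numerical sanity check (not a proof), assuming NumPy is available; it compares the eigenvalues of a random invertible 3×3 matrix with the reciprocals of the eigenvalues of its inverse:

```python
import numpy as np

# Build a random 3x3 matrix; a random real matrix is invertible with
# probability 1, but check the determinant to be safe.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
assert abs(np.linalg.det(A)) > 1e-12, "A is numerically singular"

eigs_A = np.linalg.eigvals(A)                    # eigenvalues lambda_i of A
eigs_Ainv = np.linalg.eigvals(np.linalg.inv(A))  # eigenvalues of A^{-1}

# Both printed arrays should contain the same numbers 1/lambda_i,
# up to ordering and floating-point error.
print(np.sort_complex(1.0 / eigs_A))
print(np.sort_complex(eigs_Ainv))
```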
 
Just write down the definition of an eigenvalue and eigenvector, and then apply the inverse matrix to that equation. Nothing complicated involving determinants and/or unitary transformations is necessary.
 
Is this the idea?
$\begin{array}{l}
\Lambda \left| \psi \right\rangle = \lambda \left| \psi \right\rangle \\
\Lambda^{-1} \left( \Lambda \left| \psi \right\rangle \right) = \Lambda^{-1} \left( \lambda \left| \psi \right\rangle \right) \\
\left( \Lambda^{-1} \Lambda \right) \left| \psi \right\rangle = \left( \Lambda^{-1} \lambda \right) \left| \psi \right\rangle \\
I \left| \psi \right\rangle = \lambda \Lambda^{-1} \left| \psi \right\rangle \\
\left| \psi \right\rangle = \lambda \Lambda^{-1} \left| \psi \right\rangle \\
I = \lambda \Lambda^{-1}
\end{array}$
 
Fine except for the last line: you can't drop the state, because it isn't a generic state; it's an eigenstate of $\Lambda$. You can, however, multiply both sides of the next-to-last line by $1/\lambda$.
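For completeness, one way to finish along the lines suggested above (not the only phrasing) is:

$\begin{array}{l}
\left| \psi \right\rangle = \lambda \Lambda^{-1} \left| \psi \right\rangle \\
\frac{1}{\lambda} \left| \psi \right\rangle = \Lambda^{-1} \left| \psi \right\rangle
\end{array}$

so $\left| \psi \right\rangle$ is also an eigenvector of $\Lambda^{-1}$, with eigenvalue $1/\lambda$ (and $\lambda \neq 0$, since $\Lambda$ is invertible).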
 
Except for the very last line, you're there!
 
Yes, I see. Thanks very much!
 
