Eigenvalues are invariant but eigenvectors are not

In summary: What does it mean for a matrix to have a basis-independent representation? It means that the eigenvalues of the matrix are not affected by the choice of basis used to represent the vector space in which the matrix operates, while the components of its eigenvectors do change with that choice.
  • #1
squenshl
Hi there.

How would I show that the eigenvalues of a matrix are an invariant, that is, that they depend only on the linear function the matrix represents and not on the choice of basis vectors. Show also that the eigenvectors of a matrix are not an invariant.

Explain why the dependence of the eigenvectors on the particular basis is exactly what we would expect, and argue that in some sense they are indeed invariant.

Do I use the fact that if two matrices ##A## and ##B## are similar, then there exists an invertible matrix ##P## such that ##A = P^{-1}BP##, and hence that their determinants are the same?

Someone please help.
 
  • #2
I moved the thread to the homework section.

squenshl said:
Do I use the fact that if two matrices ##A## and ##B## are similar, then there exists an invertible matrix ##P## such that ##A = P^{-1}BP##, and hence that their determinants are the same?
The same determinant is not sufficient (different eigenvalues can lead to the same determinant), but you can extend that argument.
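One way to extend it (a sketch, using the similarity relation ##A = P^{-1}BP## from post #1): the whole characteristic polynomial, not just the determinant, is preserved by similarity,

$$\det(A-\lambda I) = \det(P^{-1}BP - \lambda P^{-1}P) = \det\left(P^{-1}(B-\lambda I)P\right) = \det(P^{-1})\det(B-\lambda I)\det(P) = \det(B-\lambda I),$$

since ##\det(P^{-1})\det(P) = 1##. Two matrices with the same characteristic polynomial have the same eigenvalues.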
 
  • #3
Hint: [itex]\lambda[/itex] is an eigenvalue of [itex]A[/itex] if and only if [itex]\det(A - \lambda I) = 0[/itex].
 
  • #4
Yes. So do I then show that ##B## has the same determinant, and hence the same eigenvalues (as ##B## is similar to ##A##)?
 
  • #5
Are you saying det(A)=det(B) implies A and B have the same eigenvalues? If so, then no. pasmith is saying to look at det(A-λI), which isn't the same as det(A).
 
  • #6
No, I'm saying ##\text{Det}(A-\lambda I) = \text{Det}(B-\lambda I)##, meaning that ##A## and ##B## have the same eigenvalues.

I think I'm just not understanding the hint sorry.

Cheers.
 
  • #7
squenshl said:
No, I'm saying ##\text{Det}(A-\lambda I) = \text{Det}(B-\lambda I)##, meaning that ##A## and ##B## have the same eigenvalues.
That is not true.
You can show that both A and B have the eigenvalue λ if both sides are equal to zero. If they are non-zero, you just know that λ is not an eigenvalue of either matrix.
 
  • #8
Oops, right, of course. I meant to state that ##\text{Det}(A-\lambda I) = 0 = \text{Det}(B-\lambda I)##.

So this is sufficient to show the eigenvalues are invariant, I'm assuming?

How do we show eigenvectors are not?
 
  • #9
The eigenvectors do not change. Their representations change. The representation is an n-tuple of numbers.

The question is better expressed as 'How do I show that the eigenvalues of a linear transformation do not depend on the basis used, but the representation of each eigenvector in a basis depends on the basis'.

When it's put in that more pedantic way, it is obvious why the representations of the eigenvector change.

The reason why neither the eigenvalues nor the eigenvectors change is that the eigenvector-eigenvalue pairs ##(\lambda_i,\vec{v}_i)## are the solutions of the equation

$$L\vec{v}=\lambda\vec{v}$$

where ##L:V\to V## is a given linear transformation and ##V## is an ##n##-dimensional vector space. Since the equation is well-defined and does not specify a basis, its solutions cannot depend on the choice of basis. The eigenvalues and eigenvectors depend only on ##L##, not on ##L## plus a basis.

Since the ##\lambda## are scalars and so not in the space ##V##, they do not need to be represented in a basis, hence there is no basis representation to vary by basis. On the other hand, the eigenvectors ##\vec{v}## are elements of the vector space ##V##, and hence have a representation in each basis. To show the representation varies by basis, just write the equation for converting the representation of a vector in basis A to a representation in basis B. That involves multiplication by a matrix. Unless that matrix is the identity, the representation will change.
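Concretely (writing ##M## for the change-of-basis matrix from basis A to basis B, purely as an illustration):

$$[\vec{v}]_B = M\,[\vec{v}]_A$$

Unless ##M## is the identity, or ##[\vec{v}]_A## happens to satisfy ##M[\vec{v}]_A = [\vec{v}]_A##, the n-tuple representing ##\vec{v}## changes even though the vector itself does not.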
 
  • #10
andrewkirk said:
The eigenvectors do not change. Their representations change. The representation is an n-tuple of numbers.
Careful. I was about to post the same comment, but squenshl is not asking about eigenvectors of a linear transformation. We actually change the matrix, so the eigenvectors of this matrix change.
 
  • #11
Right!

Nothing to do with linear transformations.
 
  • #12
No need to involve determinants. Just take the eigenvalue equation ##Ax=\lambda x## and multiply it by something that enables you to rewrite the left-hand side so that it involves ##B##. (This strategy answers both questions.)
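A sketch of how that can go, assuming the similarity relation ##A = P^{-1}BP## from post #1: multiply ##Ax = \lambda x## on the left by ##P##, so that

$$PAx = PP^{-1}BPx = B(Px) = \lambda(Px).$$

Hence ##\lambda## is also an eigenvalue of ##B##, but with eigenvector ##Px## rather than ##x##: the same scalar, generally a different n-tuple.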
 
  • #13
mfb said:
We actually change the matrix, so the eigenvectors of this matrix change.
I agree.

But that assumes we regard the n-tuple of numbers as the vector itself, rather than as the representation of a vector in a basis, and the matrix as an ##n \times n## array of numbers that defines a linear transformation, rather than as the representation of a linear transformation in a basis.

The problem then is the suggestion in the question that the eigenvectors 'do not depend on the choice of basis'. That suggestion is meaningless, because no basis is used to determine the n-tuple of numbers or the matrix. They are basis-independent representations. If we pre- and post-multiply the matrix by ##C^{-1}## and ##C## respectively, and pre-multiply a vector by ##C##, we have not changed the basis in which they are represented. We have changed the matrix and the vector.

Hence it was incorrect for the question to bring bases into it. It would have been correct to instead ask: 'Show that, for any matrix ##A## and invertible matrix ##C##, the matrix ##C^{-1}AC## has the same eigenvalues as ##A## but not necessarily the same eigenvectors.'*

By mentioning bases, the question conflates the notion of an n-tuple of numbers as a vector in vector space ##\mathbb{R}^n## with the notion of an n-tuple as a representation of a vector in ##\mathbb{R}^n## (or in any other n-dimensional vector space over the reals), and the two are entirely different things.

Such confusion would prove particularly unhelpful when one starts to do Hamiltonian mechanics and wishes to distinguish carefully between active and passive transformations.

* As well as removing the confusion, that also points towards an easy solution. One simply has to show that if the pair ##(\lambda,\vec{v})## is a solution of
$$C^{-1}AC\vec{v}=\lambda\vec{v}$$
then there exists a vector ##\vec{u}## such that ##(\lambda,\vec{u})## is a solution of
$$A\vec{u}=\lambda\vec{u}$$
As Fredrik points out, there is no need to use determinants. The proof is two easy steps and of course one finds that ##\vec{u}=C\vec{v}##.
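Spelled out in the notation above, the two steps might read

$$A(C\vec{v}) = C(C^{-1}AC)\vec{v} = C(\lambda\vec{v}) = \lambda(C\vec{v}),$$

so ##\vec{u} = C\vec{v}## indeed satisfies ##A\vec{u} = \lambda\vec{u}##.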
 
  • #15
Yes, that.
 
  • #16
Cheers!
 
  • #17
How do I show that the eigenvectors are not invariant, why the dependence of the eigenvectors on the particular basis is exactly what we would expect, and in what sense they are indeed invariant?
 
  • #18
It sounds like you're asking us to rewrite the solutions that you've found (which are essentially complete already) so that they're 100% ready to hand into your teacher. We provide hints, not complete solutions. You have to show us your work if you want more help.
 
  • #20
To show that something is not invariant, you can give an example.
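For instance, a quick numerical check along these lines (an illustration with an arbitrarily chosen ##A## and ##P##, not taken from the original problem):

import numpy as np

# An arbitrary matrix and an arbitrary invertible matrix P.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# B is similar to A: B = P^{-1} A P.
B = np.linalg.inv(P) @ A @ P

eigvals_A, eigvecs_A = np.linalg.eig(A)
eigvals_B, eigvecs_B = np.linalg.eig(B)

# The eigenvalues agree (up to ordering)...
print(np.sort(eigvals_A))   # [2. 3.]
print(np.sort(eigvals_B))   # [2. 3.]

# ...but the eigenvector columns do not.
print(eigvecs_A)   # columns along (1, 0) and (1, 1)
print(eigvecs_B)   # columns along (1, 0) and (0, 1)

The eigenvalues printed for ##A## and ##B## match, while the eigenvector matrices differ, which is exactly the asymmetry under discussion.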
 

What are eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are mathematical concepts used in linear algebra to describe the behaviour of a linear transformation. An eigenvector is a nonzero vector whose direction is unchanged when the transformation is applied, and its eigenvalue is the scalar factor by which that vector is scaled: ##L\vec{v} = \lambda\vec{v}##.

Why are eigenvalues invariant?

Eigenvalues are invariant because they depend only on the linear transformation itself, not on the coordinate system used to represent it. Similar matrices ##A## and ##P^{-1}AP## have the same characteristic polynomial, so the eigenvalues have the same values no matter how the transformation is represented.

Why are eigenvectors not invariant?

The representations of the eigenvectors are not invariant because the components of a vector are relative to the chosen coordinate system: change the basis, and the same eigenvector is described by a different n-tuple. Eigenvalues, being scalars, need no basis to be expressed.

What is the significance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important in linear algebra because they provide valuable information about the properties of a linear transformation. They can be used to determine the stability of a system, find the principal components of a dataset, and solve systems of differential equations.

Can a matrix have more than one set of eigenvalues and eigenvectors?

Yes. An ##n \times n## matrix can have up to ##n## distinct eigenvalues, and each eigenvalue comes with a whole eigenspace of eigenvectors, since any nonzero scalar multiple of an eigenvector is again an eigenvector with the same eigenvalue. For example, ##\begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}## has the eigenvalue-eigenvector pairs ##(2, \vec{e}_1)## and ##(3, \vec{e}_2)##.
