Showing That the Eigenvalue of a Matrix is 0

  • Thread starter: Drakkith
  • Tags: Eigenvalue, Matrix

Homework Help Overview

The discussion revolves around the properties of a matrix \( A \) given that \( A^2 \) is the zero matrix. Participants are exploring the implications for the eigenvalues of \( A \), specifically whether 0 must be its only eigenvalue.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • Participants discuss the relationship between \( A^2 \) being the zero matrix and the eigenvalues of \( A \). There are attempts to manipulate the eigenvalue equation \( Ax = \lambda x \) and to understand the implications of nilpotent transformations. Some participants express uncertainty about the calculations and the implications of their reasoning.

Discussion Status

The discussion is active, with various interpretations being explored. Some participants have offered insights into the nature of eigenvectors and nilpotent matrices, while others are questioning their understanding of the relationships involved. There is no explicit consensus yet, but several productive lines of reasoning have been initiated.

Contextual Notes

Participants are working under the constraints of a homework problem, which may limit the information they can use or the methods they can apply. The nature of the problem suggests a need for rigorous justification of claims regarding eigenvalues and eigenvectors.

Drakkith
Mentor

Homework Statement


Show that if ##A^2## is the zero matrix, then the only eigenvalue of ##A## is 0.

Homework Equations


##Ax=λx##.

The Attempt at a Solution


For ##A^2## to be the zero matrix it looks like: ##A^2 = AA=A[A_1, A_2, A_3, ...] = [a_{11}a_{11}+a_{12}a_{21}+a_{13}a_{31} + ... = 0, a_{11}a_{12}+a_{12}a_{22}+a_{13}a_{32} + ... = 0] = [0, 0, 0, ...]##
(Rinse and repeat for the next row)

The eigenvalue of a matrix is a scalar ##λ## such that ##Ax=λx##.
So here we have ##AA=λA##

It looks to me like ##A## could be an infinite number of matrices, and that ##AA## would only rarely, if ever, equal ##λA## for any nonzero ##λ##. But I'm not sure how to prove it.
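A quick numerical sanity check (my own illustration, not part of the original post): the simplest nonzero matrix with ##A^2 = 0## is the 2×2 shift matrix, and one can confirm entry by entry that every sum ##\sum_k a_{ik}a_{kj}## vanishes.

```python
# Illustrative example: the 2x2 "shift" matrix squares to the zero matrix.
A = [[0.0, 1.0],
     [0.0, 0.0]]

n = len(A)
# Entry (i, j) of A^2 is the sum over k of a_ik * a_kj.
A2 = [[sum(A[i][k] * A[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

print(A2)  # every entry is 0.0, so A^2 is the zero matrix
```

This also shows that ##A^2 = 0## does not force ##A = 0##: the matrix above is nonzero, yet its square vanishes.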
 
Multiply ##A\vec{x}=\lambda \vec{x}## by ##A##.
 
Drakkith said:
So here we have ##AA=λA##
Sometimes it's better not to abbreviate calculations. We have for eigenvectors ##x## the equation ##Ax=\lambda x##, that is ##A(x) = \lambda \cdot x##. And ##A^2## is a function which transforms ##x \longmapsto A^2(x) \stackrel{(*)}{=} A(A(x))##. You know the result of the LHS of ##(*)##, and also the RHS for eigenvectors ##x## by using the linearity of ##A##.
 
vela said:
Multiply ##A\vec{x}=\lambda \vec{x}## by ##A##.

Don't tell me it's that simple...

fresh_42 said:
Sometimes it's better not to abbreviate calculations.

I haven't done any calculations yet, so I'm not sure what you mean.
 
You wrote ##AA=\lambda A##. With the ##x## it is easier to see, and yes, it is that simple.
 
You can also look at it this way: An eigenvector is a kind of fixed point. Now a nilpotent transformation (##A^n=0##) maps everything sooner or later to zero. That doesn't leave many opportunities for fixed points.
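To illustrate this remark numerically (my own sketch, using an example nilpotent matrix not taken from the thread): repeatedly applying a nilpotent matrix annihilates every vector, so no nonzero vector can survive as a fixed point up to a nonzero scale.

```python
# Illustrative 3x3 nilpotent matrix (A^3 = 0): the upper shift matrix.
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

def apply(M, v):
    """Matrix-vector product: (M v)_i = sum_k M_ik * v_k."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(M))]

v = [1, 2, 3]
for step in range(3):
    v = apply(A, v)  # each application shifts the entries up, filling with 0

print(v)  # [0, 0, 0]: after 3 applications the vector is annihilated
```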
 
fresh_42 said:
You wrote ##AA=\lambda A##. With the ##x## it is easier to see, and yes, it is that simple.

The x isn't there because I apparently didn't understand what I was doing. I thought A became x. :rolleyes:
 
Drakkith said:
The x isn't there because I apparently didn't understand what I was doing. I thought A became x. :rolleyes:
Without the ##x## it is, strictly speaking, wrong, because ##A^2 \neq \lambda A##. Only for eigenvectors can we write ##A^2(x_\lambda) = A(A(x_\lambda))=A(\lambda x_\lambda)=\lambda A(x_\lambda)## and so on. For other vectors it doesn't have to be true. I even indexed the vector ##x_\lambda## with ##\lambda## to indicate that it is a certain vector and that it depends on ##\lambda##. This might be a bit excessive, but it reminds me of what this vector is and thus helps to avoid mistakes. And in handwriting, it is no big deal.

In cases like this, but basically always, it is helpful to first list what is given:
  1. A linear function ##A##
  2. ##A^2=0## which means ##A(A(x))=0## for all ##x##
and then what has to be shown: ##A(x_\lambda) = \lambda \cdot x_\lambda \Longrightarrow \lambda =0##

This often already gives the pathway to a solution, because in order to show the implication ##"\Rightarrow"##, we can assume its left side, i.e. an eigenvector ##x_\lambda## to an eigenvalue ##\lambda##. Now condition #2 can be applied to such an eigenvector, and condition #1 allows us to pull the factor ##\lambda## outside of ##A(\lambda x_\lambda)##.
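Written out in full, with the final step made explicit (my own filling-in, using the thread's notation): ##0 = A^2(x_\lambda) = A(A(x_\lambda)) = A(\lambda x_\lambda) = \lambda A(x_\lambda) = \lambda^2 x_\lambda##. Since an eigenvector ##x_\lambda## is nonzero by definition, ##\lambda^2 = 0## and hence ##\lambda = 0##.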

Of course the thought "eigenvectors are stability vectors, and a transformation that kills all vectors cannot have stability vectors unequal to zero" looks more elegant, but even elegant ideas have to be written down with rigor. So there is nothing wrong with the janitor method:
Prepare your tools. Inspect the task. And only then begin to work.
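The theorem can also be double-checked numerically. A minimal sketch, assuming NumPy is available (the choice of a random strictly upper-triangular matrix is my own; any such matrix satisfies ##A^n = 0##):

```python
import numpy as np

rng = np.random.default_rng(0)

# A strictly upper-triangular n x n matrix is nilpotent: A^n = 0.
n = 4
A = np.triu(rng.normal(size=(n, n)), k=1)

# Confirm nilpotency, then compute the eigenvalues.
assert np.allclose(np.linalg.matrix_power(A, n), 0)
eigenvalues = np.linalg.eigvals(A)

print(eigenvalues)  # all zero up to numerical tolerance, matching the theorem
```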
 
Drakkith said:
Don't tell me it's that simple...
It is that simple.
 
