# Showing That the Eigenvalue of a Matrix is 0

Staff Emeritus

## Homework Statement

Show that if ##A^2## is the zero matrix, then the only eigenvalue of ##A## is 0.

##Ax=λx##.

## The Attempt at a Solution

For ##A^2## to be the zero matrix, every entry of the product must vanish: ##A^2 = AA## has entries ##(A^2)_{ij} = \sum_k a_{ik}a_{kj}##, so ##a_{11}a_{11}+a_{12}a_{21}+a_{13}a_{31}+\dots = 0##, ##a_{11}a_{12}+a_{12}a_{22}+a_{13}a_{32}+\dots = 0##, and so on.
(Rinse and repeat for the next row)
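A minimal sketch (my own illustration, not part of the original post) of the entrywise condition ##(A^2)_{ij} = \sum_k a_{ik}a_{kj} = 0## for one concrete choice of ##A##; the matrix below is just an example, any ##A## with ##A^2=0## works:

```python
def matmul(A, B):
    """Plain-Python matrix product: (AB)_ij = sum_k A_ik * B_kj."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1],
     [0, 0]]          # nonzero matrix, yet A*A is the zero matrix

A2 = matmul(A, A)
print(A2)             # [[0, 0], [0, 0]]
```

Note that ##A## itself is not zero, so "##A^2=0##" really is a condition on the sums of products, not on the individual entries.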

The eigenvalue of a matrix is a scalar ##λ## such that ##Ax=λx##.
So here we have ##AA=λA##

It looks to me like there are infinitely many matrices ##A## could be, and that ##AA## would only rarely, if ever, equal ##λA## for any nonzero ##λ##. But I'm not sure how to prove it.

vela
Staff Emeritus
Homework Helper
Multiply ##A\vec{x}=\lambda \vec{x}## by ##A##.

fresh_42
Mentor
So here we have ##AA=λA##
Sometimes it's better not to abbreviate calculations. For eigenvectors ##x## we have the equation ##Ax=\lambda x##, that is, ##A(x) = \lambda \cdot x##. And ##A^2## is a function, which transforms ##x \longmapsto A^2(x) \stackrel{(*)}{=} A(A(x))##. You know the result of the LHS of ##(*)##, and you can compute the RHS for eigenvectors ##x## by using the linearity of ##A##.

Staff Emeritus
Multiply ##A\vec{x}=\lambda \vec{x}## by ##A##.

Don't tell me it's that simple...

Sometimes it's better not to abbreviate calculations.

I haven't done any calculations yet, so I'm not sure what you mean.

fresh_42
Mentor
You wrote ##AA=\lambda A##. With the ##x## it is easier to see, and yes, it is that simple.

fresh_42
Mentor
You can also look at it this way: an eigenvector is a kind of fixed point. Now a nilpotent transformation (##A^n=0##) maps everything, sooner or later, to zero. That doesn't leave many opportunities for fixed points.
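A quick numerical illustration of this picture (my own sketch, not from the thread): repeatedly applying a nilpotent map kills every vector, so no nonzero vector can persist with a nonzero eigenvalue. The matrix and vector below are arbitrary examples.

```python
def apply(A, x):
    """Matrix-vector product A x."""
    return [sum(A[i][k] * x[k] for k in range(len(x))) for i in range(len(A))]

A = [[0, 1],
     [0, 0]]                 # example with A^2 = 0

x = [3, 5]                   # an arbitrary starting vector
once = apply(A, x)           # [5, 0] -- not yet zero
twice = apply(A, once)       # [0, 0] -- killed on the second application
print(once, twice)
```

If ##x## were an eigenvector with ##\lambda \neq 0##, applying ##A## twice would have to give ##\lambda^2 x \neq 0##, contradicting what we see here.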

Staff Emeritus
You wrote ##AA=\lambda A##. With the ##x## it is easier to see, and yes, it is that simple.

The x isn't there because I apparently didn't understand what I was doing. I thought A became x.

fresh_42
Mentor
The x isn't there because I apparently didn't understand what I was doing. I thought A became x.
Without the ##x## it is, strictly speaking, wrong, because ##A^2 \neq \lambda A##. Only for eigenvectors can we write ##A^2(x_\lambda) = A(A(x_\lambda))=A(\lambda x_\lambda)=\lambda A(x_\lambda)## and so on. For other vectors it doesn't have to be true. I even indexed the vector ##x_\lambda## with ##\lambda## to indicate that it is a specific vector and that it depends on ##\lambda##. This might be a bit excessive, but it reminds me of what this vector is and thus helps to avoid mistakes. And in handwriting, it is no big deal.

In cases like this, but basically always, it is helpful to first list what is given:
1. A linear function ##A##
2. ##A^2=0## which means ##A(A(x))=0## for all ##x##
and then what has to be shown: ##A(x_\lambda) = \lambda \cdot x_\lambda \Longrightarrow \lambda =0##

This often already suggests the pathway to a solution: in order to show the implication ##"\Rightarrow "##, we can assume its left side, i.e. an eigenvector ##x_\lambda## for an eigenvalue ##\lambda ##. Now condition #2 can be applied to such an eigenvector, and condition #1 allows us to pull the factor ##\lambda ## out of ##A(\lambda x_\lambda)##.
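Written out in full, the pathway just described amounts to the following short derivation (my summary of the argument, in the thread's notation):

```latex
% Assume A^2 = 0 and A x_\lambda = \lambda x_\lambda with x_\lambda \neq 0.
\begin{align*}
0 = A^2 x_\lambda &= A(A x_\lambda)         \\
                  &= A(\lambda x_\lambda)   \\
                  &= \lambda \, A x_\lambda && \text{(linearity of } A\text{)} \\
                  &= \lambda^2 x_\lambda .
\end{align*}
% Since x_\lambda \neq 0, it follows that \lambda^2 = 0, hence \lambda = 0.
```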

Of course the thought "eigenvectors are stability vectors, and a transformation which kills all vectors cannot have stability vectors unequal to zero" looks more elegant, but even elegant ideas have to be written down with rigor. So there is nothing wrong with the janitor method:
Prepare your tools. Inspect the task. And only then begin to work.

Mark44
Mentor
Don't tell me it's that simple...
It is that simple.