# Homework Help: Matrices M such that M^2 = 0 ?

1. Jun 10, 2015

### geoffrey159

1. The problem statement, all variables and given/known data

What are the $n\times n$ matrices over a field $K$ such that $M^2 = 0$ ?

2. Relevant equations

3. The attempt at a solution

Please can you tell me if this is correct? It looks OK to me, but I have some doubts. I have reused ideas that I found in a proof about equivalent matrices.
• One possibility is $M = 0$
• Assume that $M$ represents the linear map $f: E \rightarrow F$ in bases ${\cal B}$ and ${\cal C}$, both spaces having dimension $n$.

Since $f^2 = 0$, then $\text{Im}(f) \subset \text{Ker}(f)$.
Let $(e_1,...,e_p)$ be a basis of $\text{Ker}(f)$. This basis can be completed into a basis ${\cal B'} = (e_1,...,e_p,e_{p+1},...,e_n)$ of $E$.
The family $(f(e_{p+1}),...,f(e_n))$ belongs to $\text{Im}(f) \subset \text{Ker}(f)$ and is linearly independent in $F$.
• Linear independence:
$\sum_{k = p+1}^n \lambda_k f(e_k) = 0 \Rightarrow \sum_{k = p+1}^n \lambda_k e_k \in \text{Ker}(f) = \text{span}(e_{1},...,e_p)$, but since ${\cal B'}$ is a basis of $E$, all the $\lambda$'s are zero.
• A free family in a vector space has at most as many vectors as a basis of that space, so $n-p \le p \Rightarrow p \ge n/2$. By the rank theorem, $f$ has rank at most $n/2$.
The free family $(f(e_{p+1}),...,f(e_n))$ can be completed into a basis ${\cal C'} = (f(e_{p+1}),...,f(e_n), f_1,...,f_p)$ of $F$.​

So it follows from all of this that in the bases $({\cal B'}, {\cal C'})$, the matrix of $f$ is zero everywhere except in the upper-right corner, where an identity matrix is packed somewhere between row 1 and column $n$, the exact position depending on the rank of $f$.
-> My answer: $M = 0$, together with the matrices similar to a matrix that is zero everywhere except for an identity matrix packed into the upper-right corner, between row 1 and column $n$.
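As a numerical sanity check of this answer (a sketch only, not part of the proof; `normal_form` and `matmul` are ad-hoc helper names), the proposed form does square to zero whenever its rank $r$ is at most $n/2$:

```python
# Build the claimed normal form: an n x n matrix that is zero everywhere
# except for an r x r identity block packed into the upper-right corner.
def normal_form(n, r):
    N = [[0] * n for _ in range(n)]
    for i in range(r):
        N[i][n - r + i] = 1
    return N

# Plain-Python matrix product, enough for a quick check.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

n = 6
for r in range(n // 2 + 1):          # the proof bounds the rank by n/2
    N = normal_form(n, r)
    assert matmul(N, N) == [[0] * n for _ in range(n)]
```

The identity block cannot be larger than $n/2$: its nonzero rows (the first $r$) and its nonzero columns (the last $r$) must not meet, which is exactly the condition $r \le n - r$.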

2. Jun 10, 2015

### DEvens

$\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}$

3. Jun 10, 2015

### geoffrey159

Take $P = \begin{pmatrix} 0 & 1 \\ 1 & - 1 \end{pmatrix}$ so that $P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$:

$\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P$
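The arithmetic can be verified mechanically (a throwaway check; `mat2` is an ad-hoc $2\times 2$ product helper):

```python
# Verify P^{-1} P = I, M = P^{-1} J P, and M^2 = 0 by direct computation.
def mat2(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

P     = [[0, 1], [1, -1]]
P_inv = [[1, 1], [1, 0]]
J     = [[0, 1], [0, 0]]   # the "identity in the upper right" form
M     = [[1, -1], [1, -1]]

assert mat2(P_inv, P) == [[1, 0], [0, 1]]   # P_inv really is the inverse of P
assert mat2(P_inv, mat2(J, P)) == M         # M = P^{-1} J P
assert mat2(M, M) == [[0, 0], [0, 0]]       # and indeed M^2 = 0
```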

4. Jun 10, 2015

### Staff: Mentor

What does this work have to do with your problem? The question asks about square matrices $M$ such that $M^2 = 0$. Your matrix $P$ doesn't satisfy this requirement.

In post 1, your first point is that $M = 0$. Since $M^2 = 0$, $n \times n$ zero matrices clearly are included.
Your second point in that post doesn't include the matrix that DEvens gave. Also, it is much more general than it needs to be. In particular, in this part:
The matrices in question are square, so the linear map $f$ takes $\mathbb{R}^n$ to a subspace of $\mathbb{R}^n$; namely, to the 0 vector in $\mathbb{R}^n$.

5. Jun 10, 2015

### geoffrey159

DrEvens asked me to illustrate my point on a simple example. I prove that the given matrix $M = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}$, which satisfies $M^2 = 0$, is similar to $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$, which is the only $2\times 2$ matrix 0 everywhere but in the upper right where there is an identity matrix.

The question just says that the coefficients are in the field $K$. You can't assume that you have a linear map from $\mathbb{R}^n \rightarrow \mathbb{R}^n$, or there is something I don't understand.

6. Jun 10, 2015

### Staff: Mentor

That's DEvens (no r). I didn't understand the point of your example. I thought that you were giving $P$ as an example of a matrix for which $P^2 = 0$.
You are correct. I should have said that the map is from $K^n$ to a subspace of $K^n$.

You also said this in post #1:
$$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$
There is no identity matrix in the upper right corner...

Last edited: Jun 10, 2015
7. Jun 10, 2015

### geoffrey159

I meant no offense

Why not a more general vector space over the field $K$? I have no example in mind; maybe I'm having a lack of understanding here. If you could explain...

8. Jun 10, 2015

### geoffrey159

Yes there is : $P = P^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, then $M = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P$

9. Jun 10, 2015

### Staff: Mentor

There is no identity matrix in the upper right corner in the example I gave, but that matrix is similar to one that has an identity matrix there.

Do your examples extend to 3 x 3 matrices or larger?

In the 2 x 2 case do you have a geometric feel for what this matrix does to an arbitrary input vector?
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
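One way to build that feel (a small sketch; `apply` is an ad-hoc helper): this matrix sends $(x, y)$ to $(y, 0)$, so a second application necessarily gives $(0, 0)$:

```python
# Apply a 2x2 matrix to a vector (x, y).
def apply(M, v):
    return (M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1])

S = ((0, 1), (0, 0))
assert apply(S, (3, 5)) == (5, 0)             # (x, y) goes to (y, 0)
assert apply(S, apply(S, (3, 5))) == (0, 0)   # applying it twice annihilates any vector
```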

10. Jun 10, 2015

### geoffrey159

No it does not have an identity matrix in the upper right corner, but it is not my point.
My point is :

$M^2 = 0 \iff M = 0 \text{ or } \exists P\in \text{GL}_n(K) :\ M = P^{-1} \begin{pmatrix} 0 & I_{\text{rk}(M)} \\ 0 & 0 \end{pmatrix} P$
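One direction of this equivalence is immediate. Write $A$ for the block matrix above, with $r = \text{rk}(M) \le n/2$; then

$$M^2 = (P^{-1} A P)(P^{-1} A P) = P^{-1} A^2 P = 0,$$

because $A^2 = 0$: every nonzero column of $A$ is among the last $r$ columns, every nonzero row is among the first $r$ rows, and $r \le n - r$ keeps those index ranges disjoint.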

I believe it extends to $n \times n$ matrices of any size, since that is what I've tried to prove, but I have no geometric feel whatsoever, for now.

11. Jun 10, 2015

### WWGD

Have you tried using orthogonality relations? That is, each row, considered as a vector, must be orthogonal to each of the columns, considered as a vector. So each column must be in the orthogonal complement of the span of the row vectors. Of course, for a general field, this would be an abstract orthogonality relation.

Last edited: Jun 10, 2015
12. Jun 10, 2015

### geoffrey159

I haven't tried, but what I have already looks like a description, doesn't it? It seems that it doesn't convince many people...

13. Jun 10, 2015

### Staff: Mentor

No problem.
The matrix is $n \times n$. If the field is $K$, then the matrix represents a transformation from $K^n$ to itself.

14. Jun 10, 2015

### Dick

Maybe if you look at Jordan Normal Form you will find a more convincing way of describing what you are trying to say.

15. Jun 11, 2015

### geoffrey159

Throughout this conversation, I've had trouble understanding why you say that. For example, take $K = \mathbb{C}$, and let your vector space over $K$ be the set of polynomials of degree less than $n$. Any endomorphism of that vector space comes with an $(n+1) \times (n+1)$ matrix with coefficients in $K$.
Why should the vector space be reduced to $K^n$?

I don't know what Jordan Normal Form is. Do you have the solution to the problem ? How much does it cost ?

However, it seems that it works on $2\times 2$ matrices.

16. Jun 11, 2015

### Dick

I meant you should look it up; it's free, here: http://en.wikipedia.org/wiki/Jordan_normal_form. It puts what you are trying to say in clearer terms than 'an identity in the upper right corner'.
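In Jordan-normal-form language, the standard fact is that $M^2 = 0$ exactly when every Jordan block of $M$ has eigenvalue $0$ and size at most $2$. A small sketch of the "if" direction (`jordan_nilpotent` and `matmul` are ad-hoc helpers):

```python
# Block-diagonal matrix of nilpotent Jordan blocks (eigenvalue 0):
# each block of size s contributes s - 1 ones on its superdiagonal.
def jordan_nilpotent(block_sizes):
    n = sum(block_sizes)
    J = [[0] * n for _ in range(n)]
    pos = 0
    for s in block_sizes:
        for i in range(s - 1):
            J[pos + i][pos + i + 1] = 1
        pos += s
    return J

# Plain-Python matrix product.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

J = jordan_nilpotent([2, 2, 1])                      # all blocks of size <= 2 ...
assert matmul(J, J) == [[0] * 5 for _ in range(5)]   # ... so J^2 = 0
```

Permuting the basis vectors turns this $J$ into the "identity packed into the upper-right corner" shape from post #1, which is why the two descriptions agree.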

17. Jun 11, 2015

### Staff: Mentor

No. A polynomial of degree less than $n$ has at most $n$ terms ($c_0 z^0 + c_1 z^1 + \cdots + c_{n-1} z^{n-1}$), with exponents ranging from $0$ through $n - 1$, inclusive. So the matrix would be $n \times n$ in size.

18. Jun 11, 2015

### geoffrey159

I have put it quite clearly in post #10, in symbolic language. I sense your annoyance; would you like to have the final word on this problem?

19. Jun 11, 2015

### pasmith

All vector spaces of dimension $n$ over $K$ are isomorphic to $K^n$, whatever the actual objects: p-by-q matrices with entries in K where pq = n; polynomials of degree at most n - 1 with coefficients in K; functions $X \to K$ where $X$ is any set of cardinality $n$; linear functions $V \to K$ where $V$ is any n-dimensional vector space over K; and so forth.
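A concrete instance (a hedged illustration; `diff_coords` is a made-up helper): identify the polynomial $c_0 + c_1 x$ of degree less than 2 with $(c_0, c_1) \in K^2$. Under that identification, differentiation becomes the matrix $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$ from earlier in the thread, and it visibly squares to zero:

```python
# d/dx on polynomials of degree < 2, written in coordinates:
# c0 + c1*x  <->  (c0, c1),  and  d/dx(c0 + c1*x) = c1  <->  (c1, 0).
# This is exactly the action of the matrix [[0, 1], [0, 0]].
def diff_coords(c):
    c0, c1 = c
    return (c1, 0)

assert diff_coords((2, 7)) == (7, 0)                # d/dx (2 + 7x) = 7
assert diff_coords(diff_coords((2, 7))) == (0, 0)   # the second derivative vanishes
```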

20. Jun 11, 2015

### vela

Staff Emeritus
Why not use the basis ${\cal C'} = (f_1,\dots,f_p,f(e_{p+1}),\dots,f(e_n))$? Then the ones will be on the diagonal, and you can avoid the awkward phrasing "an identity matrix packed somewhere starting at line 1 and ending column $n$."

This makes me think there's something wrong with your proof since by choosing the right bases, you can write any linear transformation in this block-diagonal form. You've argued that the 0-block will be bigger than the identity matrix, but is that enough to guarantee that $M^2=0$? And how do you know there is a similarity transformation that allows you to turn M into this diagonal form?

The latter concern is your main problem, I think. I recommend you reconsider Dick's suggestion to look into Jordan normal form.

Last edited: Jun 11, 2015
21. Jun 12, 2015

### geoffrey159

Ok, thanks. I understand now.

You can, but when you show the converse, namely that if a matrix $M$ is similar to your matrix with 1's on the diagonal then $M^2 = 0$, I think you will need the similarity between your matrix and my matrix.

[EDIT] And the 1's will be on the diagonal if $f$ has exactly rank $n/2$. [EDIT]

Because the awkward matrix is an expression of $f$ in a basis other than the original one. This means that there exists a 'change of basis' matrix $P$ such that $M = P^{-1} A P$ ($A$ for awkward).

Last edited: Jun 12, 2015
22. Jun 12, 2015

### vela

Staff Emeritus
But if you simply reorder the vectors in C', you'd still have a basis, right? So the diagonal matrix is "an expression of $f$ in another basis than the original one," and according to your reasoning, there's a similarity transformation which diagonalizes M. The entries on the diagonal should be the eigenvalues of M, but this leads to a contradiction because 1 is not an eigenvalue of M.

23. Jun 12, 2015

### geoffrey159

If you reorder the bases ${\cal B'}$ and/or ${\cal C'}$ so that $M$ is similar to a diagonal matrix $D$ containing only 1's, then
$M = P^{-1} D P$ and $M^2 = P^{-1} D^2 P = P^{-1} D P = M$, so $M = M^2 = 0$.

[EDIT] But this is absurd, because $M$ and the diagonal matrix should have the same rank. Does this mean that no reordering is possible? I doubt it. It is getting very confusing.

Last edited: Jun 12, 2015
24. Jun 12, 2015

### vela

Staff Emeritus
You're running into contradictions, which, to me, suggests that your claim that there is a similarity transformation relating M and D (or A) isn't true.

25. Jun 13, 2015

### geoffrey159

[EDIT] Post deleted

Last edited: Jun 13, 2015