Matrices M such that M^2 = 0 ?

  • Thread starter geoffrey159
  • Start date
  • Tags
    Matrices
In summary, the question asks for a description of the ##n \times n## matrices ##M## over a field ##K## such that ##M^2 = 0##. The answer worked out in the thread: ##M^2 = 0## if and only if ##M## is the zero matrix or is similar to a matrix that is zero everywhere except for an identity block of size ##\text{rk}(M) \le n/2## in its upper right corner.
  • #1
geoffrey159

Homework Statement


What are the ##n\times n## matrices over a field ##K## such that ##M^2 = 0 ## ?

Homework Equations



The Attempt at a Solution



Please can you tell me if this is correct? It looks OK to me, but I have some doubts. I have reused ideas that I found in a proof about equivalent matrices.
  • One possibility is ##M = 0##
  • Assume that ##M## represents a linear map ##f: E \rightarrow F## in bases ##{\cal B}## and ##{\cal C}##, where ##E## and ##F## both have dimension ##n##.

    Since ##f^2 = 0##, we have ##\text{Im}(f) \subset \text{Ker}(f) ##.
    Let ##(e_1,...,e_p)## be a basis of ##\text{Ker}(f)##. This basis can be completed into a basis ##{\cal B'} = (e_1,...,e_p,e_{p+1},...,e_n) ## of ##E##.
    The family ##(f(e_{p+1}),...,f(e_n)) ## belongs to ##\text{Im}(f) \subset \text{Ker}(f) ## and is linearly independent in ##F##.
    • Linear independence:
      ##\sum_{k = p+1}^n \lambda_k f(e_k) = 0 \Rightarrow \sum_{k = p+1}^n \lambda_k e_k \in \text{Ker}(f) = \text{span}(e_{1},...,e_p) ##, but ##{\cal B'}## being a basis of ##E##, all the lambda's are zero.
    • A free family in a vector space has at most as many vectors as a basis of that space, so ##n-p \le p \Rightarrow p \ge n/2##. By the rank theorem, ##f## has rank ##n - p \le n/2##.
The free family ##(f(e_{p+1}),...,f(e_n)) ## can be completed into a basis ##{\cal C'} = (f(e_{p+1}),...,f(e_n), f_1,...,f_p) ## of ##F##.​

So it follows from all this that in the basis ##({\cal B',C'})##, the matrix of ##f## is zero everywhere but in the upper right corner where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##, the somewhere depending upon the rank of ##f##.
-> My answer is 0 and matrices that are similar to a matrix zero everywhere but in the upper right corner, where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##.
 
  • #2
How does this fit into your answer?

$$\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}$$
 
  • #3
Take ##P = \begin{pmatrix} 0 & 1 \\ 1 & - 1 \end{pmatrix}## so that ##P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}##:

##\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P##
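A quick numerical sanity check of this similarity (a minimal numpy sketch; it only re-checks the matrices above):

```python
import numpy as np

M = np.array([[1, -1], [1, -1]])      # the matrix from post #2
J = np.array([[0, 1], [0, 0]])        # zero except the 1 in the upper right corner
P = np.array([[0, 1], [1, -1]])
P_inv = np.linalg.inv(P)              # equals [[1, 1], [1, 0]] as claimed

assert np.allclose(M @ M, 0)          # M^2 = 0
assert np.allclose(P_inv @ J @ P, M)  # M = P^{-1} J P, so M is similar to J
```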
 
  • #4
geoffrey159 said:
Take ##P = \begin{pmatrix} 0 & 1 \\ 1 & - 1 \end{pmatrix}## so that ##P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}##:

##\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P##
What does this work have to do with your problem? The question asks about square matrices ##M## such that ##M^2 = 0##. Your matrix ##P## doesn't satisfy this requirement.

In post 1, your first point is that ##M = 0##. Since ##M^2 = 0##, ##n \times n## zero matrices clearly are included.
Your second point in that post doesn't include the matrix that DEvens gave. Also, it is much more general than it needs to be. In particular, in this part:
Assume that ##M## represents a linear map ##f: E \rightarrow F## in bases ##{\cal B}## and ##{\cal C}##, where ##E## and ##F## both have dimension ##n##.
The matrices in question are square, so the linear map ##f## takes ##\mathbb{R}^n## to a subspace of ##\mathbb{R}^n##; namely, to the 0 vector in ##\mathbb{R}^n##.
 
  • #5
Mark44 said:
What does this work have to do with your problem?

DrEvens asked me to illustrate my point with a simple example. I showed that the given matrix ##M = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}##, which satisfies ##M^2 = 0##, is similar to ##\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ##, which is the only ##2\times 2## matrix that is zero everywhere except for an identity matrix in the upper right corner.
Mark44 said:
Also, it is much more general than it needs to me.

The question just says that the coefficients are in the field ##K##. You can't assume that you have a linear map from ##\mathbb{R}^n \rightarrow \mathbb{R}^n##, or there is something I don't understand.
 
  • #6
Mark44 said:
What does this work have to do with your problem?

geoffrey159 said:
DrEvens asked me to illustrate my point with a simple example. I showed that the given matrix ##M = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}##, which satisfies ##M^2 = 0##, is similar to ##\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ##, which is the only ##2\times 2## matrix that is zero everywhere except for an identity matrix in the upper right corner.
That's DEvens (no r). I didn't understand the point of your example. I thought that you were giving ##P## as an example of a matrix for which ##P^2 = 0##.
Mark44 said:
Also, it is much more general than it needs to me.

geoffrey159 said:
The question just says that the coefficients are in the field ##K##. You can't assume that you have a linear map from ##\mathbb{R}^n \rightarrow \mathbb{R}^n##, or there is something I don't understand.
You are correct. I should have said that the map is from ##K^n## to a subspace of ##K^n##.

You also said this in post #1:
geoffrey159 said:
So it follows from all this that in the basis ##({\cal B',C'})##, the matrix of ##f## is zero everywhere but in the upper right corner where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##, the somewhere depending upon the rank of ##f##.

What about this matrix?
$$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$
There is no identity matrix in the upper right corner...
 
  • #7
Mark44 said:
That's DEvens (no r)

:biggrin: I meant no offense.

Mark44 said:
I should have said that the map is from ##K^n## to a subspace of ##K^n##.

Why not a more general vector space over the field ##K##? I have no example in mind; maybe I'm having a lack of understanding here. If you could explain ...
 
  • #8
Mark44 said:
What about this matrix?
$$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$
There is no identity matrix in the upper right corner...

Yes there is: take ## P = P^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} ##; then ## M = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P ##.
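The same check in numpy, for what it's worth; here ##P## is a permutation matrix and is its own inverse:

```python
import numpy as np

P = np.array([[0, 1], [1, 0]])   # swaps the basis vectors; its own inverse
J = np.array([[0, 1], [0, 0]])
print(P @ J @ P)                 # [[0 0], [1 0]] -- exactly Mark44's matrix
```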
 
  • #9
There is no identity matrix in the upper right corner in the example I gave, but that matrix is similar to one that has an identity matrix there.

Do your examples extend to 3 x 3 matrices or larger?

In the 2 x 2 case do you have a geometric feel for what this matrix does to an arbitrary input vector?
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
 
  • #10
No, it does not have an identity matrix in the upper right corner, but that is not my point.
My point is:

## M^2 = 0 \iff M = 0 \text{ or } \exists P\in \text{GL}_n(K) :\ M = P^{-1} \begin{pmatrix} 0 & I_{\text{rk}(M)} \\ 0 & 0 \end{pmatrix} P##

I believe it extends to ##3 \times 3## matrices or larger; that is what I've tried to prove. And I have no geometric feel whatsoever, for now :biggrin:
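A numerical sanity check of the easy direction of this claim (a minimal numpy sketch; the random ##P## is just an illustration, not part of any proof): build the block matrix for each rank ##r \le n/2## and verify that it, and any conjugate of it, squares to zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
for r in range(n // 2 + 1):           # the rank r must satisfy r <= n/2
    A = np.zeros((n, n))
    A[:r, n - r:] = np.eye(r)         # identity block in the upper right corner
    assert np.allclose(A @ A, 0)      # A^2 = 0 precisely because r <= n - r

    P = rng.standard_normal((n, n))   # random basis change (invertible w.h.p.)
    M = np.linalg.inv(P) @ A @ P
    assert np.allclose(M @ M, 0)      # conjugation preserves M^2 = 0
```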
 
  • #11
Have you tried using orthogonality relations? That is, each row, considered as a vector, must be orthogonal to each of the columns, considered as a vector, so each column must lie in the orthogonal complement of the span of all the row vectors. Of course, for a general field, this is an abstract orthogonality relation (a bilinear pairing), not the usual inner product.
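Concretely, ##(M^2)_{ij}## is the dot product of row ##i## of ##M## with column ##j## of ##M##, so ##M^2 = 0## says exactly that every row pairs to zero with every column. A minimal numpy illustration on the ##2 \times 2## example from post #2:

```python
import numpy as np

M = np.array([[1, -1], [1, -1]])                 # satisfies M^2 = 0
for i in range(2):
    for j in range(2):
        # (M^2)_{ij} = <row i of M, column j of M> must vanish
        assert np.dot(M[i, :], M[:, j]) == 0
```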
 
  • #12
I haven't tried, but what I have already looks like a description, doesn't it? It seems that it doesn't convince many people ... :frown:
 
  • #13
geoffrey159 said:
:biggrin: I meant no offense.
No problem.
geoffrey159 said:
Why not a more general vector space over the field ##K##? I have no example in mind; maybe I'm having a lack of understanding here. If you could explain ...
The matrix is ##n \times n##. If the field is ##K##, then the matrix represents a transformation from ##K^n## to itself.
 
  • #14
geoffrey159 said:
I haven't tried, but what I have already looks like a description, doesn't it? It seems that it doesn't convince many people ... :frown:

Maybe if you look at Jordan Normal Form you might be able to form a more convincing way of describing what you are trying to say.
 
  • #15
Mark44 said:
The matrix is ##n \times n##. If the field is ##K##, then the matrix represents a transformation from ##K^n## to itself.

Throughout this conversation, I've had trouble understanding why you say that. For example, suppose ##K = \mathbb{C}## and your vector space over ##K## is the set of polynomials of degree less than ##n##. Any endomorphism of that vector space comes with an ## (n+1) \times (n+1)## matrix with coefficients in ##K##.
Why should the vector space be reduced to ##K^n##?

Dick said:
Maybe if you look at Jordan Normal Form you might be able to form a more convincing way of describing what you are trying to say.

I don't know what Jordan Normal Form is. Do you have the solution to the problem? :smile: How much does it cost? :biggrin: However, it seems that it works on ##2\times 2## matrices.
 
  • #16
geoffrey159 said:
I don't know what Jordan Normal Form is. Do you have the solution to the problem? :smile: How much does it cost? :biggrin: However, it seems that it works on ##2\times 2## matrices.

I meant you should look it up. It's free on here: http://en.wikipedia.org/wiki/Jordan_normal_form
It puts what you are trying to say in clearer terms than 'an identity in the upper right corner'.
 
  • #17
geoffrey159 said:
Throughout this conversation, I've had trouble understanding why you say that. For example, suppose ##K = \mathbb{C}## and your vector space over ##K## is the set of polynomials of degree less than ##n##. Any endomorphism of that vector space comes with an ## (n+1) \times (n+1)## matrix with coefficients in ##K##.
No. A polynomial of degree less than ##n## has at most ##n## terms (##c_0 z^0 + c_1 z^1 + \cdots + c_{n-1} z^{n-1}##), with exponents ranging from 0 through ##n - 1##, inclusive. So the matrix would be ##n \times n## in size.
 
  • #18
Dick said:
It puts what are trying to say in clearer terms than 'an identity in the upper right corner'.

I have put it quite clearly in post #10, in symbolic language. I sense your annoyance; would you like to have the final word about this problem?
 
  • #19
geoffrey159 said:
Why should the vector space be reduced to ##K^n##?

All vector spaces of dimension ##n## over ##K## are isomorphic to ##K^n##, whatever the actual objects: p-by-q matrices with entries in ##K## where ##pq = n##; polynomials of degree at most ##n - 1## with coefficients in ##K##; functions ##X \to K## where ##X## is any set of cardinality ##n##; linear functions ##V \to K## where ##V## is any n-dimensional vector space over ##K##; and so forth.
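To see this concretely with the polynomial example from earlier in the thread: identifying a polynomial of degree less than ##n## with its coefficient vector is such an isomorphism, and it even produces a matrix relevant to this problem, since differentiation drops the degree. A minimal sympy sketch for ##n = 2##:

```python
import sympy as sp

z = sp.symbols('z')
basis = [sp.Integer(1), z]        # basis of polynomials of degree < 2

def coords(p):
    """Coordinate isomorphism: c0 + c1*z  <->  (c0, c1) in K^2."""
    poly = sp.Poly(p, z)
    return [poly.coeff_monomial(z**k) for k in range(2)]

# Column j of D = coordinates of the derivative of the j-th basis vector
D = sp.Matrix([coords(sp.diff(b, z)) for b in basis]).T
print(D)        # Matrix([[0, 1], [0, 0]])
print(D * D)    # Matrix([[0, 0], [0, 0]]) -- the canonical 2x2 example again
```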
 
  • #20
geoffrey159 said:
The free family ##(f(e_{p+1}),...,f(e_n)) ## can be completed into a basis ##{\cal C'} = (f(e_{p+1}),...,f(e_n), f_1,...,f_p) ## of ##F##.

So it follows from all this that in the basis ##({\cal B',C'})##, the matrix of ##f## is zero everywhere but in the upper right corner where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##, the somewhere depending upon the rank of ##f##.

-> My answer is 0 and matrices that are similar to a matrix zero everywhere but in the upper right corner, where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##.
Why not use the basis ##{\cal C'} = (f_1,\dots,f_p,f(e_{p+1}),\dots,f(e_n))##? Then the ones will be on the diagonal, and you can avoid the awkward phrasing "an identity matrix packed somewhere starting at line 1 and ending column ##n##."

This makes me think there's something wrong with your proof since by choosing the right bases, you can write any linear transformation in this block-diagonal form. You've argued that the 0-block will be bigger than the identity matrix, but is that enough to guarantee that ##M^2=0##? And how do you know there is a similarity transformation that allows you to turn M into this diagonal form?

The latter concern is your main problem, I think. I recommend you reconsider Dick's suggestion to look into Jordan normal form.
 
  • #21
pasmith said:
All vector spaces of dimension ##n## over ##K## are isomorphic to ##K^n##, whatever the actual objects: p-by-q matrices with entries in ##K## where ##pq = n##

Ok, thanks. I understand now.

vela said:
Why not use the basis ##{\cal C'} = (f_1,\dots,f_p,f(e_{p+1}),\dots,f(e_n))##? Then the ones will be on the diagonal, and you can avoid the awkward phrasing "an identity matrix packed somewhere starting at line 1 and ending column ##n##."

You can, but when you show the converse, i.e. that if ##M## is similar to such a matrix with 1's on the diagonal then ##M^2 = 0##, I think you will need the similarity between your matrix and my matrix.

[EDIT] And the 1's will be on the diagonal if ##f## has exactly rank ##n/2##. [EDIT]

vela said:
And how do you know there is a similarity transformation that allows you to turn M into this diagonal form?

Because the awkward matrix is an expression of ##f## in another basis than the original one. This means that there exists a 'change of basis' matrix ##P## such that ##M = P^{-1} A P ## (## A ## for awkward).
 
  • #22
geoffrey159 said:
Because the awkward matrix is an expression of ##f## in another basis than the original one. This means that there exists a 'change of basis' matrix ##P## such that ##M = P^{-1} A P ## (## A ## for awkward).
But if you simply reorder the vectors in C', you'd still have a basis, right? So the diagonal matrix is "an expression of ##f## in another basis than the original one," and according to your reasoning, there's a similarity transformation which diagonalizes M. The entries on the diagonal should be the eigenvalues of M, but this leads to a contradiction because 1 is not an eigenvalue of M.
 
  • #23
If you reorder the basis ##{\cal B'}## and/or ##{\cal C'}## such that ##M## is similar to a diagonal matrix that contains only 1's, then
##M = P^{-1} D P = P^{-1} D^2 P = M^2 = 0##.

[EDIT] But this is absurd, because ##M## and the diagonal matrix should have the same rank. Does this mean there is no reordering possible? I doubt this. It is getting very confusing.
 
  • #24
You're running into contradictions, which, to me, suggests that your claim that there is a similarity transformation relating M and D (or A) isn't true.
 
  • #25
[EDIT] Post deleted
 
  • #26
I thought about what you said, and I can see an error in my reasoning: I did not use the 'change of basis' formula correctly, which is
## \text{Mat}_{\cal B,C}(f) = \text{Mat}_{\cal B,C}(\text{id}_F \circ f \circ \text{id}_E) = \text{Mat}_{\cal C',C}(\text{id}_F)\, \text{Mat}_{\cal B',C'}(f)\, \text{Mat}_{\cal B,B'} (\text{id}_E) ##.
My reasoning shows that ##M## and ##A## are equivalent, but not similar (##M = Q^{-1} A P##, not ##M = P^{-1} A P##). This is a real problem, because I can't show the converse: a matrix merely equivalent to ##A## need not square to zero. Also, using the space ##F## and the bases ##{\cal C}, {\cal C'}## just complicates matters, because ##f## is really a linear map from ##E## to ##E##: ##\text{Im}(f) \subset \text{Ker}(f) \subset E##. It is possible to build a basis ##{\cal B'}## such that
## \text{Mat}_{\cal B,B}(f) = \text{Mat}_{\cal B',B}(\text{id}_E)\, \text{Mat}_{\cal B',B'}(f)\, \text{Mat}_{\cal B,B'} (\text{id}_E) ##, i.e. ##M = P^{-1} A P##.
Take ##(e_1,...,e_p)## a basis of ##\text{Im}(f)## and complete it into a basis ##(e_1,...,e_p,...,e_{n-p})## of ##\text{Ker}(f)##; this is possible thanks to the inclusion ##\text{Im}(f) \subset \text{Ker}(f)## and the rank theorem. Since ##e_1,...,e_p \in \text{Im}(f)##, there are ##(a_1,...,a_p)## in ##E## such that ##e_1 = f(a_1),..., e_p = f(a_p)##. Now the family ##(e_1,...,e_{n-p},a_1,...,a_p)## is free in ##E##: applying ##f## to a vanishing linear combination kills the ##e_j## terms and leaves ##\sum_i \mu_i e_i = 0##, so the coefficients of the ##a_i## vanish, and then the coefficients of the ##e_j## vanish too, since the ##e_j## form a basis of ##\text{Ker}(f)##. Being a free family of ##n## vectors in a space of dimension ##n##, ##{\cal B'} = (e_1,...,e_{n-p},a_1,...,a_p)## must be a basis of ##E##.
In this basis, ## M = P^{-1} A P ##.
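Here is that corrected construction carried out on the running ##2 \times 2## example, as a minimal numpy sketch (the particular vectors ##e_1## and ##a_1## are one valid choice among many):

```python
import numpy as np

M = np.array([[1, -1], [1, -1]])   # M^2 = 0, rank p = 1, n - p = 1

e1 = np.array([1, 1])              # basis of Im(M) = Ker(M)
a1 = np.array([1, 0])              # preimage: M a1 = e1
assert np.allclose(M @ a1, e1)

Q = np.column_stack([e1, a1])      # columns of Q = the new basis B'
A = np.linalg.inv(Q) @ M @ Q       # matrix of f in basis B'
print(A)                           # [[0, 1], [0, 0]] -- the canonical form
assert np.allclose(M, Q @ A @ np.linalg.inv(Q))   # a genuine similarity this time
```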
 
  • #27
Looks good. Note that if you ordered the basis ##{\cal B'}## as ##(e_1,a_1,e_2,a_2,\cdots,e_p,a_p,e_{p+1},e_{p+2},\cdots,e_{n-p})##, the matrix ##A## would be in Jordan normal form.
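If it helps, sympy can compute that form directly; a minimal sketch on the thread's running example (sympy's jordan_form returns ##P## and ##J## with ##M = P J P^{-1}##):

```python
import sympy as sp

M = sp.Matrix([[1, -1], [1, -1]])
P, J = M.jordan_form()   # M == P * J * P**-1
print(J)                 # Matrix([[0, 1], [0, 0]]) -- a single nilpotent block
assert P * J * P.inv() == M
```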
 
  • #28
Thank you for your help! :biggrin: (I'm glad to be done with this problem) :biggrin:
 

1. What does it mean for a matrix to have the property M^2 = 0?

When a matrix M has the property M^2 = 0, it means that when the matrix is multiplied by itself, the resulting matrix is a zero matrix where all elements are equal to 0.

2. What is the significance of matrices with the property M^2 = 0 in mathematics?

Matrices with the property M^2 = 0 are the simplest examples of nilpotent matrices. The condition forces ##\text{Im}(M) \subseteq \text{Ker}(M)##, so such a matrix is singular and its rank is at most ##n/2##. Nilpotent matrices play a central role in the structure theory of linear operators, in particular in the Jordan normal form.

3. What are some common examples of matrices M such that M^2 = 0?

One obvious example is the zero matrix, where all elements are equal to 0. For ##2\times 2## matrices, note that ##\begin{pmatrix} a & b \\ c & -a \end{pmatrix}## squares to ##(a^2 + bc)I##, so it satisfies M^2 = 0 exactly when ##a^2 + bc = 0##. The matrix ##\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}## from the thread above is of this form.

4. Is it possible for a non-zero matrix to have the property M^2 = 0?

Yes. For example, ##\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}## is nonzero, yet its square is the zero matrix. As worked out in the thread above, the nonzero solutions are exactly the matrices similar to a block matrix with an identity block of size ##r \le n/2## in the upper right corner. What is true is that any such matrix must be singular, since M^2 = 0 forces ##\det(M) = 0##.
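A two-line numerical confirmation of the counterexample (minimal numpy sketch):

```python
import numpy as np

N = np.array([[0, 1], [0, 0]])   # a nonzero matrix...
print(N @ N)                     # [[0 0], [0 0]] -- ...whose square is zero
```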

5. How can the property M^2 = 0 be used in real-world applications?

Nilpotent matrices describe processes that die out after finitely many steps. For example, differentiation acting on polynomials of degree less than ##n## is nilpotent (its ##n##-th power is zero, and its square is zero when ##n = 2##), and the differentials of chain complexes in algebraic topology satisfy ##d^2 = 0## by definition.
