
Matrices M such that M^2 = 0 ?

  1. Jun 10, 2015 #1
    1. The problem statement, all variables and given/known data

    What are the ##n\times n## matrices over a field ##K## such that ##M^2 = 0 ## ?

    2. Relevant equations

    3. The attempt at a solution

    Please can you tell me if this is correct? It looks OK to me, but I have some doubts. I have reused the ideas that I found in a proof about equivalent matrices.
    • One possibility is ##M = 0##
    • Assume that ##M## represents the linear map ##f: E \rightarrow F## in the bases ##{\cal B}## and ##{\cal C}##, both spaces having dimension ##n##.

      Since ##f^2 = 0##, then ##\text{Im}(f) \subset \text{Ker}(f) ##.
      Let ##(e_1,...,e_p)## be a basis of ##\text{Ker}(f)##. This basis can be completed into a basis ##{\cal B'} = (e_1,...,e_p,e_{p+1},...,e_n) ## of ##E##.
      The family ##(f(e_{p+1}),...,f(e_n)) ## belongs to ##\text{Im}(f) \subset \text{Ker}(f) ## and is linearly independent in ##F##.
      • Linear independence:
        ##\sum_{k = p+1}^n \lambda_k f(e_k) = 0 \Rightarrow \sum_{k = p+1}^n \lambda_k e_k \in \text{Ker}(f) = \text{span}(e_{1},...,e_p) ##; since ##{\cal B'}## is a basis of ##E##, all the ##\lambda_k## are zero.
      • A free family in a vector space has at most as many vectors as a basis of that space, so ##n-p \le p \Rightarrow p \ge n/2##. By the rank theorem, ##f## has rank at most ##n/2##.
    The free family ##(f(e_{p+1}),...,f(e_n)) ## can be completed into a basis ##{\cal C'} = (f(e_{p+1}),...,f(e_n), f_1,...,f_p) ## of ##F##.​

    So it follows from all this that with respect to the bases ##{\cal B'}## and ##{\cal C'}##, the matrix of ##f## is zero everywhere but in the upper right corner, where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##, the somewhere depending upon the rank of ##f##.
    -> My answer: ##M = 0##, together with the matrices that are similar to a matrix that is zero everywhere but in the upper right corner, where there is an identity matrix packed somewhere starting at line 1 and ending column ##n##.
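    The construction above can be sanity-checked numerically. Here is a minimal numpy sketch; the size ##n = 4##, the rank ##r = 2##, and the particular invertible ##P## are arbitrary choices for illustration:

```python
import numpy as np

n, r = 4, 2  # rank r must satisfy r <= n/2 by the argument above

# Canonical form: zero everywhere except an identity block in the upper right.
N = np.zeros((n, n), dtype=int)
N[:r, n - r:] = np.eye(r, dtype=int)
assert np.all(N @ N == 0)  # the canonical form squares to zero

# Conjugating by any invertible P preserves both M^2 = 0 and the rank.
P = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
M = np.linalg.inv(P) @ N @ P
assert np.allclose(M @ M, 0)
assert np.linalg.matrix_rank(M) == r
```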
     
  3. Jun 10, 2015 #2

    DEvens

    Education Advisor
    Gold Member

    How does this fit into your answer?

    $$\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}$$
     
  4. Jun 10, 2015 #3
    Take ##P = \begin{pmatrix} 0 & 1 \\ 1 & - 1 \end{pmatrix}## so that ##P^{-1} = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}##:

    ##\begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix} = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P##
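    If it helps, the computation is easy to check with a few lines of numpy (just a sketch restating the matrices above):

```python
import numpy as np

P    = np.array([[0, 1], [1, -1]])
Pinv = np.array([[1, 1], [1, 0]])
N    = np.array([[0, 1], [0, 0]])
M    = np.array([[1, -1], [1, -1]])

assert np.all(P @ Pinv == np.eye(2))  # Pinv really is the inverse of P
assert np.all(Pinv @ N @ P == M)      # M is similar to the canonical form N
assert np.all(M @ M == 0)             # and M^2 = 0, as required
```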
     
  5. Jun 10, 2015 #4

    Mark44

    Staff: Mentor

    What does this work have to do with your problem? The question asks about square matrices ##M## such that ##M^2 = 0##. Your matrix ##P## doesn't satisfy this requirement.

    In post 1, your first point is that ##M = 0##. Since ##M^2 = 0##, the ##n \times n## zero matrix clearly is included.
    Your second point in that post doesn't include the matrix that DEvens gave. Also, it is much more general than it needs to be. In particular, in this part:
    The matrices in question are square, so the linear map ##f^2## takes ##\mathbb{R}^n## to a subspace of ##\mathbb{R}^n##; namely, to the 0 vector in ##\mathbb{R}^n##.
     
  6. Jun 10, 2015 #5
    DrEvens asked me to illustrate my point on a simple example. I showed that the given matrix ##M = \begin{pmatrix} 1 & -1 \\ 1 & -1 \end{pmatrix}##, which satisfies ##M^2 = 0##, is similar to ##\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} ##, which is the only ##2\times 2## matrix that is zero everywhere except the upper right corner, where there is an identity matrix.


    The question just says that the coefficients are in the field ##K##. You can't assume that you have a linear map from ##\mathbb{R}^n \rightarrow \mathbb{R}^n##, or there is something I don't understand.
     
  7. Jun 10, 2015 #6

    Mark44

    Staff: Mentor

    That's DEvens (no r). I didn't understand the point of your example. I thought that you were giving ##P## as an example of a matrix for which ##P^2 = 0##.
    You are correct. I should have said that the map is from ##K^n## to a subspace of ##K^n##.

    You also said this in post #1:
    What about this matrix?
    $$\begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}$$
    There is no identity matrix in the upper right corner...
     
    Last edited: Jun 10, 2015
  8. Jun 10, 2015 #7
    :biggrin: I meant no offense

    Why not a more general vector space over the field ##K## ? I have no example in mind, maybe I'm having a lack of understanding here. If you explain ...
     
  9. Jun 10, 2015 #8
    Yes there is: ## P = P^{-1} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} ##, then ## M = P^{-1} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} P ##.
     
  10. Jun 10, 2015 #9

    Mark44

    Staff: Mentor

    There is no identity matrix in the upper right corner in the example I gave, but that matrix is similar to one that has an identity matrix there.

    Do your examples extend to 3 x 3 matrices or larger?

    In the 2 x 2 case do you have a geometric feel for what this matrix does to an arbitrary input vector?
    $$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
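    (For reference, a two-line numpy check of what this matrix does: it sends ##(x, y)## to ##(y, 0)##, so it collapses everything onto the first axis and a second application kills the result entirely. The input vector is an arbitrary choice.)

```python
import numpy as np

A = np.array([[0, 1], [0, 0]])
v = np.array([3, 5])   # an arbitrary input vector
w = A @ v              # A sends (x, y) to (y, 0)
print(w)               # [5 0]
print(A @ w)           # [0 0] -- the second application gives zero
```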
     
  11. Jun 10, 2015 #10
    No it does not have an identity matrix in the upper right corner, but it is not my point.
    My point is :

    ## M^2 = 0 \iff M = 0 \text{ or } \exists P\in \text{GL}_n(K) :\ M = P^{-1} \begin{pmatrix} 0 & I_{\text{rk}(M)} \\ 0 & 0 \end{pmatrix} P##

    I believe it extends to ##n\times n ## matrices or larger (if you say so), it is what I've tried to prove, and I have no geometric feel whatsoever, for now :biggrin:
     
  12. Jun 10, 2015 #11

    WWGD

    Science Advisor
    Gold Member

    Have you tried using orthogonality relations, i.e., a row, considered as a vector, must be orthogonal to each of the columns, considered as a vector? So each column must be in the ortho (complement) space of all of the row vectors. Of course, for general fields, this would be an abstract orthogonality relation.
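    This orthogonality can be made concrete: the ##(i,j)## entry of ##M^2## is row ##i## of ##M## dotted with column ##j## of ##M##, so ##M^2 = 0## is exactly the condition that every row is "orthogonal" to every column. A minimal numpy sketch using the ##2 \times 2## example from earlier in the thread:

```python
import numpy as np

M = np.array([[1, -1], [1, -1]])  # satisfies M^2 = 0

# (M^2)[i, j] is row i of M dotted with column j of M,
# so M^2 = 0 says each row is orthogonal to each column.
for i in range(2):
    for j in range(2):
        assert M[i, :] @ M[:, j] == 0
```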
     
    Last edited: Jun 10, 2015
  13. Jun 10, 2015 #12
    I haven't tried, but what I have already looks like a description, doesn't it? It seems that it doesn't convince many people ... :frown:
     
  14. Jun 10, 2015 #13

    Mark44

    Staff: Mentor

    No problem.
    The matrix is ##n \times n##. If the field is ##K##, then the matrix represents a transformation from ##K^n## to itself.
     
  15. Jun 10, 2015 #14

    Dick

    Science Advisor
    Homework Helper

    Maybe if you look at Jordan Normal Form you might be able to form a more convincing way of describing what you are trying to say.
     
  16. Jun 11, 2015 #15
    Along this conversation, I've had trouble understanding why you say that. For example, if ##K = \mathbb{C}##, and your vector space over ##K## is the set of polynomials of degree less than ##n##. Any endomorphism of that vector space comes with a ## (n+1) \times (n+1)## matrix with coefficients in ##K##.
    Why should the vector space be reduced to ##K^n## ?

    I don't know what Jordan Normal Form is. Do you have the solution to the problem ? :smile: How much does it cost ? :biggrin:


    However, it seems that it works on ##2\times 2## matrices.
     
  17. Jun 11, 2015 #16

    Dick

    Science Advisor
    Homework Helper

    I meant you should look it up. It's free on here: http://en.wikipedia.org/wiki/Jordan_normal_form It puts what you are trying to say in clearer terms than 'an identity in the upper right corner'.
     
  18. Jun 11, 2015 #17

    Mark44

    Staff: Mentor

    No. A polynomial of degree less than n has at most n terms (##c_0 z^0 + c_1 z^1 + \cdots + c_{n-1} z^{n-1}##), with exponents ranging from 0 through ##n - 1##, inclusive. So the matrix would be ##n \times n## in size.
     
  19. Jun 11, 2015 #18
    I have put it quite clearly in post #10, in symbolic language. I sense your annoyance, would you like to have the final word about this problem ?
     
  20. Jun 11, 2015 #19

    pasmith

    Homework Helper

    All vector spaces of dimension [itex]n[/itex] over [itex]K[/itex] are isomorphic to [itex]K^n[/itex], whatever the actual objects: p-by-q matrices with entries in K where pq = n; polynomials of degree at most n - 1 with coefficients in K; functions [itex]X \to K[/itex] where [itex]X[/itex] is any set of cardinality [itex]n[/itex]; linear functions [itex]V \to K[/itex] where [itex]V[/itex] is any n-dimensional vector space over K; and so forth.
     
  21. Jun 11, 2015 #20

    vela

    Staff Emeritus
    Science Advisor
    Homework Helper
    Education Advisor

    Why not use the basis ##{\cal C'} = (f_1,\dots,f_p,f(e_{p+1}),\dots,f(e_n))##? Then the ones will be on the diagonal, and you can avoid the awkward phrasing "an identity matrix packed somewhere starting at line 1 and ending column ##n##."

    This makes me think there's something wrong with your proof since by choosing the right bases, you can write any linear transformation in this block-diagonal form. You've argued that the 0-block will be bigger than the identity matrix, but is that enough to guarantee that ##M^2=0##? And how do you know there is a similarity transformation that allows you to turn M into this diagonal form?

    The latter concern is your main problem, I think. I recommend you reconsider Dick's suggestion to look into Jordan normal form.
     
    Last edited: Jun 11, 2015