
The rank of a matrix

  1. Apr 15, 2009 #1
    Theorem: Let A be an m x n matrix. If P and Q are invertible m x m and n x n matrices, respectively, then
    (a.) rank(AQ) = rank(A)
    (b.) rank(PA) = rank(A)
    (c.) rank(PAQ) = rank(A)

    [tex]R(L_{AQ}) = R(L_A L_Q) = L_A L_Q(F^n) = L_A(L_Q(F^n)) = L_A(F^n) = R(L_A)[/tex]

    since [tex]L_Q[/tex] is onto. Therefore,
    rank(AQ) = dim(R([tex]L_{AQ}[/tex])) = dim(R([tex]L_A[/tex])) = rank(A). (#1)

    Question 1: How is [tex]L_Q[/tex] onto?
    Question 2: How does the onto-ness imply (#1)?
    Question 3: Can anyone help me with, or supply ideas for, the proofs of parts (b.) and (c.) of the theorem?

    NOTE: the symbol R denotes the image (range) of a map.
    Last edited: Apr 15, 2009
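    For concreteness, the theorem can also be sanity-checked numerically. Below is a minimal pure-Python sketch (the matrices A, P, Q are made-up examples, not from the thread) that computes rank by Gaussian elimination and verifies (a.)-(c.) on one instance:

    ```python
    def rank(M, tol=1e-9):
        """Rank of a matrix (list of rows) via Gaussian elimination."""
        M = [row[:] for row in M]          # work on a copy
        rows, cols = len(M), len(M[0])
        r = 0                              # next pivot row
        for c in range(cols):
            pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
            if pivot is None or abs(M[pivot][c]) < tol:
                continue                   # no usable pivot in this column
            M[r], M[pivot] = M[pivot], M[r]
            for i in range(r + 1, rows):   # eliminate entries below the pivot
                f = M[i][c] / M[r][c]
                for j in range(c, cols):
                    M[i][j] -= f * M[r][j]
            r += 1
        return r

    def matmul(A, B):
        """Product of two matrices given as lists of rows."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    A = [[1, 2, 3],
         [2, 4, 6]]        # rank 1: the second row is twice the first
    P = [[1, 1],
         [0, 1]]           # invertible 2x2
    Q = [[1, 0, 0],
         [2, 1, 0],
         [0, 0, 1]]        # invertible 3x3

    assert rank(matmul(A, Q)) == rank(A)             # (a.)
    assert rank(matmul(P, A)) == rank(A)             # (b.)
    assert rank(matmul(P, matmul(A, Q))) == rank(A)  # (c.)
    ```

    Of course this checks only one instance; the proof below is what establishes the general statement.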
  3. Apr 15, 2009 #2


    Science Advisor
    Homework Helper
    Gold Member

    Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.
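    To see the onto-ness concretely: if Q is invertible, then for any target y the vector x = Q⁻¹y satisfies Qx = y, so every y is hit by [itex]L_Q[/itex]. A tiny pure-Python illustration with a made-up 2x2 example (not from the thread):

    ```python
    # Q and its inverse (hand-computed: det(Q) = 2*1 - 1*1 = 1)
    Q     = [[2, 1],
             [1, 1]]
    Q_inv = [[1, -1],
             [-1, 2]]

    def apply(M, v):
        """Apply the linear map L_M to a vector v."""
        return [sum(m * x for m, x in zip(row, v)) for row in M]

    # For an arbitrary target y, x = Q^{-1} y is a preimage under L_Q,
    # which is exactly what "L_Q is onto" means.
    y = [7, -3]
    x = apply(Q_inv, y)
    assert apply(Q, x) == y
    ```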

    Answer 2: In the proof, they said "[...] since [itex]L_Q[/itex] is onto" to justify the step [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex].
    Now that you have [itex]R(L_{AQ})=R(L_A)[/itex], it follows in particular that [itex]dim(R(L_{AQ}))=dim(R(L_A))[/itex]. But by definition of the rank of a matrix, we have [itex]rank(AQ) = dim(R(L_{AQ}))[/itex] and [itex]rank(A) = dim(R(L_A))[/itex].

    Btw - the "terminology of images" is not a recognized term in mathematics, and the symbol R for it is nonstandard. You should use Im(f) instead of R(f); this is called the image of the map f.
  4. Apr 15, 2009 #3
    Oh yeah, I should have remembered that an invertible matrix has those equivalent properties. And the answer for part (c.) is simple once parts (a.) and (b.) are established. I think your explanation of (#1) is pretty good; however, it is still a little fuzzy to me. Could you try to explain it another way?
  5. Apr 16, 2009 #4
    L(V,W) stands for the vector space of linear transformations from vector space V to W.
    L(V) stands for the vector space of linear transformations from vector space V to itself.
    rk(?) stands for the rank of "?".
    ker(?) stands for the kernel of a linear transformation "?".
    im(?) stands for the image of "?".
    inv(?) stands for the inverse of a linear transformation "?".

    Answer 1:
    Think about the kernel of a linear transformation: if the inverse of a linear transformation exists, then its kernel is {0}, i.e., only the zero vector. In other words, two vectors can map to the same vector only if their difference lies in the kernel, since
    σu = σv ←→ σ(u−v) = 0 ←→ u−v ∈ ker(σ), for σ∈L(V) and u,v∈V.
    So when ker(σ) = {0}, distinct vectors always have distinct images, and σ is 1-1 (hence, in finite dimensions, also onto).

    Answer 2:
    Your first two questions are identical to:
    σ,τ,inv(τ)∈L(V), rk(στ) = rk(τσ) = rk(σ)

    Let φ=στ. According to dim(ker(φ)) + rk(φ) = dim(V), we have rk(φ) = dim(V) - dim(ker(φ)); dim(V) is fixed, so the only thing that needs to be considered is ker(φ). The mapping process of φ can be decomposed into two steps: 1, map a vector to im(τ) by τ; 2, map the result of step 1 to im(σ) by σ. As inv(τ)∈L(V), namely ker(τ) = {0}, step 1 maps V onto V, so nothing is collapsed there; whether inv(σ) exists is unknown, so the decisive factor is ker(σ). After step 2, ker(φ) = inv(τ)(ker(σ)), which has the same dimension as ker(σ) since inv(τ) is an isomorphism; hence rk(στ) = rk(σ).
    You can analyse τσ in a similar way.
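    The decomposition argument above can also be checked numerically. A pure-Python sketch (σ and τ are made-up example matrices, not from the thread): composing with an invertible τ on either side leaves the rank unchanged, and the kernel dimension follows from nullity = dim(V) − rank.

    ```python
    def rank(M, tol=1e-9):
        """Rank of a matrix (list of rows) via Gaussian elimination."""
        M = [row[:] for row in M]
        rows, cols = len(M), len(M[0])
        r = 0
        for c in range(cols):
            pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
            if pivot is None or abs(M[pivot][c]) < tol:
                continue
            M[r], M[pivot] = M[pivot], M[r]
            for i in range(r + 1, rows):
                f = M[i][c] / M[r][c]
                for j in range(c, cols):
                    M[i][j] -= f * M[r][j]
            r += 1
        return r

    def matmul(A, B):
        """Product of two matrices given as lists of rows."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    sigma = [[1, 0, 0],
             [0, 0, 0],
             [0, 0, 0]]    # rank 1, so dim(ker(sigma)) = 3 - 1 = 2
    tau   = [[1, 2, 0],
             [0, 1, 0],
             [3, 0, 1]]    # invertible (det = 1), so ker(tau) = {0}

    n = 3
    assert n - rank(tau) == 0                        # trivial kernel
    assert rank(matmul(sigma, tau)) == rank(sigma)   # rk(sigma.tau) = rk(sigma)
    assert rank(matmul(tau, sigma)) == rank(sigma)   # rk(tau.sigma) = rk(sigma)
    ```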

    Answer 3:
    Let μ=πστ with π and τ both invertible. Based on Answer 2, rk(στ)=rk(σ), so writing φ=στ gives μ=πφ, and that is the same situation as in Answer 2.
  6. Apr 16, 2009 #5



    Which part is fuzzy to you?
  7. Apr 16, 2009 #6
    Question: How do we justify [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex] from the fact that [itex]L_Q[/itex] is onto? Sorry for asking such a silly question.

    Also for the proof of part (b.) to this theorem, I have the following outlined:

    [tex]dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_P L_A)) = dim(R(L_{PA})) = rank(PA)[/tex]

    but the first equality apparently hinges on the result of the following problem:
    Let V and W be finite-dimensional vector spaces and T: V-->W be an isomorphism. Let [tex]V_0[/tex] be a subspace of V.
    Prove that [tex]dim(V_0) = dim(T(V_0))[/tex].
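    As a numerical sanity check of this claim (a sketch with made-up matrices, not a proof): put a basis of [itex]V_0[/itex] into the columns of a matrix B; then [itex]dim(V_0)[/itex] = rank(B) and [itex]dim(T(V_0))[/itex] = rank(TB), and the two agree when T is invertible.

    ```python
    def rank(M, tol=1e-9):
        """Rank of a matrix (list of rows) via Gaussian elimination."""
        M = [row[:] for row in M]
        rows, cols = len(M), len(M[0])
        r = 0
        for c in range(cols):
            pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
            if pivot is None or abs(M[pivot][c]) < tol:
                continue
            M[r], M[pivot] = M[pivot], M[r]
            for i in range(r + 1, rows):
                f = M[i][c] / M[r][c]
                for j in range(c, cols):
                    M[i][j] -= f * M[r][j]
            r += 1
        return r

    def matmul(A, B):
        """Product of two matrices given as lists of rows."""
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    B = [[1, 0],
         [0, 1],
         [1, 1]]          # columns span a 2-dimensional subspace V0 of F^3
    T = [[2, 1, 0],
         [0, 1, 0],
         [1, 0, 1]]       # invertible (det = 2): an isomorphism F^3 -> F^3

    assert rank(B) == 2                   # dim(V0)
    assert rank(matmul(T, B)) == rank(B)  # dim(T(V0)) = dim(V0)
    ```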

    Question: Is there any way you could help me prove this new question?

    Thanks again
  8. Apr 16, 2009 #7



    Ask yourself what does it mean that [itex]L_Q[/itex] is onto. It means precisely that [itex]L_Q(F^n)=F^n[/itex].

    That's very good work. Indeed, if you could just prove this new question, then (b) would be solved.

    Recall that by definition, the vector space [itex]V_0[/itex] has dimension d if it admits a set of d linearly independent vectors that span [itex]V_0[/itex] (i.e. a basis of d elements). So, suppose [itex]\{e_1,...,e_d\}[/itex] is a basis for [itex]V_0[/itex]. What can you say about the set [itex]\{T(e_1),...,T(e_d)\}[/itex]?
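    One way the hint can be completed (a sketch, not the only route): suppose [itex]c_1T(e_1)+...+c_dT(e_d)=0[/itex]. By linearity, [itex]T(c_1e_1+...+c_de_d)=0[/itex], and since T is 1-1 its kernel is {0}, so [itex]c_1e_1+...+c_de_d=0[/itex]; since the [itex]e_i[/itex] form a basis, every [itex]c_i=0[/itex]. Hence [itex]\{T(e_1),...,T(e_d)\}[/itex] is linearly independent, and it spans [itex]T(V_0)[/itex] because every element of [itex]T(V_0)[/itex] is [itex]T(c_1e_1+...+c_de_d)=c_1T(e_1)+...+c_dT(e_d)[/itex]. So it is a basis of [itex]T(V_0)[/itex], giving [itex]dim(T(V_0))=d=dim(V_0)[/itex].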
  9. Apr 17, 2009 #8
    Is there any way to prove this problem without your specified definitions above, with more emphasis on the idea of "isomorphism"? The reason I ask is that it isn't until the next theorem that we know that the rank of a matrix is the dimension of the subspace generated by its columns- in particular, rank(A) = [tex]dim(R(L_A))[/tex] = dim([tex]span(\{a_1, a_2, ..., a_n\})[/tex]), where [tex]a_j[/tex] denotes the jth column of A.


  10. Apr 17, 2009 #9



    Not really, no.