Theorem: Rank of a Matrix: Proof & Questions

  • Context: Graduate 
  • Thread starter: jeff1evesque
  • Tags: Matrix rank

Discussion Overview

The discussion revolves around a theorem concerning the rank of a matrix and its properties when multiplied by invertible matrices. Participants explore the proof of the theorem, which includes questions about the implications of linear transformations being onto and the rank of composite transformations. The scope includes theoretical aspects of linear algebra and proofs related to matrix rank.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants assert that if Q is an invertible matrix, then L_Q is an invertible linear map, which implies it is both 1-1 and onto.
  • Others discuss the justification for the step in the proof that states "since L_Q is onto," linking it to the equality R(L_{AQ}) = R(L_A).
  • There are inquiries about how the onto-ness of L_Q leads to the conclusion in the proof regarding the rank of AQ.
  • Some participants propose that the proofs for parts (b) and (c) of the theorem can be established once part (a) is understood.
  • Questions arise about the kernel of linear transformations and how it relates to the rank of the transformations involved.
  • One participant outlines a proof for part (b) but notes that it hinges on proving a related problem about isomorphisms and dimensions of vector spaces.
  • There is a request for clarification on the definitions and implications of isomorphisms in the context of the discussion.

Areas of Agreement / Disagreement

Participants express varying levels of understanding regarding the proof and its implications, with some agreeing on the properties of invertible matrices while others seek further clarification. The discussion remains unresolved on several points, particularly regarding the proof of part (b) and the related problem about isomorphisms.

Contextual Notes

Limitations include potential misunderstandings about the terminology used in the proof, such as the use of R for images, which some participants challenge. Additionally, the discussion reveals dependencies on definitions and assumptions that are not universally agreed upon.

jeff1evesque
Theorem: Let A be an m x n matrix. If P and Q are invertible m x m and n x n matrices, respectively, then
(a.) rank(AQ) = rank(A)
(b.) rank(PA) = rank(A)
(c.) rank(PAQ) = rank(A)

Proof:
[tex]R(L_{AQ}) = R(L_AL_Q) = L_AL_Q(F^n) = L_A(L_Q(F^n)) = L_A(F^n) = R(L_A)[/tex]

since [tex]L_Q[/tex] is onto. Therefore,
rank(AQ) = dim(R([tex]L_{AQ}[/tex])) = dim(R([tex]L_A[/tex])) = rank(A). (#1)
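Not part of the original proof, but part (a) can be sanity-checked numerically. The sketch below uses exact rational arithmetic to avoid floating-point rank pitfalls; the helper names `rank` and `matmul` are my own, not from the thread.

```python
from fractions import Fraction

def rank(mat):
    """Rank via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols = len(m), len(m[0])
    r = 0  # index of the next pivot row
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        # eliminate entries below the pivot
        for i in range(r + 1, rows):
            f = m[i][c] / m[r][c]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def matmul(A, B):
    """Naive matrix product, enough for a small check."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 2, 3],
     [2, 4, 6]]        # rank 1: second row is twice the first
Q = [[1, 0, 1],
     [0, 1, 0],
     [0, 0, 1]]        # invertible (unit upper triangular)

print(rank(A), rank(matmul(A, Q)))  # both should be 1
```

Multiplying on the right by the invertible Q leaves the rank unchanged, as the theorem asserts.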

Question 1: How is [tex]L_Q[/tex] onto?
Question 2: How does the onto-ness imply (#1)?
Question 3: Can anyone help me/supply ideas for the proofs of parts (b.) and (c.) of the theorem?

NOTE: the symbol R denotes the image of a map.
 
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.

Answer 2: In the proof, they said "[...] since [itex]L_Q[/itex] is onto" to justify the step [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex]. Now that you have [itex]R(L_{AQ})=R(L_A)[/itex], it follows in particular that [itex]dim(R(L_{AQ}))=dim(R(L_A))[/itex]. But by definition of the rank of a matrix, we have [itex]rank(AQ) = dim(R(L_{AQ}))[/itex] and [itex]rank(A) = dim(R(L_A))[/itex].

Btw - "the terminology of images" is not a recognized term in mathematics, and neither is the symbol R for it. You should use Im(f) instead of R(f); this is called the image of the map f.
 
quasar987 said:
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.
[...]
Oh yeah, I should have remembered that an invertible matrix has those equivalent properties. And the answer for part (c.) is simple once parts (a.) and (b.) are established. I think your explanation for (#1) is pretty good; however, it is still a little fuzzy to me - could you try to explain it another way?
 
Notations:
L(V,W) stands for the vector space of linear transformations from vector space V to W.
L(V) stands for the vector space of linear transformations from vector space V to itself.
rk(?) stands for the rank of "?".
ker(?) stands for the kernel of a linear transformation "?".
im(?) stands for the image of "?".
inv(?) stands for the inverse of a linear transformation "?".

Answer 1:
Think about the kernel of a linear transformation: if the inverse of a linear transformation exists, then its kernel is {0}, i.e., it contains only the zero vector. In other words, two distinct vectors u and v can map to the same vector only if u - v belongs to the kernel of the transformation; when the kernel is {0}, this forces u = v, since:
σu = σv ←→ σ(u - v) = 0 ←→ u - v ∈ ker(σ), where σ∈L(V) and u,v∈V

Answer 2:
Your first two questions are identical to:
For σ, τ ∈ L(V) with inv(τ) ∈ L(V): rk(στ) = rk(τσ) = rk(σ).

Let φ = στ. According to dim(ker(φ)) + rk(φ) = dim(V), we have rk(φ) = dim(V) - dim(ker(φ)). Since dim(V) is a fixed number, the only thing that needs to be considered is ker(φ). The mapping φ can be decomposed into two steps: (1) map a vector into im(τ) by τ; (2) map the result of step (1) into im(σ) by σ. As inv(τ) ∈ L(V), i.e., ker(τ) = {0}, step (1) maps V onto V, so the decisive factor is ker(σ): in fact ker(φ) = inv(τ)(ker(σ)), which has the same dimension as ker(σ) since τ is invertible. Hence rk(φ) = rk(σ).
You can analyse τσ in a similar way.

Answer 3:
Let μ = τστ. Based on answer 2, rk(στ) = rk(φ) = rk(σ), so μ = τφ, and this is the same question addressed in answer 2.
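The kernel argument above can be condensed into a short derivation (a sketch in the thread's notation, with φ = στ and τ invertible):

```latex
% \varphi = \sigma\tau, \tau invertible:
\ker(\sigma\tau) = \{v : \sigma(\tau v) = 0\} = \tau^{-1}(\ker\sigma)
% \tau^{-1} is an isomorphism, so the dimensions agree:
\dim\ker(\sigma\tau) = \dim\ker\sigma
% rank-nullity on V then gives
\operatorname{rk}(\sigma\tau) = \dim V - \dim\ker(\sigma\tau)
                              = \dim V - \dim\ker\sigma
                              = \operatorname{rk}(\sigma)
```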
 
jeff1evesque said:
Oh yeah, I should have remembered that an invertible matrix has those equivalent properties. And the answer for part (c.) is simple once parts (a.) and (b.) are established. I think your explanation for (#1) is pretty good; however, it is still a little fuzzy to me - could you try to explain it another way?

Which part is fuzzy to you?
 
quasar987 said:
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.

Answer 2: In the proof, they said "[...] since [itex]L_Q[/itex] is onto" to justify the step [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex].

Question: How do we justify [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex] from [itex]L_Q[/itex] being onto? Sorry for asking such a silly question.

Also, for the proof of part (b.) of this theorem, I have the following outlined:

[tex]dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_PL_A)) = dim(R(L_{PA})) = rank(PA)[/tex]

but the first equality apparently hinges on the result of the following problem:
Let V and W be finite dimensional vector spaces and T: V --> W be an isomorphism. Let [tex]V_0[/tex] be a subspace of V. Prove that [tex]dim(V_0) = dim(T(V_0))[/tex].

Question: Is there any way you could help me prove this new question?

Thanks again
 
jeff1evesque said:
Question: How do we justify [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex] from [itex]L_Q[/itex] being onto?

Ask yourself what it means that [itex]L_Q[/itex] is onto. It means precisely that [itex]L_Q(F^n)=F^n[/itex].

jeff1evesque said:
Also, for the proof of part (b.) of this theorem, I have the following outlined:

[tex]dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_PL_A)) = dim(R(L_{PA})) = rank(PA)[/tex]

but the first equality apparently hinges on the result of the following problem:
Let V and W be finite dimensional vector spaces and T: V --> W be an isomorphism. Let [tex]V_0[/tex] be a subspace of V. Prove that [tex]dim(V_0) = dim(T(V_0))[/tex].

Question: Is there any way you could help me prove this new question?

That's very good work. Indeed, if you can prove this new question, then (b) is solved.

Recall that by definition, the vector space [itex]V_0[/itex] has dimension d if it admits a set of d linearly independent vectors that span [itex]V_0[/itex] (i.e. a basis of d elements). So, suppose [itex]\{e_1,...,e_d\}[/itex] is a basis for [itex]V_0[/itex]. What can you say about the set [itex]\{T(e_1),...,T(e_d)\}[/itex]?
 
quasar987 said:
Ask yourself what does it mean that [itex]L_Q[/itex] is onto. It means precisely that [itex]L_Q(F^n)=F^n[/itex].


That's very good work. Indeed, if you could just prove this new question, then (b) would be solved.

Recall that by definition, the vector space [itex]V_0[/itex] has dimension d if it admits a set of d linearly independent vectors that span [itex]V_0[/itex] (i.e. a basis of d elements). So, suppose [itex]\{e_1,...,e_d\}[/itex] is a basis for [itex]V_0[/itex]. What can you say about the set [itex]\{T(e_1),...,T(e_d)\}[/itex]?

Is there any way to prove this problem without the definitions you specified above, with more emphasis on the idea of "isomorphism"? The reason I ask is that it isn't until the next theorem that we know the rank of a matrix is the dimension of the subspace generated by its columns - in particular, rank(A) = [tex]dim(R(L_A))[/tex] = dim([tex]span(\{a_1, a_2, ..., a_n\})[/tex]), where [tex]a_j[/tex] is the jth column of A.

Thanks,

JL
 
Not really, no.
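For completeness, the basis argument hinted at earlier in the thread can be sketched as follows (my own summary, not a post from the thread):

```latex
% Claim: T\colon V \to W an isomorphism, V_0 \subseteq V a subspace
% \Rightarrow \dim T(V_0) = \dim V_0.
% Let \{e_1,\dots,e_d\} be a basis of V_0.
% Spanning: every w \in T(V_0) is
w = T\Big(\sum_i c_i e_i\Big) = \sum_i c_i T(e_i)
% Independence:
\sum_i c_i T(e_i) = 0 \;\Rightarrow\; T\Big(\sum_i c_i e_i\Big) = 0
\;\Rightarrow\; \sum_i c_i e_i = 0 \quad (T \text{ is injective})
\;\Rightarrow\; c_1 = \dots = c_d = 0
% Hence \{T(e_1),\dots,T(e_d)\} is a basis of T(V_0), of size d.
```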
 
