# The rank of a matrix

## Main Question or Discussion Point

Theorem: Let A be an m x n matrix. If P and Q are invertible m x m and n x n matrices, respectively, then
(a.) rank(AQ) = rank(A)
(b.) rank(PA) = rank(A)
(c.) rank(PAQ) = rank(A)

Proof:
$$R(L_{AQ}) = R(L_A L_Q) = L_A L_Q(F^n) = L_A(L_Q(F^n)) = L_A(F^n) = R(L_A)$$

since $$L_Q$$ is onto. Therefore,
rank(AQ) = dim(R($$L_{AQ}$$)) = dim(R($$L_A$$)) = rank(A). (#1)

Question 1: How is $$L_Q$$ onto?
Question 2: How does the onto-ness imply (#1)?
Question 3: Can anyone help me with, or supply ideas for, the proofs of parts (b.) and (c.) of the theorem?

NOTE: the symbol R denotes the terminology of images.
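The theorem is easy to sanity-check numerically. Below is a sketch using NumPy (not part of the proof; the random matrices and the rank-2 construction are my own example): multiplying A on either side by an invertible matrix leaves the rank unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a 4 x 6 matrix of rank 2 (a product of 4x2 and 2x6 factors).
m, n = 4, 6
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))

# Random square Gaussian matrices are invertible with probability 1.
P = rng.standard_normal((m, m))
Q = rng.standard_normal((n, n))

r = np.linalg.matrix_rank
print(r(A), r(A @ Q), r(P @ A), r(P @ A @ Q))  # all four ranks agree
```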


quasar987
Homework Helper
Gold Member
Answer 1: Q is an invertible matrix. This is the same as saying that $L_Q$ is an invertible linear map. And this is the same as saying that $L_Q$ is 1-1 and onto.

Answer 2: In the proof, they said "[...] since $L_Q$ is onto" to justify the step $$L_A(L_Q(F^n)) = L_A(F^n)$$. Now that you have $$R(L_{AQ}) = R(L_A)$$, it follows in particular that $$dim(R(L_{AQ})) = dim(R(L_A))$$. But by definition of the rank of a matrix, we have $$rank(AQ) = dim(R(L_{AQ}))$$ and $$rank(A) = dim(R(L_A))$$.

Btw, "the terminology of images" is not a recognized term in mathematics, and neither is the symbol R for it. You should use Im(f) instead of R(f); this is called the image of the map f.

Oh yeah, I should have remembered that those properties of an invertible matrix are equivalent. And the answer for part (c.) is simple once parts (a.) and (b.) are established. I think your explanation for (#1) is pretty good; however, it is still a little fuzzy to me. Could you try to explain it another way?

Notations:
L(V,W) stands for the vector space of linear transformations from vector space V to W.
L(V) stands for the vector space of linear transformations from vector space V to itself.
rk(?) stands for the rank of "?".
ker(?) stands for the kernel of a linear transformation "?".
im(?) stands for the image of "?".
inv(?) stands for the inverse of a linear transformation "?".

Answer 1: Think about the kernel of a linear transformation: if a linear transformation is invertible, then its kernel is {0}, i.e., it contains only the zero vector. In other words, two vectors can map to the same vector only if their difference lies in the kernel, since
σu = σv ←→ σ(u − v) = 0 ←→ (u − v) ∈ ker(σ), for σ ∈ L(V) and u, v ∈ V.

Answer 2: Your first two questions amount to: for σ, τ ∈ L(V) with inv(τ) ∈ L(V), rk(στ) = rk(τσ) = rk(σ). Let φ = στ. By rank-nullity, dim(ker(φ)) + rk(φ) = dim(V), so rk(φ) = dim(V) − dim(ker(φ)); since dim(V) is fixed, the only thing to consider is ker(φ). The mapping φ can be decomposed into two steps: (1) map a vector into im(τ) by τ; (2) map the result of step 1 into im(σ) by σ. Since inv(τ) ∈ L(V), i.e., ker(τ) = {0}, step 1 maps V onto V; but whether inv(σ) ∈ L(V) is unknown, so the decisive factor is ker(σ), and after step 2, ker(φ) = ker(σ). You can analyse τσ in a similar way.

Answer 3: Let μ = τστ. By Answer 2, rk(στ) = rk(φ) = rk(σ), so μ = τφ, and this is the same situation as in Answer 2.

quasar987
Science Advisor
Homework Helper
Gold Member
Which part is fuzzy to you?

Question: How do we justify $$L_A(L_Q(F^n)) = L_A(F^n)$$ given that $L_Q$ is onto? Sorry for asking such a silly question.

Also, for the proof of part (b.) of this theorem, I have the following outlined:
$$dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_P L_A)) = dim(R(L_{PA})) = rank(PA)$$

but the first equality apparently hinges on the result of the following problem:
Let V and W be finite dimensional vector spaces and T: V-->W be an isomorphism. Let $$V_0$$ be a subspace of V:
Prove that $$dim(V_0)$$ = $$dim(T(V_0)).$$

Question: Is there any way you could help me prove this new question?

Thanks again
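The first equality in the chain above, dim(R(L_A)) = dim(L_P(R(L_A))), can also be checked numerically. A sketch with NumPy (the matrices are arbitrary examples of my own, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(1)

A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))  # a 5 x 7 matrix of rank 3
P = rng.standard_normal((5, 5))                                # invertible with probability 1

# An orthonormal basis of R(L_A), the column space of A, via the SVD.
U, S, _ = np.linalg.svd(A)
r = int((S > 1e-10).sum())
basis = U[:, :r]

dim_before = np.linalg.matrix_rank(basis)     # dim(R(L_A))
dim_after = np.linalg.matrix_rank(P @ basis)  # dim(L_P(R(L_A)))
print(dim_before, dim_after)  # equal
```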

quasar987
Homework Helper
Gold Member
Ask yourself what it means that $L_Q$ is onto. It means precisely that $$L_Q(F^n) = F^n$$.
That's very good work. Indeed, if you could just prove this new question, then (b) would be solved.

Recall that by definition, the vector space $V_0$ has dimension d if it admits a set of d linearly independent vectors that span $V_0$ (i.e. a basis of d elements). So, suppose $\{e_1,...,e_d\}$ is a basis for $V_0$. What can you say about the sets $\{T(e_1),...,T(e_d)\}$?
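To make the hint concrete: for an invertible T, the images of linearly independent vectors remain linearly independent, which is exactly why dim(V_0) = dim(T(V_0)). A quick NumPy illustration (the dimensions and matrices are my own example):

```python
import numpy as np

rng = np.random.default_rng(2)

T = rng.standard_normal((6, 6))  # a random 6x6 matrix is invertible with probability 1
E = rng.standard_normal((6, 3))  # columns e_1, e_2, e_3: a basis of a 3-dim subspace V_0

# {e_1, e_2, e_3} is independent, and so is {T(e_1), T(e_2), T(e_3)}.
print(np.linalg.matrix_rank(E), np.linalg.matrix_rank(T @ E))  # 3 3
```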

Is there any way to prove this problem without the definitions you specified above, with more emphasis on the idea of "isomorphism"? The reason I ask is that it isn't until the next theorem that we know that the rank of a matrix is the dimension of the subspace generated by its columns; in particular, rank(A) = $$dim(R(L_A))$$ = dim($$span(\{a_1, a_2, ..., a_n\})$$), where $$a_j$$ is the jth column of A.

Thanks,

JL
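For what it's worth, that characterization (rank(A) = dimension of the span of the columns) is easy to check on a small example. Here the third column is the sum of the first two, so both numbers come out to 2 (an illustrative matrix of my own choosing):

```python
import numpy as np

# a_3 = a_1 + a_2, so the column span is 2-dimensional.
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [0., 1., 1.]])

rank_A = np.linalg.matrix_rank(A)
dim_col_span = np.linalg.matrix_rank(A.T)  # rows of A^T are the columns of A
print(rank_A, dim_col_span)  # 2 2
```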
