Theorem: Rank of a Matrix: Proof & Questions

  • Thread starter jeff1evesque
  • Start date
  • Tags
    Matrix rank
  • #1
jeff1evesque
Theorem: Let A be an m x n matrix. If P and Q are invertible m x m and n x n matrices, respectively, then
(a.) rank(AQ) = rank(A)
(b.) rank(PA) = rank(A)
(c.) rank(PAQ) = rank(A)
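Before working through the proof, the theorem can be sanity-checked numerically. A minimal sketch with NumPy (not a proof; the matrices A, P, Q below are my own arbitrary examples, chosen so A is rank-deficient and P, Q are invertible):

```python
import numpy as np

# A 3x5 matrix of rank 2 (row 2 is twice row 1).
A = np.array([[1., 2., 3., 4., 5.],
              [2., 4., 6., 8., 10.],
              [0., 1., 0., 1., 0.]])

# Invertible P (3x3) and Q (5x5): unit triangular, so det = 1.
P = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [0., 0., 1.]])
Q = np.eye(5) + np.diag(np.ones(4), k=1)

r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(A @ Q) == r      # part (a)
assert np.linalg.matrix_rank(P @ A) == r      # part (b)
assert np.linalg.matrix_rank(P @ A @ Q) == r  # part (c)
assert r == 2
```

Multiplying by an invertible matrix on either side never changes the rank, which is exactly what the three parts assert.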

Proof:
[tex]R(L_{AQ}) = R(L_A L_Q) = L_A L_Q(F^n) = L_A(L_Q(F^n)) = L_A(F^n) = R(L_A)[/tex]

since [tex]L_Q[/tex] is onto. Therefore,
rank(AQ) = dim(R([tex]L_{AQ}[/tex])) = dim(R([tex]L_A[/tex])) = rank(A). (#1)

Question1: How is [tex]L_Q[/tex] onto?
Question2:How does the onto-ness imply (#1)?
Question3:Can anyone help me/supply ideas for the proof for parts (b.) and (c.) of the theorem?

NOTE: the symbol R denotes the image (range) of a map.
 
  • #2
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.

Answer 2: In the proof, they said "[...] since [itex]L_Q[/itex] is onto" to justify the step [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex].
Now that you have [itex]R(L_{AQ})=R(L_A)[/itex], it follows in particular that [itex]dim(R(L_{AQ}))=dim(R(L_A))[/itex]. But by definition of the rank of a matrix, we have [itex]rank(AQ) = dim(R(L_{AQ}))[/itex] and [itex]rank(A) = dim(R(L_A))[/itex].

Btw - "terminology of images" is not a recognized term in mathematics, and neither is the symbol R for it. You should use Im(f) instead of R(f); this is called the image of the map f.
 
  • #3
quasar987 said:
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.

Answer 2: In the proof, they said "[...] since [itex]L_Q[/itex] is onto" to justify the step [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex].
Now that you have [itex]R(L_{AQ})=R(L_A)[/itex], it follows in particular that [itex]dim(R(L_{AQ}))=dim(R(L_A))[/itex]. But by definition of the rank of a matrix, we have [itex]rank(AQ) = dim(R(L_{AQ}))[/itex] and [itex]rank(A) = dim(R(L_A))[/itex].

Btw - "terminology of images" is not a recognized term in mathematics, and neither is the symbol R for it. You should use Im(f) instead of R(f); this is called the image of the map f.

Oh yeah, I should have remembered that an invertible matrix has those equivalent properties. And the answer for part (c) is simple once parts (a) and (b) are established. I think your explanation for (#1) is pretty good; however, it is still a little fuzzy to me. Could you try to explain it another way?
 
  • #4
Notations:
L(V,W) stands for the vector space of linear transformations from vector space V to W.
L(V) stands for the vector space of linear transformations from vector space V to itself.
rk(?) stands for the rank of "?".
ker(?) stands for the kernel of a linear transformation "?".
im(?) stands for the image of "?".
inv(?) stands for the inverse of a linear transformation "?".

Answer 1:
Think about the kernel of a linear transformation: if the inverse of a linear transformation exists, then its kernel is {0}, i.e., it contains only the zero vector. In other words, no two distinct vectors can map to the same vector, since
σu = σv ←→ σ(u−v) = 0 ←→ u−v ∈ ker(σ), for σ∈L(V) and u,v∈V,
and ker(σ) = {0} then forces u = v.
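This kernel argument can be illustrated numerically. A small sketch with NumPy (the matrix sigma and vector u below are hand-picked examples of mine, not from the thread):

```python
import numpy as np

# sigma: an invertible map on R^3 (nonzero determinant).
sigma = np.array([[2., 1., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
assert abs(np.linalg.det(sigma)) > 1e-9  # invertible

# Invertibility forces nullity 0, i.e. ker(sigma) = {0}.
nullity = 3 - np.linalg.matrix_rank(sigma)
assert nullity == 0

# So sigma(u) = sigma(v) forces u = v: the preimage of sigma(u) is unique.
u = np.array([1., 2., 3.])
v = np.linalg.solve(sigma, sigma @ u)
assert np.allclose(u, v)
```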

Answer 2:
Your first two questions amount to:
if σ, τ ∈ L(V) and inv(τ) exists, then rk(στ) = rk(τσ) = rk(σ).

Let φ = στ. According to dim(ker(φ)) + rk(φ) = dim(V), we have rk(φ) = dim(V) − dim(ker(φ)); since dim(V) is fixed, the only thing that needs to be considered is ker(φ). The mapping φ can be decomposed into two steps: (1) map a vector into im(τ) by τ; (2) map the result of step 1 into im(σ) by σ. Since inv(τ) ∈ L(V), i.e. ker(τ) = {0}, step 1 maps V onto V. After step 2, then, ker(φ) = inv(τ)(ker(σ)), which has the same dimension as ker(σ) because inv(τ) is an isomorphism. Hence rk(στ) = rk(σ).
You can analyse τσ in a similar way.
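The two-step decomposition above can be checked numerically. A sketch with NumPy, using an arbitrary rank-3 sigma and a hand-picked invertible tau (both my own examples):

```python
import numpy as np

n = 5
# sigma: deliberately singular, rank 3.
sigma = np.diag([1., 2., 0., 0., 3.])
# tau: unit upper bidiagonal, det = 1, hence invertible.
tau = np.eye(n) + np.diag(np.ones(n - 1), k=1)

phi = sigma @ tau  # the composition phi = sigma after tau
rk_phi = np.linalg.matrix_rank(phi)

assert rk_phi == np.linalg.matrix_rank(sigma)        # rk(sigma tau) = rk(sigma)
assert np.linalg.matrix_rank(tau @ sigma) == rk_phi  # rk(tau sigma) = rk(sigma) too
assert n - rk_phi == 2  # rank-nullity: dim(ker(phi)) = dim(V) - rk(phi)
```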

Answer 3:
Let μ = τστ. Based on answer 2, rk(στ) = rk(φ) = rk(σ); writing μ = τφ, this reduces to the same question handled in answer 2, so rk(μ) = rk(σ).
 
  • #5
jeff1evesque said:
Oh yeah, I should have remembered that an invertible matrix has those equivalent properties. And the answer for part (c) is simple once parts (a) and (b) are established. I think your explanation for (#1) is pretty good; however, it is still a little fuzzy to me. Could you try to explain it another way?

Which part is fuzzy to you?
 
  • #6
quasar987 said:
Answer 1: Q is an invertible matrix. This is the same as saying that [itex]L_Q[/itex] is an invertible linear map. And this is the same as saying that [itex]L_Q[/itex] is 1-1 and onto.

Answer 2:In the proof, they said " [...] since [itex]L_Q[/itex] is onto." to justify the step "[tex]L_A(L_Q(F^n))=L_A(F^n)[/itex]".

Question: How do we justify [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex] given that [itex]L_Q[/itex] is onto? Sorry for asking such a silly question.

Also for the proof of part (b.) to this theorem, I have the following outlined:

[tex]dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_P L_A)) = dim(R(L_{PA})) = rank(PA)[/tex]

but the first equality apparently hinges on the result of the following problem:
Let V and W be finite dimensional vector spaces and T: V-->W be an isomorphism. Let [tex]V_0[/tex] be a subspace of V:
Prove that [tex]dim(V_0)[/tex] = [tex]dim(T(V_0)).[/tex]
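A quick numerical illustration of this claim (not a proof), with NumPy; T and the spanning set B below are hand-picked examples of mine:

```python
import numpy as np

# T: an invertible 4x4 matrix, so L_T is an isomorphism of R^4.
T = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [0., 0., 1., 1.],
              [0., 0., 0., 1.]])  # unit upper triangular, det = 1

# V0 = span of the columns of B; dim(V0) = rank of B.
B = np.array([[1., 1.],
              [0., 1.],
              [0., 0.],
              [0., 0.]])

dim_V0 = np.linalg.matrix_rank(B)       # = 2
dim_TV0 = np.linalg.matrix_rank(T @ B)  # columns of T @ B span T(V0)
assert dim_V0 == dim_TV0 == 2
```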

Question: Is there any way you could help me prove this new question?

Thanks again
 
  • #7
jeff1evesque said:
Question: How do we justify [itex]L_A(L_Q(F^n))=L_A(F^n)[/itex] given that [itex]L_Q[/itex] is onto? Sorry for asking such a silly question.
Ask yourself what it means that [itex]L_Q[/itex] is onto. It means precisely that [itex]L_Q(F^n)=F^n[/itex].

jeff1evesque said:
Also for the proof of part (b.) to this theorem, I have the following outlined:

[tex]dim(R(L_A)) = dim(L_P(R(L_A))) = dim(L_P(L_A(F^n))) = dim(R(L_P L_A)) = dim(R(L_{PA})) = rank(PA)[/tex]

but the first equality apparently hinges on the result of the following problem:
Let V and W be finite dimensional vector spaces and T: V-->W be an isomorphism. Let [tex]V_0[/tex] be a subspace of V:
Prove that [tex]dim(V_0)[/tex] = [tex]dim(T(V_0)).[/tex]

Question: Is there anyway you could help me prove this new question?
That's very good work. Indeed, if you could just prove this new question, then (b) would be solved.

Recall that by definition, the vector space [itex]V_0[/itex] has dimension d if it admits a set of d linearly independent vectors that span [itex]V_0[/itex] (i.e. a basis of d elements). So, suppose [itex]\{e_1,...,e_d\}[/itex] is a basis for [itex]V_0[/itex]. What can you say about the set [itex]\{T(e_1),...,T(e_d)\}[/itex]?
 
  • #8
quasar987 said:
Ask yourself what it means that [itex]L_Q[/itex] is onto. It means precisely that [itex]L_Q(F^n)=F^n[/itex].


That's very good work. Indeed, if you could just prove this new question, then (b) would be solved.

Recall that by definition, the vector space [itex]V_0[/itex] has dimension d if it admits a set of d linearly independent vectors that span [itex]V_0[/itex] (i.e. a basis of d elements). So, suppose [itex]\{e_1,...,e_d\}[/itex] is a basis for [itex]V_0[/itex]. What can you say about the set [itex]\{T(e_1),...,T(e_d)\}[/itex]?

Is there any way to prove this problem without the definitions you specified above, with more emphasis on the idea of "isomorphism"? The reason I ask is that it isn't until the next theorem that we know the rank of a matrix is the dimension of the subspace generated by its columns; in particular, rank(A) = [tex]dim(R(L_A))[/tex] = dim([tex]span(\{a_1, a_2, ..., a_n\})[/tex]), where [tex]a_j[/tex] denotes the jth column of A.

Thanks,

JL
 
  • #9
Not really, no.
 

1. What is the Rank of a Matrix?

The rank of a matrix is the maximum number of linearly independent rows or columns in the matrix. Equivalently, it is the number of non-zero rows in the row echelon form of the matrix.

2. How is the Rank of a Matrix calculated?

The rank of a matrix can be calculated by performing elementary row operations on the matrix and counting the number of non-zero rows in its (reduced) row echelon form.
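This procedure can be sketched in a few lines with SymPy (an assumption of mine; the example matrix is arbitrary, with row 2 chosen as twice row 1 so the rank drops to 2):

```python
import sympy as sp

# Row 2 is twice row 1, so the rank should be 2, not 3.
A = sp.Matrix([[1, 2, 3],
               [2, 4, 6],
               [1, 0, 1]])

rref, pivot_cols = A.rref()       # reduced row echelon form and its pivot columns
rank_from_rref = len(pivot_cols)  # = number of non-zero rows of the RREF
assert rank_from_rref == A.rank() == 2
```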

3. What is the significance of the Rank of a Matrix?

The rank of a matrix is an important concept in linear algebra as it provides information about the linear independence of the rows or columns of the matrix. It is used to determine if a system of linear equations has a unique solution or not.

4. What is the Rank-Nullity Theorem?

The Rank-Nullity Theorem states that the rank of a matrix plus the dimension of its null space is equal to the number of columns in the matrix. In other words, it relates the rank of a matrix to its nullity (the number of linearly independent solutions to the homogeneous system of equations).
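The identity can be verified on a concrete matrix, computing the nullity independently from a null space basis. A sketch with SymPy (my own example; the matrix is deliberately rank-deficient):

```python
import sympy as sp

# A rank-deficient 3x4 matrix: row 2 is twice row 1.
A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [0, 1, 1, 0]])

rank = A.rank()
nullity = len(A.nullspace())  # dimension of the kernel, computed independently
assert rank + nullity == A.cols  # rank + nullity = number of columns
```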

5. How do you prove the Rank-Nullity Theorem?

The Rank-Nullity Theorem can be proved by using the properties of linear transformations and their associated matrices. It involves showing that the rank of a matrix is equal to the dimension of the image of the corresponding linear transformation, and the nullity is equal to the dimension of the kernel of the transformation.
