Proving a statement about the rank of transformations

In summary, the conversation discusses how to prove the inequality ##\max\{0, \rho(\sigma)+\rho(\tau)-m\}\leq \rho(\tau\sigma)\leq \min\{\rho(\tau), \rho(\sigma)\}##, where ##\rho## represents the rank of a linear transformation. The first attempt at a solution involves using Corollary 1.11 and the general formulas for rank and nullity. The second attempt involves using Theorem 1.4 and the general formulas. There is also a discussion about the relationship between kernels and images, and how to use them to prove the desired inequality.
  • #1
Terrell

Homework Statement


How to prove ##\max\{0, \rho(\sigma)+\rho(\tau)-m\}\leq \rho(\tau\sigma)\leq \min\{\rho(\tau), \rho(\sigma)\}##?

Homework Equations


Let ##\sigma:U\rightarrow V## and ##\tau:V\rightarrow W## be linear transformations with ##\dim U=n## and ##\dim V=m##. Define ##v(\tau)## to be the nullity of ##\tau##, and let ##\rho## denote the rank of a linear transformation.

The Attempt at a Solution


proof (attempt 1):

By Corollary 1.11, ##\rho(\sigma)=\rho(\tau\sigma)+v(\tau)\Leftrightarrow \rho(\sigma)-v(\tau)=\rho(\tau\sigma)##. Note that ##v(\tau)\leq m## \begin{align}\Rightarrow \rho(\sigma)-m &\leq \rho(\tau\sigma)\\ \Rightarrow \rho(\sigma)-(\rho(\tau)+v(\tau)) &\leq \rho(\tau\sigma)\\ \Rightarrow \rho(\sigma)-\rho(\tau)-m &\leq \rho(\tau\sigma)\end{align}

proof (attempt 2):

By Corollary 1.11, ##\rho(\sigma)-m\leq \rho(\tau\sigma)##. This implies ##\rho(\sigma)-m+\rho(\tau)\leq\rho(\tau\sigma)+\rho(\tau)\Rightarrow \rho(\sigma)+\rho(\tau)-m\leq\rho(\tau\sigma)+\rho(\tau)##.

I am stuck. Is it possible to see this relationship intuitively?
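Not part of the original thread, but the claimed inequality can be sanity-checked numerically before proving it. Below is a small sketch using NumPy, with ##\sigma## represented by an ##m\times n## matrix ##S## and ##\tau## by a ##k\times m## matrix ##T##; the dimensions and the low-rank random construction are arbitrary choices of mine.

```python
import numpy as np

def bounds_hold(S, T):
    """Check max{0, rho(sigma)+rho(tau)-m} <= rho(tau sigma) <= min{rho(sigma), rho(tau)}
    for sigma given as the m x n matrix S and tau as the k x m matrix T."""
    m = S.shape[0]  # dim V, the middle space
    rs = np.linalg.matrix_rank(S)       # rho(sigma)
    rt = np.linalg.matrix_rank(T)       # rho(tau)
    rts = np.linalg.matrix_rank(T @ S)  # rho(tau sigma)
    return max(0, rs + rt - m) <= rts <= min(rs, rt)

rng = np.random.default_rng(0)
n, m, k = 5, 4, 6
# Products of thin random integer factors make rank-deficient maps likely,
# so both bounds actually get exercised.
assert all(
    bounds_hold(
        rng.integers(-2, 3, (m, 2)) @ rng.integers(-2, 3, (2, n)),
        rng.integers(-2, 3, (k, 3)) @ rng.integers(-2, 3, (3, m)),
    )
    for _ in range(500)
)
print("inequality held in every trial")
```

This does not prove anything, of course, but a failing trial would immediately expose a misstated bound.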
 
  • #2
Say we have ##U_n \stackrel{\sigma}{\longrightarrow} V_m \stackrel{\tau}{\longrightarrow} W_k##.

Then - I think - we already have the general formulas ##n=\rho(\sigma) + \nu(\sigma)\, , \,n=\rho(\tau \sigma) + \nu(\tau \sigma)## and ##m=\rho(\tau) + \nu(\tau)##. Now all you need is some substitution and ##\nu(\tau \sigma) \leq \nu(\sigma)+\nu(\tau)##. The second inequality is more or less obvious, as a transformation cannot create linearly independent vectors. The images can only shrink.
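As an aside (not from the thread): the subadditivity of nullity suggested here, ##\nu(\tau\sigma)\leq\nu(\sigma)+\nu(\tau)##, is easy to spot-check numerically. A sketch, with ##\nu## computed as the domain dimension minus the matrix rank; the dimensions below are my own arbitrary choices.

```python
import numpy as np

def nullity(A):
    # nu(alpha) = dim(domain) - rank(alpha); the domain dimension
    # is the number of columns of the matrix representing alpha.
    return A.shape[1] - np.linalg.matrix_rank(A)

rng = np.random.default_rng(1)
for _ in range(500):
    S = rng.integers(-2, 3, (4, 2)) @ rng.integers(-2, 3, (2, 5))  # sigma: U_5 -> V_4
    T = rng.integers(-2, 3, (6, 3)) @ rng.integers(-2, 3, (3, 4))  # tau: V_4 -> W_6
    assert nullity(T @ S) <= nullity(S) + nullity(T)
print("nu(tau sigma) <= nu(sigma) + nu(tau) in every trial")
```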
 
  • #3
Using the general formulas... please check the proof that follows:
Note that ##\rho(\sigma)+\rho(\tau)-m=\rho(\sigma)-(m-\rho(\tau))=\rho(\sigma)-v(\tau)## and ##\rho(\tau\sigma)+v(\tau\sigma)=n\Rightarrow \rho(\tau\sigma)=n=v(\tau\sigma)##. By Theorem 1.4., ##\rho(\sigma)\leq min\{m,n\}\Rightarrow \rho(\sigma)\leq n\Rightarrow \rho(\sigma)-v(\tau)\leq n-v(\tau)##. Since ##K(\tau\sigma)\subset K(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)\Rightarrow -v(\tau\sigma)\geq -v(\tau)##. Then \begin{align}n-v(\tau)\leq n-v(\tau\sigma)&\Rightarrow \rho(\sigma)-v(\tau)\leq n-v(\tau\sigma)\\&\Rightarrow \rho(\sigma)-(m-\rho(\tau))\leq \rho(\tau\sigma)\\&\Rightarrow \rho(\sigma)+\rho(\tau)-m\leq \rho(\tau\sigma)\end{align}
and ##\rho(\tau\sigma)\leq min\{\rho(\sigma),\rho(\tau)\}## directly follows from Theorem 1.4.
##\Bbb{Q.E.D.}##
 
  • #4
fresh_42 said:
Now all you need is some substitution and ##\nu(\tau \sigma)\leq \nu(\sigma)+\nu(\tau)##
I don't know how this inequality gets into the picture, but it's part of the next proof I want to write, so proving the inequality, I have:
##K(\tau\sigma)=K(\tau\vert_{\sigma(U_{n})})\Rightarrow K(\tau\sigma)\subset K(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)+v(\sigma)##
 
  • #5
Terrell said:
Using the general formulas... please check the proof that follows:
Note that ##\rho(\sigma)+\rho(\tau)-m=\rho(\sigma)-(m-\rho(\tau))=\rho(\sigma)-v(\tau)## and ##\rho(\tau\sigma)+v(\tau\sigma)=n\Rightarrow \rho(\tau\sigma)=n=v(\tau\sigma)##.
Typo.
By Theorem 1.4., ##\rho(\sigma)\leq min\{m,n\}\Rightarrow \rho(\sigma)\leq n\Rightarrow \rho(\sigma)-v(\tau)\leq n-v(\tau)##. Since ##K(\tau\sigma)\subset K(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)\Rightarrow -v(\tau\sigma)\geq -v(\tau)##.
Assuming ##K## shall denote the kernel, we have ##K(\tau \sigma) \subseteq U_n## and ##K(\tau) \subseteq V_m\,,## so how can ##K(\tau\sigma)\subset K(\tau)## be?
Furthermore, let's assume ##n>m>0\, , \,V_m=W_k\, , \,\tau = \operatorname{id}|_{V_m}## and ##\sigma = 0##. Then ##\nu(\tau \sigma) = n > m > 0 = \nu(\tau)##.
Then \begin{align}n-v(\tau)\leq n-v(\tau\sigma)&\Rightarrow \rho(\sigma)-v(\tau)\leq n-v(\tau\sigma)\\&\Rightarrow \rho(\sigma)-(m-\rho(\tau))\leq \rho(\tau\sigma)\\&\Rightarrow \rho(\sigma)+\rho(\tau)-m\leq \rho(\tau\sigma)\end{align}
and ##\rho(\tau\sigma)\leq min\{\rho(\sigma),\rho(\tau)\}## directly follows from Theorem 1.4.
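The counterexample fresh_42 gives above (##\sigma = 0##, ##\tau = \operatorname{id}##) can be made concrete numerically. A sketch; the particular choice ##n=3##, ##m=k=2## is mine.

```python
import numpy as np

# fresh_42's counterexample: sigma = 0 : U_3 -> V_2 and tau = id on V_2.
S = np.zeros((2, 3))  # the zero map sigma
T = np.eye(2)         # the identity map tau

nu_TS = 3 - np.linalg.matrix_rank(T @ S)  # nu(tau sigma), domain is U_3
nu_T = 2 - np.linalg.matrix_rank(T)       # nu(tau), domain is V_2

# nu(tau sigma) = 3 while nu(tau) = 0, so nu(tau sigma) <= nu(tau) fails,
# and K(tau sigma) cannot be contained in K(tau).
assert nu_TS > nu_T
print(nu_TS, nu_T)
```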
 
  • #6
Terrell said:
I don't know how this inequality gets into the picture, but it's part of the next proof i want to write so.. proving the inequality i have:
##K(\tau\sigma)=K(\tau\vert_{\sigma(U_{n})})\Rightarrow K(\tau\sigma)\subset K(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)\Rightarrow v(\tau\sigma)\leq v(\tau)+v(\sigma)##
See my previous correction.
 
  • #7
fresh_42 said:
how can ##K(\tau\sigma)\subset K(\tau)## be?
I think I ran into trouble here by assuming that ##K(\tau\sigma)=K(\tau\vert_{\sigma(U_n)})##, where ##K(\tau\vert_{\sigma(U_n)})## denotes the kernel of ##\tau## restricted to ##\sigma(U_n)##. Is that an incorrect assumption? Because certainly, ##K(\tau\vert_{\sigma(U_n)})\subset K(\tau)##.
 
  • #8
fresh_42 said:
Then - I think - we already have the general formulas ##n=\rho(\sigma) + \nu(\sigma)\, , \,n=\rho(\tau \sigma) + \nu(\tau \sigma)## and ##m=\rho(\tau) + \nu(\tau)##. Now all you need is some substitution and ##\nu(\tau \sigma) \leq \nu(\sigma)+\nu(\tau)##.
I think this is the way to go. Now to show ##v(\tau\sigma)\leq v(\tau)+v(\sigma)##, I started by defining ##K(\tau\sigma)=\{\zeta\in U_n\mid \sigma(\zeta)\in K(\tau)\lor \sigma(\zeta)=0_V\}##.
 
  • #9
Terrell said:
I think I ran into trouble here by assuming that ##K(\tau\sigma)=K(\tau\vert_{\sigma(U_n)})##, where ##K(\tau\vert_{\sigma(U_n)})## means Kernel of ##\tau## strictly on ##\sigma(U_n)##. It's an incorrect assumption? Because certainly, ##K(\tau\vert_{\sigma(U_n)})\subset K(\tau)##.
Yes, but not the first assumption on ##K(\tau \sigma)##. That's why I wrote ##\nu(\tau \sigma) \leq \nu(\tau) + \nu(\sigma)##: if a vector is sent to zero by a composite function, then it is sent to zero either by the first or by the second, so the sum is a safe upper bound. But your argumentation is fine for the second inequality in your initial post, with images instead of kernels. ##\operatorname{im}(\tau \sigma) \subseteq \operatorname{im}(\tau)## gives one inequality, and the fact that ##\tau## cannot increase the rank of ##\sigma## gives the other inequality, which is needed for the minimum.
Terrell said:
Do you think my current approach is futile?
No, just be a bit more cautious. I think you've been on the right track. Just substitute all three ranks by the formula ##\rho(\alpha)+\nu(\alpha)=\dim X## for ##\alpha:X \rightarrow \,\ldots## and then use what I said above and in post #2: ##\nu(\tau \sigma)\leq \nu(\tau) + \nu(\sigma)##.
 
  • #10
fresh_42 said:
Yes, but not the first assumption on ##K(\tau \sigma)##. That's why I wrote ##\nu(\tau\sigma)\leq \nu(\tau)+\nu(\sigma)##
So ##K(\tau\vert_{\sigma(U_n)})=K(\tau\sigma)##?
 
  • #11
Terrell said:
So ##K(\tau\vert_{\sigma(U_n)})=K(\tau\sigma)##?
Is this correct with ##\sigma =0## and ##\tau = \operatorname{id}\,##?
 
  • #12
I found a theorem in the book I'm working through that fits the proof perfectly!
By that theorem, ##\rho(\sigma)=\rho(\tau\sigma)+\dim[\sigma(U)\cap K(\tau)]## \begin{align}\Rightarrow \dim[\sigma(U)\cap K(\tau)]&=\rho(\sigma)-\rho(\tau\sigma)\\&=(n-v(\sigma))-(n-v(\tau\sigma))\\&=v(\tau\sigma)-v(\sigma)\\&=\dim[K(\tau\vert_{\sigma(U_n)})]\\&=v(\tau\vert_{\sigma(U_n)})\\ \Rightarrow v(\tau\sigma)=v(\tau\vert_{\sigma(U_n)})+v(\sigma)\leq v(\tau)+v(\sigma).\end{align} ##\Bbb{Q.E.D.}##
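Not from the thread, but the identity ##\rho(\sigma)=\rho(\tau\sigma)+\dim[\sigma(U)\cap K(\tau)]## invoked here can be verified numerically. A sketch: the kernel basis comes from the SVD, and the intersection dimension is computed independently via ##\dim(A\cap B)=\dim A+\dim B-\dim(A+B)##; the dimensions and random construction are my own choices.

```python
import numpy as np

def kernel_basis(A, tol=1e-10):
    """Columns spanning ker(A): the rows of V^T beyond the rank, transposed."""
    _, s, vh = np.linalg.svd(A)
    r = int(np.sum(s > tol))
    return vh[r:].T

def intersection_dim(U_cols, W_cols):
    """dim(span U ∩ span W) = dim U + dim W - dim(U + W)."""
    dU = np.linalg.matrix_rank(U_cols) if U_cols.size else 0
    dW = np.linalg.matrix_rank(W_cols) if W_cols.size else 0
    if U_cols.size and W_cols.size:
        dsum = np.linalg.matrix_rank(np.hstack([U_cols, W_cols]))
    else:
        dsum = max(dU, dW)
    return dU + dW - dsum

rng = np.random.default_rng(2)
for _ in range(200):
    # sigma: U_5 -> V_4 as the matrix S; tau: V_4 -> W_6 as the matrix T.
    S = (rng.integers(-2, 3, (4, 2)) @ rng.integers(-2, 3, (2, 5))).astype(float)
    T = (rng.integers(-2, 3, (6, 3)) @ rng.integers(-2, 3, (3, 4))).astype(float)
    lhs = np.linalg.matrix_rank(S)                                  # rho(sigma)
    rhs = np.linalg.matrix_rank(T @ S) \
        + intersection_dim(S, kernel_basis(T))                      # rho(tau sigma) + dim[im(sigma) ∩ ker(tau)]
    assert lhs == rhs
print("rho(sigma) = rho(tau sigma) + dim[sigma(U) ∩ K(tau)] in every trial")
```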
 

1. What is the definition of "rank" in transformations?

The rank of a transformation is the number of linearly independent columns (equivalently, rows) of the transformation matrix. It equals the dimension of the image, i.e. the number of dimensions of the output space actually reached by the transformation.

2. How is the rank of a transformation calculated?

The rank of a transformation can be calculated by finding the number of pivot positions in the reduced row-echelon form of the transformation matrix. Alternatively, it can also be found by counting the number of non-zero singular values in the singular value decomposition of the matrix.
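For instance, NumPy's `matrix_rank` implements the SVD-based count of non-zero singular values mentioned above (the matrix here is an arbitrary illustrative example):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],   # twice the first row: linearly dependent
              [0., 1., 1.]])

# matrix_rank counts singular values above a tolerance,
# matching the "non-zero singular values" characterization.
rank_A = np.linalg.matrix_rank(A)
print(rank_A)  # 2
```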

3. What does it mean if the rank of a transformation is less than the number of input variables?

If the rank of a transformation is less than the number of input variables, it means that the transformation is not invertible and some information is lost in the transformation. This can happen if there are redundant or linearly dependent input variables.

4. How do you prove a statement about the rank of transformations?

To prove a statement about the rank of transformations, combine the rank-nullity theorem with matrix techniques such as Gaussian elimination, row reduction, or the singular value decomposition. These let you compute ranks explicitly and relate the rank of a composition to the ranks and nullities of its factors.

5. Why is the rank of transformations important?

The rank of transformations is important in many areas of mathematics and science, including linear algebra, differential equations, and data analysis. It helps us understand the properties of a transformation and its effect on the input and output variables. It also allows us to determine if a transformation is invertible and to solve systems of equations.
