# Statements about linear maps | Linear Algebra

• JD_PM
Homework Statement: Let ##V## be a finite-dimensional vector space and let ##L: V \to V : v \mapsto L(v)## be a linear transformation such that the rank of ##L## (which, by definition, equals ##\dim(\Im L)##) equals the rank of ##L^2 = L \circ L##, and ##\ker(L) = \ker(L^2)##.

a) Show that the linear map ##L' : \Im(L) \to V : x \mapsto L(x)## is injective.

b) Show that ##V =\ker(L) \oplus \Im(L)##.

Relevant Equations: N/A

First thing to notice is that ##L## and ##L \circ L## are precisely equal linear maps. What we know:
$$L \ \text{is injective} \iff \ker(L)=\{0\}$$
$$\ker L' = \{ x \in \Im(L) \ | \ L'(x)=0\}$$
$$\Im(L)=\{ x \in V \ | \ \exists \, v \in V \ \text{such that} \ L(v)=x\}$$
Besides, we notice that ##x \in \Im(L) \subseteq V \Rightarrow x\in V##. To prove: ##\ker(L') = \{0\}##.

Let ##x\in \ker(L')##. Then
$$0=L'(x) \Rightarrow 0=L(x) \Rightarrow 0 = L (L(v)) \Rightarrow 0 = L^2(v) = L(v) \Rightarrow 0 = L(v) =x$$
So ##x=0## and hence ##\ker L' = \{ 0\}##.

b) I saw in class the following proposition: given two subspaces ##U_1, U_2## of a vector space ##W## we have
$$W= U_1 \oplus U_2 \iff W=U_1 + U_2 \ \text{and} \ U_1 \cap U_2 = \{0\}$$
To prove:
$$V=\ker(L) + \Im(L) \ \text{and} \ \ker(L) \cap \Im(L) = \{0\}$$
Let ##x \in \ker(L) \cap \Im(L)##. Then ##x \in \ker(L)## and ##x \in \Im(L)##, which implies ##L(x) = 0## and ##L(v) = x## for some ##v \in V##. Apply ##L## to the latter equation to get
$$L(L(v)) = L(x) \Rightarrow L(L(v)) = L(v) = 0 \Rightarrow x=0$$
So ##\ker(L) \cap \Im(L) = \{0\}##.
We are left to show that the decomposition in ##V = \ker(L) + \Im(L)## is unique. Take ##x_1, x_1' \in \ker(L)## and ##x_2, x_2' \in \Im(L)##. Suppose it is not unique, i.e. there's ##v\in V## such that ##v=x_1+x_2## and ##v=x_1'+x_2'## with ##(x_1,x_2)\neq(x_1',x_2')##. Then ##x_1+x_2=x_1'+x_2' \Rightarrow x_1-x_1'=x_2'-x_2##. But ##x_1-x_1' \in \ker(L)## and ##x_2'-x_2 \in \Im(L)##, and we just proved that ##\ker(L) \cap \Im(L) = \{0\}##, so ##x_1-x_1'=0=x_2'-x_2 \Rightarrow x_1=x_1', x_2=x_2'##, a contradiction! :)

I think I got this one mainly right! I'd like to get some feedback/corrections from you guys, thanks!

FactChecker said:
JD_PM said: First thing to notice is that ##L## and ##L \circ L## are precisely equal linear maps.
Why? If this is essential to the rest of the proof then I have my doubts about the proof. I have not looked at the rest of it.

fresh_42 said:
JD_PM said: First thing to notice is that ##L## and ##L \circ L## are precisely equal linear maps. [...]
You cannot use ##(L')^2=L'## or ##L^2=L##. What we have is
$$
\operatorname{ker}L \hookrightarrow V \twoheadrightarrow \operatorname{im}L \stackrel{L'}{\cong} \operatorname{im}L
$$
and the last isomorphism has to be shown; it isn't necessarily an identity. The second part b) is true for all linear transformations on finite-dimensional vector spaces. You can prove it e.g. by choosing a basis for ##\operatorname{ker} L## and extending it to a basis of ##V##, etc. Once you have b), you can use it to prove a). An example of such a linear transformation is ##V=\mathbb{R}^4\, , \,L=\begin{bmatrix} 0&0&0&0\\0&0&0&0\\0&0&0&1\\0&0&1&0 \end{bmatrix}##

JD_PM said: First thing to notice is that ##L## and ##L \circ L## are precisely equal linear maps.
FactChecker said: Why?
Given ##L: V \to V : v\mapsto L(v)## and the definition of the product of linear maps (for instance, as found in Axler's enlightening Linear Algebra Done Right), one finds that ##L \circ L: V \to V : v\mapsto L(v)##, so I would say this is a sufficient condition to assert that these two maps are equal.

fresh_42 said: You cannot use ##(L')^2=L'## or ##L^2=L##. What we have is
$$
\operatorname{ker}L \hookrightarrow V \twoheadrightarrow \operatorname{im}L \stackrel{L'}{\cong} \operatorname{im}L
$$
and the last isomorphism has to be shown; it isn't necessarily an identity.

Hi @fresh_42, it is nice to discuss with you! You seem to disagree with this step:
$$L^2(v) = L(v)$$
So you seem to suggest that ##L## and ##L \circ L## are not the same maps. Might you please explain why in more detail? (I do not follow your argument above.)

fresh_42 said:
JD_PM said: Given ##L: V \to V : v\mapsto L(v)## and the definition of the product of linear maps [...]
Applied to ##S=T=L##, the definition gives ##L^2(v)=L(L(v))##. Nowhere is it said that this equals ##L(v)##, except in your proof. Look at my example matrix. It has rank ##2##, as has its square, a ##2##-dimensional kernel, and ##e_3=L^2(e_3)\neq L(e_3)=e_4## if ##e_3,e_4## denote the third and fourth basis vectors. Yet, it is injective on ##\operatorname{im}(L)##. Try my approach: solve b) for any linear transformation ##L\, : \,V\longrightarrow V## and use it for a). Are you allowed to use the rank-nullity theorem, ##\dim V =\dim \operatorname{im}(L)+\dim \operatorname{ker}(L)##, or is exercise b) used to prove it?

FactChecker said:
JD_PM said: One finds that ##L \circ L: V \to V : v\mapsto L(v)##.
This is definitely not true in general. How are you proving it? Are you thinking that having the same range implies that the maps are identical? What about the linear function ##f(x) = 2x##, for which ##f(f(x)) = 4x##?

JD_PM said: First thing to notice is that ##L## and ##L \circ L## are precisely equal linear maps.
This is false.
Consider, for example, ##L: \mathbb{R}^2 \to \mathbb{R}^2## where ##L(e_1) = 2e_1## and ##L(e_2) = 0##. Here ##\ker L = \ker L^2 = \langle e_2 \rangle##, but
$$L^2(e_1) = 4e_1 \neq 2e_1 = L(e_1).$$

Sorry for replying a bit late, I needed to study. Ohhh, big yikes! I understand my mistake now! :)

fresh_42 said: Try my approach: Solve b) for any linear transformation ##L\, : \,V\longrightarrow V## and use it for a). Are you allowed to use the rank-nullity theorem, ##\dim V =\dim \operatorname{im}(L)+\dim \operatorname{ker}(L)##, or is exercise b) used to prove it?

I am indeed allowed to use the fundamental theorem of linear maps, but I am supposed to solve a) before b), so please let me show my new reasoning:
$$0=L'(x) \Rightarrow 0=L(x) \Rightarrow 0 = L (L(v))=L^2(v) \Rightarrow v \in \ker L^2 = \ker L \Rightarrow L(v) = 0 \Rightarrow x=0$$
Now a) should be correct.

JD_PM said: b) Show that ##V =\ker(L) \oplus \text{Im}(L)##

What I presented was essentially wrong, so let's start from scratch. We need to show that ##\ker(L) \cap \text{Im}(L) = \{0\}## and ##V=\ker(L) + \text{Im}(L)##.

Let's start with ##\ker(L) \cap \text{Im}(L) = \{0\}##. My mistake was again in assuming that ##L^2(v) = L(v)## holds generally. Let ##x \in \ker(L) \cap \text{Im}(L)##. Then ##x \in \ker(L)## and ##x \in \text{Im}(L)##, which implies ##L(x) = 0## and ##L(v) = x## for some ##v \in V##. Apply ##L## to the latter equation to get
$$L(L(v)) = L(x) \Rightarrow L(L(v)) = 0 \Rightarrow v \in \ker L^2 = \ker L \Rightarrow L(v) = 0 \Rightarrow x=0$$
So ##\ker(L) \cap \text{Im}(L) = \{0\}##.

JD_PM said: We are left to show that ##V = \ker(L) + \Im(L)##. Take ##x_1, x_1' \in \ker(L)## and ##x_2, x_2' \in \Im(L)##. Suppose the decomposition is not unique, i.e. there's ##v\in V## such that ##v=x_1+x_2## and ##v=x_1'+x_2'##. Then ##x_1+x_2=x_1'+x_2' \Rightarrow x_1-x_1'=x_2'-x_2##.
But ##x_1-x_1' \in \ker(L)## and ##x_2'-x_2 \in \Im(L)##, and we just proved that ##\ker(L) \cap \Im(L) = \{0\}##, so ##x_1-x_1'=0=x_2'-x_2 \Rightarrow x_1=x_1', x_2=x_2'##, a contradiction! :)

OK, what I presented is NOT a proof of ##V =\ker(L) + \text{Im}(L)##, but rather a proof that, given two subspaces of ##V## whose intersection equals ##\{0\}##, their sum must be direct.

But let's tackle it once again (it is actually quite fun because it made me think a lot! :D).

Usually, I am given the explicit vector space ##V## when I am asked to show that ##V = U_1 + U_2##. For instance, an easy example: "show that ##\Bbb R^2 = U_1 + U_2##, where ##U_1 = \{ (x, 0) \mid x \in \Bbb R\}## and ##U_2 = \{ (0, y) \mid y \in \Bbb R\}##". We then would proceed to show both inclusions, i.e. ##\Bbb R^2 \subset U_1 + U_2## and ##U_1 + U_2 \subset \Bbb R^2##. To show the first, we take any ##x \in \Bbb R^2##, write it in terms of the standard basis and notice that ##x = \underbrace{a_1(1,0)}_{\in U_1} + \underbrace{a_2(0,1)}_{\in U_2}##, so ##x \in U_1 + U_2##. To show ##U_1 + U_2 \subset \Bbb R^2##, we take any ##x \in U_1 + U_2## and, by definition of the sum of subspaces, notice that ##\exists u_1 \in U_1, u_2 \in U_2## such that ##x = u_1 + u_2 \in \Bbb R^2##.

Let's go back to the main problem.

What's tricky here is that I see no way of showing ##V \subset \ker(L) + \text{Im}(L)##, because we are not given the explicit form of ##V##.

So the idea would be to show the inclusion ##\ker(L) + \text{Im}(L) \subset V## (which is also tough because, again, we are not given the explicit form of ##V##, but at least I see how to start) and that ##\dim \left( \ker(L)+\text{Im}(L)\right) = \dim V##.

What do you think of this approach? If you agree it is feasible I'll show my attempt. Please let me know if there is an easier way, thanks!

JD_PM said: Sorry for replying a bit late, needed to study. Ohhh, big yikes! I understand my mistake now!
:) I am indeed allowed to use the fundamental theorem of linear maps, but I am supposed to solve a) before b), so please let me show my new reasoning:
$$0=L'(x) \Rightarrow 0=L(x) \Rightarrow 0 = L (L(v))=L^2(v) \Rightarrow v \in \ker L^2 = \ker L \Rightarrow L(v) = 0 \Rightarrow x=0$$
Now a) should be correct.

Let's see. You apparently use ##x=L(v)\in \operatorname{im}(L)## and assume ##x\in \ker L'##. Then ##L^2(v)=0##, so ##v\in \ker L^2##. Clearly ##\ker L \subseteq \ker L^2##. But why is
$$\ker L^2 \subseteq \ker L\qquad ?$$
This is the crucial point of your argument, but I do not see why this should be the case. You have to use the condition ##\operatorname{rk}L^2=\operatorname{rk}L## somewhere! One possibility is to prove part b) first. If you do not want to prove b) first, then you have to emulate the reasoning of part b) here in part a).

JD_PM said: What I presented was essentially wrong, so let's start from scratch. We need to show that ##\ker(L) \cap \text{Im}(L) = \{0\}## and ##V=\ker(L) + \text{Im}(L)##. [...] Apply ##L## to the latter equation to get
$$L(L(v)) = L(x) \Rightarrow L(L(v)) = 0 \Rightarrow v \in \ker L^2 = \ker L \Rightarrow L(v) = 0 \Rightarrow x=0$$
So ##\ker(L) \cap \text{Im}(L) = \{0\}##.
Same as above!
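Not part of the proof, but fresh_42's ##4\times 4## example matrix is easy to sanity-check numerically. A quick numpy sketch (variable names are mine):

```python
import numpy as np

# fresh_42's example: V = R^4, L kills e1, e2 and swaps e3, e4
L = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
L2 = L @ L

rank = np.linalg.matrix_rank
print(rank(L), rank(L2))            # 2 2 : rk L = rk L^2
print(np.array_equal(L, L2))        # False : L^2 != L as maps
e3, e4 = np.eye(4)[2], np.eye(4)[3]
print(np.array_equal(L @ e3, e4))   # True : L(e3) = e4
print(np.array_equal(L2 @ e3, e3))  # True : L^2(e3) = e3 != L(e3)
```

So equal ranks and equal kernels do not force ##L^2 = L##, exactly as discussed above.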
JD_PM said:
OK, what I presented is NOT a proof of ##V =\ker(L) + \text{Im}(L)##, but rather a proof that, given two subspaces of ##V## whose intersection equals ##\{0\}##, their sum must be direct.

But let's tackle it once again (it is actually quite fun because it made me think a lot! :D).

Usually, I am given the explicit vector space ##V## when I am asked to show that ##V = U_1 + U_2##. For instance, an easy example: "show that ##\Bbb R^2 = U_1 + U_2##, where ##U_1 = \{ (x, 0) \mid x \in \Bbb R\}## and ##U_2 = \{ (0, y) \mid y \in \Bbb R\}##". We then would proceed to show both inclusions, i.e. ##\Bbb R^2 \subset U_1 + U_2## and ##U_1 + U_2 \subset \Bbb R^2##. To show the first, we take any ##x \in \Bbb R^2##, write it in terms of the standard basis and notice that ##x = \underbrace{a_1(1,0)}_{\in U_1} + \underbrace{a_2(0,1)}_{\in U_2}##, so ##x \in U_1 + U_2##. To show ##U_1 + U_2 \subset \Bbb R^2##, we take any ##x \in U_1 + U_2## and, by definition of the sum of subspaces, notice that ##\exists u_1 \in U_1, u_2 \in U_2## such that ##x = u_1 + u_2 \in \Bbb R^2##.

Let's go back to the main problem.

What's tricky here is that I see no way of showing ##V \subset \ker(L) + \text{Im}(L)## because we are not given the explicit form of ##V##.

So the idea would be to show the inclusion ##\ker(L) + \text{Im}(L) \subset V## (which is also tough because, again, we are not given the explicit form of ##V##, but at least I see how to start) and that ##\dim \left( \ker(L)+\text{Im}(L)\right) = \dim V##.

What do you think of this approach? If you agree it is feasible I'll show my attempt.
Watch out for hidden assumptions!
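As a numerical sanity check of the dimension-counting idea, here is a sketch run on fresh_42's example matrix (an illustration only, not a proof; the kernel and image bases are read off by hand for this particular ##L##):

```python
import numpy as np

# fresh_42's 4x4 example again
L = np.array([[0, 0, 0, 0],
              [0, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]])
n = L.shape[0]
rk = np.linalg.matrix_rank(L)
nullity = n - rk                    # rank-nullity: dim ker L = dim V - rk L

# for this particular L the bases are easy to see:
ker_basis = np.eye(4)[:, :2]        # ker L = span(e1, e2)
im_basis = L[:, 2:]                 # im L is spanned by L(e3), L(e4)
combined = np.hstack([ker_basis, im_basis])

print(rk + nullity)                 # 4 = dim V
print(np.linalg.matrix_rank(combined))  # 4, so the two bases together span V
```

Rank 4 for the combined matrix means the kernel and image bases together form a basis of ##\mathbb{R}^4##, i.e. ##\ker L \oplus \operatorname{im} L = V## for this example.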
JD_PM said:
Please let me know if there is an easier way, thanks!
Well, Wikipedia has a (to my mind) quite detailed proof. Maybe you want to study it, and let me know if you have any difficulties with it.
https://en.wikipedia.org/wiki/Rank–nullity_theorem#First_proof
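For reference, the standard basis-extension argument behind the rank-nullity theorem can be sketched as follows (my paraphrase, not quoted from the linked page):

```latex
\textbf{Sketch.} Let $\{u_1,\dots,u_k\}$ be a basis of $\ker L$ and extend it to a basis
$\{u_1,\dots,u_k,w_1,\dots,w_m\}$ of $V$, so that $\dim V = k + m$. The vectors
$L(w_1),\dots,L(w_m)$ span $\operatorname{im} L$, since $L(u_j)=0$. They are also linearly
independent: if $\sum_i c_i L(w_i) = 0$, then $\sum_i c_i w_i \in \ker L$, so
$\sum_i c_i w_i = \sum_j d_j u_j$ for some $d_j$, and the independence of the full basis
forces every $c_i$ (and $d_j$) to vanish. Hence
$\dim \operatorname{im} L = m = \dim V - \dim \ker L.$
```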


## 1. What is a linear map?

A linear map, also known as a linear transformation, is a function from one vector space to another that preserves the vector space operations: ##T(u+v)=T(u)+T(v)## and ##T(cv)=cT(v)##. Geometrically, it maps lines through the origin to lines through the origin and fixes the origin.
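A quick numpy sketch of the two defining properties (the matrix ##A## here is an arbitrary stand-in, not taken from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # any matrix defines a linear map R^2 -> R^3
T = lambda v: A @ v

u = rng.standard_normal(2)
v = rng.standard_normal(2)
c = 2.5
print(np.allclose(T(u + v), T(u) + T(v)))  # additivity holds
print(np.allclose(T(c * u), c * T(u)))     # homogeneity holds
```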

## 2. How is a linear map represented?

A linear map between finite-dimensional vector spaces can be represented by a matrix once bases are chosen for the domain and codomain. The ##j##-th column of the matrix holds the coordinates, in the basis of the codomain, of the image of the ##j##-th basis vector of the domain.
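A small numpy check of the column convention (the matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # matrix of some map R^2 -> R^3 in the standard bases
e1, e2 = np.eye(2)

# column j of A is exactly the image of the j-th basis vector
print(np.array_equal(A @ e1, A[:, 0]))  # True
print(np.array_equal(A @ e2, A[:, 1]))  # True
```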

## 3. What is the difference between a linear map and a nonlinear map?

A linear map preserves the operations of addition and scalar multiplication, while a nonlinear map does not. In particular, a linear map sends a line through the origin to a line through the origin (or collapses it to the point ##0##), while a nonlinear map may send a line to a curve.
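A minimal sketch of the difference (##f## is linear, ##g## is not; both are toy examples of my own):

```python
# f is linear; g is not, since it fails additivity
f = lambda x: 2 * x
g = lambda x: x ** 2

print(f(1 + 2) == f(1) + f(2))  # True:  6 == 6
print(g(1 + 2) == g(1) + g(2))  # False: 9 != 5
```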

## 4. How is the composition of linear maps calculated?

The composition of two linear maps ##f## and ##g##, written ##(g \circ f)(x) = g(f(x))##, is again linear, and its matrix is the product of their matrices: if ##A## represents ##f## and ##B## represents ##g##, then ##BA## represents ##g \circ f## (note the order). In other words, the output of the first linear map is used as the input for the second.
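A small numpy check that composing the maps matches multiplying the matrices (the example matrices are arbitrary):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])  # matrix of f
B = np.array([[0, 1], [1, 0]])  # matrix of g
x = np.array([5, -7])

# the matrix of g ∘ f is the product B A (apply A first, then B)
print(np.array_equal((B @ A) @ x, B @ (A @ x)))  # True
```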

## 5. What is the kernel of a linear map?

The kernel of a linear map is the set of all vectors in the domain that are mapped to the zero vector of the codomain. It is always a subspace of the domain, and the map is injective exactly when its kernel is ##\{0\}##.
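A kernel can be computed symbolically; here is a sketch using sympy's `nullspace` on the ##4\times 4## example matrix from the thread:

```python
from sympy import Matrix

# the 4x4 example matrix from the thread; its kernel is span(e1, e2)
L = Matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 1],
            [0, 0, 1, 0]])
ker = L.nullspace()   # basis of the kernel as column vectors
print(len(ker))       # 2 = dim ker L
```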
