Linear operators and vector spaces

"Don't panic!"
Hi all,

I've been doing some independent study on vector spaces and have moved on to linear operators, in particular those of the form ##T:V \rightarrow V##. I know that the set of linear transformations ##\mathcal{L}\left( V,V\right) =\lbrace T:V \rightarrow V \mid T \text{ is linear} \rbrace## forms a vector space over ##\mathbb{F}## under the operations of point-wise addition and scalar multiplication, but I've been trying to show this explicitly by checking that these operations satisfy the vector space axioms. Here's my attempt so far:

Let ##V## be a vector space over ##\mathbb{F}## and let ##\mathcal{L}\left( V,V\right) =\lbrace T:V \rightarrow V \mid T \text{ is linear} \rbrace## be the set of all linear transformations from ##V## into itself. As the operators ##T \in \mathcal{L}\left(V,V\right)## are linear, they satisfy the following conditions: $$T\left(\mathbf{v} + \mathbf{w}\right)= T\left(\mathbf{v}\right) + T\left(\mathbf{w}\right) \quad\forall\, \mathbf{v},\mathbf{w} \in V$$ and $$T\left(c\mathbf{v}\right) = cT\left(\mathbf{v}\right) \quad\forall\, \mathbf{v} \in V,\; c \in \mathbb{F}.$$
Accordingly, we define point-wise addition of two operators ##S, T\in \mathcal{L}\left(V,V\right)## by $$\left(T+S\right) \left(\mathbf{v}\right) = T\left(\mathbf{v}\right) + S\left(\mathbf{v}\right)$$ and scalar multiplication of an operator ##T\in \mathcal{L}\left(V,V\right)## by a scalar ##c## from the underlying field ##\mathbb{F}## by $$\left(cT\right) \left(\mathbf{v}\right) = cT\left( \mathbf{v}\right).$$
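As a concrete illustration (the operators here are chosen arbitrarily, just to make the definitions tangible): if ##V = \mathbb{R}^{2}## and ##T##, ##S## are the linear operators represented by the matrices below, then the point-wise operations amount to matrix addition and scalar multiplication,
$$T=\begin{pmatrix}1&0\\0&2\end{pmatrix},\qquad S=\begin{pmatrix}0&1\\1&0\end{pmatrix},\qquad \left(T+S\right)\left(\mathbf{v}\right)=\begin{pmatrix}1&1\\1&2\end{pmatrix}\mathbf{v},\qquad \left(3T\right)\left(\mathbf{v}\right)=\begin{pmatrix}3&0\\0&6\end{pmatrix}\mathbf{v}.$$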

Given this, we can now show that ##\mathcal{L}\left(V,V\right)## forms a vector space. First, we note that for ##S, T\in \mathcal{L}\left(V,V\right)## and ##c \in \mathbb{F}##, the "operator sum" ##T+S## and the scalar multiple ##cT## are again linear maps from ##V## into itself, so ##T+S \in\mathcal{L}\left(V,V\right)## and ##cT \in\mathcal{L}\left(V,V\right)##; hence the set ##\mathcal{L}\left(V,V\right)## is closed under these binary operations. Now, we shall check that ##\mathcal{L}\left(V,V\right)## satisfies the vector space axioms.
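For instance, the closure check for ##T+S## can be spelled out directly: for all ##\mathbf{v},\mathbf{w} \in V## and ##c \in \mathbb{F}##,
$$\left(T+S\right)\left(\mathbf{v}+\mathbf{w}\right)=T\left(\mathbf{v}+\mathbf{w}\right)+S\left(\mathbf{v}+\mathbf{w}\right)=T\left(\mathbf{v}\right)+T\left(\mathbf{w}\right)+S\left(\mathbf{v}\right)+S\left(\mathbf{w}\right)=\left(T+S\right)\left(\mathbf{v}\right)+\left(T+S\right)\left(\mathbf{w}\right)$$ and $$\left(T+S\right)\left(c\mathbf{v}\right)=T\left(c\mathbf{v}\right)+S\left(c\mathbf{v}\right)=cT\left(\mathbf{v}\right)+cS\left(\mathbf{v}\right)=c\left(T+S\right)\left(\mathbf{v}\right),$$ so ##T+S## is linear; the check for ##cT## is analogous.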

1. (Commutativity of vector addition). Given ##S, T\in \mathcal{L}\left(V,V\right)## and ##\mathbf{v} \in V##, and noting that as both ##S## and ##T## map ##V## into itself we have ##S\left(\mathbf{v}\right), T\left(\mathbf{v}\right) \in V##, so these vectors satisfy the vector space axioms, we have that $$\left(T+S\right) \left(\mathbf{v}\right) = T\left(\mathbf{v}\right) + S\left(\mathbf{v}\right)= S\left(\mathbf{v}\right) +T\left(\mathbf{v}\right)= \left(S+T\right) \left(\mathbf{v}\right).$$

2. (Associativity of vector addition). Given ##S, T, U\in \mathcal{L}\left(V,V\right)## and ##\mathbf{v} \in V##, we have that $$\begin{aligned}\left(T +\left(S+U\right)\right) \left(\mathbf{v}\right) &= T\left(\mathbf{v}\right) + \left(S+U\right)\left( \mathbf{v}\right) = T\left(\mathbf{v}\right) + \left(S\left(\mathbf{v}\right) + U\left(\mathbf{v}\right) \right) \\ &= \left(T\left(\mathbf{v}\right) + S\left(\mathbf{v}\right) \right) + U\left(\mathbf{v}\right) = \left(T+S\right) \left(\mathbf{v}\right) + U\left(\mathbf{v}\right) \\ &=\left(\left(T +S \right) +U\right) \left(\mathbf{v}\right).\end{aligned}$$

3. (Identity element of vector addition). Let us define an operator ##\tilde{T}## by ##\tilde{T}\left( \mathbf{v}\right) = \mathbf{0}## for all ##\mathbf{v} \in V##; this map is clearly linear, so ##\tilde{T} \in \mathcal{L}\left(V,V\right)##. Hence, given ##T\in \mathcal{L}\left(V,V\right)## and ##\mathbf{v} \in V##, we have that $$\left( T+ \tilde{T} \right)\left(\mathbf{v} \right) = T\left(\mathbf{v}\right) + \tilde{T}\left( \mathbf{v}\right) = T\left(\mathbf{v} \right) + \mathbf{0} = T\left(\mathbf{v} \right),$$
where we have noted that ##T\left( \mathbf{v}\right) \in V##. Since this holds for every ##\mathbf{v} \in V##, we have ##T + \tilde{T} = T##, and hence an identity element exists in ##\mathcal{L}\left(V,V\right)##. (*I'm really not sure I've done this part correctly, it doesn't seem right?!*)

4. (Inverse elements of addition). Noting that each ##\mathbf{v} \in V## has a unique additive inverse ##-\mathbf{v} \in V##, such that ##\mathbf{v} + \left(-\mathbf{v}\right) = \mathbf{0}##, and that ##\left(-1\right)\mathbf{v}= - \mathbf{v}## for all ##\mathbf{v} \in V##, we have $$T\left(\mathbf{0}\right) = T\left( \mathbf{v} + \left(-\mathbf{v}\right) \right) = T\left(\mathbf{v}\right) + T\left(-\mathbf{v}\right)= T\left(\mathbf{v}\right) + T\left(\left(-1\right)\mathbf{v}\right) = T\left(\mathbf{v}\right)+ \left(-1\right)T\left(\mathbf{v}\right) = T\left(\mathbf{v}\right) + \left(-T\left(\mathbf{v}\right)\right)= \left(T + \left(-T\right)\right)\left(\mathbf {v}\right) = \mathbf{0},$$ where we have noted that ##T\left(\mathbf{0}\right) = T\left(0\mathbf{v}\right) = 0\,T\left(\mathbf{v}\right) = \mathbf{0}## (as ##0\mathbf{v}=\mathbf{0}## for all ##\mathbf{v} \in V## and ##T\left(\mathbf{v}\right) \in V##).
Hence, each element ##T\in \mathcal{L}\left(V,V\right)## has an additive inverse ##-T\in \mathcal{L}\left(V,V\right)##. (*Again, I'm not fully sure I'm correct on this one?!*)

5. (Compatibility of scalar multiplication with field multiplication). Let ##T\in \mathcal{L}\left(V,V\right)##, ##\mathbf{v} \in V## and ##c_{1},c_{2} \in \mathbb{F}##. Noting that ##\left(c_{1}c_{2}\right) \mathbf{v} = c_{1}\left( c_{2}\mathbf{v}\right)## for all ##\mathbf{v} \in V## and ##c_{1},c_{2} \in \mathbb{F}##, we have that $$T\left(\left( c_{1}c_{2}\right) \mathbf{v}\right) = \left( c_{1}c_{2}\right)T\left( \mathbf{v}\right)$$ and also $$T\left(\left( c_{1}c_{2}\right) \mathbf{v}\right)= T\left(c_{1}\left( c_{2}\mathbf{v}\right)\right)= c_{1}T\left(c_{2}\mathbf{v}\right) = c_{1}\left( c_{2} T\left(\mathbf{v}\right) \right).$$ Hence, we have that $$\left( c_{1}c_{2}\right) T\left( \mathbf{v}\right) = c_{1}\left( c_{2} T \right) \left(\mathbf{v}\right).$$

6. (Identity element of scalar multiplication). Noting that ##1\mathbf{v} = \mathbf{v}## for all ##\mathbf{v} \in V##, then given ##T\in \mathcal{L}\left(V,V\right)## and ##\mathbf{v} \in V## we have $$T\left(\mathbf{v} \right)= T\left(1\mathbf{v} \right) = 1T\left(\mathbf{v} \right) = \left(1 T\right)\left(\mathbf{v} \right)$$ as required.

7. (Distributivity of scalar multiplication with respect to vector addition). Let ##S,T\in \mathcal{L}\left(V,V\right)##, ##\mathbf{v} \in V## and ##c \in \mathbb{F}##. Then $$\left(S+T\right)\left(c\mathbf{v}\right)= c\left(S+T\right)\left(\mathbf{v}\right)$$ and also $$\left(S+T\right)\left(c\mathbf{v}\right) = S\left(c\mathbf{v}\right) + T\left(c\mathbf{v}\right) = cS\left(\mathbf{v}\right) + cT\left(\mathbf{v}\right) = \left( cS+ cT\right)\left(\mathbf{v}\right),$$ and hence $$c\left(S+T\right)\left(\mathbf{v}\right) =\left( cS+ cT\right)\left(\mathbf{v}\right).$$

8. (Distributivity of scalar multiplication with respect to field addition). Let ##T\in \mathcal{L}\left(V,V\right)##, ##\mathbf{v} \in V## and ##c_{1},c_{2} \in \mathbb{F}##. Then, noting that ##\left(c_{1} +c_{2}\right)\mathbf{v} = c_{1}\mathbf{v} + c_{2}\mathbf{v}## for all ##\mathbf{v} \in V## and ##c_{1},c_{2} \in \mathbb{F}##, we have that $$T\left(\left(c_{1} +c_{2}\right)\mathbf{v}\right) = \left(c_{1} +c_{2}\right) T\left(\mathbf{v}\right)$$ and also $$T\left(\left(c_{1} +c_{2}\right)\mathbf{v}\right)= T\left(c_{1} \mathbf{v}\right) + T\left(c_{2}\mathbf{v}\right) = c_{1}T\left( \mathbf{v}\right)+ c_{2}T\left( \mathbf{v}\right) = \left(c_{1}T + c_{2}T\right)\left( \mathbf{v}\right),$$ and hence $$\left(c_{1} +c_{2}\right) T\left(\mathbf{v}\right)= \left(c_{1}T + c_{2}T\right)\left( \mathbf{v}\right).$$

Therefore, the set ##\mathcal{L}\left( V,V\right)## forms a vector space over ##\mathbb{F}## under point-wise addition and scalar multiplication.
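As a quick numerical sanity check (purely illustrative and not part of the proof: in the special case ##V=\mathbb{R}^{2}## the operators are ##2\times 2## matrices, and the point-wise operations above become matrix addition and scalar multiplication), a few of the axioms can be verified with NumPy:
Code:
import numpy as np

# Illustration only: operators on R^2 represented as 2x2 matrices,
# with arbitrary (random) choices of T, S, v and scalars c1, c2.
rng = np.random.default_rng(0)
T = rng.standard_normal((2, 2))
S = rng.standard_normal((2, 2))
v = rng.standard_normal(2)
c1, c2 = 3.0, -1.5

# Axiom 1: (T + S)(v) = (S + T)(v)
assert np.allclose((T + S) @ v, (S + T) @ v)
# Axiom 5: ((c1*c2) T)(v) = (c1 (c2 T))(v)
assert np.allclose(((c1 * c2) * T) @ v, (c1 * (c2 * T)) @ v)
# Axiom 8: ((c1 + c2) T)(v) = (c1 T + c2 T)(v)
assert np.allclose(((c1 + c2) * T) @ v, (c1 * T + c2 * T) @ v)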

Is this a correct analysis?
 
"Don't panic!" said:
Is this a correct analysis?

Yes.
 
Yes, don't panic.
 
Thanks guys, appreciate you checking it over.
 
"Don't panic!" said:
4. (Inverse elements of addition)
...
$$T\left(\mathbf{0}\right) = \cdots = T\left(\mathbf{v}\right)+ \left(-1\right)T\left(\mathbf{v}\right) = T\left(\mathbf{v}\right) + \left(-T\left(\mathbf{v}\right)\right)= \left(T + \left(-T\right)\right)\left(\mathbf {v}\right) = \mathbf{0}$$
You shouldn't use the notation -T anywhere in this calculation, since we don't know that there's something that deserves that notation until we have completed it. Also, (this is just a presentation issue) if you have proved that the first thing in your chain of equalities is equal to ##\mathbf 0##, then you should put ##\mathbf 0=## at the start of it, not ##=\mathbf 0## at the end of it. Your notation and the "=0" at the end make it look like you're using the result that you're trying to prove in the final step. (I realize that you're not).

This would be better:
$$\mathbf 0=T(\mathbf 0)=\cdots=T(\mathbf v)+(-1)T(\mathbf v) =T(\mathbf v)+((-1)T)(\mathbf v)=(T+(-1)T)(\mathbf v).$$ And it would be even better to write the start of the calculation as ##\tilde T(\mathbf v)=\mathbf 0=\cdots##, because then we can conclude (since the equalities hold for all ##\mathbf v##) that ##\tilde T=T+(-1)T##, where ##\tilde T## is the function you defined and proved to be the identity in step 3. The result ##\tilde T=T+(-1)T## implies that ##(-1)T## is the additive inverse of ##T##. Now we can start using the notation ##-T=(-1)T##.

Another thing: you used that ##T## is linear in steps 4-8, but there's no need to do that. There's nothing wrong with using linearity, considering what you want to prove, but it's useful to know that the assumption of linearity is unnecessary.

Step 4 without linearity:
$$(T+(-1)T)(\mathbf v) = T(\mathbf v)+((-1)T)(\mathbf v) =T(\mathbf v)+(-1)T(\mathbf v) =(1+(-1))T(\mathbf v) =0 T(\mathbf v)=\mathbf 0=\tilde T(\mathbf v).$$ Step 5 without linearity:
$$((ab)T)(\mathbf v) =(ab)T(\mathbf v) =a(bT(\mathbf v)) =a((bT)(\mathbf v)) =(a(bT))(\mathbf v).$$

Edit: In fact, you can prove that for all vector spaces V,W, the set of all functions from V into W, with the definitions of addition and scalar multiplication that you included in your post, is a vector space. Then you know that the set of all functions from V into V is a vector space, and you can prove quite easily that the set of linear operators on V is a vector space by only checking the usual three things to verify that it's a subspace of that larger vector space.
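To spell out one of those three things, closure under scalar multiplication (the other two are that the zero function ##\tilde T## is linear and that ##S+T## is linear whenever ##S## and ##T## are, as checked earlier in the thread): for ##T\in\mathcal L(V,V)##, ##c\in\mathbb F## and all ##a,b\in\mathbb F##, ##\mathbf v,\mathbf w\in V##,
$$(cT)(a\mathbf v+b\mathbf w)=c\,T(a\mathbf v+b\mathbf w)=c\bigl(a\,T(\mathbf v)+b\,T(\mathbf w)\bigr)=a\,(cT)(\mathbf v)+b\,(cT)(\mathbf w),$$ so ##cT## is again linear.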
 
Thanks once again for your help Fredrik! Extremely useful and helpful :)
 