How Do Linear Transformations Form a Basis in Finite Dimensional Spaces?

WiFO215
Theorem:
Let V be an n-dimensional and W an m-dimensional vector space over the field F of real or complex numbers. Then the space L(V,W) of linear transformations from V into W is finite-dimensional and has dimension mn.

Proof:
Let B = \{\alpha_1, \alpha_2, \ldots, \alpha_n\} and B' = \{\beta_1, \beta_2, \ldots, \beta_m\} be ordered bases for V and W respectively. For each pair of integers (p,q) with 1 \leq p \leq m and 1 \leq q \leq n, we define a linear transformation E(p,q) from V into W by

E(p,q)(\alpha_i) = 0, if i \neq q
E(p,q)(\alpha_i) = \beta_p, if i = q;

that is,

E(p,q)(\alpha_i) = \delta(i,q)\,\beta_p.

By the theorem that a linear transformation is uniquely determined by its values on a basis, there is a unique linear transformation from V into W satisfying these conditions. The claim is that the mn transformations E(p,q) form a basis for L(V,W).
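To make the definition concrete, here is a small sketch of my own (not from the text): with V = R^3, W = R^2, and the standard bases standing in for B and B', each E(p,q) is represented by the m x n "matrix unit" with a single 1 in row p, column q.

```python
import numpy as np

m, n = 2, 3  # dim W = m, dim V = n

def E(p, q):
    """Matrix of E(p,q) w.r.t. the standard bases (1-indexed p, q):
    sends alpha_q to beta_p and every other basis vector to 0."""
    M = np.zeros((m, n))
    M[p - 1, q - 1] = 1.0
    return M

alpha = np.eye(n)  # columns are the basis vectors alpha_1, ..., alpha_n
beta = np.eye(m)   # columns are beta_1, ..., beta_m

# E(p,q) alpha_i = delta(i,q) * beta_p:
assert np.array_equal(E(2, 3) @ alpha[:, 2], beta[:, 1])   # i = q = 3 -> beta_2
assert np.array_equal(E(2, 3) @ alpha[:, 0], np.zeros(m))  # i = 1 != q -> 0
```

The particular choice m = 2, n = 3 is arbitrary; the same picture works for any dimensions.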

Let T be a linear transformation from V into W. For each j, 1 \leq j \leq n, let A(1,j), \ldots, A(m,j) be the coordinates of the vector T\alpha_j in the ordered basis B', i.e.,

T\alpha_j = \sum^{m}_{p=1} A(p,j)\, \beta_p.

We wish to show that
T = \sum^{m}_{p=1} \sum^{n}_{q=1} A(p,q)\, E(p,q). \quad (1)

Let U be the linear transformation in the right-hand member of (1). Then for each j

U\alpha_j = \sum_{p} \sum_{q} A(p,q)\, E(p,q)(\alpha_j)

= \sum_{p} \sum_{q} A(p,q)\, \delta(j,q)\, \beta_p

= \sum^{m}_{p=1} A(p,j)\, \beta_p

= T\alpha_j

and consequently U = T. Now (1) shows that the E(p,q) span L(V,W); we must prove that they are independent [ THIS IS THE PART THAT I DON'T UNDERSTAND. I COULD FOLLOW UP TO HERE]. But this is clear from what we did above; for, if the linear transformation

U = \sum_{p} \sum_{q} A(p,q)\, E(p,q)

is the zero transformation, then U\alphaj = 0 for each j, so

\sum^{m}_{p=1} A(p,j)\, \beta_p = 0

and the independence of the \beta_p implies that A(p,j) = 0 for every p and j.
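The independence step above can also be checked numerically (again my own sketch, not from the text): flatten each matrix unit E(p,q) into a vector of length mn and stack them; the only way a combination \sum A(p,q) E(p,q) can be the zero transformation is if all coefficients vanish, which is equivalent to the stacked matrix having full rank mn.

```python
import numpy as np

m, n = 2, 3

def E(p, q):
    # Matrix unit with a single 1 in row p, column q (1-indexed).
    M = np.zeros((m, n))
    M[p - 1, q - 1] = 1.0
    return M

# Flatten each E(p,q) into a length-mn vector and stack them as rows.
rows = np.stack([E(p, q).ravel()
                 for p in range(1, m + 1)
                 for q in range(1, n + 1)])

# Rank mn  <=>  the only combination summing to zero has every A(p,q) = 0,
# i.e. the mn transformations E(p,q) are linearly independent.
assert np.linalg.matrix_rank(rows) == m * n
```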

------END OF PROOF IN TEXT

Now let me explain a little more clearly what I don't understand with a rather simple example.

Let S be the set of ordered pairs (a,1) with 1 \leq a \leq n, where a is an integer, and let F be the set of real numbers.

Now let me define a function f(i,j), f: S \rightarrow F, such that

f(i,j)[(a,1)] = \delta(j,a)

Each f(i,j) could be represented as an n \times 1 column matrix with a 1 in the jth position.

What I am trying to point out is that f(1,1) maps to the matrix [1 0 0 ... 0], but so does f(2,1), since the definition above does not use i at all. If both of them map to the same fellow, how the heck are the two linearly independent?
 
Okay, wait. I see a flaw in my argument. It doesn't matter if two f(i,j) map to the same vector in F. That ought to be a good thing, actually, since it just says we can even construct linear transformations which aren't 1:1. So that's that. But I still don't understand how the mn linear transformations are linearly independent. I fully understand how they span the space of linear transformations. I just can't connect the dots.
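One way to see where the simple example goes astray (a sketch of my own, not from the text): in the theorem, E(p,q) carries \beta_p in its output, so distinct p give distinct transformations even when their images overlap. Two transformations can produce the same output vector on *different* inputs yet still differ as functions:

```python
import numpy as np

m, n = 2, 3

def E(p, q):
    # Matrix unit with a single 1 in row p, column q (1-indexed).
    M = np.zeros((m, n))
    M[p - 1, q - 1] = 1.0
    return M

a1 = np.eye(n)[:, 0]  # alpha_1
a2 = np.eye(n)[:, 1]  # alpha_2

# E(1,1) and E(1,2) can produce the same OUTPUT vector (beta_1),
# but only on different INPUTS:
assert np.array_equal(E(1, 1) @ a1, E(1, 2) @ a2)       # both give beta_1
# As functions they are distinct -- they disagree at alpha_1:
assert not np.array_equal(E(1, 1) @ a1, E(1, 2) @ a1)
```

Linear independence is a statement about the transformations as elements of L(V,W), not about their individual output vectors, which is exactly why sharing an image causes no trouble.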
 
DONE. You can delete this thread now. I was able to prove it right after posting it here as usual.
 