How Do Linear Transformations Form a Basis in Finite Dimensional Spaces?

  • Thread starter: WiFO215
  • Tags: Theorem

WiFO215
Theorem:
Let V be an n-dimensional vector space and W an m-dimensional vector space over the field F of real or complex numbers. Then the space L(V,W) of linear transformations from V into W is finite-dimensional and has dimension mn.

Proof:
Let ##B = \{\alpha_1, \alpha_2, \ldots, \alpha_n\}## and ##B' = \{\beta_1, \beta_2, \ldots, \beta_m\}## be ordered bases for V and W respectively. For each pair of integers ##(p,q)## with ##1 \leq p \leq m## and ##1 \leq q \leq n##, we define a linear transformation ##E(p,q)## from V into W by

$$E(p,q)(\alpha_i) = \begin{cases} 0, & i \neq q \\ \beta_p, & i = q \end{cases} \qquad \text{i.e.,} \qquad E(p,q)(\alpha_i) = \delta_{iq}\,\beta_p.$$

By the theorem that a linear transformation is uniquely determined by its values on an ordered basis, there is a unique linear transformation from V into W satisfying these conditions. The claim is that the mn transformations E(p,q) form a basis for L(V,W).

Let T be a linear transformation from V into W. For each j, ##1 \leq j \leq n##, let ##A(1,j), \ldots, A(m,j)## be the coordinates of the vector ##T\alpha_j## in the ordered basis B', i.e.,

$$T\alpha_j = \sum_{p=1}^{m} A(p,j)\,\beta_p.$$

We wish to show that
$$T = \sum_{p=1}^{m} \sum_{q=1}^{n} A(p,q)\,E(p,q). \qquad (1)$$

Let U be the linear transformation on the right-hand side of (1). Then for each j

$$U\alpha_j = \sum_{p}\sum_{q} A(p,q)\,E(p,q)(\alpha_j) = \sum_{p}\sum_{q} A(p,q)\,\delta_{jq}\,\beta_p = \sum_{p=1}^{m} A(p,j)\,\beta_p = T\alpha_j,$$

and consequently U = T. Now (1) shows that the E(p,q) span L(V,W); we must prove that they are independent [ THIS IS THE PART THAT I DON'T UNDERSTAND. I COULD FOLLOW UP TO HERE]. But this is clear from what we did above; for, if the linear transformation

$$U = \sum_{p}\sum_{q} A(p,q)\,E(p,q)$$

is the zero transformation, then ##U\alpha_j = 0## for each j, so

$$\sum_{p=1}^{m} A(p,j)\,\beta_p = 0$$

and the independence of the ##\beta_p## implies that A(p,j) = 0 for every p and j.

------END OF PROOF IN TEXT
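Here is a minimal numeric sketch of the whole argument (my own illustration, not part of the textbook proof; it assumes the usual identification of a linear transformation with its ##m \times n## matrix in the bases B and B', and the helper name matrix_unit is made up):

```python
import numpy as np

m, n = 2, 3  # dim W = 2, dim V = 3 for the demo

def matrix_unit(p, q):
    """Matrix of E(p,q) in the bases B, B': it sends alpha_q to beta_p and
    every other basis vector of V to 0, i.e. a 1 in row p, column q."""
    E = np.zeros((m, n))
    E[p - 1, q - 1] = 1.0  # the text indexes p, q from 1; numpy from 0
    return E

units = {(p, q): matrix_unit(p, q)
         for p in range(1, m + 1) for q in range(1, n + 1)}

# Spanning, i.e. equation (1): the entries A(p,q) of T's matrix are the
# coefficients that reconstruct T from the E(p,q).
rng = np.random.default_rng(0)
A = rng.standard_normal((m, n))  # matrix of an arbitrary T in L(V, W)
U = sum(A[p - 1, q - 1] * units[(p, q)]
        for p in range(1, m + 1) for q in range(1, n + 1))
assert np.allclose(U, A)

# Independence: flatten each E(p,q) into a length-mn row. A combination
# of the E(p,q) is the zero transformation only when every coefficient
# vanishes, which is equivalent to these rows having full rank mn.
stack = np.array([E.ravel() for E in units.values()])
assert np.linalg.matrix_rank(stack) == m * n
print("the", m * n, "transformations E(p,q) span and are independent")
```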

Now let me explain a little more clearly what I don't understand with a rather simple example.

Let S be the set of ordered pairs ##(a,1)## with ##1 \leq a \leq n##, where a is an integer, and let F be the set of real numbers.

Now let me define a function f(i,j), with ##f(i,j) : S \rightarrow F##, such that

$$f(i,j)[(a,1)] = \delta_{ja}.$$

Each f(i,j) could then be represented as an ##n \times 1## column matrix with a 1 in the jth position and 0s elsewhere.

What I am trying to point out is that f(1,1) maps to the matrix ##[1, 0, 0, \ldots, 0]^T##, but so does f(2,1). If both of them map to the same fellow, how the heck are the two linearly independent?
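Here is a quick numeric version of this example (my own sketch; it encodes each f(i,j) as the length-n column of its values on (1,1), ..., (n,1)):

```python
import numpy as np

n = 4

def f(i, j):
    """Column of values f(i,j)[(a,1)] = delta(j,a) for a = 1..n.
    Note that the formula never uses i, unlike E(p,q), whose output
    beta_p depends on p."""
    return np.array([1.0 if a == j else 0.0 for a in range(1, n + 1)])

print(np.array_equal(f(1, 1), f(2, 1)))  # True: i is ignored, same function
print(np.array_equal(f(1, 1), f(1, 2)))  # False: different j, different column
```

Because the formula never uses i, f(1,1) and f(2,1) really are equal as functions; this is where the example diverges from the E(p,q) construction, whose output ##\beta_p## depends on p.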
 
Okay, wait. I see a flaw in my argument. It doesn't matter if two f(i,j) map to the same vector in F. That ought to be a good thing, actually, since it just says we can even construct linear transformations which aren't 1:1. So that's that. But I still don't understand how the mn linear transformations are linearly independent. I fully understand how they span the space of linear transformations. I just can't connect the dots.
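A small check that may connect the dots (my own sketch, using the same matrix encoding as above): two of the E(p,q) can agree on some vectors, yet no nonzero combination of them is the zero transformation, because independence is tested on all basis vectors at once.

```python
import numpy as np

m, n = 2, 3
E11 = np.zeros((m, n)); E11[0, 0] = 1.0  # E(1,1): sends alpha_1 to beta_1
E12 = np.zeros((m, n)); E12[0, 1] = 1.0  # E(1,2): sends alpha_2 to beta_1

alpha3 = np.array([0.0, 0.0, 1.0])  # coordinates of alpha_3 in the basis B
print(np.array_equal(E11 @ alpha3, E12 @ alpha3))  # True: both send alpha_3 to 0

# Yet a*E(1,1) + b*E(1,2) has (1,1) entry a and (1,2) entry b, so it is
# the zero transformation only when a = b = 0, just as the proof reads
# A(p,j) = 0 off of U(alpha_j) = 0.
for a, b in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    assert np.any(a * E11 + b * E12 != 0)
```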
 
DONE. You can delete this thread now. I was able to prove it right after posting it here as usual.
 