Vector space proof

1. Dec 4, 2008

Carl140

1. The problem statement, all variables and given/known data

Let V be a vector space over a field K. Let S be a set of vectors of V,
S= {e_i : i in J} (i.e e_i in V for each i in J) where J is an index set.
Prove that if S satisfies the following property, then S must be a basis.

The property is:
For every vector space W over the field K and for every function f: S-> W there exists
a unique linear transformation T: V-> W such that T restricted to S = f.
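Restating the hypothesis symbolically (my paraphrase, not part of the original post), the property says S has the universal property of a basis:

```latex
% For every vector space W over K and every set function f on S,
% there is exactly one linear extension of f to all of V:
\forall\, W \text{ a $K$-vector space},\quad
\forall\, f : S \to W,\quad
\exists!\; T : V \to W \text{ linear such that } T\big|_S = f.
```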

3. The attempt at a solution

I really have no clue how to start this. Of course, I must prove that S is linearly independent and that it spans V.

2. Dec 4, 2008

Office_Shredder

Staff Emeritus
Try a proof by contradiction. If the set isn't linearly independent, find a function f such that no such T exists. If the set isn't spanning, find a function f such that more than one T exists.

EDIT: Intuition-wise, you should know that if you specify a linear function on a basis, you've described the linear function entirely and uniquely. Basically, this is showing that if you add more elements (removing linear independence), you have too much information and no longer necessarily have a linear function that satisfies it all; and if you remove elements (removing the spanning property), you don't have enough information to pin down a unique T.

Last edited: Dec 4, 2008
3. Dec 4, 2008

Carl140

Hi, thanks for your reply, but I don't see it yet. Let's see.
Suppose the set isn't linearly independent. Then there exists some e_k in S such that
e_k is a linear combination of the other elements of S, i.e.,
e_k = sum(r_i * e_i), where the r_i are elements of the field K.
Now I have to define a function f: S -> W such that no such T exists. My problem is that W is
an arbitrary vector space over K, so how can I send elements of S to W if we don't
have any information at all about W? Could you please give a hint for this?

4. Dec 4, 2008

Office_Shredder

Staff Emeritus
Pick W to be... say, K (1-dimensional). W is arbitrary, so you're allowed to pick a specific W.
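Following this hint, the dependence case can be sketched as a worked calculation (the choice of f below is my suggestion, not from the thread): if S is dependent, some e_k is a finite combination of the other vectors, and sending e_k to 1 and everything else to 0 leaves no room for a linear extension.

```latex
% Sketch, assuming e_k = \sum_{i \neq k} a_i e_i is a (finite) dependence relation.
% Take W = K and define f : S \to K by f(e_k) = 1 and f(e_i) = 0 for i \neq k.
% Any linear T with T|_S = f would have to satisfy
T(e_k) \;=\; T\!\Big(\sum_{i \neq k} a_i e_i\Big)
        \;=\; \sum_{i \neq k} a_i\, T(e_i)
        \;=\; \sum_{i \neq k} a_i \cdot 0 \;=\; 0,
% which contradicts T(e_k) = f(e_k) = 1. So no such T exists.
```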

5. Dec 4, 2008

HallsofIvy

Staff Emeritus
Can we at least assume that "for every function f: S-> W" is actually "for every linear function f: S-> W"? If not, then the statement is not true.

If S is NOT a basis, then either
1) it does not span V, or
2) it is not linearly independent.

Given 1, construct a function f: S-> W so that there are two different linear transformations L1: V-> W and L2: V-> W such that L1 and L2 restricted to S are both equal to f.

Given 2, construct a function f: S-> W so that there is NO linear transformation L: V-> W such that L restricted to S is f.
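For the case where S fails to span V, here is one standard construction, sketched with a quotient space (the choice W = V/span(S) is my suggestion, not from the thread): the zero function on S then has two distinct linear extensions, so uniqueness fails.

```latex
% Sketch, assuming U = \operatorname{span}(S) \subsetneq V.
% Take W = V/U and let f : S \to W be the zero map, f(e_i) = 0 for all i in J.
% Two different linear maps restrict to f on S:
T_1 = 0 : V \to V/U,
\qquad
T_2 = \pi : V \to V/U \quad \text{(the quotient map)}.
% Both vanish on S, since S \subseteq U = \ker \pi, yet T_1 \neq T_2:
% \pi(v) \neq 0 for any v \notin U, and such a v exists because U \neq V.
% So the extension of f is not unique.
```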

6. Dec 4, 2008

Carl140

Here's what I did:
Suppose S is not linearly independent. Then we can write some e_k as
e_k = sum(a_r * e_r), with r in J and a_r in K.
Now define f: S -> K by f(e_i) = sum(abs(a_r), r in J), where abs(a_r) means absolute value.
Then assume there exists a linear transformation T such that for all s in S we have
T(s) = f(s). Then in particular, for s = e_k we get
T(e_k) = f(e_k), so T(sum(abs(a_r) * e_r)) = sum(abs(a_r), r in J). Using linearity we
end up with sum(abs(a_r) * [T(e_r) - 1]) = 0, so T(e_r) = 1... and then? Is this the right approach?