# Third isomorphism proof of the day.

Let T: V --> W be an isomorphism, and let {v1, ..., vk} be a subset of V. Prove that {v1, ..., vk} is a linearly independent set iff {T(v1), ..., T(vk)} is a linearly independent set.

Attempt: All I know is that {T(v1), ..., T(vk)} means each coordinate vector can be written as a linear combination of the standard basis vectors, which means the coordinate vectors are linearly independent. Also, if we assume T is an isomorphism, then T has an inverse. What other facts am I supposed to use?

I'm at one of those boiling points of frustration with these proofs, as if I really don't know how to construct them. What am I missing when it comes to constructing these things? Sincerely, About to blow a gasket.

From what I can tell, {v1, v2, ..., vk} is simply a finite subset of V, and {T(v1), T(v2), ..., T(vk)} is the corresponding set of images in W. As such, I don't think this question concerns bases of either V or W. That said, when a set {v1, v2, ..., vk} is linearly independent, it means that if a linear combination of these elements equals 0, i.e. a1v1 + a2v2 + ... + akvk = 0 where the ai are scalars, then all ai = 0. Also recall that your isomorphism satisfies T(x + y) = T(x) + T(y) and T(ax) = aT(x), where x, y are vectors and a is any scalar. And, as you said, an isomorphism T has an inverse, which is of course also an isomorphism. Hope this helps!
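As a quick sanity check of the claim (my own illustration, not part of the thread): in finite dimensions an isomorphism can be represented by an invertible matrix, and a set of column vectors is linearly independent exactly when the matrix they form has full column rank. The matrix `A`, the vectors in `V`, and the helper `is_independent` below are all made up for the demo:

```python
# Numerical sanity check (my own illustration): an invertible matrix A plays
# the role of the isomorphism T, and a set of columns is linearly independent
# iff the matrix they form has full column rank.
import numpy as np

# An invertible 3x3 matrix standing in for T (det = 3, so nonzero).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
assert np.linalg.det(A) != 0  # T is an isomorphism

# Two linearly independent vectors in R^3, stored as columns.
V = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

def is_independent(M):
    """Columns of M are linearly independent iff rank(M) equals the column count."""
    return np.linalg.matrix_rank(M) == M.shape[1]

print(is_independent(V))      # True: the v_i are independent
print(is_independent(A @ V))  # True: the T(v_i) are independent as well
```

This is only a numerical spot check, of course, not a substitute for the proof the thread is after.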

There is a trick to doing proofs: be very clear about what you need to show, and write down the exact definitions of the technical terms.

In this case you have an iff, in other words a two-way implication: A implies B and B implies A. So you have two directions to prove, and your proof will consist of two parts: the => part and the <= part.

Secondly, what does it mean for a set of vectors to be linearly independent? The exact definition is that if a1v1 + ... + akvk = 0, then each of the ai must be zero.

Now to do the => direction, assume that v1 through vk are linearly independent. You need to show that {T(v1), ..., T(vk)} is linearly independent.

Now if you have a linear combination of the T(vi)'s equal to zero, you can use the linearity of T to drive the sums and scalar products through T, so that you have T(linear combo of the v's) = 0. Since T is an isomorphism, its kernel is {0}, so the linear combo of the v's must itself be zero; and since the vi's are linearly independent, all the coefficients must be zero. So you're done!
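For reference, that argument can be written out in full roughly as follows (my own LaTeX write-up of the sketch above, using k vectors as in the problem statement):

```latex
Suppose $a_1 T(v_1) + \dots + a_k T(v_k) = 0$. By linearity of $T$,
\[
  0 = a_1 T(v_1) + \dots + a_k T(v_k) = T(a_1 v_1 + \dots + a_k v_k).
\]
Since $T$ is an isomorphism, $\ker T = \{0\}$, so $a_1 v_1 + \dots + a_k v_k = 0$.
By the linear independence of $\{v_1, \dots, v_k\}$, every $a_i = 0$.
Hence $\{T(v_1), \dots, T(v_k)\}$ is linearly independent.
```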

I didn't spell out the details and I was lazy with my markup, but you should be able to write out my argument in full; and then do the <= direction.

But the point is that when you don't know how to get a proof started, do two things:

1) Write out exactly what is to be proved; and

2) Write down the exact technical definition of any defined terms. In this case once you write down exactly what it means for a set of vectors to be linearly independent, you'll end up with a bunch of sums and scalar products that will drive through your linear transformation T.

In other words once you write down the exact definition of linear independence, the proof writes itself.

Gave it an attempt and it appears to be working out. Thanks for the help and the push needed for today. Cheers

Hiya,

Well I've given it a go and I'd like your feedback to know if it's moving in the right direction:

Proof in --> : Since {v1, ..., vk} is linearly independent, these vectors span a subspace of V (call it X). Since T is linear, {T(v1), ..., T(vk)} spans Im(X).
Let w be in Im(X); then there exists v in X with T(v) = w. Since {v1, ..., vk} is linearly independent, there exist scalars with a1v1 + ... + akvk = v. Then
w = T(v) = T(a1v1 + ... + akvk) = a1T(v1) + ... + akT(vk)

=> {T(v1),...,T(vk)} is linearly independent.

Proof in <--: Assume {T(v1), ..., T(vk)} is linearly independent; that means the scalars in a1w1 + ... + akwk = 0 must satisfy a1 = ... = ak = 0, where each wi is in Im(X), but wi = T(vi) ............. I'm stuck here.

So I gather I have to somehow show that {v1,.....vk} is also linearly independent?

sorry but i think you're running yourself in circles. while linear independence implies spanning a subspace, you're only showing that the images of these vectors span a subspace of the image subspace.

you still need to go to the definition of linear independence.
begin with: "if a linear combination of vectors is equal to zero..."
and conclude with "...therefore, all the coefficients in the linear combination of image vectors (which as a whole is still equal to zero) are equal to zero"

How's this:

If a linear combo of vectors equals zero, this means all of the coefficients of the vectors equal zero (i.e. ai = 0). This implies the set of vectors is linearly independent. Applying the transformation T to this gives a1T(v1) + ... + anT(vn); now each of the linear combinations of the T(vi)'s is also linearly independent, so their coefficients also each equal zero, and this is in the image.

no. backwards. start with "a1T(v1) + ... + anT(vn) = 0" and use the fact that the v_i's are linearly independent and T is a linear isomorphism. (hint: what is the kernel of this transformation?) what must the a_i's be, and what does that imply about the T(v_i)'s? spanning does not imply independence, so you need to use what you know about the transformation T.
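Once that falls into place, the <= direction comes out along these lines (my own sketch, written in LaTeX):

```latex
Suppose $\{T(v_1), \dots, T(v_k)\}$ is linearly independent, and let
$a_1 v_1 + \dots + a_k v_k = 0$. Applying $T$ and using linearity,
\[
  0 = T(0) = T(a_1 v_1 + \dots + a_k v_k) = a_1 T(v_1) + \dots + a_k T(v_k),
\]
so every $a_i = 0$ by the independence of the $T(v_i)$.
Hence $\{v_1, \dots, v_k\}$ is linearly independent.
(Equivalently: apply the forward direction to the isomorphism $T^{-1}$.)
```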