Need help with linear independence proof


Homework Help Overview

The discussion revolves around a proof concerning linear independence in vector spaces, specifically the relationship between two lists of vectors: the original list (v1, ..., vn) and the transformed list (v1 - v2, v2 - v3, ..., v(n-1) - vn, vn). Participants explore whether linear independence of the first list carries over to the second.

Discussion Character

  • Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • The original poster can prove the analogous statement with 'linearly independent' replaced by 'spans V' and asks what connection they are missing. Other participants suggest a proof by contradiction: expand the transformed vectors to show what a dependence relation would imply for the original list.

Discussion Status

Participants are actively engaging with the problem, offering guidance on how to approach the proof. The focus is on contradiction as a method, with added clarification on ensuring a nonzero coefficient survives the expansion. There is no explicit consensus on the proof's direction or completeness.

Contextual Notes

The original poster mentions a familiarity with a related proof involving spanning sets, indicating a potential gap in understanding the nuances of linear independence versus spanning properties. The discussion also hints at the complexity of the problem due to the dimensionality of the vector space.

dyanmcc
Hi,

I don't know how to do the following proof:

If (v1, ..., vn) is linearly independent in V, then so is the list (v1 - v2, v2 - v3, ..., v(n-1) - vn, vn).

I can do the proof if I replace 'linearly independent' with 'spans V' ... so what connection am I missing?

Thanks much!
 
Prove it by contradiction. If the second list of vectors were not linearly independent, then you could write 0 as a nontrivial linear combination of those vectors. Then simply expand out each vector to show that this implies v1, ..., vn are also linearly dependent.
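As a numerical sanity check (not a proof), the expansion the hint describes can be packaged as multiplication by a matrix: the new list is obtained from the old one by an upper-triangular matrix with ones on the diagonal, which is invertible, so rank (and hence independence) is preserved. A minimal numpy sketch, with arbitrary illustrative dimensions:

```python
import numpy as np

n = 4
# Rows of A are e_i - e_(i+1) for i < n, and e_n for the last row,
# so A @ V has rows v1-v2, v2-v3, ..., v(n-1)-vn, vn.
A = np.eye(n) - np.diag(np.ones(n - 1), k=1)
# A is triangular with ones on the diagonal, hence det(A) = 1.
assert np.isclose(np.linalg.det(A), 1.0)

rng = np.random.default_rng(0)
V = rng.standard_normal((n, n + 2))  # rows: n generic vectors in R^(n+2)
W = A @ V                            # rows: the transformed list
# Multiplying by an invertible matrix preserves rank.
assert np.linalg.matrix_rank(V) == n
assert np.linalg.matrix_rank(W) == n
```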
 
Great, thanks. Here's another one for you ... Prove that if V is finite-dimensional with dim V > 1, then the set of noninvertible operators on V is not a subspace of L(V).
 
What have you done for that second problem, dyanmcc? Start by thinking about matrices.
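In the spirit of that hint, one concrete route (an illustration, not necessarily the intended solution): exhibit two noninvertible matrices whose sum is invertible, which shows the set is not closed under addition. A 2x2 sketch:

```python
import numpy as np

# Two rank-1 (hence noninvertible) operators on a 2-dimensional space.
P = np.array([[1.0, 0.0], [0.0, 0.0]])  # projection onto the first axis
Q = np.array([[0.0, 0.0], [0.0, 1.0]])  # projection onto the second axis
assert np.isclose(np.linalg.det(P), 0.0)
assert np.isclose(np.linalg.det(Q), 0.0)

S = P + Q  # the identity: invertible, so the set is not closed under +
assert np.isclose(np.linalg.det(S), 1.0)
```

The same construction works whenever dim V > 1; in dimension 1 it fails, since there the only noninvertible operator is 0.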
 
0rthodontist said:
Prove it by contradiction. If the second set of vectors was not linearly independent, then you can write 0 as a linear combination of those vectors. Then simply expand out each vector to show that this implies v1...vn are also linearly dependent.
Also be sure to prove that a nonzero coefficient remains after you expand the vectors out. (Look at the FIRST nonzero coefficient before expansion.)
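That coefficient bookkeeping can be checked symbolically: treating the vectors as free symbols is enough to collect coefficients after the expansion. A small sympy sketch (for n = 4, as an illustration):

```python
import sympy as sp

n = 4
a = sp.symbols('a1:5')  # coefficients a1, ..., a4
v = sp.symbols('v1:5')  # stand-ins for the vectors v1, ..., v4

# a1*(v1-v2) + a2*(v2-v3) + a3*(v3-v4) + a4*v4
expr = sum(a[i] * (v[i] - v[i + 1]) for i in range(n - 1)) + a[n - 1] * v[n - 1]
expanded = sp.expand(expr)
coeffs = [expanded.coeff(vi) for vi in v]

# After regrouping: coefficient of v1 is a1; of v_k it is a_k - a_(k-1).
assert coeffs[0] == a[0]
assert coeffs[1] == a[1] - a[0]

# Setting every collected coefficient to zero forces a1 = ... = an = 0.
sol = sp.solve(coeffs, a)
assert all(sol[ai] == 0 for ai in a)
```

In particular, if a_k is the first nonzero coefficient, the collected coefficient of v_k is a_k - a_(k-1) = a_k, which is still nonzero, as the post above requires.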
 
