Linearly Independent Sets: Subsets of A Are Also Independent

  • Thread starter: _Bd_
  • Tags: Linear

Homework Help Overview

The discussion revolves around the concept of linear independence in vector spaces, specifically examining whether any subset of a linearly independent set A = {v_1, v_2, v_3, ..., v_n} is also linearly independent. Participants are exploring definitions and implications of linear independence, as well as the relationship between subsets and the original set.

Discussion Character

  • Conceptual clarification, Assumption checking, Exploratory

Approaches and Questions Raised

  • Participants are attempting to understand the implications of linear independence and whether subsets maintain this property. Questions are raised about the definitions being used and the assumptions underlying the reasoning. Some participants suggest using proof by contradiction to explore the concept further.

Discussion Status

The discussion is active, with participants providing insights and questioning each other's assumptions. There is an exploration of different definitions of linear independence, and some guidance is offered regarding the structure of a proof. However, there is no explicit consensus on the approach to take.

Contextual Notes

Participants note that the chapter is focused on vector spaces, which may influence the definitions and methods discussed. There is also mention of a textbook reference that simply states "Proof," indicating a potential lack of detailed guidance in the source material.

_Bd_

Homework Statement



Show that if A = {v_1, v_2, v_3, . . . , v_n} is a linearly independent set, then any subset of A is also linearly independent.

Homework Equations



proof.

The Attempt at a Solution



So if A is a set of vectors
[v_1]
[v_2]
[v_3]
[. . .]
[v_n]
then for it to be independent the determinant must NOT be zero, and as far as I know determinants are only defined for square matrices,
therefore A must be a square matrix with a determinant not equal to zero,
therefore it must be possible to reduce it to the identity matrix,
therefore, since the identity matrix has a leading 1 in a different column/row for each row, you can NOT write any of them as a linear combination of the others, so any subset would also be independent??

I'm not sure about this since it's mostly text and I don't know if I should have more math
. . .
the back of the book just says Proof.
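
As a numerical sanity check of the determinant/rank idea described above, here is a minimal NumPy sketch (the matrix A below is a made-up example, not from the thread); note that the determinant test only applies when the set is square, while the rank test works for any number of vectors:

```python
import numpy as np

# Rows of A are the vectors v_1, ..., v_n (a made-up example).
A = np.array([[2.0, 1.0, 1.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 2.0]])

# Square case: nonzero determinant <=> rows are linearly independent.
# (In real code, compare against a small tolerance, not exactly 0.)
print(np.linalg.det(A) != 0)                       # True -> independent

# General case: independent iff rank equals the number of vectors.
print(np.linalg.matrix_rank(A) == A.shape[0])      # True

# Any subset of the rows passes the same rank test:
subset = A[[0, 2], :]
print(np.linalg.matrix_rank(subset) == subset.shape[0])  # True
```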
 
_Bd_ said:
you can NOT write any of them as a linear combination of the others, so any subset would also be independent??

I'm not sure about this since it's mostly text and I don't know if I should have more math
. . .
the back of the book just says Proof.

Focus on this last bit. If a subset were linearly dependent, you could write an equation for one of the vectors as a linear combination of the others. Is that consistent with A being linearly independent? What assumption did we make that must be false?
 
fzero said:
Focus on this last bit. If a subset were linearly dependent, you could write an equation for one of the vectors as a linear combination of the others. Is that consistent with A being linearly independent? What assumption did we make that must be false?

That's kind of what I was pointing at . . . or I'm not sure what you're getting at.
say it reduces to

[1 0 0 0 ]
[0 1 0 0 ]
[0 0 1 0 ]
[0 0 0 1 ]

so for example vector 1 (1 0 0 0) can NOT be written as any combination of the other 3 vectors . . . because none of them have any coefficients in that column . . . so even if you were to only take a subset
[ 1 0 0 0 ]
[ 0 0 0 1 ]
[ 0 0 1 0 ]
then neither of those can be written as a combination of the others (therefore they are independent?)
or that's what I'm thinking, but I don't know if there should be more math and less text involved?
 
My question, then, is: what definition of "independent" are you using?

The definition that appears in most texts has nothing to do with matrices!
 
You should be able to show this pretty straightforwardly by using the definition of linear independence: {v_0, v_1, …, v_n} is linearly independent iff a_0v_0 + a_1v_1 + ... + a_nv_n = 0 implies a_0 = a_1 = … = a_n = 0. Try a proof by contradiction.
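
In display form (formatting added here, not JonF's), that definition reads:

```latex
\{v_0, v_1, \dots, v_n\}\ \text{is linearly independent}
\iff
\bigl(\, a_0 v_0 + a_1 v_1 + \cdots + a_n v_n = 0
\ \Longrightarrow\ a_0 = a_1 = \cdots = a_n = 0 \,\bigr)
```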
 
HallsofIvy said:
My question, then, is: what definition of "independent" are you using?

The definition that appears in most texts has nothing to do with matrices!

The chapter is on vector spaces . . .



JonF said:
You should be able to show this pretty straightforwardly by using the definition of linear independence: {v_0, v_1, …, v_n} is linearly independent iff a_0v_0 + a_1v_1 + ... + a_nv_n = 0 implies a_0 = a_1 = … = a_n = 0. Try a proof by contradiction.

ok so . . . then if c_1v_1 + c_2v_2 + . . . + c_nv_n = 0, the set is independent if c_1, c_2, . . . , c_n are zero.

if c_1, c_2, . . . , c_n are NOT zero then the set contains at least one c_iv_i which is NOT equal to zero and therefore does not satisfy the independence condition . . .
is that cool?
 
Make sure you note that, in the definition I gave you, every choice of a_i's satisfying the equation has to be all zeros, not just one particular group.

I'd formalize it a bit, but that's the general idea. Start with something like: let {v_0, v_1, …, v_n} be a set of linearly independent vectors. Suppose {v_{p_0}, v_{p_1}, …, v_{p_k}} is a subset of those vectors and that it is linearly dependent. If it is linearly dependent… thus we reach a contradiction. I'll let you fill in the rest.
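
One way the missing middle of that outline can be filled in (a sketch in LaTeX; the details are mine, since the thread and the textbook leave them to the reader):

```latex
% Sketch: a subset of a linearly independent set is linearly independent.
Let $\{v_1, \dots, v_n\}$ be linearly independent and let
$\{v_{p_1}, \dots, v_{p_k}\}$ be any subset of it. Suppose, for
contradiction, that the subset is linearly dependent. Then there are
scalars $c_1, \dots, c_k$, not all zero, with
\[
  c_1 v_{p_1} + c_2 v_{p_2} + \cdots + c_k v_{p_k} = 0.
\]
Extend this to a combination of all of $v_1, \dots, v_n$ by giving
every vector outside the subset the coefficient $0$. The result is a
nontrivial linear combination of $v_1, \dots, v_n$ that equals $0$,
contradicting the linear independence of the full set $A$. Hence every
subset of $A$ is linearly independent. $\blacksquare$
```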
 
_Bd_ said:
ok so . . . then if c_1v_1 + c_2v_2 + . . . + c_nv_n = 0, the set is independent if c_1, c_2, . . . , c_n are zero.

Make sure you understand the subtlety of this definition, which is that the only solution to the equation c_1v_1 + c_2v_2 + ... + c_nv_n = 0 is c_1 = c_2 = ... = c_n = 0.

I can always set a linear combination of vectors equal to 0, whether they are linearly dependent or linearly independent, and come up with the solution c_1 = c_2 = ... = c_n = 0. The key difference is that for linearly independent vectors this is the only solution; for a set of linearly dependent vectors it is one of many solutions.

For example, the vectors v_1 = <2, 1, 1> and v_2 = <4, 2, 2> are obviously linearly dependent.

The equation c_1v_1 + c_2v_2 = 0 has the solution c_1 = c_2 = 0, even though these vectors are linearly dependent, but there are many other solutions, such as c_1 = 2 and c_2 = -1.
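
A quick NumPy check of that example (my own snippet, not part of the original post):

```python
import numpy as np

v1 = np.array([2, 1, 1])
v2 = np.array([4, 2, 2])

# The trivial solution always works...
print(0 * v1 + 0 * v2)        # [0 0 0]

# ...but for dependent vectors a nontrivial solution exists too:
print(2 * v1 + (-1) * v2)     # [0 0 0]

# Rank confirms dependence: 2 vectors, but rank 1.
print(np.linalg.matrix_rank(np.vstack([v1, v2])))  # 1
```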
 
