What does it mean for a set of vectors to be linearly dependent?


Discussion Overview

The discussion revolves around the concept of linear dependence and independence of sets of vectors, including definitions and implications in vector spaces. Participants explore the correctness of definitions, uniqueness of representations, and the implications of vector ordering in various contexts.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant defines linear dependence as the existence of distinct vectors in the set and scalars, not all zero, whose linear combination equals the zero vector, noting that at least one such vector can then be expressed in terms of the others.
  • Another participant agrees with the definition but points out a minor quibble regarding the notation of the set of vectors.
  • A different participant challenges the relevance of the correction about the notation, asserting that the original definition is correct even for infinite sets.
  • One participant discusses the uniqueness of representation of a vector in terms of a basis, providing a proof that relies on the linear independence of the basis vectors.
  • Another participant confirms the validity of the proof regarding uniqueness and discusses the implications of definitions of bases on the argument.
  • Several participants engage in a back-and-forth about the relevance of ordering in vector spaces and the implications for finite versus infinite sets of vectors.

Areas of Agreement / Disagreement

Participants express both agreement and disagreement on various points, particularly regarding the definitions of linear dependence and independence, the implications of vector ordering, and the uniqueness of representations. No consensus is reached on the relevance of certain corrections or the implications of infinite sets.

Contextual Notes

Some participants note that the discussion may not apply universally to infinite sets or different types of bases, highlighting limitations in the assumptions made about vector sets and their properties.

"Don't panic!"
Hi all,

I was asked by someone today to explain the notion of linear independence of a set of vectors and I would just like to check that I explained it correctly.

A set of vectors ##S## is said to be linearly dependent if there exist distinct vectors ##\mathbf{v}_{1}, \ldots, \mathbf{v}_{m}## in ##S## and scalars ##c_{1}, \ldots, c_{m}##, not all of which are zero, such that $$c_{1}\mathbf{v}_{1} + \cdots + c_{m}\mathbf{v}_{m} = \sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}.$$

What this means is that at least one vector in ##S## can be completely specified in terms of the other vectors in the set, and hence it depends on the particular form of those vectors. However, if the only case for which ##\sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}## is the trivial one, in which ##c_{i} = 0## for all ##i = 1, \ldots, m##, then the set is said to be linearly independent, as none of the vectors contained within it can be specified in terms of the other vectors in ##S##.

Is this a valid description of the concept?
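For a finite set of vectors, the definition above can be checked computationally: the vectors are independent exactly when the matrix having them as rows has rank equal to the number of vectors. A minimal sketch in Python (function names here are just for illustration), using exact rational row reduction to avoid floating-point pivoting issues:

```python
from fractions import Fraction

def rank(rows):
    """Row-reduce a matrix (list of lists) and return its rank,
    using exact rational arithmetic."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a pivot in this column at or below row r
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        # eliminate this column from every other row
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def is_independent(vectors):
    """Vectors (as rows) are independent iff rank == number of vectors,
    i.e. c1*v1 + ... + cm*vm = 0 forces every c_i = 0."""
    return rank(vectors) == len(vectors)

print(is_independent([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))  # True
print(is_independent([[1, 0], [0, 1], [1, 1]]))           # False: v3 = v1 + v2
```

In the dependent example the rank is 2 while there are 3 vectors, reflecting the nontrivial relation ##\mathbf{v}_{1} + \mathbf{v}_{2} - \mathbf{v}_{3} = \mathbf{0}##.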
 
"Don't panic!" said:
Hi all,

I was asked by someone today to explain the notion of linear independence of a set of vectors and I would just like to check that I explained it correctly.

A set of vectors ##S## is said to be linearly dependent if there exist distinct vectors ##\mathbf{v}_{1}, \ldots, \mathbf{v}_{m}## in ##S## and scalars ##c_{1}, \ldots, c_{m}##, not all of which are zero, such that $$c_{1}\mathbf{v}_{1} + \cdots + c_{m}\mathbf{v}_{m} = \sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}.$$

What this means is that at least one vector in ##S## can be completely specified in terms of the other vectors in the set, and hence it depends on the particular form of those vectors. However, if the only case for which ##\sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}## is the trivial one, in which ##c_{i} = 0## for all ##i = 1, \ldots, m##, then the set is said to be linearly independent, as none of the vectors contained within it can be specified in terms of the other vectors in ##S##.

Is this a valid description of the concept?
Minor quibble. Usually ##S = (\mathbf{v}_{1}, \ldots, \mathbf{v}_{m})##
 
Yes, I think that is a very good way of describing it.
 
Edit: Never mind, I read the OP wrong.
 
mathman said:
Minor quibble. Usually ##S = (\mathbf{v}_{1}, \ldots, \mathbf{v}_{m})##

Do you mean the basis is ordered?
 
mathman said:
Minor quibble. Usually ##S = (\mathbf{v}_{1}, \ldots, \mathbf{v}_{m})##

I don't see how this is correct. For example, ##S## can be infinite and then your correction doesn't apply. The definition in the OP is entirely correct.
 
Thanks for your help on the matter guys, much appreciated.
 
Also, is this reasoning correct for proving that the representation of a given vector ##\mathbf{v}## in a vector space ##V##, with respect to a given basis ##\mathcal{B} = \lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots, n}##, is unique? :

Let ##V## be an ##n##-dimensional vector space and ##\mathcal{B} = \lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots, n}## be a given basis for ##V##. Suppose that a given vector ##\mathbf{v} \in V## can be represented in terms of the basis ##\mathcal{B}## as two linear combinations ##\sum_{i=1}^{n} a_{i} \mathbf{e}_{i}## and ##\sum_{i=1}^{n} b_{i} \mathbf{e}_{i}##. Then ##\sum_{i=1}^{n} a_{i} \mathbf{e}_{i} = \sum_{i=1}^{n} b_{i} \mathbf{e}_{i}##, so that $$\sum_{i=1}^{n} \left( a_{i} - b_{i} \right) \mathbf{e}_{i} = \mathbf{0}.$$ As the vectors ##\lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots, n}## form a basis they are, by definition, linearly independent. Hence ##a_{i} = b_{i}## for all ##i = 1, \ldots, n##. The coefficients ##a_{i}## and ##b_{i}## were required only to satisfy the condition that their linear combinations with the basis vectors describe the vector ##\mathbf{v}##, but were otherwise arbitrary, and hence we must conclude that there is in fact only one, unique, set of scalars ##\lbrace a_{i} \rbrace## satisfying ##\mathbf{v} = \sum_{i=1}^{n} a_{i} \mathbf{e}_{i}##.
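The uniqueness being proved can be illustrated concretely in ##\mathbb{R}^{2}##: given a basis, the coordinates of any vector are pinned down as the solution of a linear system. A minimal Python sketch (the function name is hypothetical), computing the coordinates via Cramer's rule with exact arithmetic:

```python
from fractions import Fraction

def coordinates_2d(e1, e2, v):
    """Coordinates (a1, a2) of v in the basis {e1, e2} of R^2,
    via Cramer's rule; fails if e1, e2 are linearly dependent."""
    det = e1[0] * e2[1] - e2[0] * e1[1]
    if det == 0:
        raise ValueError("e1, e2 are linearly dependent: not a basis")
    a1 = Fraction(v[0] * e2[1] - e2[0] * v[1], det)
    a2 = Fraction(e1[0] * v[1] - v[0] * e1[1], det)
    return a1, a2

e1, e2 = (1, 1), (1, -1)
v = (3, 1)
a1, a2 = coordinates_2d(e1, e2, v)
print(a1, a2)  # 2 1  -> v = 2*e1 + 1*e2, and this representation is unique
```

Since the determinant of an independent pair is nonzero, the system has exactly one solution, which is the computational face of the ##a_{i} = b_{i}## argument above.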
 
Yes, that proof is fine.

If you had been using a definition of "basis" that doesn't make it clear that every basis is a linearly independent set, you would also have had to prove linear independence.
 
  • #10
Thanks. Is the argument I gave at the end correct, namely that because ##a_{i} = b_{i}## holds for arbitrary scalars ##a_{i}, b_{i}## satisfying the required properties, there can in fact be only one such set of scalars?
 
  • #11
"Don't panic!" said:
Thanks. Is the argument I gave at the end correct, namely that because ##a_{i} = b_{i}## holds for arbitrary scalars ##a_{i}, b_{i}## satisfying the required properties, there can in fact be only one such set of scalars?
Yes, this is the standard way to prove uniqueness. If you know that ##x## has property ##P##, and you want to prove that nothing else does, you show that for all ##y## with property ##P##, we have ##y = x##.
 
  • #12
Ok, cool. Thanks for your help.
 
  • #13
micromass said:
I don't see how this is correct. For example, ##S## can be infinite and then your correction doesn't apply. The definition in the OP is entirely correct.

The statement talks about one particular set of vectors. In the case that ##S## is infinite, the statement would have to say "for every finite set of basis vectors".
-------------------------------------------------------------------

Do you mean the basis is ordered?
No. I just meant ##S## was the given set. In general, when talking about vector spaces, ordering is not relevant.
 
  • #14
mathman said:
The statement talks about one particular set of vectors. In the case S is infinite, the statement would have to say for every finite set of basis vectors.

The OP never even mentioned basis vectors :confused:
 
  • #15
micromass said:
The OP never even mentioned basis vectors :confused:

You are right. However his question was about a particular (finite) set of vectors, which was the question I was addressing.
 
  • #16
mathman said:
No. I just meant ##S## was the given set. In general, when talking about vector spaces, ordering is not relevant.

You're right for the finite-dimensional case, or for Hamel bases, where sums are finite, which is the case here. I know this is absurdly far off for this post, but for the sake of broader context: order does matter for Schauder bases, where series convergence can be conditional. And order matters when defining an isomorphism between linear maps and their representation as matrices.
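That last point, that the matrix representation depends on the order of the basis, can be seen with a small Python sketch (names hypothetical): the same linear map ##T(x, y) = (2x, 3y)## and the same set of basis vectors produce different matrices when the basis is listed in a different order.

```python
from fractions import Fraction

def coords(basis, v):
    """Coordinates of v in an *ordered* basis of R^2 (Cramer's rule)."""
    (a, c), (b, d) = basis            # basis vectors as columns [[a, b], [c, d]]
    det = a * d - b * c
    return (Fraction(v[0] * d - b * v[1], det),
            Fraction(a * v[1] - v[0] * c, det))

def matrix_of(T, basis):
    """Matrix of T in the ordered basis: column j is the coordinate
    vector of T applied to the j-th basis vector."""
    cols = [coords(basis, T(e)) for e in basis]
    return [[cols[j][i] for j in range(2)] for i in range(2)]

T = lambda v: (2 * v[0], 3 * v[1])    # T(x, y) = (2x, 3y)

M1 = matrix_of(T, [(1, 0), (0, 1)])   # == [[2, 0], [0, 3]]
M2 = matrix_of(T, [(0, 1), (1, 0)])   # == [[3, 0], [0, 2]]: same set,
                                      # different order, different matrix
```

Both matrices represent the same map; swapping the two basis vectors simply permutes the rows and columns, which is exactly why an ordered basis is needed to pin down the isomorphism with matrices.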
 
