What does it mean for a set of vectors to be linearly dependent?

In summary, a set of vectors is linearly dependent if at least one vector in the set can be expressed entirely in terms of the others, making no unique contribution of its own. A set is said to be linearly independent if the only way for a linear combination of the vectors (scalars times vectors, summed) to equal zero is for all the scalars to be zero. This is important in proving the uniqueness of a vector's representation in terms of a given basis.
  • #1
"Don't panic!"
Hi all,

I was asked by someone today to explain the notion of linear independence of a set of vectors and I would just like to check that I explained it correctly.

A set of vectors [itex] S[/itex] is said to be linearly dependent if there exist distinct vectors [itex] \mathbf{v}_{1}, \ldots , \mathbf{v}_{m} [/itex] in [itex] S[/itex] and scalars [itex] c_{1}, \ldots , c_{m}[/itex], not all of which are zero, such that [tex] c_{1}\mathbf{v}_{1} + \cdots + c_{m}\mathbf{v}_{m} = \sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0} [/tex]

What this means is that at least one vector in [itex]S[/itex] can be written entirely as a linear combination of the other vectors in the set, and hence it depends on those vectors. If, however, the only case for which [itex]\sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}[/itex] is the trivial one, in which [itex] c_{i} = 0 \; \forall \; i=1, \ldots , m[/itex], then the set is said to be linearly independent, as none of the vectors contained within it can be expressed in terms of the others in [itex]S[/itex].
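For a concrete example, in [itex]\mathbb{R}^{2}[/itex] the set [itex]\lbrace (1,2), (2,4) \rbrace[/itex] is linearly dependent, since [tex] 2(1,2) - (2,4) = (0,0) [/tex] whereas [itex]\lbrace (1,0), (0,1) \rbrace[/itex] is linearly independent, because [itex]c_{1}(1,0) + c_{2}(0,1) = (c_{1}, c_{2}) = (0,0)[/itex] forces [itex]c_{1} = c_{2} = 0[/itex].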

Is this a valid description of the concept?
 
  • #2
"Don't panic!" said:
Hi all,

I was asked by someone today to explain the notion of linear independence of a set of vectors and I would just like to check that I explained it correctly.

A set of vectors [itex] S[/itex] is said to be linearly dependent if there exists distinct vectors [itex] \mathbf{v}_{1}, \ldots , \mathbf{v}_{m} [/itex] in [itex] S[/itex] and scalars [itex] c_{1},\ldots c_{m}[/itex], not all of which are zero, such that [tex] c_{1}\mathbf{v}_{1} + \cdots + c_{m}\mathbf{v}_{m} = \sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0} [/tex]

What this means is that at least one vector in [itex]S[/itex] can be completely specified in terms of the other vectors in the set and hence it is dependent on the particular form of those vectors. However, if the only case for which [itex]\sum_{i=1}^{m} c_{i}\mathbf{v}_{i} = \mathbf{0}[/itex] is the trivial case, in which [itex] c_{i} = 0 \; \forall \; i=1, \ldots , m[/itex], then the set is said to be linearly independent, as none of the vectors contained within it can be specified in terms of the other vectors in [itex]S[/itex].

Is this a valid description of the concept?
Minor quibble. Usually [itex]S=(\mathbf{v}_{1}, \ldots , \mathbf{v}_{m}) [/itex]
 
  • #3
Yes, I think that is a very good way of describing it.
 
  • #4
Edit: Never mind, I read the OP wrong.
 
  • #5
mathman said:
Minor quibble. Usually [itex]S=(\mathbf{v}_{1}, \ldots , \mathbf{v}_{m}) [/itex]

Do you mean the basis is ordered?
 
  • #6
mathman said:
Minor quibble. Usually [itex]S=(\mathbf{v}_{1}, \ldots , \mathbf{v}_{m}) [/itex]

I don't see how this is correct. For example, ##S## can be infinite and then your correction doesn't apply. The definition in the OP is entirely correct.
 
  • #7
Thanks for your help on the matter guys, much appreciated.
 
  • #8
Also, is this reasoning correct for proving that the representation of a given vector [itex]\mathbf{v}[/itex] in a vector space [itex]V[/itex], with respect to a given basis [itex]\mathcal{B}=\lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots , n} [/itex], is unique?

Let [itex]V[/itex] be an [itex]n[/itex]-dimensional vector space and [itex]\mathcal{B}=\lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots , n} [/itex] a given basis for [itex] V[/itex]. Suppose that a given vector [itex]\mathbf{v} \in V [/itex] can be represented in terms of [itex]\mathcal{B}[/itex] by two linear combinations, [itex] \sum_{i=1}^{n}a_{i} \mathbf{e}_{i} [/itex] and [itex] \sum_{i=1}^{n}b_{i} \mathbf{e}_{i} [/itex]. Then [tex] \sum_{i=1}^{n}a_{i} \mathbf{e}_{i} = \sum_{i=1}^{n}b_{i} \mathbf{e}_{i} [/tex] and so [tex] \sum_{i=1}^{n} \left( a_{i}-b_{i} \right) \mathbf{e}_{i} = \mathbf{0} [/tex] As the vectors [itex] \lbrace \mathbf{e}_{i} \rbrace_{i=1, \ldots , n} [/itex] form a basis, they are, by definition, linearly independent, and this implies that [itex] a_{i} = b_{i} \; \forall \; i = 1, \ldots , n [/itex]. Since the two representations were arbitrary, required only to both equal [itex]\mathbf{v}[/itex], any two sets of coefficients representing [itex]\mathbf{v}[/itex] must coincide. Hence there is exactly one set of scalars [itex] \lbrace a_{i} \rbrace [/itex] satisfying [itex] \mathbf{v} = \sum_{i=1}^{n}a_{i} \mathbf{e}_{i} [/itex].
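As a quick numerical sanity check of this uniqueness (just a minimal sketch using numpy; the basis and the vector below are arbitrary choices of mine):

[code]
import numpy as np

# Basis vectors of R^3 stored as the columns of E (an arbitrary
# invertible choice, hence linearly independent).
E = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

v = np.array([2.0, 3.0, 5.0])  # an arbitrary vector to represent

# The coordinates a satisfy E @ a = v; since E is invertible,
# np.linalg.solve returns the unique solution.
a = np.linalg.solve(E, v)
print(a)                      # the unique coordinates of v
print(np.allclose(E @ a, v))  # True: v = sum_i a_i e_i
[/code]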
 
  • #9
Yes, that proof is fine.

If you had been using a definition of "basis" that doesn't make it clear that every basis is a linearly independent set, you would also have had to prove linear independence.
 
  • #10
Thanks. Is the argument I gave at the end correct, i.e. that since [itex] a_{i} = b_{i} [/itex] holds for arbitrary scalars [itex] a_{i}, b_{i} [/itex] satisfying the required properties, there can actually be only one set of scalars?
 
  • #11
"Don't panic!" said:
Thanks. Is the argument I gave at the end correct, i.e. that since [itex] a_{i} = b_{i} [/itex] holds for arbitrary scalars [itex] a_{i}, b_{i} [/itex] satisfying the required properties, there can actually be only one set of scalars?
Yes, this is the standard way to prove uniqueness. If you know that x has property P, and you want to prove that nothing else does, you show that for all y with property P, we have y=x.
 
  • #12
Ok, cool. Thanks for your help.
 
  • #13
micromass said:
I don't see how this is correct. For example, ##S## can be infinite and then your correction doesn't apply. The definition in the OP is entirely correct.

The statement talks about one particular set of vectors. In the case S is infinite, the statement would have to say for every finite set of basis vectors.
"Don't panic!" said:
Do you mean the basis is ordered?

No. I just meant S was the given set. In general, when talking about vector spaces, ordering is not relevant.
 
  • #14
mathman said:
The statement talks about one particular set of vectors. In the case S is infinite, the statement would have to say for every finite set of basis vectors.

The OP never even mentioned basis vectors :confused:
 
  • #15
micromass said:
The OP never even mentioned basis vectors :confused:

You are right. However his question was about a particular (finite) set of vectors, which was the question I was addressing.
 
  • #16
mathman said:
No. I just meant S was the given set. In general, when talking about vector spaces, ordering is not relevant.

You're right for the finite-dimensional case, or for Hamel bases, where sums are finite, which is the case here. I know this is absurdly far off for this post, but for the sake of broader context, order does matter for Schauder bases, where convergence is conditional. And order matters when defining an isomorphism between linear maps and their representations as matrices.
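For instance, if [itex]\mathcal{B} = (\mathbf{e}_{1}, \mathbf{e}_{2})[/itex] and [itex]\mathcal{B}' = (\mathbf{e}_{2}, \mathbf{e}_{1})[/itex], the same vector [itex]\mathbf{v} = a\mathbf{e}_{1} + b\mathbf{e}_{2}[/itex] has coordinate vector [itex](a, b)[/itex] with respect to [itex]\mathcal{B}[/itex] but [itex](b, a)[/itex] with respect to [itex]\mathcal{B}'[/itex], so reordering the basis permutes the columns of the matrix representing a linear map.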
 

1. What is the definition of linear dependence of vectors?

Linear dependence of vectors refers to a relationship among two or more vectors in which at least one vector can be expressed as a linear combination of the others. In the simplest case of two vectors, this means one vector is a scalar multiple of the other.
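For example, the vectors (1, 2) and (2, 4) are linearly dependent because (2, 4) = 2(1, 2).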

2. How can you determine if a set of vectors is linearly dependent?

A set of vectors is linearly dependent if at least one of the vectors can be written as a linear combination of the others. In practice, this can be checked by forming a matrix whose columns are the vectors and computing its rank (or, for a square matrix, its determinant: a zero determinant indicates dependence), or equivalently by solving the corresponding homogeneous system of linear equations and looking for a nonzero solution.
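A minimal sketch of such a rank check using numpy (the vectors below are arbitrary examples):

[code]
import numpy as np

# Stack the vectors as rows of a matrix; the set is linearly
# dependent exactly when the rank is less than the number of vectors.
vectors = np.array([[1.0, 2.0, 3.0],
                    [2.0, 4.0, 6.0],   # twice the first vector
                    [0.0, 1.0, 1.0]])

rank = np.linalg.matrix_rank(vectors)
print(rank < len(vectors))  # True here: the set is linearly dependent
[/code]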

3. What is the significance of linear dependence in linear algebra?

Linear dependence is important in linear algebra because it describes how vectors are related and when they can be combined to form other vectors. It also underlies the solution of systems of linear equations and the notion of the rank of a matrix.

4. Can a set of linearly dependent vectors be linearly independent?

No, a set of vectors cannot be both linearly dependent and linearly independent; the two properties are mutually exclusive. If a set is linearly dependent, at least one of its vectors can be expressed as a linear combination of the others, which is exactly what linear independence rules out.

5. How can linear dependence of vectors be used in real-world applications?

Linear dependence of vectors is used in various fields such as engineering, physics, and computer science to model and solve real-world problems. For example, it can be used in analyzing forces and motion in physics or in creating algorithms for data analysis and machine learning.
