Relationship between linear (in)dependency and span


Discussion Overview

The discussion revolves around the relationship between linear independence and span in vector spaces, exploring definitions, implications, and examples. Participants address theoretical aspects, practical implications, and theorems related to these concepts.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants define linear dependence as a situation where at least one vector can be expressed as a linear combination of others, suggesting this indicates redundancy in the set.
  • Others argue that the span of a set of vectors is the collection of all possible linear combinations of those vectors, and if one vector is dependent on others, it does not contribute to expanding the span.
  • A participant mentions that in an n-dimensional vector space, n linearly independent vectors are necessary to form a basis, and dependent vectors do not provide information outside their subspace.
  • One participant presents a theorem connecting span and linear independence, stating that a minimal spanning set is equivalent to a maximal linearly independent set, which is sometimes used to define a basis.
  • Another viewpoint suggests that independence and span can be seen as opposites, where adding vectors can lead to dependence while having a sufficiently large set can ensure spanning.
  • There is a discussion about the dimensionality of vector spaces, indicating that the smallest spanning set must be independent and the largest independent set must span the space, with the dimension being the number of vectors in both cases.

Areas of Agreement / Disagreement

Participants express various interpretations and definitions of linear independence and span, with no consensus reached on a singular definition or understanding. Multiple competing views remain regarding the implications of these concepts.

Contextual Notes

Some statements rely on specific definitions of vector spaces and dimensionality, which may not be universally agreed upon. The discussion includes assumptions about the nature of linear combinations and their implications for span and independence.

gummz
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependency, a linear combination of the vectors in question equals the null vector for some set of coefficients.
 
Yes, basically. That is the easiest way to test for linear dependence, but I don't think it is the most intuitive definition. I think of linear dependence as a redundancy: a set of vectors is linearly dependent if at least one of them can be expressed as a linear combination of the others.

This is related to the span as follows. The span of (A, B, C) is the set of all linear combinations of A, B, and C. Now, if it happens that C = 2A + B (or something), then span(A, B, C) = span(A, B), because C is "redundant" in that it is a linear combination of A and B; C already lies in span(A, B), and thus so does any linear combination involving it.

This is important for seeing whether a set spans a vector space, as follows. If a vector space has "dimension" 3, it means 3 linearly independent vectors are required to generate it. But if those three vectors were not linearly independent, then at least one of them would be redundant, so their span would equal the span of fewer than 3 linearly independent vectors, and it could not generate that particular vector space.

For example, the complex numbers can be thought of as a vector space over the reals. It is two-dimensional, and {1, i} spans it, because any complex number is a linear combination of 1 and i. However, it is easy to see that {1, 2} does not span the complex numbers, and that is because 1 and 2 are linearly dependent.
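The redundancy in the example above can be checked numerically. The sketch below is not from the thread: the rank helper and the particular vectors A, B, C are illustrative choices (with C = 2A + B), using exact rational arithmetic so there are no rounding issues. Adding the redundant vector C does not raise the rank, which is exactly the statement span(A, B, C) = span(A, B).

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of row vectors, via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [1, 0, 2]
B = [0, 1, 1]
C = [2, 1, 5]            # C = 2A + B, so C is "redundant"
print(rank([A, B]))      # 2
print(rank([A, B, C]))   # 2 -- adding C does not enlarge the span
```

The same rank test answers both questions from the opening post: k vectors in R^n are linearly independent iff their rank is k, and they span R^n iff their rank is n.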
 
gummz said:
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependency, a linear combination of the vectors in question equals the null vector for some set of coefficients.

I guess you mean R^n, or is R a generic vector space? For an n-dimensional vector space V, n linearly independent vectors, no fewer than that, form a basis for V. Similar to what 1 Mile Crash said, vectors v, w that are dependent live in the same subspace U of V, and so any linear combination av+bw stays in U, i.e., av+bw gives you no information outside of U.
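A quick check of the "stays in U" point, with made-up numbers (w = 3v, so v and w are dependent and U = span(v) is a line):

```python
# Illustrative vectors: w = 3v, so both lie in the one-dimensional subspace U = span(v).
v = [1, 2]
w = [3, 6]
a, b = 5, -2
combo = [a * x + b * y for x, y in zip(v, w)]  # av + bw, componentwise
# Since w = 3v, combo = (a + 3b) * v: still a scalar multiple of v, i.e. still in U.
# Cross-multiplication checks proportionality to v:
print(combo[0] * v[1] == combo[1] * v[0])  # True
```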
 
The following theorem makes a connection between span and linear independence: Let V be a vector space and let S be a subset of V. The following statements are equivalent:

(a) S is a minimal spanning set.
(b) S is a maximal linearly independent set.

The definition of "spanning set": S is said to be a spanning set for V if S spans V, i.e., if every vector in V is a linear combination of vectors in S. (a) means that no proper subset of S is a spanning set. (b) means that S is not a proper subset of any linearly independent set.

The theorem is sometimes used in the definition of "basis": S is said to be a Hamel basis, or just a basis, for V, if it satisfies the equivalent conditions of the theorem.
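The two equivalent conditions of the theorem can be illustrated with a small computation. This is only a sketch with assumed example vectors (the standard basis of R^2); the rank helper is the usual Gaussian-elimination rank, not anything from the thread.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of row vectors, via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

basis = [[1, 0], [0, 1]]   # the standard basis of R^2
# (a) Minimal spanning set: remove any vector and the rest no longer spans R^2.
print(all(rank([v for v in basis if v is not u]) < 2 for u in basis))  # True
# (b) Maximal independent set: append any further vector and independence fails.
extra = [3, 5]
print(rank(basis + [extra]) < len(basis) + 1)  # True
```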
 
gummz said:
for some set of coefficients.

such that not all of the coefficients are zeroes.
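Putting the two posts together, the complete definition: a set of vectors is linearly dependent if some linear combination with coefficients that are not all zero equals the null vector. A tiny check with made-up vectors:

```python
# Illustrative: v1 + v2 - v3 = 0 with coefficients (1, 1, -1), not all zero,
# so {v1, v2, v3} is linearly dependent.
v1, v2, v3 = [1, 0], [0, 1], [1, 1]
c = [1, 1, -1]
combo = [sum(ci * vi[j] for ci, vi in zip(c, (v1, v2, v3))) for j in range(2)]
print(combo)  # [0, 0] -- the null vector
```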
 
One way of looking at "independence" and "span" is that they are opposites, in a sense. Given an n-dimensional vector space, V, a set containing a single nonzero vector is certainly independent. It might be possible to add more vectors and still have independence, but you need to be careful about that!

On the other hand, if we take a large enough set of vectors, perhaps all of V itself, the set will certainly span the space. We might be able to drop some vectors and still have a spanning set.

Now, the whole point of "dimensionality" is that we can, in fact, keep dropping vectors from that spanning set until we have the smallest possible spanning set, and we can keep adding vectors to that independent set until we have the largest possible independent set. We can then use the fact that a system of homogeneous linear equations must have at least one solution, and under certain conditions exactly one, to show that the smallest spanning set must be independent and the largest independent set must be spanning. In fact, every set of vectors that is both independent and spanning must contain the same number of vectors: the "dimension" of the vector space.
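The "keep dropping vectors" procedure described above can be sketched as a greedy prune. This is plain Python; the rank helper and the deliberately redundant spanning set of R^3 are illustrative assumptions, not from the thread.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a list of row vectors, via Gaussian elimination over exact rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def prune(vectors):
    """Greedily keep only vectors that raise the rank, i.e. drop the redundant ones."""
    kept = []
    for v in vectors:
        if rank(kept + [v]) > rank(kept):
            kept.append(v)
    return kept

# A deliberately redundant spanning set of R^3 (illustrative)
spanning = [[1, 0, 0], [2, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]
minimal = prune(spanning)
print(len(minimal))  # 3 -- the smallest spanning set has dim(R^3) vectors
```

Whatever order the redundant vectors are listed in, the pruned set is independent, still spans, and always ends up with exactly 3 vectors.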
 
