Relationship between linear (in)dependency and span

In summary, the thread discusses the concepts of linear dependence and span in vector spaces. The standard test for linear dependence is to look for a nontrivial linear combination of the vectors (one with not all coefficients zero) that equals the zero vector. Linear dependence can also be seen as redundancy in a set of vectors, where one vector can be expressed as a linear combination of the others. The span of a set of vectors is the set of all linear combinations of those vectors. A set of vectors is said to be a basis for a vector space if it is both independent and spanning. The concept of dimensionality ties these together: the smallest spanning set is independent, the largest independent set is spanning, and both have the same number of vectors, known as the dimension of the vector space.
  • #1
gummz
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependency, the linear combination of the vectors in question exists as the Null vector for some set of coefficients.
 
  • #2
Yes, basically. That is the easiest way to test for linear dependence, but I don't think it is the most intuitive definition. I think of linear dependence as a redundancy - a set of vectors is linearly dependent if at least one of them can be expressed as a linear combination of the others.
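To make the "nontrivial combination equals the zero vector" test concrete, here is a small sketch in plain Python (my own illustration, not from the thread; the function names are made up). It row-reduces the vectors with exact rational arithmetic and declares them dependent exactly when the rank falls short of the number of vectors:

```python
from fractions import Fraction

def rank(vectors):
    """Row-reduce a list of vectors (Gaussian elimination) and count
    the pivots found; that count is the rank."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0  # index of the next pivot row
    for col in range(len(rows[0]) if rows else 0):
        # find a row at or below r with a nonzero entry in this column
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        # eliminate this column from all rows below the pivot
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def linearly_dependent(vectors):
    # dependent iff some nontrivial combination is the zero vector,
    # which happens exactly when rank < number of vectors
    return rank(vectors) < len(vectors)
```

For example, linearly_dependent([(1, 2), (2, 4)]) is True, while linearly_dependent([(1, 0), (0, 1)]) is False.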

This is related to the span as follows. The span of (A, B, C) is the set of all linear combinations of A, B, and C. Now, if it happens that C = 2A + B (or something), then span(A, B, C) = span(A, B), because C is "redundant": it is a linear combination of A and B, so it already lies in span(A, B), and so does any linear combination involving it.

This is important for checking whether a set spans a vector space. If a vector space has "dimension" 3, it means 3 linearly independent vectors are required to generate it. But if three given vectors are not linearly independent, then at least one of them is redundant, so their span equals the span of fewer than 3 linearly independent vectors, and it cannot generate that particular vector space.

For example, the complex numbers can be thought of as a vector space over the reals. It is two-dimensional, and {1, i} spans it, because any complex number is a linear combination of 1 and i with real coefficients. However, it is easy to see that {1, 2} does not span the complex numbers, and that is because 1 and 2 are linearly dependent.
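To see the 1, i versus 1, 2 example numerically, treat each complex number as the real pair (re, im); then two vectors span the plane exactly when the 2x2 determinant they form is nonzero. (A quick sketch of mine, not from the thread; the names are made up.)

```python
def det2(v, w):
    """2x2 determinant of the matrix with rows v and w; nonzero exactly
    when v and w are linearly independent, i.e. when they span the plane."""
    return v[0] * w[1] - v[1] * w[0]

# complex numbers represented as real pairs (re, im)
one, i = (1, 0), (0, 1)
two = (2, 0)

print(det2(one, i))    # 1 -> {1, i} independent, spans C over R
print(det2(one, two))  # 0 -> {1, 2} dependent, spans only the real line
```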
 
  • #3
gummz said:
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependency, the linear combination of the vectors in question exists as the Null vector for some set of coefficients.

I guess you mean R^n, or is R a generic vector space? For an n-dimensional vector space V, n linearly independent vectors -- no fewer than that -- form a basis for V. Similar to what 1 Mile Crash said, dependent vectors v, w live in the same line (one-dimensional subspace) U of V, and so any linear combination av + bw stays in U, i.e., av + bw gives you no information outside of U.
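A tiny illustration of that last point (my own sketch, with made-up names): take dependent vectors v, w with w = 2v; every combination av + bw then stays on the line through v.

```python
# v and w are dependent: w = 2*v, so both lie on the line through (1, 2)
v, w = (1, 2), (2, 4)

def combo(a, b):
    """Compute a*v + b*w componentwise."""
    return tuple(a * x + b * y for x, y in zip(v, w))

# every combination is a multiple of (1, 2): the second coordinate is
# always twice the first, so we never leave the line U
for a, b in [(1, 0), (0, 1), (3, -2), (5, 7)]:
    x, y = combo(a, b)
    assert y == 2 * x
```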
 
  • #4
The following theorem makes a connection between span and linear independence: Let V be a vector space and let S be a subset of V. The following statements are equivalent:

(a) S is a minimal spanning set.
(b) S is a maximal linearly independent set.

Here "spanning set" means: S is a spanning set for V if S spans V. (a) means that no proper subset of S is a spanning set. (b) means that S is not a proper subset of any linearly independent set.

The theorem is sometimes used in the definition of "basis": S is said to be a Hamel basis, or just a basis, for V, if it satisfies the equivalent conditions of the theorem.
 
  • #5
gummz said:
for some set of coefficients.

such that not all of the coefficients are zeroes.
 
  • #6
One way of looking at "independence" and "span" is that they are opposites, in a sense. Given an n-dimensional vector space, V, a set containing a single nonzero vector is certainly independent. It might be possible to add more vectors and still have independence, but you need to be careful about that!

On the other hand, if we take a large enough set of vectors, perhaps all of V itself, the set will certainly span the space. We might be able to drop some vectors and still have a spanning set.

Now, the whole point of "dimensionality" is that we can, in fact, keep dropping vectors from that spanning set until we have the smallest possible spanning set, and keep adding vectors to that independent set until we have the largest possible independent set. We can then use the fact that a system of homogeneous linear equations always has at least one solution, and under certain conditions exactly one, to show that the smallest spanning set must be independent and the largest independent set must be spanning. In fact, every set of vectors that is both independent and spanning must contain the same number of vectors - the "dimension" of the vector space.
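The "keep dropping vectors until the spanning set is smallest" idea can be sketched in plain Python (my own sketch, hypothetical helper names, exact arithmetic via fractions). A vector is dropped whenever removing it leaves the rank, and hence the span, unchanged:

```python
from fractions import Fraction

def rank(vectors):
    """Rank via Gaussian elimination with exact rational arithmetic."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0]) if rows else 0):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            f = rows[i][col] / rows[r][col]
            rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

def shrink_to_basis(vectors):
    """Keep dropping vectors that do not lower the rank (the redundant
    ones) until what is left is a minimal spanning set."""
    basis = list(vectors)
    changed = True
    while changed:
        changed = False
        for v in basis:
            rest = [w for w in basis if w is not v]
            if rank(rest) == rank(basis):  # v is redundant, drop it
                basis = rest
                changed = True
                break
    return basis
```

For instance, shrinking [(1, 0), (0, 1), (1, 1)] in R^2 leaves two vectors: a minimal spanning set, which is also a maximal independent set, of size 2 - the dimension.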
 

1. What is the definition of linear (in)dependency?

Linear (in)dependency refers to the relationship between vectors in a vector space. A set of vectors is considered linearly dependent if one of the vectors can be written as a linear combination of the others. Conversely, a set of vectors is considered linearly independent if no vector can be written as a linear combination of the others.

2. How does linear (in)dependency affect the span of a set of vectors?

The span of a set of vectors depends on the linear (in)dependency of the vectors. If n vectors are linearly dependent, their span is a subspace of dimension less than n. If they are linearly independent, their span is an n-dimensional subspace - and it is the entire vector space exactly when n equals the dimension of the space.

3. Can a set of linearly dependent vectors span a higher-dimensional subspace?

No - a set of n linearly dependent vectors spans a subspace of dimension strictly less than n. This is because if one vector can be written as a linear combination of the others, it is redundant and does not add any new information to the span.

4. How can you determine if a set of vectors is linearly dependent or independent?

To determine linear (in)dependency, you can use the following methods: 1) Gaussian elimination to reduce the matrix formed by the vectors to row-echelon form and count the nonzero rows, 2) calculating the determinant of the matrix formed by the vectors (square matrices only: a nonzero determinant means independent), or 3) checking directly whether any vector is a linear combination of the others.
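As an illustration of method 2 (my own sketch, not part of the original answer): for three vectors in R^3, a nonzero 3x3 determinant means independent, while a zero determinant means dependent.

```python
def det3(m):
    """3x3 determinant by cofactor expansion along the first row;
    nonzero exactly when the three rows are linearly independent."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

independent = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
dependent = ((1, 2, 3), (2, 4, 6), (0, 1, 1))  # row 2 = 2 * row 1

print(det3(independent))  # 1 -> independent
print(det3(dependent))    # 0 -> dependent
```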

5. Why is the concept of linear (in)dependency important in linear algebra?

Linear (in)dependency is a fundamental concept in linear algebra because it allows us to understand the relationships between vectors in a vector space. It is also essential in solving systems of linear equations and in determining the dimensionality of vector spaces. Additionally, it plays a crucial role in applications such as data analysis, computer graphics, and machine learning.
