Relationship between linear (in)dependency and span

Summary
The discussion centers on the relationship between linear independence, linear dependence, and span in vector spaces. Linear dependence occurs when at least one vector can be expressed as a combination of others, indicating redundancy. The span of a set of vectors is the collection of all possible linear combinations of those vectors, and if one vector is dependent on others, it does not contribute to expanding the span. The equivalence between minimal spanning sets and maximal linearly independent sets is highlighted, emphasizing that a basis for a vector space must consist of linearly independent vectors that span the space. Ultimately, the dimension of a vector space is defined by the number of vectors in a basis, linking independence and span directly.
gummz
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependence, the linear combination of the vectors in question equals the null vector for some set of coefficients.
 
Yes, basically. That is the easiest way to test for linear dependence, but I don't think it is the most intuitive definition. I think of linear dependence as a redundancy: a set of vectors is linearly dependent if at least one of them can be expressed as a linear combination of the others.
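
If it helps to see that test in action, here is a quick NumPy sketch (the vectors A, B, C are made-up examples, not from the thread): dependence shows up as a rank deficiency, and the SVD even recovers nonzero coefficients whose combination gives the null vector.

```python
import numpy as np

# Made-up example vectors; C is deliberately a combination of A and B.
A = np.array([1.0, 0.0, 1.0])
B = np.array([0.0, 1.0, 1.0])
C = 2 * A + B

M = np.column_stack([A, B, C])  # one vector per column

# Dependent exactly when the rank is smaller than the number of vectors.
print(np.linalg.matrix_rank(M) < M.shape[1])  # True

# A right-singular vector for a (near-)zero singular value gives
# nontrivial coefficients c with M @ c ~ 0.
_, s, vt = np.linalg.svd(M)
c = vt[-1]
print(s[-1], M @ c)  # singular value ~0, combination ~0
```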

This is related to the span as follows. The span of (A, B, C) is the set of all linear combinations of A, B, and C. Now, if it happens that C = 2A + B (or something like that), then span(A, B, C) = span(A, B), because C is "redundant": it is a linear combination of A and B, so it already lies in span(A, B), and thus so does any linear combination involving it.
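
A small sketch of that redundancy, with the same assumed example vectors: adding C = 2A + B to the set does not raise the rank, which is exactly the statement span(A, B, C) = span(A, B).

```python
import numpy as np

A = np.array([1.0, 0.0, 1.0])
B = np.array([0.0, 1.0, 1.0])
C = 2 * A + B  # redundant by construction

# Equal ranks (together with span(A,B) being contained in span(A,B,C))
# mean the two spans coincide: C adds no new direction.
r_ab  = np.linalg.matrix_rank(np.column_stack([A, B]))
r_abc = np.linalg.matrix_rank(np.column_stack([A, B, C]))
print(r_ab, r_abc, r_ab == r_abc)  # 2 2 True
```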

This is important in seeing whether a set spans a vector space. If a vector space has "dimension" 3, it means 3 linearly independent vectors are required to generate it. But if three given vectors were not linearly independent, then at least one of them would be redundant, so their span would equal the span of fewer than 3 linearly independent vectors, and it could not generate that particular vector space.

For example, the complex plane can be thought of as a vector space over the reals. It is two-dimensional, and {1, i} spans the complex numbers, because any complex number is a real linear combination of 1 and i. However, it is easy to see that {1, 2} does not span the complex numbers, and that is because 1 and 2 are linearly dependent (2 = 2·1).
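
One way to check this concretely is to identify the complex numbers with R^2 over the reals, sending 1 to (1, 0) and i to (0, 1); this identification is my own illustration, not something computed in the thread.

```python
import numpy as np

one = np.array([1.0, 0.0])  # the complex number 1
i   = np.array([0.0, 1.0])  # the complex number i
two = np.array([2.0, 0.0])  # the complex number 2 = 2*1

# {1, i} has rank 2, so it spans the plane; {1, 2} has rank 1, so it
# only covers the real axis.
print(np.linalg.matrix_rank(np.column_stack([one, i])))    # 2
print(np.linalg.matrix_rank(np.column_stack([one, two])))  # 1
```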
 
gummz said:
So pretty frequently I encounter questions like

a) Are these vectors linearly independent?

b) Do they span all of R? Why?

As I understand linear dependence, the linear combination of the vectors in question equals the null vector for some set of coefficients.

I guess you mean R^n, or is R a generic vector space? For an n-dimensional vector space V, n linearly independent vectors (no fewer) form a basis for V. Similar to what 1 Mile Crash said, vectors v, w that are dependent live in the same one-dimensional subspace U of V, and so any linear combination av + bw stays in U, i.e., av + bw gives you no information outside of U.
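
A tiny sketch of that point, with invented numbers: if w is a multiple of v, every combination av + bw collapses to a multiple of v and never leaves the line U = span(v).

```python
import numpy as np

v = np.array([1.0, 2.0])
w = 3 * v                 # v and w are dependent
a, b = 0.7, -1.4          # arbitrary coefficients
combo = a * v + b * w     # equals (a + 3b) * v

# The combination adds no new direction to v: rank stays 1.
print(np.linalg.matrix_rank(np.column_stack([v, combo])))  # 1
```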
 
The following theorem makes a connection between span and linear independence: Let V be a vector space and let S be a subset of V. The following statements are equivalent:

(a) S is a minimal spanning set.
(b) S is a maximal linearly independent set.

The definition of "spanning set" is this: S is said to be a spanning set for V if S spans V, i.e., if every vector in V is a linear combination of finitely many vectors in S. (a) means that no proper subset of S is a spanning set. (b) means that S is not a proper subset of any linearly independent set.

The theorem is sometimes used in the definition of "basis": S is said to be a Hamel basis, or just a basis, for V, if it satisfies the equivalent conditions of the theorem.
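
The equivalence also suggests an algorithm, sketched below with assumed example vectors: walk through S and keep a vector only if it enlarges the span. The survivors are independent (nothing kept is a combination of earlier keepers) and they span span(S) with nothing to spare, illustrating (a) and (b) at once.

```python
import numpy as np

S = [np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0]),
     np.array([2.0, 1.0, 3.0]),   # = 2*S[0] + S[1], redundant
     np.array([0.0, 0.0, 1.0])]

basis = []
for v in S:
    candidate = np.column_stack(basis + [v])
    if np.linalg.matrix_rank(candidate) > len(basis):
        basis.append(v)  # v adds a new direction, so keep it

print(len(basis))  # 3: the redundant third vector was dropped
```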
 
gummz said:
for some set of coefficients.

such that not all of the coefficients are zero.
 
One way of looking at "independence" and "span" is that they are opposites, in a sense. Given an n-dimensional vector space, V, a set containing a single nonzero vector is certainly independent. It might be possible to add more vectors and still have independence, but you need to be careful about that!

On the other hand, if we take a large enough set of vectors, perhaps all of V itself, the set will certainly span the space. We might be able to drop some vectors and still have a spanning set.

Now, the whole point of "dimensionality" is that we can, in fact, keep dropping vectors from that spanning set until we have the smallest possible spanning set, and we can keep adding vectors to that independent set until we have the largest possible independent set. We can then use the fact that a system of homogeneous linear equations always has at least one solution (the trivial one) and, under certain conditions, exactly one, to show that the smallest spanning set must be independent, that the largest independent set must be spanning, and, in fact, that every set of vectors which is both independent and spanning must contain the same number of vectors: the "dimension" of the vector space.
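
Here is a rough illustration of both directions (the vectors are invented, and a rank test stands in for solving the homogeneous system): shrinking a spanning set of R^3 and growing an independent set both stop at the same size, 3.

```python
import numpy as np

def rank(vs):
    return np.linalg.matrix_rank(np.column_stack(vs)) if vs else 0

# Shrink a spanning set: drop any vector whose removal keeps the rank.
spanning = [np.array(v) for v in
            [[1., 0., 0.], [0., 1., 0.], [1., 1., 0.], [0., 0., 1.]]]
minimal = list(spanning)
for v in spanning:
    rest = [w for w in minimal if w is not v]
    if rank(rest) == rank(minimal):
        minimal = rest  # v was redundant

# Grow an independent set: add a vector whenever it raises the rank.
independent = [np.array([1., 1., 0.])]
for e in np.eye(3):
    if rank(independent + [e]) > rank(independent):
        independent.append(e)

print(len(minimal), len(independent))  # 3 3
```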
 