# Relationship between linear (in)dependency and span

> So pretty frequently I encounter questions like
>
> a) Are these vectors linearly independent?
>
> b) Do they span all of R? Why?
>
> As I understand linear dependency, the linear combination of the vectors in question exists as the Null vector for some set of coefficients.

Yes, basically. That is the easiest way to test for linear dependence, but I don't think it is the most intuitive definition. I think of linear dependence as redundancy: a set of vectors is linearly dependent if at least one of them can be expressed as a linear combination of the others.
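Both formulations are easy to check numerically: a set of vectors is dependent exactly when the matrix having them as columns has rank smaller than the number of vectors. A quick sketch with NumPy (the vectors here are just made-up examples):

```python
import numpy as np

# Three vectors in R^3; v3 is built as 2*v1 + v2, so the set is dependent.
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = 2 * v1 + v2

M = np.column_stack([v1, v2, v3])
# Rank < number of columns  <=>  some nontrivial combination gives the zero vector.
print(np.linalg.matrix_rank(M))               # 2, so the three vectors are dependent
print(np.linalg.matrix_rank(M) < M.shape[1])  # True
```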

This is related to the span as follows. The span of (A, B, C) is the set of all linear combinations of A, B, and C. Now, if it happens that C = 2A + B (say), then span(A, B, C) = span(A, B), because C is "redundant": it is a linear combination of A and B, so it already lies in span(A, B), and hence so does any linear combination involving it.
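The claim span(A, B, C) = span(A, B) can be seen numerically: adding the redundant column C does not change the rank. A small check with made-up vectors:

```python
import numpy as np

A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = 2 * A + B  # C is redundant: a linear combination of A and B

# Equal ranks: C adds no new directions, so span(A, B, C) = span(A, B).
rank_ABC = np.linalg.matrix_rank(np.column_stack([A, B, C]))
rank_AB = np.linalg.matrix_rank(np.column_stack([A, B]))
print(rank_ABC, rank_AB)        # 2 2
print(rank_ABC == rank_AB)      # True
```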

This is important for checking whether a set spans a vector space. If a vector space has dimension 3, then 3 linearly independent vectors are required to generate it. But if three given vectors are not linearly independent, then at least one of them is redundant, so their span equals the span of fewer than 3 linearly independent vectors, and it cannot generate that vector space.
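Concretely, three vectors span R^3 exactly when the 3x3 matrix of column vectors has rank 3. A sketch contrasting a spanning set with a dependent one (again, the vectors are illustrative):

```python
import numpy as np

# Rank 3: these three independent vectors span R^3.
spanning = np.column_stack([[1.0, 0, 0], [0, 1.0, 0], [1.0, 1.0, 1.0]])
# Rank 2: the third column equals the first plus the second, so no spanning.
dependent = np.column_stack([[1.0, 0, 0], [0, 1.0, 0], [1.0, 1.0, 0]])

print(np.linalg.matrix_rank(spanning))   # 3
print(np.linalg.matrix_rank(dependent))  # 2
```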

For example, the complex numbers can be thought of as a vector space over the reals. It is two-dimensional, and 1, i span it, because any complex number is a linear combination of 1 and i. However, it is easy to see that 1, 2 do not span the complex numbers, and that is because 1 and 2 are linearly dependent.
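This example can be checked by identifying a + bi with the column (a, b) in R^2; then 1 and i become the standard basis vectors, while 1 and 2 become parallel columns:

```python
import numpy as np

# Identify a + b*i with the column vector (a, b) in R^2.
one = np.array([1.0, 0.0])   # the complex number 1
i_unit = np.array([0.0, 1.0])  # the complex number i
two = np.array([2.0, 0.0])   # the complex number 2

print(np.linalg.matrix_rank(np.column_stack([one, i_unit])))  # 2: {1, i} spans C over R
print(np.linalg.matrix_rank(np.column_stack([one, two])))     # 1: {1, 2} is dependent
```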

WWGD
Gold Member
> So pretty frequently I encounter questions like
>
> a) Are these vectors linearly independent?
>
> b) Do they span all of R? Why?
>
> As I understand linear dependency, the linear combination of the vectors in question exists as the Null vector for some set of coefficients.

I guess you mean R^n, or is R a generic vector space? For an n-dimensional vector space V, n linearly independent vectors (no fewer than that) form a basis for V. Similar to what 1MileCrash said, dependent vectors v, w live in the same subspace U of V, so any linear combination av + bw stays in U; i.e., av + bw gives you no information outside of U.
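The point that av + bw never leaves U can be illustrated with a dependent pair: if w is a multiple of v, every combination av + bw is still a multiple of v. A small check (the numbers are arbitrary):

```python
import numpy as np

v = np.array([1.0, 2.0])
w = 3 * v  # w is dependent on v, so both live on the line U = span(v)

combo = 4 * v + 5 * w  # an arbitrary combination a*v + b*w
# combo is still a multiple of v: the matrix [v, combo] has rank 1.
print(np.linalg.matrix_rank(np.column_stack([v, combo])))  # 1
```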

Fredrik
Staff Emeritus
Gold Member
The following theorem makes a connection between span and linear independence: Let V be a vector space and let S be a subset of V. The following statements are equivalent:

(a) S is a minimal spanning set.
(b) S is a maximal linearly independent set.

S is said to be a spanning set for V if S spans V. Condition (a) means that no proper subset of S is a spanning set. Condition (b) means that S is not a proper subset of any linearly independent set.

The theorem is sometimes used in the definition of "basis": S is said to be a Hamel basis, or just a basis, for V, if it satisfies the equivalent conditions of the theorem.
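Both conditions of the theorem can be exercised numerically on a basis: removing any vector drops the rank (so the spanning set is minimal), and appending any further vector makes the set dependent (so the independent set is maximal). A sketch using the standard basis of R^2:

```python
import numpy as np

basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # a basis of R^2

# (a) Minimal spanning set: removing either vector loses the span (rank drops to 1).
for k in range(2):
    reduced = [v for j, v in enumerate(basis) if j != k]
    print(np.linalg.matrix_rank(np.column_stack(reduced)))  # 1 each time

# (b) Maximal independent set: adding any further vector creates a dependence.
extra = np.array([3.0, 4.0])
M = np.column_stack(basis + [extra])
print(np.linalg.matrix_rank(M) < M.shape[1])  # True: 3 vectors, rank only 2
```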

Stephen Tashi
> for some set of coefficients.

...such that not all of the coefficients are zero.

HallsofIvy