# Span of a Set of Vectors

Staff Emeritus

## Main Question or Discussion Point

I'm trying to figure out something regarding the span of a set of vectors. Say we have a set of vectors $V=\{v_1, v_2, ... v_k\}$ in ℝn. If k < n then the set does not span ℝn. Why is this? Are there vectors in ℝn that aren't combinations of the vectors in $V$?

fresh_42
Mentor
I'm trying to figure out something regarding the span of a set of vectors. Say we have a set of vectors $V=\{v_1, v_2, ... v_k\}$ in ℝn. If k < n then the set does not span ℝn. Why is this? Are there vectors in ℝn that aren't combinations of the vectors in $V$?
Take numbers. If you want to span $\mathbb{R}^3$, can it be done by two vectors: length and width? This only applies in the case $k < n$. If we have $n$ or more vectors, we cannot say anything: they could, for example, all be different multiples of just one vector, in which case they still span only a line, however many there are. But if we have fewer, then something is definitely missing. In the example, the height.

Staff Emeritus
Take numbers. If you want to span $\mathbb{R}^3$, can it be done by two vectors: length and width?
So, vectors in ℝ3 consist of 3 entries, such as $\begin{bmatrix} 1 \\ 1 \\ 1\end{bmatrix}$.
If we have a set of two vectors, such as $S = \left\{\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 0\end{bmatrix}\right\}$, then the vector $\begin{bmatrix} 0 \\ 0 \\ 1\end{bmatrix}$ would not be a linear combination of the first two. And since the latter vector is in ℝ3, that means $S$ does not span ℝ3.
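The arithmetic can be checked directly (a minimal Python sketch; the helper name `combo` is just for illustration):

```python
# Every linear combination of the two vectors in S keeps a zero third entry,
# so (0, 0, 1) can never be produced. (`combo` is just an illustrative name.)
v1 = (1, 0, 0)
v2 = (0, 1, 0)

def combo(a, b):
    """Return a*v1 + b*v2, componentwise."""
    return tuple(a * x + b * y for x, y in zip(v1, v2))

print(combo(3, -2))               # (3, -2, 0): the third entry is always 0
print(combo(3, -2) == (0, 0, 1))  # False, for every choice of a and b
```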

That right?

fresh_42
Mentor
Yes.
All linear combinations of vectors in $S$ have the third component zero. They span the $(x,y)$ plane, but only the plane at height $z=0$.

Staff Emeritus
Thanks Fresh. That makes sense.

Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure as how to go about proving it.

fresh_42
Mentor
This is usually the more difficult kind of proof: showing that something cannot be done. So it is important to list what is given. What is $\mathbb{R}^3$? A real vector space of three dimensions? And what is dimension? Is $\mathbb{R}^3$ the set of all real triplets $(a,b,c)$? The $\mathbb{R}$-linear span of three linearly independent vectors?

See, to answer the question, it first must be clear what has to be shown. A proof could go: Assume $\mathbb{R}^3$ is (linearly) spanned by two vectors $v_1,v_2\in \mathbb{R}^3$. Then every $w \in \mathbb{R}^3$ can be written as $w=\alpha v_1 + \beta v_2$. The goal is to show that there is a vector in $\mathbb{R}^3$ which cannot be written this way. Therefore the question "What is $\mathbb{R}^3$?" has to be answered first. If we take the triplet definition, then a proof amounts to a row echelon procedure applied to $v_1,v_2,w$, followed by a choice of the third component of $w$ which cannot be reached by any choice of $\alpha, \beta$.
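That elimination argument can be sketched in Python (a minimal sketch using exact rational arithmetic; the function name `solvable` is a hypothetical helper):

```python
from fractions import Fraction

def solvable(v1, v2, w):
    """Try to solve a*v1 + b*v2 = w in R^3 by elimination; True iff consistent."""
    # Build the augmented matrix [v1 v2 | w] with exact rational entries.
    M = [[Fraction(v1[i]), Fraction(v2[i]), Fraction(w[i])] for i in range(3)]
    row = 0
    for col in range(2):                 # eliminate in the two unknown columns
        pivot = next((r for r in range(row, 3) if M[r][col] != 0), None)
        if pivot is None:
            continue
        M[row], M[pivot] = M[pivot], M[row]
        for r in range(3):               # clear the column above and below
            if r != row and M[r][col] != 0:
                f = M[r][col] / M[row][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[row])]
        row += 1
    # Inconsistent iff some row reads [0 0 | nonzero].
    return not any(M[r][0] == M[r][1] == 0 and M[r][2] != 0 for r in range(3))

print(solvable((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: w is unreachable
print(solvable((1, 0, 0), (0, 1, 0), (2, 3, 0)))  # True: w lies in the plane
```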

WWGD
Gold Member
2019 Award
Ok, how about this, as an argument for why 2 vectors $v_1,v_2$ are not enough:

1) Find the cross product $w:= v_1 \times v_2$. Then $w$ is perpendicular to $v_1, v_2$; both $v_1, v_2$ are in the plane normal to $w$ passing through $v_1, v_2$. For any two points there is a unique plane going through them.

2) Consider the sum $c_1v_1+c_2v_2$. This is the diagonal of a parallelogram (EDIT: remember how the sum of vectors is defined, via the parallelogram rule/law), contained in the plane normal to $w$ that contains $v_1, v_2$. The sum will therefore not "escape" outside of the plane.
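Both steps can be illustrated numerically (a minimal Python sketch; the vectors $(1,2,3)$ and $(4,5,6)$ are arbitrary independent examples, not from the thread):

```python
def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v1, v2 = (1, 2, 3), (4, 5, 6)   # arbitrary linearly independent example vectors
w = cross(v1, v2)

# w is perpendicular to both generators...
print(dot(w, v1), dot(w, v2))   # 0 0
# ...and to every combination c1*v1 + c2*v2, which stays in the plane:
c1, c2 = 7, -4
combo = tuple(c1 * a + c2 * b for a, b in zip(v1, v2))
print(dot(w, combo))            # 0
# but w itself is not in the plane, since it is not perpendicular to itself:
print(dot(w, w))                # 54, i.e. nonzero
```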

SSequence
Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure as how to go about proving it.
I think one can use the general result (if one is allowed to do so) that any two different bases B1 and B2 for a subspace W of ℝn (finite n) will have an equal number of vectors (this result allows the dimension of W to be uniquely defined).

A basis B for W means a finite set of vectors in W so that:
-- B is linearly independent
-- span(B)=W

Now if one takes the subspace W as equal to ℝ3 itself, then this translates to:
Any two different bases for ℝ3 will have 3 vectors (since we already know at least one basis for ℝ3 with 3 vectors).

========

But this leaves the possibility that there could be some other set A which is not a basis, such that span(A) equals ℝ3 and the number of vectors in A is less than 3.
However, for any such set A there will be another set B such that:
-- B⊂A
-- B is linearly independent
-- span(B)=span(A) (which is just equal to ℝ3)

This means that B is a basis for ℝ3 and hence must have 3 vectors. Since B⊂A, A must have at least 3 vectors.

=============

More directly (probably a somewhat less rigorous phrasing of post #7), one can use the reasoning for ℝ3 that any two linearly independent vectors uniquely identify a plane (and if they are dependent, they just identify a line through the origin in ℝ3). So the set of all linear combinations will be either:
-- a line through the origin in ℝ3
-- a plane (which also includes the origin) in ℝ3

PeroK
Homework Helper
Gold Member
Thanks Fresh. That makes sense.

Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure as how to go about proving it.
If you want a geometric argument, then any two vectors lie in a plane. All linear combinations of these two vectors must lie on the same plane, so cannot span all of $\mathbb{R}^3$.

WWGD
Gold Member
2019 Award
If you want a geometric argument, then any two vectors lie in a plane. All linear combinations of these two vectors must lie on the same plane, so cannot span all of $\mathbb{R}^3$.
Great minds think alike ; ) , see my #7.

Staff Emeritus
Ok, how about this, as an argument for why 2 vectors $v_1,v_2$ are not enough:

1) Find the cross product $w:= v_1 \times v_2$. Then $w$ is perpendicular to $v_1, v_2$; both $v_1, v_2$ are in the plane normal to $w$ passing through $v_1, v_2$. For any two points there is a unique plane going through them.

2) Consider the sum $c_1v_1+c_2v_2$. This is the diagonal of a parallelogram (EDIT: remember how the sum of vectors is defined, via the parallelogram rule/law), contained in the plane normal to $w$ that contains $v_1, v_2$. The sum will therefore not "escape" outside of the plane.
Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of $S$, where $S$ is any two vectors in ℝ3, right?

While I can't take the visualization any further than ℝ3, I can accept that this holds true for any ℝn.

@SSequence thanks for the explanation. I can see now that any basis for ℝn requires exactly n vectors. Any more and the set is not linearly independent. Any fewer and the set does not span ℝn.

WWGD
Gold Member
2019 Award
Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of $S$, where $S$ is any two vectors in ℝ3, right?

Yes; take any point $p$ you know is in the plane and translate it by any non-zero amount; the point will not be in the plane. So you can form an infinite collection $\{p+ c \}$ where $p$ is in the plane and $c \in \mathbb R^3-\{0\}$, where $0$ is the zero vector in $\mathbb R^3$. EDIT: And notice you get an infinite collection _for each $p$_ in the plane. Actually, in a sense, "most" points, by many measures, are _not_ in the plane.

Erland
In general, if $V$ is a vector space and $S$ is a set of $m$ vectors in $V$ which span $V$, and $T$ is a linearly independent set of $n$ vectors in $V$, then $m \ge n$. (A consequence of this is that all bases for a vector space $V$ have the same number of elements, this common number is called the dimension of $V$. The reason is that, by definition, the set of vectors in a basis for $V$ both span $V$ and is linearly independent, so if two bases have $m$ and $n$ elements, respectively, then both $m\ge n$ and $m\le n$, so $m=n$.)
Now, the standard basis in $\mathbb R^n$: $\{\mathbf e_1,\mathbf e_2,\dots \mathbf e_n\}$, is linearly independent, by definition, so every set of vectors in $\mathbb R^n$ which span $\mathbb R^n$ must have at least $n$ elements.

Mark44
Mentor
Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both
This isn't true in general. If u = <1, 0, 1> and v = <2, 0, 2>, there are infinitely many vectors that are perpendicular to both of them. The two vectors I gave determine only a line, not a plane.
, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of $S$, where $S$ is any two vectors in ℝ3, right?
Well, yes. The span of the vectors in your set S of post #3 determines a plane; namely, the x-y plane. Any vector that "pokes" up out of this plane (i.e., has a nonzero z-coordinate) won't lie in this plane.
While I can't take the visualization any further than ℝ3, I can accept that this holds true for any ℝn.

@SSequence thanks for the explanation. I can see now that any basis for ℝn requires exactly n vectors. Any more and the set is not linearly independent. Any fewer and the set does not span ℝn.
But, we could have a set of n + 1 vectors that span $\mathbb R^n$ (i.e., any vector in $\mathbb R^n$ is some linear combination of these n + 1 vectors), but as you point out, this set couldn't be a basis because there are too many of them -- the set must necessarily be linearly dependent. And we could have a set consisting of n - 1 vectors that is linearly independent, but there are too few to be a basis for $\mathbb R^n$. However, this set spans a subspace of $\mathbb R^n$ of dimension n - 1.
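For instance (a small Python check; the four vectors are an assumed example, not from the thread):

```python
# Four vectors in R^3: the first three already span R^3, so adding a fourth
# keeps the span but forces a dependence, here (1,1,1) = e1 + e2 + e3.
e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
extra = (1, 1, 1)
dependence = tuple(a + b + c for a, b, c in zip(e1, e2, e3))
print(dependence == extra)  # True: the 4-element set is linearly dependent
```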

WWGD
Gold Member
2019 Award
This isn't true in general. If u = <1, 0, 1> and v = <2, 0, 2>, both vectors have an infinite number of vectors that are perpendicular to them. The two vectors I gave determine only a line, not a plane.
But this is not part of the argument. The argument is that once you find the cross-product you define a unique plane containing the two vectors, and you can show that the sum (and linear combinations) stay in the plane. EDIT: But what you say is indeed correct: two points determine only a unique line, and this line is contained in infinitely many planes. And, yes, you do need linear independence of the vectors.

Mark44
Mentor
But this is not part of the argument. The argument is that once you find the cross-product you define a unique plane containing the two vectors, and you can show that the sum (and linear combinations) stay in the plane.
I understand that. I was responding to what Drakkith said, about "any two vectors in $\mathbb R^3$ ..."
WWGD said:
EDIT: But what you say is indeed correct: two points determine only a unique line, and this line is contained in infinitely many planes. And, yes, you do need linear independence of the vectors.

Staff Emeritus
Sorry, I should have said two linearly independent vectors.

FactChecker
Gold Member
My two cents:
This thread would have been simpler if it had started with the example of one vector in ℝ2. Clearly a single vector can only generate a line and not all of ℝ2.
In the general case of ℝn and k vectors (k < n), the Gram-Schmidt orthogonalization process can turn the original set of vectors into an orthogonal set spanning a subspace of dimension at most k. So it is intuitive that the spanned space is not all of ℝn.

PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.
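The Gram-Schmidt step mentioned above can be sketched as follows (a minimal Python version; `gram_schmidt` is a hypothetical helper, and the input vectors are an assumed example with no zero entries):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Return an orthogonal set with the same span; dependent inputs are dropped."""
    basis = []
    for v in vectors:
        w = list(v)
        for b in basis:
            coeff = dot(w, b) / dot(b, b)   # projection coefficient onto b
            w = [wi - coeff * bi for wi, bi in zip(w, b)]
        if any(abs(x) > 1e-12 for x in w):  # keep only nonzero remainders
            basis.append(w)
    return basis

# Two vectors in R^3 with all entries nonzero still give at most two
# orthogonal directions, so their span is a plane, not all of R^3.
ortho = gram_schmidt([(1, 1, 1), (1, 2, 3)])
print(len(ortho))  # 2
```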

WWGD
Gold Member
2019 Award
My two cents:
This thread would have been simpler if it had started with the example of one vector in ℝ2. Clearly a single vector can only generate a line and not all of ℝ2.
In the general case of ℝn and k vectors (k < n), the Gram-Schmidt orthogonalization process can turn the original set of vectors into an orthogonal set spanning a subspace of dimension at most k. So it is intuitive that the spanned space is not all of ℝn.

PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.
Good point; the Gram-Schmidt process preserves the span.
I think you can use the fundamental theorem of linear algebra: https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_algebra
EDIT: to find an element that is not in the row space.
An element in the kernel of the operator lies in the orthogonal complement of the row space. Since $A^{\perp} \cap A=\{0\}$*, just find a non-zero element in the kernel; the kernel is exactly that complementary subspace. Note you can also use this to extend a basis of a subspace into a basis for the whole space.

* This is true in "most cases"; I think there are symplectic and other spaces where it is not true.
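A sketch of that kernel computation in Python (exact rational arithmetic; `nullspace_vector` is a hypothetical helper, and it assumes fewer rows than columns so a free column always exists):

```python
from fractions import Fraction

def nullspace_vector(rows, n):
    """Return one nonzero x in R^n with row . x = 0 for every given row."""
    M = [[Fraction(x) for x in r] for r in rows]
    pivots = []
    r = 0
    for c in range(n):                      # reduced row echelon form
        p = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if p is None:
            continue
        M[r], M[p] = M[p], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    # Set one free variable to 1, solve the pivot variables.
    free = next(c for c in range(n) if c not in pivots)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for i, c in enumerate(pivots):
        x[c] = -M[i][free] / M[i][c]
    return x

v = nullspace_vector([(1, 1, 1), (1, 2, 3)], 3)
print([int(xi) for xi in v])  # [1, -2, 1]: perpendicular to both rows
```

Note this agrees with the cross product of the two rows, as it should in ℝ3.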

StoneTemplePython
Gold Member
2019 Award
PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.
In some ways what follows is just a recut of what @WWGD said. However, the following two approaches are a bit more algorithmic and have a different sort of feel; both are simple and general.

1.) Using $A\mathbf x = \mathbf b$, march through any valid basis -- I'd suggest the standard basis -- setting $\mathbf b := \mathbf e_j$. You have to run through this 'for loop' at most $n$ times. During each iteration, run Gaussian elimination; you must fail on at least one of the $\mathbf b$'s you are solving for. (If you don't fail, then you already have a complete basis -- a contradiction.)

2.) Here is a sketch of a useful and elegant approach: suppose you have $n-1$ vectors that are linearly independent in some $n$-dimensional vector space. Generate an $n$-vector at random (uniform over $[-1,1]$ in each coordinate, if over the reals) and with probability one you have generated a linearly independent vector. You may confirm this by appending the random vector to the others and computing a determinant or running Gaussian elimination.

Note: this approach also has appeal over finite fields. If, say, you are in $GF(2)$, your random vector over $\{0,1\}$ will be linearly independent with probability at least $0.5$. Run $r$ independent trials and the probability that at least one of the random vectors is linearly independent is $1 - (0.5)^r$. This approach actually works quite well when dealing with very large $n$. E.g., if $n = 1{,}000$ you could simply run the experiment with $r = 20$ and get at least one linearly independent vector with absurdly high probability. Of course, if you only have $n-2$ vectors that are linearly independent, the probability that your random vector is independent only gets better. (And $n-3$ is better still, and so on.)
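Approach 2 can be sketched in a few lines (a minimal Python sketch for the $3\times 3$ case; `det3` is a hypothetical helper, and the two starting vectors are an assumed example):

```python
import random

def det3(m):
    """Determinant of a 3x3 matrix, by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

v1, v2 = (1, 1, 1), (1, 2, 3)           # two independent vectors in R^3
random.seed(0)                          # fixed seed for reproducibility
w = [random.uniform(-1, 1) for _ in range(3)]

# With probability one the random vector completes a basis, confirmed by a
# nonzero determinant of the stacked matrix:
print(det3([list(v1), list(v2), w]) != 0)
```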

WWGD
Gold Member
2019 Award
Yes; take any point $p$ you know is in the plane and translate it by any non-zero amount; the point will not be in the plane. So you can form an infinite collection $\{p+ c \}$ where $p$ is in the plane and $c \in \mathbb R^3-\{0\}$, where $0$ is the zero vector in $\mathbb R^3$. EDIT: And notice you get an infinite collection _for each $p$_ in the plane. Actually, in a sense, "most" points, by many measures, are _not_ in the plane.
A small caveat to this argument: the translation must be done in the "right way", so that it is not done by adding $c$ where $c$ is a vector in the plane.
