Why can't a set of vectors with less than n elements span ℝn?

In summary: the conversation discusses the span of a set of vectors in ℝn and concludes that if the number of vectors in the set is less than the dimension of ℝn, then the set cannot span ℝn. The key idea is that, in ℝ3, any two linearly independent vectors determine a plane through the origin, and every linear combination of them stays in that plane, so they cannot reach all of ℝ3; the same dimension-counting argument rules out spanning ℝn with fewer than n vectors.
  • #1
Drakkith
Mentor
I'm trying to figure out something regarding the span of a set of vectors. Say we have a set of vectors ##V=\{v_1, v_2, ... v_k\}## in ℝn. If k < n then the set does not span ℝn. Why is this? Are there vectors in ℝn that aren't combinations of the vectors in ##V##?
 
  • #2
Drakkith said:
I'm trying to figure out something regarding the span of a set of vectors. Say we have a set of vectors ##V=\{v_1, v_2, ... v_k\}## in ℝn. If k < n then the set does not span ℝn. Why is this? Are there vectors in ℝn that aren't combinations of the vectors in ##V##?
Take numbers. If you want to span ##\mathbb{R}^3##, can it be done by two vectors: length and width? Note that this reasoning only applies to the case ##k < n##. If we have ##n## or more vectors, we cannot conclude anything: they could, for example, all be different multiples of a single vector, in which case they still only span a line, no matter how many there are. But if we have fewer than ##n##, something is definitely missing. In the example, the height.
 
  • #3
fresh_42 said:
Take numbers. If you want to span ##\mathbb{R}^3##, can it be done by two vectors: length and width?

So, vectors in ℝ3 consist of 3 entries, such as ##\begin{bmatrix} 1 \\ 1 \\ 1\end{bmatrix}##.
If we have a set of two vectors, such as ##S = \left\{\begin{bmatrix} 1 \\ 0 \\ 0\end{bmatrix},\begin{bmatrix} 0 \\ 1 \\ 0\end{bmatrix}\right\}##, then the vector ##\begin{bmatrix} 0 \\ 0 \\ 1\end{bmatrix}## would not be a linear combination of the first two. And since the latter vector is in ℝ3, that means ##S## does not span ℝ3.

That right?
 
  • #4
Yes.
All linear combinations of vectors in ##S## have the third component zero. They span the ##(x,y)## plane, but only the plane at height ##z=0##.
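As a quick illustration (my own sketch, not part of the thread; the NumPy code and sampled coefficients are just for demonstration), every combination of the two vectors in ##S## keeps a zero third entry, so ##(0,0,1)## can never appear:

```python
import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])

# Sample a few combinations alpha*e1 + beta*e2 and inspect the third
# component: it is always 0, so (0, 0, 1) is never a linear combination.
rng = np.random.default_rng(0)
for _ in range(5):
    alpha, beta = rng.uniform(-10.0, 10.0, size=2)
    combo = alpha * e1 + beta * e2
    print(combo, "third component:", combo[2])
```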
 
  • #5
Thanks Fresh. That makes sense.

Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure how to go about proving it.
 
  • #6
This is usually the more difficult kind of proof: showing that something cannot be done. So it is important to list what is given. What is ##\mathbb{R}^3##? A real vector space of dimension three? And what, then, is dimension? The set of all real triples ##(a,b,c)##? The ##\mathbb{R}##-linear span of three linearly independent vectors?

See, to answer the question, it first must be clear what has to be shown. A proof could go like this: assume ##\mathbb{R}^3## is (linearly) spanned by two vectors ##v_1,v_2\in \mathbb{R}^3##. Then every ##w \in \mathbb{R}^3## can be written as ##w=\alpha v_1 + \beta v_2##. The goal is to show that there is a vector in ##\mathbb{R}^3## which cannot be written this way. That is why the question "What is ##\mathbb{R}^3##?" has to be answered first. If we take the triple definition, the proof amounts to a row echelon procedure applied to ##v_1, v_2, w##, followed by choosing the third component of ##w## so that it cannot be reached by any choice of ##\alpha, \beta##.
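As a minimal sketch of that check (my own, not fresh_42's; the example vectors and the NumPy rank test are just one way to carry out the row-reduction idea): ##w## is a combination of ##v_1, v_2## exactly when appending it to the matrix ##[v_1\; v_2]## does not increase the rank.

```python
import numpy as np

# Two vectors with no zero components, as in the question.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([4.0, 5.0, 6.0])
w = np.array([0.0, 0.0, 1.0])        # candidate vector to test

A = np.column_stack([v1, v2])        # 3x2 matrix with v1, v2 as columns
rank_A = np.linalg.matrix_rank(A)
rank_Aw = np.linalg.matrix_rank(np.column_stack([A, w]))

# w = alpha*v1 + beta*v2 has a solution iff the rank does not grow.
print("w in span{v1, v2}?", rank_Aw == rank_A)   # False for this w
```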
 
  • Like
Likes Drakkith
  • #7
Ok, how about this, as an argument for why 2 vectors ##v_1, v_2## are not enough:

1) Find the cross product ##w := v_1 \times v_2##. Then ##w## is perpendicular to ##v_1, v_2##; both ##v_1, v_2## lie in the plane normal to ##w## passing through
##v_1, v_2##. For any two points there is a unique plane going through the two points.

2) Consider the sum ##c_1 v_1 + c_2 v_2##. This is a parallelogram (EDIT: remember how the sum of vectors is defined, re the parallelogram rule/law), contained in the plane normal to ##w## that contains ##v_1, v_2##. This sum will therefore not "escape" outside of the plane.
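A small numerical sketch of this argument (my own code and example vectors, not from the post): the normal ##w = v_1 \times v_2## has zero dot product with every combination ##c_1 v_1 + c_2 v_2##, so combinations never leave the plane, while ##w## itself lies outside it.

```python
import numpy as np

v1 = np.array([1.0, 2.0, 3.0])   # example vectors, no zero components
v2 = np.array([4.0, 5.0, 6.0])
w = np.cross(v1, v2)             # normal to the plane spanned by v1, v2

# Every combination c1*v1 + c2*v2 stays perpendicular to w, i.e. it
# stays in the plane through the origin with normal w.
rng = np.random.default_rng(1)
for _ in range(5):
    c1, c2 = rng.uniform(-5.0, 5.0, size=2)
    print(np.dot(w, c1 * v1 + c2 * v2))   # 0 up to floating-point round-off

# w itself is not perpendicular to w, so w is not in the span.
print(np.dot(w, w))              # > 0 because v1, v2 are linearly independent
```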
 
Last edited:
  • Like
Likes SSequence
  • #8
Drakkith said:
Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure how to go about proving it.
I think one can use the general result (if one is allowed to do so) that any two different bases B1 and B2 for a subspace W of ℝn (finite n) will have the same number of vectors (this result is what allows the dimension of W to be uniquely defined).

A basis B for W means a finite set of vectors in W so that:
-- B is linearly independent
-- span(B)=W

Now if one takes the subspace W as equal to ℝ3 itself, then this translates to:
Any two different bases for ℝ3 will have 3 vectors (since we already know at least one basis for ℝ3 with 3 vectors).

========

But this leaves the possibility that there could be some other set which is not a basis (say A) such that span(A) equals ℝ3 and the number of vectors in A is less than 3.
However, for any such set A there will be another set B such that:
-- B⊂A
-- B is linearly independent
-- span(B)=span(A) (which is just equal to ℝ3)

This means that B is a basis for ℝ3 and hence must have 3 vectors. Since B⊂A, A must have more than 3 vectors, which contradicts the assumption that A has fewer than 3.

=============

More directly (probably a somewhat less rigorous phrasing of post #7), one can use the reasoning for ℝ^3 that any two linearly independent vectors uniquely identify a plane (and if they are not linearly independent, they identify at most a line through the origin in ℝ^3). So the set of all linear combinations will be either:
-- a line going through the origin in ℝ^3 (or, in the degenerate case of two zero vectors, just the origin)
-- a plane (that also includes the origin) in ℝ^3
Neither of these is all of ℝ^3.
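A tiny sketch of that case split (mine, not SSequence's; example vectors chosen for illustration), using the rank of the matrix whose columns are the two vectors: rank 1 gives a line, rank 2 gives a plane, and neither is all of ##\mathbb{R}^3##.

```python
import numpy as np

def span_type(v1, v2):
    """Classify span{v1, v2} in R^3 by the rank of the matrix [v1 | v2]."""
    rank = np.linalg.matrix_rank(np.column_stack([v1, v2]))
    return {0: "just the origin",
            1: "a line through the origin",
            2: "a plane through the origin"}[rank]

print(span_type(np.array([1.0, 2.0, 3.0]), np.array([2.0, 4.0, 6.0])))  # line
print(span_type(np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])))  # plane
```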
 
  • #9
Drakkith said:
Thanks Fresh. That makes sense.

Now, what happens if none of the components of each vector are zero? I mean, I'm sure they still don't span ℝ3, but I'm unsure how to go about proving it.

If you want a geometric argument, then any two vectors lie in a plane. All linear combinations of these two vectors must lie on the same plane, so cannot span all of ##\mathbb{R}^3##.
 
  • #10
PeroK said:
If you want a geometric argument, then any two vectors lie in a plane. All linear combinations of these two vectors must lie on the same plane, so cannot span all of ##\mathbb{R}^3##.
Great minds think alike ; ) , see my #7.
 
  • Like
Likes PeroK
  • #11
WWGD said:
Ok, how about this, as an argument for why 2 vectors ##v_1, v_2## are not enough:

1) Find the cross product ##w := v_1 \times v_2##. Then ##w## is perpendicular to ##v_1, v_2##; both ##v_1, v_2## lie in the plane normal to ##w## passing through
##v_1, v_2##. For any two points there is a unique plane going through the two points.

2) Consider the sum ##c_1 v_1 + c_2 v_2##. This is a parallelogram (EDIT: remember how the sum of vectors is defined, re the parallelogram rule/law), contained in the plane normal to ##w## that contains ##v_1, v_2##. This sum will therefore not "escape" outside of the plane.

Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of ##S##, where ##S## is any two vectors in ℝ3, right?

While I can't take the visualization any further than ℝ3, I can accept that this holds true for any ℝn.

@SSequence thanks for the explanation. I can see now that any basis for ℝn requires exactly n vectors. Any more and the set is not linearly independent. Any fewer and the set does not span ℝn.
 
  • #12
Drakkith said:
Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of ##S##, where ##S## is any two vectors in ℝ3, right?

.
Yes; take any point ##p## you know is in the plane, and translate it by any non-zero amount. The point will not be in the plane. So you can form an infinite collection ##\{p+ c \} ## where ##p ## is in the plane and ##c \in \mathbb R^3-\{0\} ## where ##0## is the 0 vector in ##\mathbb R^3## EDIT And notice you get an infinite collection _for each ##p##_ in the plane. Actually, in a sense, "most" points by many measures, are _not_ in the plane.
 
  • Like
Likes Drakkith
  • #13
In general, if ##V## is a vector space, ##S## is a set of ##m## vectors in ##V## which span ##V##, and ##T## is a linearly independent set of ##n## vectors in ##V##, then ##m \ge n##. (A consequence of this is that all bases for a vector space ##V## have the same number of elements; this common number is called the dimension of ##V##. The reason is that, by definition, a basis for ##V## both spans ##V## and is linearly independent, so if two bases have ##m## and ##n## elements, respectively, then both ##m\ge n## and ##m\le n##, hence ##m=n##.)
Now, the standard basis ##\{\mathbf e_1,\mathbf e_2,\dots, \mathbf e_n\}## of ##\mathbb R^n## is clearly linearly independent, so every set of vectors which spans ##\mathbb R^n## must have at least ##n## elements.
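A quick numerical illustration of that bound (my own sketch, with arbitrary example sizes): with ##k < n## columns, the rank of the matrix is at most ##k##, so its columns can never span ##\mathbb{R}^n##.

```python
import numpy as np

n, k = 5, 3                       # k < n vectors in R^n
rng = np.random.default_rng(42)
A = rng.standard_normal((n, k))   # columns are k "generic" vectors in R^n

# dim(span of the columns) = rank(A) <= min(n, k) = k < n,
# so the columns cannot span all of R^n.
print(np.linalg.matrix_rank(A))   # at most k = 3
```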
 
  • Like
Likes SSequence
  • #14
Drakkith said:
Thanks WWGD. The geometric visualization makes sense. I can see that any two vectors in ℝ3 will have one vector that is perpendicular to them both
This isn't true in general. If u = <1, 0, 1> and v = <2, 0, 2>, both vectors have an infinite number of vectors that are perpendicular to them. The two vectors I gave determine only a line, not a plane.
Drakkith said:
, and thus ℝ3 will have a vector that's not a linear combination of the two vectors in the set. It seems to me that there are an infinite number of vectors outside of the plane formed by the span of ##S##, where ##S## is any two vectors in ℝ3, right?
Well, yes. The span of the vectors in your set S of post #3 determines a plane; namely, the x-y plane. Any vector that "pokes" up out of this plane (i.e., has a nonzero z-coordinate) won't lie in this plane.
Drakkith said:
While I can't take the visualization any further than ℝ3, I can accept that this holds true for any ℝn.

@SSequence thanks for the explanation. I can see now that any basis for ℝn requires exactly n vectors. Any more and the set is not linearly independent. Any fewer and the set does not span ℝn.
But, we could have a set of n + 1 vectors that span ##\mathbb R^n## (i.e., any vector in ##\mathbb R^n## is some linear combination of these n + 1 vectors), but as you point out, this set couldn't be a basis because there are too many of them -- the set must necessarily be linearly dependent. And we could have a set consisting of n - 1 vectors that is linearly independent, but there are too few to form a basis for ##\mathbb R^n##. However, this set spans a subspace of ##\mathbb R^n## of dimension n - 1.
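Both situations can be checked numerically; a short sketch (my own example vectors, not Mark44's) in ##\mathbb{R}^3##:

```python
import numpy as np

# Four vectors in R^3: they span R^3 (rank 3), but with 4 > 3 vectors
# the set is necessarily linearly dependent, so it is not a basis.
A = np.column_stack([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]])
print(np.linalg.matrix_rank(A))   # 3

# Two vectors in R^3: linearly independent (rank 2), but they span only
# a 2-dimensional subspace, so they are too few to be a basis.
B = np.column_stack([[1, 2, 3], [4, 5, 6]])
print(np.linalg.matrix_rank(B))   # 2
```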
 
  • #15
Mark44 said:
This isn't true in general. If u = <1, 0, 1> and v = <2, 0, 2>, both vectors have an infinite number of vectors that are perpendicular to them. The two vectors I gave determine only a line, not a plane.
But this is not part of the argument; the argument is that once you find the cross product you define a unique plane containing the two vectors, and you can show that the sum (and linear combinations) stay in the plane. EDIT: But what you say is indeed correct: two points determine only a unique line, and this line can be contained in infinitely many planes. And, yes, you do need linear independence of the vectors.
 
  • #16
WWGD said:
But this is not part of the argument; the argument is that once you find the cross product you define a unique plane containing the two vectors, and you can show that the sum (and linear combinations) stay in the plane.
I understand that. I was responding to what Drakkith said, about "any two vectors in ##\mathbb R^n## ..."
WWGD said:
EDIT: But what you say is indeed correct: two points determine only a unique line, and this line can be contained in infinitely many planes. And, yes, you do need linear independence of the vectors.
 
  • Like
Likes WWGD
  • #17
Sorry, I should have said two linearly independent vectors.
 
  • Like
Likes WWGD
  • #18
My two cents:
This thread would have been simpler if it had started with the example of one vector in R2. Clearly a single vector can only generate a line and not all of R2.
In the general case of Rn and k vectors (k < n), the Gram-Schmidt orthogonalization process turns the original set of vectors into an orthonormal set spanning the same subspace, which is essentially a copy of Rk (or of something even smaller if the original vectors are dependent). So it is intuitive that the spanned space is not all of Rn.

PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.
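As a rough sketch of that idea (my own code, not FactChecker's; classical Gram-Schmidt on example vectors), the process returns an orthonormal set spanning the same subspace, and its size is at most ##k < n##:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    """Classical Gram-Schmidt: orthonormal basis of span(vectors)."""
    basis = []
    for v in vectors:
        w = np.asarray(v, dtype=float).copy()
        for q in basis:
            w -= np.dot(q, w) * q          # remove the component along q
        norm = np.linalg.norm(w)
        if norm > tol:                      # skip vectors already in the span
            basis.append(w / norm)
    return basis

# k = 2 vectors in R^3: the orthonormal set has at most 2 elements,
# so the span is at most 2-dimensional and cannot be all of R^3.
vs = [np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])]
print(len(gram_schmidt(vs)))   # 2
```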
 
  • Like
Likes WWGD
  • #19
FactChecker said:
My two cents:
This thread would have been simpler if it had started with the example of one vector in R2. Clearly a single vector can only generate a line and not all of R2.
In the general case of Rn and k vectors (k < n), the Gram-Schmidt orthogonalization process turns the original set of vectors into an orthonormal set spanning the same subspace, which is essentially a copy of Rk (or of something even smaller if the original vectors are dependent). So it is intuitive that the spanned space is not all of Rn.

PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.

Good point; the Gram-Schmidt process preserves the span.
I think you can use the fundamental theorem of linear algebra: https://en.wikipedia.org/wiki/Fundamental_theorem_of_linear_algebra
EDIT: ...to find an element that is not in the row space.
An element in the kernel of the operator will lie in the orthogonal complement of the row space. Since ##A^{\perp} \cap A=\{0\}## *, just find a non-zero element in the kernel, since the kernel of the map is the complementary subspace. Note you can also use this to extend a basis of a subspace into a basis for the whole space.

* This is true in "most cases"; I think there are symplectic and other spaces where it is not true.
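A short sketch of that recipe (mine, not WWGD's; the example vectors and the SVD-based null-space computation are just one way to do it): put the given vectors in the rows of ##A##, take a nonzero vector in the kernel, and note that it is orthogonal to the row space, hence outside the span.

```python
import numpy as np

# Rows of A are the k = 2 given vectors in R^3.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# For k < n, the last right-singular vector lies in the null space of A.
_, _, Vt = np.linalg.svd(A)
null_vec = Vt[-1]                # unit vector with A @ null_vec ≈ 0

print(A @ null_vec)              # ≈ [0, 0]: orthogonal to both given vectors
# A nonzero vector orthogonal to the row space cannot be a linear
# combination of the rows, so null_vec is not in their span.
print(np.linalg.norm(null_vec))  # 1.0
```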
 
Last edited:
  • Like
Likes StoneTemplePython and FactChecker
  • #20
FactChecker said:
PS. Unfortunately, it is not clear to me how to rigorously come up with a specific vector that is not in the space spanned by the k vectors. Maybe someone else can give (or has already given) a method.

In some ways what follows is just a recut of what @WWGD said. However, the following two approaches are a bit more algorithmic and have a different sort of feel. Here are two simple and general approaches.

1.) using ##A\mathbf x = \mathbf b##, march through any valid basis -- I'd suggest the standard basis, and set ##\mathbf b := \mathbf e_j ##. You have to run through this 'for loop' at most ##n## times. During each iteration, run Gaussian elimination and you must fail on at least one of those ##\mathbf b##'s that you are solving for. (If you don't fail, then you already have a complete basis -- a contradiction.)
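A minimal sketch of that loop (my own, assuming the ##k## given vectors are the columns of ##A##), using a least-squares solve plus a consistency check in place of hand-run Gaussian elimination:

```python
import numpy as np

# k = 2 < n = 3 vectors as the columns of A.
A = np.column_stack([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
n = A.shape[0]

# March through the standard basis e_1, ..., e_n and test whether
# A x = e_j has a solution; at least one j must fail when k < n.
for j in range(n):
    e_j = np.zeros(n)
    e_j[j] = 1.0
    x = np.linalg.lstsq(A, e_j, rcond=None)[0]
    print(f"e_{j + 1} in span of the columns?", np.allclose(A @ x, e_j))
```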

2.) Here is a sketch of a useful and elegant approach: suppose you have ##n-1## vectors that are linearly independent in some ##n##-dimensional vector space. Generate an ##n##-vector at random (uniform over ##[-1,1]## in each component, if over the reals) and with probability one you have generated a linearly independent vector. You may confirm this by appending the random vector to the others and computing a determinant or running Gaussian elimination.

Note: this approach also has appeal over finite fields. If, say, you are in ##GF(2)## your random vector over ##\{0,1\}## will be linearly independent with at worst probability ##0.5##. Run ##r## independent trials and the probability that at least one of the random vectors is linearly independent is ##1 - (0.5)^r##. This approach actually works quite well, btw, when dealing with very large n. E.g. if ##n = 1,000## you could simply run the experiment with ##r = 20## and get at least one vector that is linearly independent with absurdly high probability. Of course, if you only have ##n-2## vectors that are linearly independent, the probability that your random vector is linearly independent only gets better. (And ##n-3## is even better from there, and so on.)
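And a sketch of the probabilistic version over the reals (mine, not the poster's; example dimensions chosen arbitrarily): draw a random candidate uniformly from ##[-1,1]^n##, append it, and confirm independence with a rank check.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4

# n - 1 = 3 linearly independent vectors in R^4 (as columns).
A = rng.standard_normal((n, n - 1))

# A random candidate is linearly independent of them with probability one.
candidate = rng.uniform(-1.0, 1.0, size=n)
augmented = np.column_stack([A, candidate])

print(np.linalg.matrix_rank(A))          # 3 (with probability one)
print(np.linalg.matrix_rank(augmented))  # 4: the candidate extends the set
```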
 
  • Like
Likes jim mcnamara, FactChecker and Drakkith
  • #21
WWGD said:
Yes; take any point ##p## you know is in the plane, and translate it by any non-zero amount. The point will not be in the plane. So you can form an infinite collection ##\{p+ c \} ## where ##p ## is in the plane and ##c \in \mathbb R^3-\{0\} ## where ##0## is the 0 vector in ##\mathbb R^3## EDIT And notice you get an infinite collection _for each ##p##_ in the plane. Actually, in a sense, "most" points by many measures, are _not_ in the plane.
A small caveat to this argument: the translation must be done in the "right way", i.e. it must not be done by adding a ##c## that itself lies in the plane.
 
  • #22
do you know gauss elimination, i.e. row reduction? saying that a set of vectors spans R^n means that if those vectors are chosen as the column vectors of a system of linear equations, then we can always solve for any column vector we choose to put on the right hand side of the system. But you know that if you have only k < n columns, the row reduction process will leave a system with at most k non-zero rows, and below that all zeroes. Hence any column vector that still has non-zero entries below the kth row cannot be solved for.
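A quick sketch of that row-reduction picture (my own example, using SymPy's `rref`): with only ##k = 2## columns on the left, reducing the augmented system for a suitable right-hand side ##b## produces a pivot in the ##b##-column, i.e. no solution.

```python
from sympy import Matrix

# Columns v1 = (1, 2, 3), v2 = (4, 5, 6), augmented with b = (0, 0, 1).
augmented = Matrix([[1, 4, 0],
                    [2, 5, 0],
                    [3, 6, 1]])

rref, pivots = augmented.rref()
print(rref)    # last row reduces to [0, 0, 1]
print(pivots)  # (0, 1, 2): a pivot in the b-column means no solution exists
```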

an abstract proof that a vector space containing n independent vectors cannot be spanned by fewer than n vectors is given in the notes below, on page 16 and following, using the famous exchange argument of Riemann, usually attributed to Steinitz.

http://alpha.math.uga.edu/%7Eroy/laprimexp.pdf
 
  • Like
Likes Drakkith

What is the span of a set of vectors?

The span of a set of vectors is the set of all possible linear combinations of those vectors. It is essentially the "space" that can be created by adding or subtracting different multiples of the vectors in the set.

How is the span of a set of vectors calculated?

To calculate the span of a set of vectors, put the vectors as the columns (or rows) of a matrix and use Gaussian elimination (row operations) to bring the matrix to row-echelon form. The number of non-zero rows in the reduced matrix is the rank of the matrix, which equals the dimension of the span.

What is the significance of the span of a set of vectors?

The span of a set of vectors helps to determine the dimension and basis of a vector space. It is also used in linear algebra to solve systems of linear equations and to understand linear transformations.

Can the span of a set of vectors be infinite?

Yes, in the sense that the span contains infinitely many vectors: as soon as the set contains a nonzero vector, every scalar multiple of that vector lies in the span. The dimension of the span, however, is always finite for a finite set of vectors; it is at most the number of vectors in the set, and strictly smaller when the set is linearly dependent.

How does the span of a set of vectors relate to linear independence?

Spanning and linear independence are separate properties. A set is linearly independent if none of its vectors can be written as a linear combination of the others; it spans the space if every vector in the space is such a combination. A set that does both is a basis. In ℝn, a linearly independent set has at most n vectors, a spanning set has at least n, and a basis has exactly n.
