Span of a list of vectors

In summary, the conversation discusses the concepts of linear combinations, span, and linear independence in Linear Algebra. It is noted that the span of a list of vectors is the smallest subspace containing the vectors in the list, and that a subspace containing additional vectors beyond the list can be strictly larger than the span. It is also noted that the defining characteristic of a finite-dimensional vector space is its dimension, the size of any basis, and that a basis must be both linearly independent and spanning. The importance of bases in understanding vector spaces is emphasized, as well as their relationship to linear transformations and matrices. Finally, the conversation mentions the alternative viewpoint of a vector space as a field acting on an abelian group.
  • #1
Math Amateur
In order to improve my knowledge of Linear Algebra I am reading Linear Algebra Done Right by Sheldon Axler.

In Chapter 2 under the heading Span and Linear Independence we find the following text:

"If [itex] ( v_1, v_2, ... ... v_m ) [/itex] is a list of vectors in a vector space V, then each [itex] v_j [/itex] is a linear combination of [itex] ( v_1, v_2, ... ... v_m ) [/itex].

Thus span[itex] ( v_1, v_2, ... ... v_m ) [/itex] contains each [itex] v_j [/itex].

Conversely, because subspaces are closed under scalar multiplication and addition, every subspace of V containing each [itex] v_j [/itex] must contain span[itex] ( v_1, v_2, ... ... v_m ) [/itex].

Thus the span of a list of vectors in V is the smallest subspace of V containing all the vectors in the list."

Intuitively, given that new vectors are formed by scalar multiplication and addition of the listed vectors, it seems that span[itex] ( v_1, v_2, ... ... v_m ) [/itex] should also be the largest subspace of V containing all the vectors in the list.

Is this intuition correct? Can someone please confirm or otherwise?

Peter
 
  • #2
No, that is not true. Let V be the vector space of ordered quadruples (a, b, c, d) and let [itex]v_1= (1, 0, 0, 0)[/itex], [itex]v_2= (0, 1, 0, 0)[/itex]. The subspace of V given by {(a, b, c, 0)} contains both vectors in the list but is not equal to their span.
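The counterexample above can be checked numerically. The following sketch (my own illustration, not from the thread) uses NumPy's rank routine to show that span[itex](v_1, v_2)[/itex] is 2-dimensional while the subspace {(a, b, c, 0)} is 3-dimensional, yet contains both listed vectors:

```python
import numpy as np

v1 = np.array([1, 0, 0, 0])
v2 = np.array([0, 1, 0, 0])

# Dimension of span(v1, v2) = rank of the matrix with v1, v2 as rows.
span_dim = np.linalg.matrix_rank(np.vstack([v1, v2]))

# A basis for the larger subspace {(a, b, c, 0)}; it contains v1 and v2.
W = np.vstack([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
W_dim = np.linalg.matrix_rank(W)

print(span_dim, W_dim)  # 2 3 -- W strictly contains span(v1, v2)
```

Since W contains v1 and v2 but has a strictly larger dimension than their span, the span cannot be the *largest* subspace containing the list.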
 
  • #3
Oh! Just had another quick thought about this that possibly solves my confusion.

A subspace built from a longer list of vectors - one that still contains [itex] v_1, v_2, ... ... v_m [/itex] - would be a larger subspace than span [itex] v_1, v_2, ... ... v_m [/itex] but would still contain [itex] v_1, v_2, ... ... v_m [/itex] - so it would appear (no real surprise!) that Axler's statement is correct in this sense.

Sorry - if this is the case I just ran to post the issue too fast - apologies!

Peter
 
  • #4
Sorry HallsofIvy I put up my second post before seeing your post.

Just considering your post now - Thanks!

Peter
 
  • #5
Thanks HallsofIvy - see the point you are making.

Appreciate your help!

Peter
 
  • #6
the defining characteristic of a finite-dimensional vector space is the size of any basis, called the dimension of the space.

a basis has two defining properties:

a) it is a linearly independent set
b) it spans the space it is a basis OF.

property (a) serves to ensure our basis is "as small as possible" by removing extraneous elements (ones we can form from linear combinations of other (basis) elements).

property (b) serves to ensure our basis is "as large as necessary", that it is a generating set of our vector space.

as one might expect, linear independence corresponds to injectivity of a linear mapping:

a linear mapping (vector space morphism) is injective if and only if it maps a linearly independent set to a linearly independent set.

spanning corresponds to surjectivity of a linear mapping:

a linear mapping is surjective if and only if it maps a spanning set (and therefore some basis) to a spanning set of the "target space".

note that a spanning set need not be linearly independent, nor does a linearly independent set need to be a spanning set.

an important consequence of the above, is that an isomorphism (of vector spaces) takes a basis to a basis. so two vector spaces are "essentially the same" if we can match up the bases element-by-element in a 1-1 correspondence. this is why bases are so important for vector spaces...they tell us "everything we need to know" (often, in the literature, one see things defined for a basis only, and then "extended by linearity").

a decent linear algebra text will have everything i have stated above as theorems, the proofs of which are straight-forward, but instructive.
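The two defining properties of a basis described above can be tested numerically for lists of vectors in R^n. This is a sketch of my own (not from the post), using the standard fact that a list is linearly independent iff the rank of the matrix it forms equals the list length, and spanning iff the rank equals n:

```python
import numpy as np

def is_independent(vectors):
    """Property (a): rank equals the number of vectors in the list."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == len(vectors)

def is_spanning(vectors, n):
    """Property (b): rank equals the dimension n of the ambient space."""
    A = np.array(vectors, dtype=float)
    return np.linalg.matrix_rank(A) == n

e = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]   # standard basis of R^3
print(is_independent(e), is_spanning(e, 3))   # True True: a basis
print(is_independent([[1, 0, 0], [2, 0, 0]])) # False: collinear vectors
```

A list satisfying both checks is a basis; satisfying only (a) it is "too small" to generate the space, and satisfying only (b) it carries extraneous elements.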
 
  • #7
Thanks Deveno - most helpful!

Just read through that and plan to do some basic work on Linear Algebra - particularly linear transformations. Will check over all the points you mention and work some illustrative examples.

Basically I am interested in the structures of abstract algebra - particularly groups - but lately I keep tripping over the basics of linear algebra - must remedy that!

Thanks again!

Peter
 
  • #8
Math Amateur said:
Thanks Deveno - most helpful!

Just read through that and plan to do some basic work on Linear Algebra - particularly linear transformations. Will check over all the points you mention and work some illustrative examples.

Basically I am interested in the structures of abstract algebra - particularly groups - but lately I keep tripping over the basics of linear algebra - must remedy that!

Thanks again!

Peter

linear algebra can be "modeled" by n-tuples of field elements (vectors in n-space), and usually the field in question is either Q, R or C (so chosen because people are familiar with these fields). then "linear transformations" between two spaces can be modeled by matrices. and matrices have an arithmetic, that is, we can compute with their entries by doing things we know very well. so for a first exposure, this is the route chosen (start with the concrete, then generalize).

if, however, one already knows about groups, one can view vector spaces another way:

a field acting on an abelian group.

now, the "action axioms" tell us that for every r,s in F* (we have to use F*, since F is NOT a group under multiplication), and every v in V (the abelian group):

r.(s.v) = (rs).v (usually the "dot" is omitted),
1.v = v (since 1 is the identity of F*).

since we are acting upon an abelian group, it is "natural" to require that v→r.v be MORE than just a bijection, we want it to be an element of Aut(V):

r.(u+v) = r.u + r.v

and v→r.v has the inverse map v→(1/r).v

now F has more than just a multiplicative group structure on the underlying set, it also has an additive group structure, and we want this to be "compatible" with the group structure on (V,+). that is, if we call the map v→r.v [itex]\sigma_r[/itex], we want the map r→[itex]\sigma_r[/itex] to be an abelian group homomorphism from (F,+) to (End(V),+) (End(V), the additive maps V→V, has a natural addition operation, called "point-wise addition": (σ+τ)(v) = σ(v) + τ(v); note a sum of automorphisms need not be invertible, which is why we land in End(V) rather than Aut(V)).

in other words:

[itex]\sigma_{r+s} = \sigma_r + \sigma_s[/itex], or:

(r+s).v = r.v + s.v

a careful comparison will show you these are just the defining axioms of a vector space (the first few axioms usually just state (V,+) is an abelian group).
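The compatibility axiom above can be checked in a concrete model. This is my own sketch (not from the post), taking V = R^2 over F = R, where the map r → σ_r turns field addition into point-wise addition of maps:

```python
import numpy as np

def sigma(r):
    """The scaling map sigma_r : v -> r*v on V = R^2."""
    return lambda v: r * v

v = np.array([2.0, -3.0])
r, s = 1.5, 4.0

lhs = sigma(r + s)(v)            # sigma_{r+s}(v) = (r+s).v
rhs = sigma(r)(v) + sigma(s)(v)  # (sigma_r + sigma_s)(v) = r.v + s.v
print(np.allclose(lhs, rhs))  # True
```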

it turns out that requiring F to be a field, although essential for solving linear equations, is somewhat "over-specialized", that in many cases, we might just want a ring, R. so the concept of vector space generalizes rather easily to the concept of an R-module (and then, an F-module is precisely a vector space). and if R = Z, the ring of integers, it turns out a Z-module is just another name for abelian group. that is, in an abelian group, a product like:

[itex]a^k b^m c^n[/itex], can be written additively as:

ka + mb + nc, that is: a linear combination (over Z) of a, b and c.

the whole point of linear algebra, is to reduce "complicated questions" to simpler ones, involving just basis elements (working "one dimension at a time"). this is analogous with reducing questions about a group, to questions about its set of generators. the neat thing about vector spaces is that they are "free objects", any vector space is equivalent (isomorphic) to the "free module generated by a basis set". this makes vector spaces "nicer" than groups, because most groups are merely quotients of a free object (we have relations, as well as generators), so the same set of generators can determine different groups (depending on how the generators interact). another way of expressing this is to say:

vector spaces are a "universal" construction: seen one n-dimensional vector space, you've "seen 'em all".
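The Z-module point of view above can be made concrete: in an additive abelian group, "scaling" by a non-negative integer k is just adding an element to itself k times, so ka + mb + nc really is a linear combination over Z. A small sketch of my own (the group and values are arbitrary choices for illustration), using the abelian group Z_12 under addition mod 12:

```python
def zscale(k, a, add, zero):
    """Add `a` to itself k times (k >= 0) using the group operation."""
    out = zero
    for _ in range(k):
        out = add(out, a)
    return out

# The abelian group Z_12 under addition mod 12.
add12 = lambda x, y: (x + y) % 12

combo = add12(add12(zscale(3, 5, add12, 0),   # 3*5
                    zscale(2, 4, add12, 0)),  # + 2*4
              zscale(4, 7, add12, 0))         # + 4*7
print(combo)  # (15 + 8 + 28) mod 12 = 51 mod 12 = 3
```

Nothing here uses any structure beyond the group operation itself, which is the sense in which an abelian group is "automatically" a Z-module.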
 

1. What is the span of a list of vectors?

The span of a list of vectors is the set of all possible linear combinations of those vectors. In other words, it is the space that can be created by scaling and adding the vectors together.

2. How do you calculate the span of a list of vectors?

The span of a list of vectors can be calculated by finding the linearly independent vectors in the list and then determining all possible linear combinations of those vectors. Alternatively, it can be calculated by creating a matrix with the vectors as columns and performing row operations to put the matrix in reduced row echelon form. The number of non-zero rows in the resulting matrix is equal to the dimension of the span.
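The matrix-based calculation described above can be sketched in a few lines (my own illustration; it uses NumPy's rank routine rather than performing the row reduction by hand, which gives the same dimension):

```python
import numpy as np

# Vectors as columns of a matrix; the span's dimension is the matrix rank.
# Here the third column equals the first plus the second, so the three
# vectors only span a plane in R^3.
A = np.column_stack([[1, 0, 2], [0, 1, 1], [1, 1, 3]])
dim_span = np.linalg.matrix_rank(A)
print(dim_span)  # 2
```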

3. What is the significance of the span of a list of vectors in linear algebra?

The span of a list of vectors is important because, for example, the span of the columns of a matrix (its column space) is exactly the set of right-hand sides b for which the system Ax = b has a solution. It also allows us to understand the dimension and basis of a vector space, and to determine if a set of vectors is linearly independent or linearly dependent.

4. Can the span of a list of vectors be infinite?

Yes, the span of a list of vectors can be infinite as a set. Over an infinite field such as the real numbers, the span of any list containing a nonzero vector is infinite, since that vector can be multiplied by infinitely many different scalars. Note, however, that the span of a finite list is always finite-dimensional, even when it contains infinitely many vectors.

5. How does the span of a list of vectors relate to the concept of linear independence?

The span of a list of vectors is directly related to the concept of linear independence: the dimension of the span equals the maximum number of linearly independent vectors in the list. In particular, the list is linearly independent exactly when the dimension of its span equals the length of the list; if the dimension is smaller, the list is linearly dependent.
