Spanning sets and their linear independence

quasar_4
I've become sort of confused on the topic of the linear span versus spanning sets. I know that the span of a subset S of V is the set containing all linear combinations of vectors in S. Is a spanning set then the same thing, or is it something else?

Also, in terms of bases... A basis is a linearly independent spanning set, but I thought a span was a set containing linear combinations... BUT linear combinations generally indicate linear dependence! If that's the case, how is the spanning set linearly independent? I know I'm missing something here, just not sure what! Anyone have a good description that might help? :shy:
 
quasar_4 said:
I've become sort of confused on the topic of the linear span versus spanning sets. I know that the span of a subset S of V is the set containing all linear combinations of vectors in S. Is a spanning set then the same thing, or is it something else?

I haven't heard of a spanning set, so I'll use linear span to denote the set of all finite linear combinations of (a finite number of) vectors belonging to some vector space V. It is evident that this set is a subspace of V.

Also, in terms of bases... A basis is a linearly independent spanning set, but I thought a span was a set containing linear combinations... BUT linear combinations generally indicate linear dependence! If that's the case, how is the spanning set linearly independent?

A basis of V is a set of linearly independent vectors (belonging to V) which spans the whole of V. If you consider the linear span of the basis vectors, it contains all the vectors in V.

Basis: {e1, e2, e3, ..., en} - these vectors are linearly independent and belong to V. Every vector v belonging to V can be expressed as a linear combination of these vectors.
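To make that concrete, here's a quick numerical sketch in Python/numpy (the basis vectors and v are made-up examples, not anything canonical): finding the coefficients of v in a basis amounts to solving a linear system.

```python
import numpy as np

# A made-up basis of R^3 (any three independent vectors would do).
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([1.0, 1.0, 1.0])

# An arbitrary vector v in V = R^3.
v = np.array([2.0, 3.0, 5.0])

# Put the basis vectors in the columns of B; solving B a = v gives the
# unique coefficients a1, a2, a3 with v = a1*e1 + a2*e2 + a3*e3.
B = np.column_stack([e1, e2, e3])
a = np.linalg.solve(B, v)
print(a)  # [-1. -2.  5.]
```

The fact that np.linalg.solve succeeds here (B is invertible) is exactly the statement that these three vectors form a basis: the coefficients exist and are unique.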
 
"span" and "spanning set" are, in a sense, opposites. Given a collection of vectors {v1,v2, ... , vn}, the set of all possible linear combinations of those, {a1v1+ a2v2+ ... + anvn} is its "span".

Conversely, if U is a subspace of V, a collection of vectors that has U as its span is a "spanning set" for U.
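If it helps, here is a small numpy sketch of the distinction, with made-up vectors in R^3: the span of a set of column vectors has dimension equal to the rank of the matrix they form, so rank tells you which subspace the set is a spanning set for.

```python
import numpy as np

# Three made-up vectors in R^3, stacked as the columns of A.
vecs = [np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]),
        np.array([1.0, 1.0, 0.0])]
A = np.column_stack(vecs)

# rank(A) = dimension of the span of the columns. Here it is 2, not 3,
# so these vectors are a spanning set for the plane z = 0 (a subspace U
# of R^3), but not a spanning set for R^3 itself.
print(np.linalg.matrix_rank(A))  # 2
```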

quasar_4 said:
BUT linear combinations generally indicate linear dependence!
?? Where did you get that idea? A set of vectors is independent if and only if the only way a linear combination of them can equal the 0 vector is for all the coefficients in the combination to be 0. From that it is easy to prove that each vector in their span can be written as a linear combination of them in only one way. In R3, for example, the two vectors <1, 0, 0> and <0, 1, 0> are independent. Their span is the set of all vectors of the form a<1, 0, 0> + b<0, 1, 0> = <a, b, 0>, but the vectors themselves are independent.
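In code, that independence check is a rank computation (just a numerical illustration of the definition above, nothing deeper): a set of k vectors is independent exactly when the matrix with those vectors as columns has rank k.

```python
import numpy as np

# The two vectors from the example above.
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
A = np.column_stack([v1, v2])

# Independent iff the only solution of a*v1 + b*v2 = 0 is a = b = 0,
# i.e. iff rank(A) equals the number of vectors.
print(np.linalg.matrix_rank(A) == 2)  # True

# Every vector <a, b, 0> in their span is the combination a*v1 + b*v2,
# but that says nothing against the independence of v1 and v2 themselves.
```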

In a sense the concepts of "spanning" and "independent" are opposites. A set containing a single (nonzero) vector is obviously independent. As you add more vectors it becomes more likely that the set becomes dependent. On the other hand, a set containing all vectors clearly spans the entire space. As you remove vectors it becomes more likely that some vector will no longer be reachable. The crucial fact for finite-dimensional spaces is this: for a set of vectors in an n-dimensional space to be independent, it cannot contain more than n vectors. For a set of vectors to span the space, it cannot contain fewer than n vectors. To be both spanning and independent, it must contain exactly n vectors: every basis of an n-dimensional space contains n vectors.
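Again just as a sketch (random made-up vectors, n = 3): the counting fact at work is that the rank of a matrix can never exceed its number of rows or its number of columns.

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)

four = rng.standard_normal((n, 4))  # 4 vectors in R^3, as columns
two  = rng.standard_normal((n, 2))  # 2 vectors in R^3, as columns

# rank <= min(#rows, #columns), so 4 vectors in R^3 cannot be
# independent, and 2 vectors cannot span R^3.
print(np.linalg.matrix_rank(four) < 4)  # True: the 4 vectors are dependent
print(np.linalg.matrix_rank(two) < 3)   # True: the 2 vectors do not span R^3
```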
 
that makes a lot more sense. I guess I was forgetting that linear combinations can be either dependent or independent, but both are a possibility... so if the combination contains the zero vector or all the vectors are zero, then it must be dependent, and otherwise independent.

and I guess then that a linear span can be of a combination which is dependent or independent. :smile:
 
quasar_4 said:
that makes a lot more sense. I guess I was forgetting that linear combinations can be either dependent or independent, but both are a possibility... so if the combination contains the zero vector or all the vectors are zero, then it must be dependent, and otherwise independent.

and I guess then that a linear span can be of a combination which is dependent or independent. :smile:

I'm not sure that the term 'dependent/independent linear combination' makes sense. A linear combination can consist of dependent or independent vectors, if that's what you meant.

If the combination contains the zero vector, then the vectors are dependent, since the zero vector forms a dependent set together with any other vectors; i.e. a set containing the zero vector is dependent.
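A one-line numerical illustration of this (v1 is a made-up vector):

```python
import numpy as np

# Any set containing the zero vector is dependent: take coefficient 1 on
# the zero vector and 0 on everything else to get a nontrivial
# combination equal to 0.
v1 = np.array([1.0, 2.0, 3.0])
zero = np.zeros(3)
A = np.column_stack([v1, zero])

print(np.linalg.matrix_rank(A))  # 1, not 2: the set {v1, 0} is dependent
```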
 
Agree with radou: it isn't the linear combination that is "independent" or "dependent", it is the set of vectors, and they can be involved in many linear combinations.
 