# Infinite-dimensional vector spaces and their bases!

1. Jul 22, 2010

### Buri

I was working on a problem earlier today and I didn't know the following result:

Let S be a subset of an infinite-dimensional vector space V. Then S is a basis for V if and only if for each nonzero vector v in V, there exist unique vectors u1, u2, ..., un in S and unique nonzero scalars c1, c2, ..., cn such that v = (c1)u1 + (c2)u2 + ... + (cn)un.

I don't "see" how this can be true. For example, let's say I take the vector space of infinite-tuples, so x = (x1, x2, ...). How is it that I can write this as a linear combination of a FINITE number of elements of S (a basis of this vector space)? It just seems that I'd require an infinite number of elements of S to do so. Can anyone help me understand this?
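(To make my worry concrete, here is a little sketch of my own, with made-up names; it just represents sequences by their nonzero entries. Any finite linear combination of the "standard basis" vectors e_i has finite support, so it can never equal a tuple like x = (1, 1, 1, ...) with infinitely many nonzero entries.)

```python
# Sketch (names are my own): finite-support vectors as {index: coeff} dicts.
# A finite linear combination of standard basis vectors e_i always has
# finite support, so it cannot equal a tuple with infinitely many
# nonzero entries such as x = (1, 1, 1, ...).

def e(i):
    """Standard basis vector e_i, stored by its single nonzero entry."""
    return {i: 1.0}

def combine(terms):
    """Finite linear combination sum of c * v over (c, v) pairs."""
    out = {}
    for c, v in terms:
        for i, x in v.items():
            out[i] = out.get(i, 0.0) + c * x
    return {i: x for i, x in out.items() if x != 0.0}

v = combine([(2.0, e(0)), (-1.0, e(3)), (0.5, e(7))])
print(sorted(v))  # support is finite: [0, 3, 7]
# x = (1,1,1,...) has x_n = 1 for EVERY n, so no finite combination of
# the e_i can reach it.
```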

Thanks!

Last edited: Jul 23, 2010
2. Jul 23, 2010

### ninty

That is simply the (generalized) definition of a basis.
If you replace "infinite-dimensional" with "finite-dimensional", you get the usual definition.

In the infinite-dimensional case this kind of basis is called a Hamel (algebraic) basis.

Whether such a basis exists depends on Zorn's lemma/the axiom of choice.
Zorn's lemma allows us to say that yes, such a (Hamel) basis exists.

Even so, writing out an explicit Hamel basis is non-trivial (in general not possible, I think?), much less writing an element as a linear combination of said basis.
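For reference, here is a sketch of the standard existence argument via Zorn's lemma (the usual textbook proof, not anything specific to this thread):

```latex
\begin{itemize}
  \item Let $\mathcal{P}$ be the family of linearly independent subsets of $V$,
        partially ordered by inclusion.
  \item Every chain $\mathcal{C} \subseteq \mathcal{P}$ has the upper bound
        $\bigcup \mathcal{C}$, which is linearly independent: any finite subset
        of the union already lies in a single member of the chain.
  \item Zorn's lemma then yields a maximal element $B \in \mathcal{P}$. If some
        $v \in V \setminus \operatorname{span} B$ existed, $B \cup \{v\}$ would
        be linearly independent, contradicting maximality; hence
        $\operatorname{span} B = V$, and $B$ is a Hamel basis.
\end{itemize}
```

Note the proof gives no recipe for $B$; maximality is asserted, not constructed, which is why explicit bases are so hard to exhibit.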

3. Jul 23, 2010

### yossell

Interesting question. Now that you mention it, I'm a bit puzzled too. I have some worries about the following answer, and my knowledge of this stuff is rusty, but maybe it's the right idea:

Notice that there's nothing that says that S must be finite, and infinite-dimensional spaces do have infinite bases. But I think it's just part of the definition of a basis that every element of the vector space be a sum of finitely many elements of the basis. E.g.:

http://en.wikipedia.org/wiki/Vector_space#Bases_and_dimension

If this is right, the property you're talking about would be a trivial consequence.

I take it your worry is: consider the vector space of infinite-tuples (n m o ...). Supposing that every entry is nonzero, such a tuple cannot be expressed as a finite sum of the vectors (1 0 ...), (0 1 ...), (0 0 1 ...).

This is true - but then this just means that, in this case, unlike the finite-dimensional case, this set of vectors DOES NOT form a basis for the relevant vector space, as it is not true that every element can be represented as a finite sum of them.

As I say - I'm not sure this is right.

4. Jul 23, 2010

### Buri

Maybe I'm not getting across like I wish I were. I KNOW that such a basis exists, by the maximal principle (which is equivalent to the axiom of choice), but I don't see intuitively how it can be true. So for the example I give, I don't see how a vector in the vector space of infinite-tuples can be written as a linear combination of a FINITE number of vectors of the basis of the vector space (I have no idea what this basis looks like).

Would you know if there is an infinite-dimensional vector space whose basis we know explicitly?

5. Jul 23, 2010

### Buri

Yes, initially I was considering the "standard basis" as the basis for the vector space of infinite-tuples, but as you have pointed out, it turns out it isn't a basis. But nonetheless, it's still confusing how I could write an infinite-tuple as a linear combination of a FINITE number of elements in the basis (whatever this basis may be). See what I mean?

6. Jul 23, 2010

### George Jones

Staff Emeritus
The standard basis is an explicit (Hamel) basis of an infinite-dimensional vector space: the vector space of infinite-tuples that have only a finite number of non-zero entries (with all other entries zero). For example, (0, -2, 0, 3, 0, 0, 0, ...) and (0, 0, 0, 7, 4, 5, 0, 0, 0, ...) are elements of this vector space.
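(A quick sketch of my own, with made-up names, of why finiteness is automatic here: for a finite-support tuple, the decomposition in the standard basis just reads off its finitely many nonzero coordinates.)

```python
# Sketch: in the space of tuples with only finitely many nonzero entries,
# the standard basis {e_i} really is a Hamel basis - decomposing a vector
# amounts to listing its finitely many nonzero coordinates.

def decompose(v):
    """Return the finite list of (coefficient, basis index) pairs for v,
    where v is stored as an {index: value} dict of its nonzero entries."""
    return sorted((c, i) for i, c in v.items())

v = {1: -2.0, 3: 3.0}  # the tuple (0, -2, 0, 3, 0, 0, ...)
print(decompose(v))    # [(-2.0, 1), (3.0, 3)] -- a FINITE combination
```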
Since the axiom of choice/Zorn's lemma comes into play in the proof that bases exist, I think it is too much to expect that a basis can be found explicitly, even when the definition of the vector space is explicit.
I seem to remember an exercise in one of my books that goes something like: try (and fail) to find a basis for $\mathbf{R}^\infty$. I could be wrong, though, as I have tried (and failed) to find the book with this exercise.

7. Jul 23, 2010

### Buri

I've been considering the vector space of ALL infinite-tuples (i.e., not necessarily only those that have a finite number of nonzero entries). In the vector space you've defined, with a finite number of nonzero entries, I do see that the "standard basis" is a basis for that infinite-dimensional vector space.

In Analysis on Manifolds, Munkres asks the reader to find a basis for $\mathbf{R}^\infty$. In his text, the vector space I'm talking about is referred to as $\mathbf{R}^\omega$, and the one you're talking about is $\mathbf{R}^\infty$. Just in case you are defining it differently.

8. Jul 23, 2010

### Buri

Just another thing, he also says:

There is a theorem to the effect that every vector space has a basis. The proof is non-constructive. No one has ever exhibited a basis for $\mathbf{R}^\omega$.

So maybe it could be done...

9. Jul 23, 2010

### George Jones

Staff Emeritus
Yes, I know that.

Yes, a book at which I looked defined $\mathbf{R}^\infty$ to be what Munkres defines as $\mathbf{R}^\omega$. Thinking about it, I suspect $\mathbf{R}^\omega$ is more standard. Using Munkres' notation, I meant to write:

I seem to remember an exercise in one of my books that goes something like: try (and fail) to find a basis for $\mathbf{R}^\omega$. I could be wrong, though, as I have tried (and failed) to find the book with this exercise.

So, if you manage to exhibit an explicit basis, you'll be famous!

10. Jul 23, 2010

### Buri

I wish! lol

See, my problem is not whether such a basis exists (I know they do) or even whether I can find one explicitly. I just can't see how, if I had one, it could generate an infinite-dimensional vector space. I know you've provided $\mathbf{R}^\infty$ as an example, but $\mathbf{R}^\infty$ has this "finiteness" built into it (hopefully you get what I'm trying to say), unlike $\mathbf{R}^\omega$.

Last edited: Jul 23, 2010
11. Jul 23, 2010

### yossell

Remember again that S itself may be infinite. It may just *contain* the vector
(a b c ...)
in which case you need only a trivial combination of 1 vector from S to make the vector you have in mind.

12. Jul 23, 2010

### Buri

Doesn't S HAVE to be infinite? Otherwise, V would be finite-dimensional, wouldn't it? But what if S doesn't contain the vector? Then it isn't so obvious...

13. Jul 23, 2010

### yossell

Yes, it has to be infinite - so there are infinitely many vectors in S to form finite linear combinations thereof. There's nothing that restricts the basis to just
(1 0 0 0 ...)
(0 1 0 0 ...)
(0 0 1 0 ...), etc.

What if S doesn't contain the vector? Then, if it really is a basis, it will contain enough other vectors from which your desired vector is a finite linear combination.

The point is, we have no very interesting, nice presentation of the basis S. I'm just trying to show you why, given how big and complex S might be, it shouldn't be too much of a surprise that every vector of an infinite-dimensional space can be represented as a finite combination of vectors in S.

14. Jul 23, 2010

### Buri

I'm starting to get what you're saying now. So I guess the basis will look something like S = { (1, 0, 1/2, 0, 0, 5, 0, ...), (0, 0, 3, 1, 0, 1/2, ...), ... }, that is, vectors with more than just one nonzero entry (finitely or infinitely many), so that each entry in x will be "taken care of". If S looks something like this, it makes a bit more sense now.

Last edited: Jul 23, 2010
15. Jul 23, 2010

### Buri

How is it that finitely many will be enough, and not infinitely many? This is what I can't fully get my mind around, and this has been my problem all along. I know that finitely many will be enough, but intuitively I don't fully understand why. I guess my above post will probably make more sense after this one. Being more explicit: how can x (having an infinite number of nonzero entries) be written as a sum of a finite number of vectors of S? I guess, though, my above post explains that. If x = (a1)s1 + (a2)s2 + ... + (an)sn, then s1, s2, ..., sn must also have an infinite number of nonzero entries; otherwise, I'd need an infinite number of vectors of S.

Sorry I'm thinking out loud, but I think I get it now.

Last edited: Jul 23, 2010
16. Jul 23, 2010

### Bacle

I think that linear algebra deals with finite-dimensional vector spaces, and infinite-dimensional ones are dealt with in analysis; part of the reason is that there is no notion of an infinite linear combination. Instead, you need to talk about convergence, for which you need to have a topology, which, AFAIK, comes from a metric, which itself derives (to avoid the dreaded term "induced") from a norm, in a normed vector space.

17. Jul 23, 2010

### Fredrik

Staff Emeritus
It's not hard to show (see the quote below) that if S is a subset of a vector space V, and we define the "subspace generated by S" as the smallest subspace that contains S, then this subspace is equal to the set of all (finite) linear combinations of members of S. Let's call this subspace span S. A Hamel basis for V is by definition a linearly independent subset S of V such that span S = V.

So the result that "if S is a Hamel basis for V, any vector x in V can be expressed as a (finite) linear combination of members of S" follows immediately from the definition of Hamel basis and the easy-to-prove theorem I mentioned.

(You should be able to verify for yourself that the intersection of all subspaces that contain S is the smallest subspace that contains S.)

18. Jul 23, 2010

### Buri

I think I'm being misunderstood.

I know such a basis exists, and that I'll be able to take a finite number of vectors in this basis (not always the same ones) to express any x in V as a linear combination of them. So I'll try to explain my confusion better. Initially, I had thought that the "standard basis" (i.e., B = {(1,0,0,...), (0,1,0,...), and so on}) would be a basis of the vector space I'm considering. But it isn't, because I'd require an infinite number of vectors of B in the linear combination for x (where x has an infinite number of nonzero entries). So the reason I find this all confusing is: how is it that there is a basis S (I do know it exists, however!) such that a finite number of vectors can be used for the linear combination of x, which has an infinite number of nonzero entries? But I suppose that if x = (c1)u1 + (c2)u2 + ... + (cn)un, then the ui themselves must also have an infinite number of nonzero entries to "take care of" each entry in x; otherwise, it seems like I'd need an infinite number of u's from S.

See what I mean?

Last edited: Jul 23, 2010
19. Jul 23, 2010

### yossell

You know that S - the set of basis vectors that you can draw from - is infinite. We know so little about this basis, so why is it hard to believe that, given an infinite set, any vector can be expressed as a finite sum of the vectors in S? For instance, trivially, you know that there's a set T of vectors such that every vector v in the space can be expressed with just ONE term - namely, let T just be the set of all vectors in the space. What is it about the fact that we can prune T down to an infinite basis S that now makes it problematic that any vector can be expressed as a finite sum of vectors in S?

If x has an infinite number of nonzero entries, then - yes - some of the ui must have an infinite number of nonzero entries too. Is this a problem?

20. Jul 23, 2010

### Buri

I believe I get it now, but I'll try explaining again what I didn't understand before.

Let x = (1,1,1,1,1,1,1,1,...) with all entries 1. If B is the "standard basis" then I'd have:

x = (1,0,0,...) + (0,1,0,0,...) + ... and so on.

Here I NEED an infinite number of vectors to represent x.

However, B isn't the right basis, as you mentioned earlier. BUT I know there exists an S (infinite) such that I can take a finite number of vectors from S such that

x = (1,1,1,1,1,1,1,1,...) = (a1)s1 + ... + (an)sn

See, with B each vector b1, b2, b3, ... took care of exactly ONE entry of x. However, with S, each (or at least some) of s1, s2, ..., sn will have to take care of an infinite number of entries of x in the linear combination. I hadn't realized this. I just had the "standard basis" stuck in my head too much, so I wasn't considering a basis whose elements have more than just one nonzero entry. IF they did only have one nonzero entry each, then I would need an infinite number of them for the linear combination (just like with B above). However, this isn't the case. So it seems it's all clear now.