Show that I have a basis, linear algebra

In summary, the problem is to prove that if every element of an inner product space X can be written as a linear combination of an orthonormal set {e_i}_{i in N}, then {e_i}_{i in N} is a basis for X. Since existence of the representation is given, what remains is uniqueness, which amounts to showing that no nontrivial infinite linear combination of the e_i can converge to zero. This follows once the squared norm of such a combination is shown to equal the sum of the squares of its coefficients, which can be done by taking limits of finite sums and using the continuity of the norm.
  • #1
MaxManus

Homework Statement



(X, <,>) is an inner product space over R
{ei}i in N is an orthonormal set in X

Show that if every element u in X can be written as a linear combination
u = [tex]\sum_{i=1}^\infty a_i e_i [/tex] then {ei}i in N is a basis for X

Homework Equations


Let {e_i} be a sequence of elements in a normed vector
space V. We say that {e_i} is a basis for V if for each x in V there is a
unique sequence {a_n} from K such that:
x = [tex] \sum_{n=1}^{\infty} a_n e_n [/tex]
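(A standard example, for intuition: in [itex]\ell^2[/itex] the unit coordinate sequences [itex]e_n = (0, \dots, 0, 1, 0, \dots)[/itex] form exactly such a basis.)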



The Attempt at a Solution


I'm not sure what I have to prove. Isn't the definition of a basis already given in the question?
 
  • #2
It is true that you are given that there is one such sequence; what is left to prove is that it is unique. Proofs like this in linear algebra are usually quite short, but remember to read the conditions and what you are given thoroughly, since it is easy, as in this case, to mistakenly take some details as given when in fact they are not.
 
  • #3
Thanks. Can I say that it is unique because it is an orthonormal set and therefore linearly independent? And then the proof is complete?
 
  • #4
MaxManus said:
Thanks. Can I say that it is unique because it is an orthonormal set and therefore linearly independent? And then the proof is complete?
I would say that you need more than that. Is the definition of an orthonormal set that all linear combinations are unique? If not, you need to show that link as well; as far as I know, the definition of an orthonormal set is not that all linear combinations are unique.
 
  • #5
But am I on the right track?

If
[tex] a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0 [/tex]
Taking the squared norm:
[tex] \| a_1 e_1 + a_2 e_2 + \dots + a_n e_n \|^2 = 0 [/tex]
The e_i are orthonormal, so the squared norm reduces to the sum of the squared coefficients:
[tex] a_1^2 + \dots + a_n^2 = 0 [/tex]
which means that all the a_i are zero, so the {e_i} are linearly independent.
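Spelling out the orthonormality step, expanding the squared norm by bilinearity:
[tex] \Big\| \sum_{i=1}^{n} a_i e_i \Big\|^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j \langle e_i, e_j \rangle = \sum_{i=1}^{n} a_i^2, [/tex]
since [itex]\langle e_i, e_j \rangle = 0[/itex] for [itex]i \neq j[/itex] and [itex]\langle e_i, e_i \rangle = 1[/itex].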
 
  • #6
MaxManus said:
But am I on the right track?

If
[tex] a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0 [/tex]
Taking the squared norm:
[tex] \| a_1 e_1 + a_2 e_2 + \dots + a_n e_n \|^2 = 0 [/tex]
The e_i are orthonormal, so the squared norm reduces to the sum of the squared coefficients:
[tex] a_1^2 + \dots + a_n^2 = 0 [/tex]
which means that all the a_i are zero, so the {e_i} are linearly independent.
You are using way too many secondary results; as I said, use the definitions. You can prove it like that, yes, but it is ugly and questionable, since you don't mention why [tex] a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0 [/tex] implies [tex] \| a_1 e_1 + a_2 e_2 + \dots + a_n e_n \|^2 = 0 [/tex]; personally I wouldn't give points for that. The definition of an orthonormal set is not that if a linear combination is zero then its norm is also zero; look up the definition if you have to instead of guessing like this.
 
  • #7
You're also proving the wrong thing. In order to show that the representation is unique, you need a little more than linear independence, since L.I. only says that nontrivial finite linear combinations cannot be zero, and nothing about infinite sums.

I think you're on the right track though, since the result you're being asked to prove is equivalent to saying that no nontrivial infinite linear combination converges to zero (note that you have to prove this, not just assume it). And if you can show that the squared norm of [tex]\sum_{i=1}^{\infty}a_ie_i[/tex] is [tex]\sum_{i=1}^{\infty} a_i^2[/tex], then you're done, by the approach you've suggested. (Hint: take limits of finite sums, and use the continuity of the norm).
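(To see why the norm is continuous, use the reverse triangle inequality:
[tex] \big|\, \|x\| - \|y\| \,\big| \leq \|x - y\|, [/tex]
so if the partial sums converge to a vector u, their norms converge to [itex]\|u\|[/itex].)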

Klockan3 said:
You are using way too many secondary results; as I said, use the definitions. You can prove it like that, yes, but it is ugly and questionable, since you don't mention why [tex] a_1 e_1 + a_2 e_2 + \dots + a_n e_n = 0 [/tex] implies [tex] \| a_1 e_1 + a_2 e_2 + \dots + a_n e_n \|^2 = 0 [/tex]; personally I wouldn't give points for that. The definition of an orthonormal set is not that if a linear combination is zero then its norm is also zero; look up the definition if you have to instead of guessing like this.

He's not invoking the definition of an orthonormal set; he's using something much more basic, namely the fact that the norm of the zero vector is zero. The orthonormality is used in the next step, when he asserts that the squared norm of [itex]a_1e_1+\cdots +a_ne_n[/itex] is [itex]a_1^2+\cdots +a_n^2[/itex], which is the Pythagorean theorem for inner product spaces. Now, this is a derived result, but frankly I would use it on a problem like this, since the problem is precisely to show that nontrivial infinite linear combinations of orthonormal vectors can't add up to zero, and the fastest way to do that is to go through the norm.
 
  • #8
Citan Uzuki said:
He's not invoking the definition of an orthonormal set; he's using something much more basic, namely the fact that the norm of the zero vector is zero. The orthonormality is used in the next step, when he asserts that the squared norm of [itex]a_1e_1+\cdots +a_ne_n[/itex] is [itex]a_1^2+\cdots +a_n^2[/itex], which is the Pythagorean theorem for inner product spaces. Now, this is a derived result, but frankly I would use it on a problem like this, since the problem is precisely to show that nontrivial infinite linear combinations of orthonormal vectors can't add up to zero, and the fastest way to do that is to go through the norm.
Oh, I missed that this was an infinite-dimensional space! I just assumed it was finite-dimensional since the topic title says linear algebra; once you get to infinite-dimensional spaces you run into a lot of problems about convergence and the like. Yes, then some things have to be done differently, and there is no problem with using more advanced results. If this were about the finite case, I hope you can agree that it would be strange to use a result like that in the proof.
 
  • #9
Citan Uzuki said:
You're also proving the wrong thing. In order to show that the representation is unique, you need a little more than linear independence, since L.I. only says that nontrivial finite linear combinations cannot be zero, and nothing about infinite sums.

I think you're on the right track though, since the result you're being asked to prove is equivalent to saying that no nontrivial infinite linear combination converges to zero (note that you have to prove this, not just assume it). And if you can show that the squared norm of [tex]\sum_{i=1}^{\infty}a_ie_i[/tex] is [tex]\sum_{i=1}^{\infty} a_i^2[/tex], then you're done, by the approach you've suggested. (Hint: take limits of finite sums, and use the continuity of the norm).

Thanks to both.
First part: "no nontrivial infinite linear combination converges to zero."
Here I'm not sure where to start.
Second part:
[itex] \langle e_i, e_i \rangle = 1 [/itex] and [itex] \langle e_i, e_j \rangle = 0 [/itex] for [itex] i \neq j [/itex], so
[tex] \left\langle \sum_{i=1}^{\infty}a_i e_i,\, \sum_{i=1}^{\infty}a_i e_i \right\rangle = \lim_{N \to \infty} \left\langle \sum_{i=1}^{N}a_i e_i,\, \sum_{i=1}^{N}a_i e_i \right\rangle = \lim_{N \to \infty} \sum_{i=1}^{N} a_i^2 = \sum_{i=1}^{\infty}a_i^2 [/tex]
Right?
 
  • #10
That's right for the second part. For the first part, you just note that if [tex]\sum_{i=1}^{\infty} a_ie_i = 0[/tex], then [tex]\sum_{i=1}^{\infty} a_i^2=0[/tex], and so each [itex]a_i[/itex] is zero. Now show that that indeed implies the representation is unique.
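Spelled out, that step is just the second part applied to the zero vector:
[tex] 0 = \|0\|^2 = \Big\| \sum_{i=1}^{\infty} a_i e_i \Big\|^2 = \sum_{i=1}^{\infty} a_i^2, [/tex]
and a sum of nonnegative terms is zero only if every term is zero.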
 
  • #11
Thanks, but I'm not sure how to show that this implies the representation is unique. I'm not supposed to use linear independence, but what can I use?
 
  • #12
MaxManus said:
Thanks, but I'm not sure how to show that this implies the representation is unique. I'm not supposed to use linear independence, but what can I use?

Well, you've now proven the infinite-dimensional analog of linear independence, so you can derive uniqueness of the representation from it by the same method you would use in the finite case to derive uniqueness with respect to a finite basis from linear independence. If you still need a hint, see below:

If you have two distinct representations of the same vector, then the difference between them is a nontrivial representation of zero
 
  • #13
Had to use the hint:-)

If u can also be represented by another infinite linear combination of the e_i, call it v, then u - v = 0.
[tex] u-v = \sum_{i=1}^{\infty} a_i e_i - \sum_{i=1}^{\infty} b_i e_i = (a_1 e_1 + a_2 e_2 + \dots) - (b_1 e_1 + b_2 e_2 + \dots) = (a_1 - b_1)e_1 + (a_2 - b_2)e_2 + \dots + (a_n - b_n)e_n + \dots = \sum_{i=1}^{\infty} (a_i - b_i)e_i [/tex]

[tex] \sum_{i=1}^{\infty} (a_i - b_i)e_i = 0 [/tex]
Taking the norm on both sides and using the second part:

[tex] \sqrt{ \sum_{i=1}^{\infty}(a_i-b_i)^2} = 0 [/tex]

A sum of squares is zero only if every term is zero, so a_i = b_i for all i, and the representation is unique. Is this the whole proof (or the proof at all)?
 
  • #14
Yeah, that's it :smile:
 
  • #15
Super, thanks for all the help.
 

1. What is a basis in linear algebra?

A basis in linear algebra is a set of vectors that can be used to span the entire vector space. This means that any vector in the vector space can be written as a linear combination of the basis vectors. A basis is also linearly independent, meaning that none of the basis vectors can be written as a linear combination of the other basis vectors.
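For example, the standard vectors (1, 0, 0), (0, 1, 0), (0, 0, 1) form a basis of R^3: every vector (x, y, z) is the combination x(1,0,0) + y(0,1,0) + z(0,0,1), and none of the three is a combination of the other two.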

2. How do you show that a set of vectors forms a basis?

To show that a set of vectors forms a basis, you must prove two things: linear independence and spanning the vector space. To prove linear independence, you must show that none of the vectors can be written as a linear combination of the others. To prove spanning, you must show that any vector in the vector space can be written as a linear combination of the basis vectors.
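As a concrete sketch for the finite-dimensional case (the matrix below is made-up example data, and this assumes the candidate vectors are stored as the columns of a NumPy array), both conditions reduce to a single rank computation:

[code]
import numpy as np

# Candidate basis vectors for R^3, stored as columns (hypothetical example data).
V = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

n = V.shape[0]                    # dimension of the ambient space
rank = np.linalg.matrix_rank(V)   # number of linearly independent columns

# The columns are linearly independent iff the rank equals the number of
# vectors, and they span R^n iff the rank equals n.
print("linearly independent:", rank == V.shape[1])
print("spans R^%d:" % n, rank == n)
[/code]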

3. What is the importance of having a basis in linear algebra?

A basis is important because it allows us to represent any vector in a vector space using a linear combination of a smaller set of vectors. This simplifies computations and makes it easier to understand the properties of a vector space. Additionally, a basis is necessary for solving systems of linear equations and finding the inverse of a matrix.

4. Can a vector space have more than one basis?

Yes, a vector space can have multiple bases. This is because there can be more than one set of linearly independent vectors that can span the same vector space. However, all bases for a given vector space will have the same number of vectors, called the dimension of the vector space.
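For example, both {(1, 0), (0, 1)} and {(1, 1), (1, -1)} are bases of R^2: two different sets, each linearly independent and spanning, and both of size two.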

5. How can I find the basis for a given vector space?

To find a basis for the span of a given set of vectors, arrange them as the rows (or columns) of a matrix and row-reduce it with Gaussian elimination: the nonzero rows of the reduced matrix (or the original columns in the pivot positions) form a basis. Equivalently, you can discard, one at a time, any vector that is a linear combination of the others; the remaining vectors still span the space and are linearly independent, so they form a basis.
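As a sketch of the Gaussian-elimination route (the matrix below is made-up example data), sympy's rref() reports the pivot columns, and the original vectors in those positions form a basis of the span:

[code]
import sympy as sp

# Spanning vectors stored as columns; the second is twice the first,
# so only two of the three are independent (hypothetical example data).
M = sp.Matrix([[1, 2, 0],
               [0, 0, 1],
               [1, 2, 1]])

# rref() returns the reduced row echelon form and the pivot column indices.
_, pivots = M.rref()

# The original columns in the pivot positions form a basis of the column space.
basis = [M.col(j) for j in pivots]
print(basis)   # [Matrix([[1], [0], [1]]), Matrix([[0], [1], [1]])]
[/code]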
