Completeness of a basis function

  • #1

Main Question or Discussion Point

Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other function in that space. Is this correct?

I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!
 

Answers and Replies

  • #2
Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other basis function in that space. Is this correct?
No.
At least, your question isn't specific enough to be answered clearly. Completeness usually relates to a certain set, and there is more than one way to complete such a set. E.g. ##\mathbb{R}## is the topological completion of ##\mathbb{Q}##, whereas ##\mathbb{C}## is the algebraic completion (better: closure) of ##\mathbb{R}##.
I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) ...
Taylor series are series, not polynomials.
... but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!
What you wrote sounds like the completion of a few linearly independent functions to a basis of a given space; or the change to an orthonormal basis; or the description of the solution space of some equations. This usage of completion is misleading, at least if it is not explicitly stated what has to be completed to achieve what.
 
  • #3
Thanks for the response.

What you wrote sounds like the completion of a few linearly independent functions to a basis of a given space; or the change to an orthonormal basis; or the description of the solution space of some equations. This usage of completion is misleading, at least if it is not explicitly stated what has to be completed to achieve what.
Hmmmm, so how would you interpret the statement "complete basis functions"? The application here is with a functional, and evidently the input functions must be complete (nothing further is given; I guess in physics/engineering it's assumed the reader knows the space and norm)?
 
  • #4
so how would you interpret the statement "complete basis functions"?
This depends on the context. In (this) topology forum, I would assume topological completeness, which is the standard usage of the term. It means to expand a given metric space by all possible limits of its Cauchy sequences. Functions of various types form topological (metric) spaces: smooth functions, square-integrable functions, or just continuous functions. So my first question would be: Which space? Which metric? As you see from my examples, there is no unique answer to this, even for engineers. Smooth and continuous are both important and very different. In quantum physics, I would assume the square-integrable Lebesgue space.

However, you mentioned a single function to be completed, which doesn't fit here. You also mentioned Legendre polynomials, which point to some differential equation, in which case the solution space might be meant and completion would indicate a basis of said space. "The input functions must be complete" doesn't make sense to me, except as "the [set of all] input functions must be complete [in order to ...]" where the terms in brackets are clear from context. In any case, I would ask for the area the formulation is taken from. Completion has an inherent meaning of adding something new to something given, so two questions have to be answered beforehand: what is given, and how is new defined. In the usual sense, the given is some metric space and the new are the limits of Cauchy sequences. Since this appears not to be the case here, more information (context) is definitely needed.
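
As a minimal illustration of completion via Cauchy sequences (just a toy example, not tied to the problem in this thread): Newton's iteration for ##\sqrt{2}## stays inside ##\mathbb{Q}## at every step, so it produces a Cauchy sequence of rationals whose limit is not rational, exactly the kind of limit the completion ##\mathbb{R}## supplies.
[CODE=python]
# Toy example: a Cauchy sequence of rationals whose limit, sqrt(2), is irrational,
# i.e. Q is not complete and R is its (topological) completion.
from fractions import Fraction

x = Fraction(1)                        # every iterate stays in Q
for k in range(6):
    x = (x + Fraction(2) / x) / 2      # Newton's method for x^2 = 2
    print(k, x, float(x * x - 2))      # the residual tends to 0, but x^2 never equals 2
[/CODE]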
 
  • #5
Thanks for the response, and sorry it's taken me so long to reply. I'll do my best to give you precise insight on what I'm doing.

So my first question would be: Which space? Which metric?
I am trying to solve an ODE that looks like this $$L(\phi_n(s)) = \lambda \phi(s):s\in[0,1]$$ where ##L(\phi_n)\equiv \phi_n''(s)+\phi_n(s)## and the subscript ##n## denotes a normal derivative to a surface (rather than go into details here, let's just think of that subscript as being one more derivative with some extra complications). I can't solve the ODE exactly, but I can solve the weak formulation, which looks like this $$(L(\phi_n),\phi_n) = \lambda(\phi,\phi_n) : (f,g) = \int_0^1 fg\, ds.$$ I will solve the weak form through an eigenfunction expansion, so I'll let ##\phi = f_i## for some predetermined ##f_i##, so we could think of ##f_i = \sin(i \pi x)## or perhaps ##x^i(1-x)##. Then we see the weak formulation is now an algebraic eigenvalue problem with matrices.
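
For concreteness, here is a minimal sketch of this Galerkin step with a simplified stand-in operator ##L(f) = f'' + f## and hypothetical trial functions ##f_i = x^i(1-x)## (the normal-derivative subscript and the actual geometry are omitted, so this only illustrates the assembly, not my actual formulation):
[CODE=python]
# Sketch only: assemble the weak form with a simplified model operator and
# hypothetical trial functions, then solve the resulting matrix eigenvalue problem.
import numpy as np
import sympy as sp

x = sp.symbols('x')
trials = [x**i * (1 - x) for i in (1, 2, 3)]    # hypothetical trial functions with f(0) = f(1) = 0
L = lambda f: sp.diff(f, x, 2) + f              # stand-in for the actual operator

n = len(trials)
A, B = sp.zeros(n, n), sp.zeros(n, n)
for i, fi in enumerate(trials):
    for j, fj in enumerate(trials):
        A[i, j] = sp.integrate(L(fi) * fj, (x, 0, 1))   # (L f_i, f_j)
        B[i, j] = sp.integrate(fi * fj, (x, 0, 1))      # (f_i, f_j)

# generalized algebraic eigenvalue problem  A c = lambda B c
A_num = np.array(A.tolist(), dtype=float)
B_num = np.array(B.tolist(), dtype=float)
lam = np.linalg.eigvals(np.linalg.solve(B_num, A_num))
print(np.sort(lam.real))                         # Galerkin approximations of the eigenvalues
[/CODE]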

I didn't mention BCs, and this is where my question of completeness enters. With some BCs it's obvious how to formulate the trial functions, such as ##\phi(0)=\phi(1) = 0##; in this case we can let ##f_i = x(1-x)^i##. However, in general it's not so simple to build the BCs into the function space, so I have a technique where, basically, I superimpose combinations of my trial functions to automatically satisfy the BCs.

For example, if I'm trying to solve the BCs ##\phi(0)=\phi(1)=0## and I'm using 3 trial functions ##f_i = x^i##, then I need to take linear combinations of ##\{x,x^2,x^3\}## to satisfy the BCs, perhaps ##\{x-x^2, x-x^3\}##. In this scenario, these two new functions will be my trial functions, specifically ##f_1 = x-x^2## and ##f_2 = x-x^3##.
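
One possible way to automate this recombination (a sketch of the idea, not my actual CAS routine) is to write each BC as a linear constraint on the coefficients of the raw trial functions and take a null-space basis of the constraint matrix:
[CODE=python]
# Sketch: impose the BCs phi(0) = phi(1) = 0 as linear constraints on {x, x^2, x^3}
# and let the null space of the constraint matrix give the recombined trial functions.
import sympy as sp

x = sp.symbols('x')
raw = [x, x**2, x**3]                                   # raw trial functions

C = sp.Matrix([[g.subs(x, 0) for g in raw],             # row for phi(0) = 0
               [g.subs(x, 1) for g in raw]])            # row for phi(1) = 0

new_trials = [sum(c * g for c, g in zip(vec, raw)) for vec in C.nullspace()]
print(new_trials)   # e.g. [x**2 - x, x**3 - x], spanning the same space as {x - x^2, x - x^3}
[/CODE]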

All of this for my final question: looking at the weak formulation, it seems what I am looking for is assurance that any ##L^2(0,1)## function can be expanded as a linear combination of a chosen set of linearly independent trial functions.

I should specify that I am using a computer algebra package to determine the ways I should superimpose my selected trial functions. On problems where I do not have to recombine the trial functions, I get good results. However, sometimes when I recombine basis functions everything goes wrong. Do you think this is at all related to the completeness of the trial functions, once recombined?
 
  • #6
Svein
  • #7
WWGD
This depends on the context. In (this) topology forum, I would assume topological completeness, which is the standard usage of the term. It means to expand a given metric space by all possible limits of its Cauchy sequences. Functions of various types form topological (metric) spaces: smooth functions, square-integrable functions, or just continuous functions. So my first question would be: Which space? Which metric? As you see from my examples, there is no unique answer to this, even for engineers. Smooth and continuous are both important and very different. In quantum physics, I would assume the square-integrable Lebesgue space.

However, you mentioned a single function to be completed, which doesn't fit here. You also mentioned Legendre polynomials, which point to some differential equation, in which case the solution space might be meant and completion would indicate a basis of said space. "The input functions must be complete" doesn't make sense to me, except as "the [set of all] input functions must be complete [in order to ...]" where the terms in brackets are clear from context. In any case, I would ask for the area the formulation is taken from. Completion has an inherent meaning of adding something new to something given, so two questions have to be answered beforehand: what is given, and how is new defined. In the usual sense, the given is some metric space and the new are the limits of Cauchy sequences. Since this appears not to be the case here, more information (context) is definitely needed.
I believe some of these function spaces are not metrizable.
 
  • #8
WWGD
Thanks for the response, and sorry it's taken me so long to reply. I'll do my best to give you precise insight on what I'm doing.


I am trying to solve an ODE that looks like this $$L(\phi_n(s)) = \lambda \phi(s):s\in[0,1]$$ where ##L(\phi_n)\equiv \phi_n''(s)+\phi_n(s)## and the subscript ##n## denotes a normal derivative to a surface (rather than go into details here, let's just think of that subscript as being one more derivative with some extra complications). I can't solve the ODE exactly, but I can solve the weak formulation, which looks like this $$(L(\phi_n),\phi_n) = \lambda(\phi,\phi_n) : (f,g) = \int_0^1 fg\, ds.$$ I will solve the weak form through an eigenfunction expansion, so I'll let ##\phi = f_i## for some predetermined ##f_i##, so we could think of ##f_i = \sin(i \pi x)## or perhaps ##x^i(1-x)##. Then we see the weak formulation is now an algebraic eigenvalue problem with matrices.

I didn't mention BCs, and this is where my question of completeness enters. With some BCs it's obvious how to formulate the trial functions, such as ##\phi(0)=\phi(1) = 0##; in this case we can let ##f_i = x(1-x)^i##. However, in general it's not so simple to build the BCs into the function space, so I have a technique where, basically, I superimpose combinations of my trial functions to automatically satisfy the BCs.

For example, if I'm trying to solve the BCs ##\phi(0)=\phi(1)=0## and I'm using 3 trial functions ##f_i = x^i##, then I need to take linear combinations of ##\{x,x^2,x^3\}## to satisfy the BCs, perhaps ##\{x-x^2, x-x^3\}##. In this scenario, these two new functions will be my trial functions, specifically ##f_1 = x-x^2## and ##f_2 = x-x^3##.

All of this for my final question: looking at the weak formulation, it seems what I am looking for is assurance that any ##L^2(0,1)## function can be expanded as a linear combination of a chosen set of linearly independent trial functions.

I should specify that I am using a computer algebra package to determine the ways I should superimpose my selected trial functions. On problems where I do not have to recombine the trial functions, I get good results. However, sometimes when I recombine basis functions everything goes wrong. Do you think this is at all related to the completeness of the trial functions, once recombined?
The space of polynomials, as you said, is not complete because, e.g., the truncated Taylor series of ##e^x## converge to the (non-polynomial) ##e^x##. So you have a sequence of polynomials that converges to a non-polynomial. It is a nice exercise to show that ##e^x## is not a polynomial. Edit: Sorry, this is supposed to address your first post and not the last one.
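
A quick numerical illustration of this point: the truncated Taylor polynomials get uniformly close to ##e^x## on ##[0,1]##, so this sequence of polynomials converges to something that is not a polynomial.
[CODE=python]
# Sup-norm distance between the degree-n Taylor polynomial of e^x and e^x on [0, 1].
import numpy as np
from math import factorial

xs = np.linspace(0.0, 1.0, 1001)
target = np.exp(xs)

for degree in (2, 4, 8, 12):
    taylor = sum(xs**k / factorial(k) for k in range(degree + 1))
    print(degree, np.max(np.abs(taylor - target)))   # the sup-norm error shrinks rapidly
[/CODE]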
 
  • #9
pasmith
Hi PF!

I'm somewhat new to the concept of completeness, but from what I understand, a particular basis function is complete in a given space if it can create any other function in that space. Is this correct?

I read that the set of polynomials is not complete (unsure of the space, since Taylor series can represent all continuous functions) but that Legendre polynomials are complete. Can anyone correct, finesse, or provide a working example of this?

Thanks!
The Legendre Polynomials, being polynomials, obviously are not a complete basis.

What is true is that in spherical polar coordinates, eigenfunctions of the laplacian operator depend on the colatitude [itex]\theta \in [0, \pi][/itex] as [itex]P(\cos\theta)[/itex] where [itex]P[/itex] is a polynomial. These polynomials are known as the Legendre polynomials.

Now since we don't care what happens for [itex]\theta \in (\pi, 2\pi)[/itex] we can assume that any function we are interested in is both even and periodic with period [itex]2\pi[/itex]. That means that it can be expanded as a cosine series, so the functions [itex]\cos(n\theta)[/itex] for integer [itex]n \geq 0[/itex] are a complete basis. Using the Legendre polynomials composed with [itex]\cos\theta[/itex] amounts to a change of basis to a set of functions which are orthogonal eigenfunctions of the laplacian operator. It is in this sense that the Legendre polynomials are complete.
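
As a small check of this change of basis (a SymPy sketch using its built-in Legendre polynomials), one can project ##P_3(\cos\theta)## onto the cosines and recover a finite cosine expansion:
[CODE=python]
# Project P_3(cos(theta)) onto cos(k*theta) on [0, pi] (even, 2*pi-periodic extension).
import sympy as sp

theta = sp.symbols('theta')
n = 3
f = sp.legendre(n, sp.cos(theta))

coeffs = {}
for k in range(n + 1):
    weight = 1 if k == 0 else 2
    coeffs[k] = sp.simplify(weight * sp.integrate(f * sp.cos(k * theta), (theta, 0, sp.pi)) / sp.pi)

print(coeffs)   # {0: 0, 1: 3/8, 2: 0, 3: 5/8}, i.e. P_3(cos t) = (3/8) cos t + (5/8) cos 3t
[/CODE]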
 
  • #10
Svein
The space of polynomials, as you said, is not complete because, e.g., the truncated Taylor series of ##e^x## converge to the (non-polynomial) ##e^x##. So you have a sequence of polynomials that converges to a non-polynomial.
Yes - what of it?
The Stone-Weierstrass theorem (https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem) states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function.
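
A minimal numerical check, using the Bernstein polynomials from the classical constructive proof of the Weierstrass approximation theorem and the non-differentiable example ##|x - \tfrac{1}{2}|##:
[CODE=python]
# Bernstein polynomials B_n(f) uniformly approximate the continuous, non-smooth f(x) = |x - 1/2|.
import numpy as np
from math import comb

f = lambda x: np.abs(x - 0.5)
xs = np.linspace(0.0, 1.0, 2001)

def bernstein(n, x):
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

for n in (4, 16, 64, 256):
    print(n, np.max(np.abs(bernstein(n, xs) - f(xs))))   # the sup-norm error decreases
[/CODE]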
 
  • #11
WWGD
  • #12
FactChecker
S, I think I may have misunderstood the question. But the answer is correct, isn't it? A Cauchy sequence of polynomials may converge to a non-polynomial, so the space of polynomials is not closed.
I agree with that. But the OP mentions complete basis, so I wonder if it is asking about something else.
 
  • #13
WWGD
I agree with that. But the OP mentions complete basis, so I wonder if it is asking about something else.
Yes, my bad, I think I misread the question. Please see my last post in this thread.
 
  • #14
FactChecker
Yes, my bad, I think I misread the question. Please see my last post in this thread.
I saw that. I don't think it is possible to answer a question about the basis of a space unless the OP is clear which space is being talked about. I don't think that the powers of x (and so the polynomials) can be a complete basis for any set that includes functions where there is a point interior to the domain without a derivative.
 
  • #15
Svein
I don't think that the powers of x (and so the polynomials) can be a complete basis for any set that includes functions where there is a point interior to the domain without a derivative.
See post #10!
 
  • #16
FactChecker
See post #10!
An arbitrarily close approximation does not make them equal, which is required by a complete basis. In fact, a power series which converges will also have a derivative within its circle of convergence. Suppose a continuous function has no derivative at a point within (0,1). A power series can not equal that function on (0,1). Therefore, the functions ##f_n(x) = x^n, n\in N## (##N##, non-negative integers) can not be a complete basis for the continuous functions on (0,1).
 
  • #17
WWGD
An arbitrarily close approximation does not make them equal, which is required by a complete basis. In fact, a power series which converges will also have a derivative within its circle of convergence. Suppose a continuous function has no derivative at a point within (0,1). A power series can not equal that function on (0,1). Therefore, the functions ##f_n(x) = x^n, n\in N## (##N##, non-negative integers) can not be a complete basis for the continuous functions on (0,1).
So you are thinking of a Hamel basis, and I guess the polynomials are a Schauder basis?
 
  • #18
FactChecker
So you are thinking of a Hamel basis, and I guess the polynomials are a Schauder basis?
I think there are problems with either one. I am struggling with reconciling the Stone-Weierstrass theorem with the fact that a convergent power series cannot equal a function that does not have a derivative at a point within the radius of convergence of the series. I guess the issue is that restricting the basis to ##\{x^n\}## has the disadvantage that the coefficient ##a_{n_i}## of any ##x^{n_i}## is eventually fixed in the limit ##\sum_{n=0}^{\infty}a_n x^n##, and so the series cannot converge to a continuous function that does not have a derivative at a point in (0,1). I will have to think about this some more with respect to the OP question.
 
  • #19
WWGD
I think there are problems with either one. I am struggling with reconciling the Stone-Weierstrass theorem with the fact that a convergent power series cannot equal a function that does not have a derivative at a point within the radius of convergence of the series. I guess the issue is that restricting the basis to ##\{x^n\}## has the disadvantage that the coefficient ##a_{n_i}## of any ##x^{n_i}## is eventually fixed in the limit ##\sum_{n=0}^{\infty}a_n x^n##, and so the series cannot converge to a continuous function that does not have a derivative at a point in (0,1). I will have to think about this some more with respect to the OP question.
But it goes beyond that in terms of extreme cases: the polynomials will also approximate nowhere-differentiable continuous functions. It seems like the difference between the approximation and the function provides enough wiggle room to go between differentiable polynomials and the nowhere-differentiable functions. Strange. Maybe it has to do with the rate of convergence.
 
  • #20
FactChecker
But it goes beyond that in terms of extreme cases: the polynomials will also approximate nowhere-differentiable continuous functions. It seems like the difference between the approximation and the function provides enough wiggle room to go between differentiable polynomials and the nowhere-differentiable functions. Strange. Maybe it has to do with the rate of convergence.
Being restricted to a basis of the powers of x prevents later terms from adjusting the coefficients of the lower powers of x, whereas a convergent sequence of polynomials can keep adjusting the lower coefficients as needed. Clearly there is something wrong with the original question, since the set of polynomials includes the Legendre polynomials. So any space where the Legendre polynomials form a complete basis would also have the polynomials complete. The polynomials would not be a basis, though, because the representations would not be unique.
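
A small sketch of that last remark: the low-order coefficients of least-squares polynomial approximations to ##|x - \tfrac{1}{2}|## keep changing as the degree grows, whereas in a single power series they would be fixed once and for all.
[CODE=python]
# Least-squares polynomial fits of increasing degree to |x - 1/2| on [0, 1]:
# the constant, x and x^2 coefficients keep drifting with the degree.
import numpy as np
from numpy.polynomial import polynomial as P

xs = np.linspace(0.0, 1.0, 501)
ys = np.abs(xs - 0.5)

for deg in (2, 4, 8, 12):
    coef = P.polyfit(xs, ys, deg)    # coefficients ordered 1, x, x^2, ...
    print(deg, coef[:3])
[/CODE]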
 
  • #21
I read both your posts, but I think it's a little beyond me. However, after reading, I think I can shed a little more light. Referencing post 5, I'll say the basis functions ##\phi## must be harmonic and satisfy ##d_x\phi(x=\pm 1) = 0, d_y\phi(y=-1) = 0##. These are easy to find; specifically, they are sines and cosines multiplied with hyperbolic cosines. The last BC requires ##\phi_n(x=\pm 1) = 0##, where ##n## denotes a normal derivative to a curve ##\Gamma## (see picture). The ODE ##L\phi_n(s) = -\lambda \phi(s)## I described above is only a function of ##s## (rather than ##x,y##), since it is evaluated on the curve ##\Gamma##.

My question is, when I evaluate the weak formulation and enforce the BCs ##\phi_n(x=\pm 1)=0##, I get different results than when I do not enforce BCs. Again, I enforce BCs by superimposing the harmonics. Any ideas why this would happen?
 


  • #22
WWGD
Yes - what of it?
The Stone-Weierstrass theorem (https://en.wikipedia.org/wiki/Stone–Weierstrass_theorem) states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function.
But this happens more generally, not just here. You do not have the space fully spanned by a Hamel basis. Example: take an infinite-dimensional Hilbert space. By a Baire category argument it must be uncountably infinite-dimensional, but orthogonal (Hamel) bases must be countable. So, as I understand it, we then use a Schauder basis, where the sum _converges_ to the value in the relevant topology but does not _equal_ it. EDIT: I think the clear explanation for this "paradox" is that there are Hamel bases that are not orthogonal, but are not countable either. Sometimes it is more convenient to work with Schauder bases than with uncountable Hamel bases. Example: for ##L^2[a,b]## we use the standard Schauder basis ##\{1, \sin(nx), \cos(nx) : n = 1, 2, \ldots\}##. But I am not sure what completeness would mean here, nor whether we require uniqueness of the representation.
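
A quick sketch of the "converges but need not equal" point, using the step function ##\operatorname{sign}(x)## in ##L^2(-\pi,\pi)## as an example:
[CODE=python]
# Partial Fourier sums of sign(x) converge to it in the L^2 norm, but no partial sum
# equals the function (the error near the jump stays of order 1).
import numpy as np

xs = np.linspace(-np.pi, np.pi, 4001)
f = np.sign(xs)

for N in (5, 25, 125):
    partial = sum(4 / (np.pi * (2 * k + 1)) * np.sin((2 * k + 1) * xs) for k in range(N))
    l2_err = np.sqrt(np.mean((partial - f) ** 2) * 2 * np.pi)
    print(N, l2_err, np.max(np.abs(partial - f)))   # L^2 error -> 0, sup error does not
[/CODE]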
 
  • #23
WWGD
Being restricted to a basis of the powers of x prevents later terms from adjusting the coefficients of the lower powers of x, whereas a convergent sequence of polynomials can keep adjusting the lower coefficients as needed. Clearly there is something wrong with the original question, since the set of polynomials includes the Legendre polynomials. So any space where the Legendre polynomials form a complete basis would also have the polynomials complete. The polynomials would not be a basis, though, because the representations would not be unique.
Is the uniqueness also required for a Schauder basis or just for Hamel bases?
 
  • #24
WWGD
I read both your posts, but I think it's a little beyond me. However, after reading, I think I can shed a little more light. Referencing post 5, I'll say the basis functions ##\phi## must be harmonic and satisfy ##d_x\phi(x=\pm 1) = 0, d_y\phi(y=-1) = 0##. These are easy to find; specifically, they are sines and cosines multiplied with hyperbolic cosines. The last BC requires ##\phi_n(x=\pm 1) = 0##, where ##n## denotes a normal derivative to a curve ##\Gamma## (see picture). The ODE ##L\phi_n(s) = -\lambda \phi(s)## I described above is only a function of ##s## (rather than ##x,y##), since it is evaluated on the curve ##\Gamma##.

My question is, when I evaluate the weak formulation and enforce the BCs ##\phi_n(x=\pm 1)=0##, I get different results than when I do not enforce BCs. Again, I enforce BCs by superimposing the harmonics. Any ideas why this would happen?
Sorry to keep hammering on this, but when you use a Schauder basis it is an ordered basis (because convergence is not always absolute), so you may be changing the order of the terms of the sum? Are you doing finite sums (Hamel basis, equality) or infinite sums (Schauder basis, convergence)?
 
  • #25
FactChecker
Is the uniqueness also required for a Schauder basis or just for Hamel bases?
Sorry. I wasn't very clear. I was just correcting a misconception that I had. I was misusing a fact about convergent Taylor series.
 
