Fourier series and Euclidean spaces

In summary: The study of the vibrating string and the wave equation historically suggested that a fairly arbitrary function can be represented by a series of simple oscillating functions. More abstractly, the set of continuous functions forms a Euclidean (inner product) space with the scalar product [itex]<f,g>=\int_a^b fg[/itex], and a Fourier series expresses a function as [itex]\sum_i <f,e_i>e_i[/itex] with respect to an orthonormal basis; the familiar trigonometric Fourier series is the special case where the basis consists of sines and cosines.
  • #1
hamsterman
A book I'm reading says that the set of continuous functions is a Euclidean space with the scalar product defined as [itex]<f,g> = \int\limits_a^b fg[/itex], and then defines the Fourier series as [itex]\sum\limits_{i\in \mathbb{N}}c_ie_i[/itex] where [itex]c_i = <f, e_i>[/itex] and the [itex]e_i[/itex] form a basis of the vector space of continuous functions.

What I want to know is why that scalar product function was chosen. Would any function that has the properties of scalar multiplication do? Are there any other possible definitions and do they change some properties of the series?

Another thing I find odd is, why do I see differences on the Internet? Only trigonometric Fourier series are talked about. Does general decomposition of a function into a series go by some other name?

Thanks for your time.
 
  • #2
hamsterman said:
A book I'm reading says that the set of continuous functions is a Euclidean space with the scalar product defined as [itex]<f,g> = \int\limits_a^b fg[/itex], and then defines the Fourier series as [itex]\sum\limits_{i\in \mathbb{N}}c_ie_i[/itex] where [itex]c_i = <f, e_i>[/itex] and the [itex]e_i[/itex] form a basis of the vector space of continuous functions.

What I want to know is why that scalar product function was chosen. Would any function that has the properties of scalar multiplication do? Are there any other possible definitions and do they change some properties of the series?

Your Fourier series makes sense for other inner products as well. A general inner product on a set H is any function
[tex]<\cdot,\cdot>:H\times H\rightarrow \mathbb{K}[/tex]
(with [itex]\mathbb{K}=\mathbb{R}[/itex] or [itex]\mathbb{K}=\mathbb{C}[/itex])
that satisfies

1) [itex]<x,x>\geq 0[/itex] for all x in H, with equality only for x = 0.
2) [itex]<\alpha x+\beta y,z>=\alpha <x,z>+\beta <y,z>[/itex] for all x,y,z in H and all [itex]\alpha,\beta\in \mathbb{K}[/itex].
3) [itex]<x,y>=\overline{<y,x>}[/itex] for all x,y in H (if [itex]\mathbb{K}=\mathbb{R}[/itex] then the complex conjugate can of course be dropped).

This is a general inner product. There are many examples of this concept. Two important ones are:

1) On [itex]\mathbb{K}^n[/itex], we can define

[tex]<(x_1,...,x_n),(y_1,...,y_n)>=x_1\overline{y_1}+...+x_n\overline{y_n}[/tex]

2) On a compact set [itex]\Omega\subseteq \mathbb{K}^n[/itex], we can let H be the continuous functions from [itex]\Omega[/itex] to [itex]\mathbb{K}[/itex]. An inner product can be given by

[tex]<f,g>=\int_\Omega f(x)\overline{g(x)}dx[/tex]

These are the two most important examples of inner products.
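
To make example (2) a bit more concrete, here is a minimal numerical sketch (in Python with NumPy; the test functions f(x) = x and g(x) = x^2 on [0, 1] are my own choice, not from the thread) that approximates the integral inner product and compares it with the exact value 1/4:

[code]
import numpy as np

# <f, g> = integral over [a, b] of f(x) * conj(g(x)) dx,
# approximated here by a simple Riemann sum on a fine grid.
def inner_product(f, g, a=0.0, b=1.0, n=100000):
    dx = (b - a) / n
    x = a + np.arange(n) * dx
    return np.sum(f(x) * np.conj(g(x))) * dx

f = lambda x: x        # f(x) = x
g = lambda x: x**2     # g(x) = x^2

print(inner_product(f, g))   # ~0.25, since the integral of x^3 on [0, 1] is 1/4
[/code]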

But for any inner product, we can investigate the concept of Fourier series. That is, we usually have elements [itex]e_0,e_1,...\in H[/itex] such that [itex]<e_n,e_n>=1[/itex] and [itex]<e_n,e_m>=0[/itex] for n and m distinct. The Fourier series of an element x is then defined as

[itex]\sum_n <x,e_n>e_n[/itex].

This is a formal series: for the moment we don't worry about convergence. However, one can prove that it actually does converge with respect to a suitable norm.

In the example of [itex]\mathbb{K}^n[/itex] above (let's take n=3 for convenience), we can take [itex]e_1=(1,0,0),~e_2=(0,1,0),~e_3=(0,0,1)[/itex]. Then the Fourier series of an element [itex](x_1,x_2,x_3)[/itex] is simply

[itex]x_1e_1+x_2e_2+x_3e_3[/itex]

In this case, the series equals [itex](x_1,x_2,x_3)[/itex] itself. This is not always the case. To always have this, we want the [itex]e_n[/itex] to be "complete".
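
Here is a small sketch of that last point (Python/NumPy; the rotated basis and the vector are my own example, not from the thread). It computes the Fourier coefficients [itex]<x,e_n>[/itex] with respect to a complete orthonormal basis of [itex]\mathbb{R}^3[/itex] other than the standard one, and the series sums back to x:

[code]
import numpy as np

# A complete orthonormal basis of R^3 (a rotation of the standard basis).
e = [np.array([1.0, 1.0, 0.0]) / np.sqrt(2),
     np.array([1.0, -1.0, 0.0]) / np.sqrt(2),
     np.array([0.0, 0.0, 1.0])]

x = np.array([2.0, -1.0, 3.0])

# Fourier coefficients c_n = <x, e_n>, here the ordinary dot product.
c = [np.dot(x, e_n) for e_n in e]

# Completeness means the Fourier series reproduces x exactly.
print(sum(c_n * e_n for c_n, e_n in zip(c, e)))   # [ 2. -1.  3.]
[/code]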

Another thing I find odd is, why do I see differences on the Internet? Only trigonometric Fourier series are talked about. Does general decomposition of a function into a series go by some other name?

The Fourier series you see on the internet are a very special case of the above concept. For them, we take H = the continuous functions from [itex][0,2\pi[[/itex] to [itex]\mathbb{K}[/itex] (note that [itex][0,2\pi[[/itex] is not compact, which is a technical difficulty). The inner product is that of (2).
We take the functions
[itex]e_0=\frac{1}{\sqrt{2\pi}}[/itex]
[itex]e_{2n-1}=\frac{1}{\sqrt{\pi}} \sin(nx)[/itex] for n>0
[itex]e_{2n}=\frac{1}{\sqrt{\pi}} \cos(nx)[/itex] for n>0

(This is not standard notation)

In that case, the Fourier series of an arbitrary function f can be written as

[itex]\frac{a_0}{2}+\sum_{n=1}^{+\infty} a_n \cos(nx) + \sum_{n=1}^{+\infty} b_n \sin(nx)[/itex]

for certain [itex]a_n,b_n[/itex]. (Don't worry about the division by 2 in the [itex]a_0[/itex] term; it is there to make certain formulas come out more cleanly.)
This is what is widely known as "the" Fourier series, but it's just a special case of the general concept.
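
If you want to see this in action, here is a rough numerical sketch (Python/NumPy; the test function f(x) = x on [0, 2π) is my own choice) that computes the coefficients [itex]a_n=\frac{1}{\pi}\int_0^{2\pi}f(x)\cos(nx)\,dx[/itex] and [itex]b_n=\frac{1}{\pi}\int_0^{2\pi}f(x)\sin(nx)\,dx[/itex] and evaluates a partial sum of the series:

[code]
import numpy as np

dx = 2 * np.pi / 20000
x = np.arange(0, 2 * np.pi, dx)
f = x.copy()                         # test function f(x) = x on [0, 2*pi)

# a_n = (1/pi) * integral of f(x) cos(nx) dx, b_n likewise with sin,
# both approximated by simple Riemann sums.
def a(n): return np.sum(f * np.cos(n * x)) * dx / np.pi
def b(n): return np.sum(f * np.sin(n * x)) * dx / np.pi

N = 25
s = a(0) / 2 + sum(a(n) * np.cos(n * x) + b(n) * np.sin(n * x)
                   for n in range(1, N + 1))

# Away from the jump at the endpoints, the partial sum tracks f closely.
mask = (x > 0.5) & (x < 5.5)
print(np.max(np.abs(s - f)[mask]))   # small, and it shrinks as N grows
[/code]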
 
  • #3
What I want to know is why that scalar product function was chosen.

You could argue for it in various ways. Here's the one that appeals to me the most.

If you take discrete functions (functions defined on finitely many points), they form a finite-dimensional vector space, isomorphic to R^n. The most obvious isomorphism sends the functions that equal 1 at one point and zero elsewhere to the standard basis vectors. If you then transfer the dot product through that isomorphism, you get a dot product on the discrete functions. By taking discrete approximations, you can see how to extend this to continuous functions: instead of a sum, you get an integral. So the point of all this is that the inner product is analogous to the dot product on R^n. It's literally a generalization of it.
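
A quick way to see that passage from sums to integrals numerically (a Python/NumPy sketch of my own, not from the post): sample f and g at n points, take the ordinary dot product of the sample vectors weighted by the spacing dx, and watch it approach [itex]\int fg[/itex] as n grows.

[code]
import numpy as np

f = lambda x: x
g = lambda x: np.sin(x)

# Discrete dot product of samples, weighted by dx, approximates the integral.
# Exact value: the integral of x*sin(x) over [0, pi] equals pi.
for n in (10, 100, 1000, 10000):
    x = np.linspace(0.0, np.pi, n, endpoint=False)
    dx = np.pi / n
    print(n, np.dot(f(x), g(x)) * dx)
[/code]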

Originally, Bernoulli realized, by physical reasoning, the possibility of representing functions by trigonometric series. This comes from two different points of view on a vibrating string that seem equally valid. One point of view leads to the wave equation, which can be satisfied by waves of very general shape (any nice enough function, say twice differentiable, here). The other point of view comes from thinking of a vibrating string as a limit of finitely many beads joined together by springs. From this point of view, the idea of normal modes appears; the normal modes here are sine waves at all the different frequencies the string can vibrate at. Putting the two points of view together suggests that any nice function should be representable by a trigonometric series.
Would any function that has the properties of scalar multiplication do?

I don't think that's quite what you want to say. You probably mean a positive-definite, symmetric (or Hermitian) bilinear (or sesquilinear) form, i.e. an inner product.

Are there any other possible definitions and do they change some properties of the series?

Yes, and they do change the properties.

Another thing I find odd is, why do I see differences on the Internet? Only trigonometric Fourier series are talked about. Does general decomposition of a function into a series go by some other name?

Some people call the more general thing Fourier series, but you could also call it expressing a function in terms of a Schauder basis.

http://en.wikipedia.org/wiki/Schauder_basis

Fourier series, as in trigonometric series, are just the most famous and probably the most important example; they're something that even engineers care a lot about.

Just as Fourier series play a role in solving the wave equation and the heat equation in one dimension, there are other wave and heat problems that can be solved with different families of functions playing the same role that the sines and cosines play in a Fourier series.
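
To make that remark concrete, here is the standard textbook computation (added here as an illustration; it is not from the original post) for the heat equation on [itex][0,\pi][/itex] with the ends held at temperature zero:

[tex]u_t = k\,u_{xx},\qquad u(0,t)=u(\pi,t)=0,\qquad u(x,0)=f(x).[/tex]

Expanding the initial data in a sine series, [itex]f(x)=\sum_{n\geq 1}b_n\sin(nx)[/itex] with [itex]b_n=\frac{2}{\pi}\int_0^\pi f(x)\sin(nx)\,dx[/itex], each mode decays on its own, and the solution is

[tex]u(x,t)=\sum_{n=1}^{\infty}b_n e^{-kn^2t}\sin(nx).[/tex]

Other boundary conditions or geometries lead to expansions in other orthogonal families (Bessel functions, Legendre polynomials, and so on) in exactly the same way.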
 
  • #4
hamsterman said:
A book I'm reading says that the set of continuous functions is a Euclidean space with the scalar product defined as [itex]<f,g> = \int\limits_a^b fg[/itex], and then defines the Fourier series as [itex]\sum\limits_{i\in \mathbb{N}}c_ie_i[/itex] where [itex]c_i = <f, e_i>[/itex] and the [itex]e_i[/itex] form a basis of the vector space of continuous functions.

What I want to know is why that scalar product function was chosen. Would any function that has the properties of scalar multiplication do? Are there any other possible definitions and do they change some properties of the series?

Another thing I find odd is, why do I see differences on the Internet? Only trigonometric Fourier series are talked about. Does general decomposition of a function into a series go by some other name?

Thanks for your time.

Hey hamsterman.

One way to think about these things is in terms of projections.

In ordinary 3D Cartesian geometry we can take a vector <x,y,z> and project it onto the x, y, and z axes (whose unit vectors are usually called i, j, k). We also know that the i, j, and k vectors are mutually orthogonal, which intuitively means they are structurally separate from one another. To make this clearer: if we have v = <x,y,z> in the i,j,k basis, so that v = xi + yj + zk, then changing y to whatever I want has no effect at all on x and z. This is the true nature of orthogonality: it decomposes something into a collection of pieces, each independent of every other.

Now, for anything equipped with a valid inner product, it turns out we can use that inner product to project onto arbitrary orthogonal bases (mutually independent, just like the case above), and depending on the basis we get information that is contextually relevant to that basis. For a Fourier series decomposition of a periodic function, we get frequency information for each harmonic, and each frequency basis element is orthogonal to the others, just as in <x,y,z> the i vector is orthogonal to j and k, so that changing x has no effect on y or z.

The decomposition of a general structure basically works as follows: you provide a set of projection operators that take an object and project it onto basis elements; if you can show that the projection operators are mutually orthogonal and that the subspaces they project onto together span the original space, then you have a valid (orthogonal) basis.
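
In matrix language, that paragraph looks like the following small sketch (Python/NumPy, my own illustration, using the standard basis of R^3): the projections [itex]P_i = e_ie_i^T[/itex] onto the basis directions satisfy [itex]P_iP_j = 0[/itex] for i ≠ j, and they sum to the identity, which is exactly the "spans the whole space" condition above.

[code]
import numpy as np

# Orthogonal projection operators onto the i, j, k directions of R^3.
e = np.eye(3)
P = [np.outer(e[i], e[i]) for i in range(3)]   # P_i = e_i e_i^T

# Mutually orthogonal: P_i P_j = 0 for i != j.
print(np.allclose(P[0] @ P[1], 0))                    # True

# Together they reproduce every vector: P_0 + P_1 + P_2 = identity.
print(np.allclose(P[0] + P[1] + P[2], np.eye(3)))     # True

v = np.array([2.0, -1.0, 3.0])
print(sum(Pi @ v for Pi in P))                        # [ 2. -1.  3.]
[/code]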

The object itself doesn't have to be a vector in some n-dimensional space: it can be a function, or a matrix, or anything else that has a valid inner product with the right properties, along with the right properties for the subspaces associated with each of the projection operators.

In the normal 3D case we usually choose i, j, k as the standard basis, but we could choose many others.

In the function case, for a periodic function over some interval the Fourier basis is a good choice. When dealing with non-periodic functions we have quite a lot of different bases, found in wavelets and related constructions like ridgelets and curvelets. We also have the Fourier transform and related tools like the z-transform and all the other things you will find in signal processing.

We can do this for matrices as well, and if we are able to define some kind of geometry and an inner product with the right properties for a given structure, then we can do this for pretty much anything, provided the conditions are right.

If you are interested in the function case, look up integral transforms, analysis and wavelets for a more comprehensive treatment of the above.
 
  • #5
Thanks a lot for the thorough responses. Definitely some interesting insights there.
 

1. What is a Fourier series?

A Fourier series is a mathematical representation of a function as a sum of sine and cosine functions with different frequencies and amplitudes. It is used to approximate periodic functions and has many applications in fields such as signal processing and differential equations.

2. How are Fourier series and Euclidean spaces related?

Euclidean spaces are mathematical spaces in which the concept of distance can be defined. Fourier series can be used to represent functions in these spaces, allowing for the analysis and manipulation of these functions using the tools of Fourier analysis.

3. What is the importance of Fourier series in science and engineering?

Fourier series have a wide range of applications in fields such as physics, engineering, and mathematics. They are used to study and solve differential equations, analyze signals and data, and understand the behavior of periodic phenomena in various systems.

4. How do you calculate a Fourier series?

To calculate a Fourier series, you first need to determine the period of the function you want to represent. Then, you can use the Fourier series formula to find the coefficients of the sine and cosine functions that make up the series. This process involves integrating the function over one period and solving for the coefficients.
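
As a worked example (added here for concreteness; it is not part of the original thread), take [itex]f(x)=x[/itex] on [itex](-\pi,\pi)[/itex], extended periodically. Since f is odd, all the cosine coefficients [itex]a_n[/itex] vanish, and

[tex]b_n=\frac{1}{\pi}\int_{-\pi}^{\pi}x\sin(nx)\,dx=\frac{2(-1)^{n+1}}{n},[/tex]

so that

[tex]x = 2\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin(nx)\qquad\text{for } -\pi<x<\pi.[/tex]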

5. Can Fourier series be used to represent non-periodic functions?

Strictly speaking, a Fourier series represents a periodic function, or equivalently a function on a bounded interval via its periodic extension. For non-periodic functions on the whole real line, other tools, such as the Fourier transform, are used instead.
