# Fourier series and Euclidean spaces

1. Apr 9, 2012

### hamsterman

A book I'm reading says that the set of continuous functions is a Euclidean space with scalar product defined as $<f,g> = \int\limits_a^bfg$ and then defines Fourier series as $\sum\limits_{i\in N}c_ie_i$ where $c_i = <f, e_i>$ and $(e_i)$ is some basis of the vector space of continuous functions.

What I want to know is why that scalar product function was chosen. Would any function that has the properties of scalar multiplication do? Are there any other possible definitions and do they change some properties of the series?

Another thing I find odd is the difference from what I see on the Internet, where only trigonometric Fourier series are talked about. Does the general decomposition of a function into a series go by some other name?

2. Apr 9, 2012

### micromass

Staff Emeritus
Your Fourier series makes sense for other inner products as well. A general inner product on a set H is any function
$$<\cdot,\cdot>: H\times H\rightarrow \mathbb{K}$$
(with $\mathbb{K}=\mathbb{R}$ or $\mathbb{K}=\mathbb{C}$)
that satisfies

1) $<x,x>\geq 0$ for all x in H, with $<x,x>=0$ if and only if $x=0$.
2) $<\alpha x+\beta y,z>=\alpha <x,z>+\beta <y,z>$ for all x,y,z in H and all $\alpha,\beta\in \mathbb{K}$.
3) $<x,y>=\overline{<y,x>}$ for all x,y in H (if $\mathbb{K}=\mathbb{R}$ then this complex conjugate can of course be dropped).

This is a general inner product. There are many examples of this concept. Two important ones are:

1) On $\mathbb{K}^n$, we can define

$$<(x_1,...,x_n),(y_1,...,y_n)>=x_1\overline{y_1}+...+x_n\overline{y_n}$$

2) On a compact set $\Omega\subseteq \mathbb{K}^n$, we can let H be the continuous functions from $\Omega$ to $\mathbb{K}$. An inner product can be given by

$$<f,g>=\int_\Omega f(x)\overline{g(x)}dx$$

These are the two most important examples of inner products.
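A quick numerical sketch of these two inner products (the particular vectors and functions below are hypothetical choices for illustration; the integral is approximated by a Riemann sum):

```python
import numpy as np

# Inner product 1: on K^n (here K = C, n = 3),
# <x, y> = x_1*conj(y_1) + ... + x_n*conj(y_n).
x = np.array([1.0, 2.0, 3.0 + 1.0j])
y = np.array([0.5, -1.0j, 2.0])
ip_vec = np.sum(x * np.conj(y))

# Inner product 2: on continuous functions on the compact interval [0, 1],
# <f, g> = integral of f(x) * conj(g(x)) dx, approximated on a fine grid.
n = 100000
t = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.sin(2 * np.pi * t)
g = t**2
ip_fun = np.sum(f * np.conj(g)) / n   # Riemann sum with step 1/n
```

Note that the conjugate on the second argument is what makes $<x,x>$ real and nonnegative when $\mathbb{K}=\mathbb{C}$.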

But for any inner product, we can investigate the concept of Fourier series. That is, we usually have elements $e_0,e_1,...\in H$ such that $<e_n,e_n>=1$ and $<e_n,e_m>=0$ for n and m distinct. The Fourier series of an element x is then defined as

$\sum_n <x,e_n>e_n$.

This is a formal series: we don't care about convergence yet. However, we can prove that it actually does converge in a certain norm.

In the example of $\mathbb{K}^n$ above (let's take n=3 for convenience), we can take $e_1=(1,0,0),~e_2=(0,1,0),~e_3=(0,0,1)$. Then the Fourier series of an element $(x_1,x_2,x_3)$ is simply

$x_1e_1+x_2e_2+x_3e_3$

In this case, it equals $(x_1,x_2,x_3)$. This is not always the case. To always have this, we want the $e_n$ to be "complete".
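A small sketch of this finite-dimensional case (the numbers are arbitrary): with the complete standard basis the Fourier series returns the vector itself, while dropping a basis vector gives only the orthogonal projection onto the remaining span.

```python
import numpy as np

# Fourier series in K^3 with the standard orthonormal basis:
# sum_n <x, e_n> e_n.
e = np.eye(3)                      # rows are e_1, e_2, e_3
x = np.array([2.0, -1.0, 0.5])
series = sum(np.dot(x, en) * en for en in e)

# With an incomplete orthonormal family (e_3 dropped), the series
# is only the projection of x onto span{e_1, e_2}.
partial = sum(np.dot(x, en) * en for en in e[:2])
```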

The Fourier series you see on the internet are a very special case of the above concept. For them, we take H = the continuous functions from $[0,2\pi[$ to $\mathbb{K}$ (note that $[0,2\pi[$ is not compact, which is a technical difficulty). The inner product is that of (2).
We take the functions
$e_0=\frac{1}{\sqrt{2\pi}}$
$e_{2n}=\frac{1}{\sqrt{\pi}} \sin(nx)$ for n>0
$e_{2n+1}=\frac{1}{\sqrt{\pi}} \cos(nx)$ for n>0

(This is not standard notation)
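One can check numerically that these functions are orthonormal for the inner product $<f,g>=\int_0^{2\pi}fg\,dx$ (Riemann-sum approximation; the three elements below are just a sample of the family):

```python
import numpy as np

# Riemann-sum approximation of <f, g> = integral over [0, 2*pi] of f*g.
N = 200000
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
h = 2*np.pi / N

def ip(f, g):
    return np.sum(f * g) * h

e0 = np.full(N, 1/np.sqrt(2*np.pi))      # constant function e_0
e2 = np.sin(1*x) / np.sqrt(np.pi)        # e_2  (n = 1, sine)
e5 = np.cos(2*x) / np.sqrt(np.pi)        # e_5  (n = 2, cosine)
```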

In that case, the Fourier series of an arbitrary function f can be written as

$\frac{a_0}{2}+\sum_{n=1}^{+\infty} a_n \cos(nx) + \sum_{n=1}^{+\infty} b_n \sin(nx)$

for certain $a_n,b_n$. (The division by 2 in the $a_0$ term is just there to make certain formulas easier.)
This is the Fourier series that is widely known as Fourier series. But it's just a special case of the general concept.
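As a concrete sketch (the test function is chosen just for illustration), the coefficients can be computed numerically; for f(x) = x on [0, 2π) the classical result is $a_0/2 = \pi$, $a_n = 0$, and $b_n = -2/n$:

```python
import numpy as np

# Riemann-sum approximation of the trigonometric Fourier coefficients
# a_n = (1/pi) * integral of f(x) cos(nx) dx over [0, 2*pi],
# b_n = (1/pi) * integral of f(x) sin(nx) dx over [0, 2*pi],
# for the test function f(x) = x.
N = 100000
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
h = 2*np.pi / N
f = x

def a(n):
    return np.sum(f * np.cos(n*x)) * h / np.pi

def b(n):
    return np.sum(f * np.sin(n*x)) * h / np.pi
```

Summing $\frac{a_0}{2}+\sum a_n\cos(nx)+\sum b_n\sin(nx)$ then reproduces f away from the endpoints of the interval.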

Last edited: Apr 9, 2012
3. Apr 9, 2012

### homeomorphic

You could argue for it in various ways. Here's the one that appeals to me the most.

Discrete functions (functions defined on finitely many points) form a finite-dimensional vector space, isomorphic to R^n. The most obvious isomorphism sends the functions that are equal to 1 at one point and zero elsewhere to the standard basis vectors. Then, if you transfer the dot product via that isomorphism, you get a dot product on the discrete functions. By taking discrete approximations, you can see how to extend this to continuous functions: instead of the sum, you get an integral. So, the point of all this is that the inner product is analogous to the dot product on R^n. It's literally a generalization of it.
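This discretization argument can be sketched numerically (the two functions below are hypothetical examples): sample the functions on a grid, take the ordinary R^n dot product of the samples, and scale by the step size; refining the grid approaches the integral inner product.

```python
import numpy as np

def discrete_ip(f, g, n):
    # Ordinary R^n dot product of n samples, scaled by the step 1/n.
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return np.dot(f(t), g(t)) / n

f = lambda t: np.sin(2*np.pi*t)
g = lambda t: np.cos(2*np.pi*t) + t
coarse = discrete_ip(f, g, 10)
fine = discrete_ip(f, g, 100000)   # approaches the integral of f*g over [0, 1]
```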

Originally, Bernoulli realized, by physical reasoning, the possibility of representing functions in terms of trigonometric series. This results from two different points of view on a vibrating string that seem equally valid. One point of view leads to the wave equation, which can be satisfied by waves of very general shape (any nice enough function, maybe twice differentiable, here). Another point of view results from thinking of a vibrating string as being a limit of finitely many beads, joined together by springs. From this point of view, the idea of normal modes appears. The normal modes here would be sine waves of all the different frequencies that it can vibrate at. Putting the two points of view together suggests that any nice function should be representable in terms of trigonometric series.

As for whether any function with "the properties of scalar multiplication" would do: I don't think that's what you want to say. Maybe you mean a positive-definite, bilinear, symmetric/Hermitian form.

Yes, there are other possible definitions, and they do change the properties of the series.

Some people call the more general thing Fourier series, but you could also call it expressing a function in terms of a Schauder basis.

http://en.wikipedia.org/wiki/Schauder_basis

Fourier series, as in trigonometric series, are just the most famous and probably the most important, something that even engineers care a lot about.

Just as the Fourier series plays a role in solving wave equations and heat equations in one dimension, there are other wave and heat problems that can be solved with different functions playing the same role as the Fourier series.

4. Apr 9, 2012

### chiro

Hey hamsterman.

One way to think about these things is to think about it in terms of a projection.

In normal 3D Cartesian geometry we can take a vector <x,y,z> and project it onto the x, y, and z axes (with unit vectors usually called i, j, k). We also know that the i, j, and k vectors are all orthogonal, meaning intuitively that they are all structurally separate from one another. To make this clearer: if we have v = <x,y,z> in an i,j,k basis, so that v = xi + yj + zk, then if I changed y to whatever I wanted, it would not have the slightest effect on x and z. This is the true nature of what orthogonality means: it takes something and decomposes it into a bunch of things that are mutually exclusive of every other thing.

Now for things with an inner product, it turns out that a valid inner product lets us project things onto arbitrary orthogonal bases (mutually exclusive, like the case above), and depending on the basis, we get information that is contextually relevant to that basis. For a Fourier series decomposition of a periodic function, we get frequency information for each harmonic, and each frequency basis element is orthogonal to the others, just like in <x,y,z> the i vector is orthogonal to j and k and thus changing x won't have any effect on y or z.

The decomposition of any general structure basically works as follows: you provide a set of projection operators that take something and project it onto some basis; then if you can show that the projection operators are mutually orthogonal and that the subspaces associated with them together span the original space, you have a valid basis.
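A small sketch of this in R^3 (the rotated orthonormal basis below is an arbitrary illustration): the projection operators are $P_n(v) = <v,e_n>e_n$, the individual projections are mutually orthogonal, and because the basis is complete they sum back to v.

```python
import numpy as np

# A hypothetical orthonormal basis of R^3: i and j rotated by an angle
# in the x-y plane, k unchanged. Rows are the basis vectors.
theta = 0.3
e = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])

v = np.array([1.0, 2.0, 3.0])
projections = [np.dot(v, en) * en for en in e]   # P_n(v) = <v, e_n> e_n
# The projections are mutually orthogonal, and they sum back to v
# because the three basis vectors span all of R^3.
```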

The object itself doesn't have to be a vector in some n-dimensional space: it can be a function, or a matrix, or anything else that has a valid inner product with the right properties and also has the right properties for each subspace associated with each of the projection operators for each basis element.

In the normal 3D case we usually choose i,j,k as the standard basis but we can choose many more.

In the function case, for a periodic function over some interval the Fourier basis is a good choice. When we are dealing with non-periodic functions we actually have quite a lot of different bases, found in wavelets and similar constructions like ridgelets and curvelets. We also have the Fourier transform and related things like the z-transform and all the other tools you will find in signal processing.

We can also do this for matrices as well and if we are able to define some kind of geometry for any structure and an inner product with the right properties, then we can do this for pretty much anything provided the conditions are right.

If you are interested in the function case, look up integral transforms, analysis and wavelets for a more comprehensive treatment of the above.

5. Apr 10, 2012

### hamsterman

Thanks a lot for the thorough responses. Definitely some interesting insights there.