# Are waves always the sum of sine waves?

#### Nick Jackson

I saw that somewhere and it is supposed to be something Fourier came up with, but I can't find an explanation of why anywhere...
Please explain (with a mathematical description if possible)

I recommend finding a good book and reading it, since this is an important part of mathematics, especially for physics.

What level are you at?

Try using graph paper (or Excel) to draw a few sine waves: one with wavelength L, another with wavelength L/2, and another with L/4, giving each an amplitude proportional to its wavelength. Then add the amplitudes together at each point along the x-axis to form a new wave. What shape do you get? If you don't recognise it, try adding more sine waves in the series (L/6, etc.)

Repeat for L, L/3, L/5. What shape do you get this time?
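The classic version of this exercise weights each sine by its wavelength (amplitude ##1/n## for the ##n##-th harmonic). Here is a short Python sketch of both sums, assuming a fundamental wavelength of 1:

```python
import math

def partial_sum(x, harmonics):
    """Sum of sin(2*pi*n*x)/n over the given harmonic numbers n
    (fundamental wavelength L = 1, amplitude proportional to wavelength)."""
    return sum(math.sin(2 * math.pi * n * x) / n for n in harmonics)

# Odd harmonics only (wavelengths L, L/3, L/5, ...) build up a square wave.
square_approx = [partial_sum(x / 100, range(1, 100, 2)) for x in range(100)]

# All harmonics (wavelengths L, L/2, L/3, ...) build up a sawtooth wave.
saw_approx = [partial_sum(x / 100, range(1, 100)) for x in range(100)]
```

Plotting `square_approx` and `saw_approx` shows the two shapes; adding more harmonics sharpens the corners (apart from the Gibbs overshoot near each jump).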

> I saw that somewhere and it is supposed to be something Fourier came up with but I can't find somewhere why...
> Please explain (with mathematical description if possible)
Mathematically, you can rewrite almost any function as a Fourier series, i.e. a sum of sine and cosine functions of different amplitudes and frequencies. For a function with period ##2\pi##, the series is ##f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty}\left(a_n \cos(nx) + b_n \sin(nx)\right)##, where the coefficients ##a_n## and ##b_n## are determined by integrating ##f## against the corresponding cosine and sine.
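As a concrete sketch (assuming a square wave on ##[0, 2\pi]##; the helper names here are made up for illustration), the coefficients can be approximated numerically:

```python
import math

def fourier_coeffs(f, n, samples=10000):
    """Approximate the Fourier coefficients a_n, b_n of f on [0, 2*pi]
    using a midpoint Riemann sum (a sketch, not production-grade quadrature)."""
    dx = 2 * math.pi / samples
    xs = [(k + 0.5) * dx for k in range(samples)]
    a = sum(f(x) * math.cos(n * x) for x in xs) * dx / math.pi
    b = sum(f(x) * math.sin(n * x) for x in xs) * dx / math.pi
    return a, b

# Square wave: +1 on the first half-period, -1 on the second.
square = lambda x: 1.0 if x < math.pi else -1.0
```

For this square wave the exact answer is ##b_n = 4/(n\pi)## for odd ##n## and zero otherwise, which the numerical sums reproduce.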

Since all waves can be modeled mathematically as a function, you can write them as a Fourier series, i.e. a combination of sine/cosine functions. Now, since a sine/cosine function can be interpreted as representing a sine/cosine wave, the answer to your question is YES. You can indeed look at waves as a "sum of sine/cosine waves", or simply as a "sum of sine waves", since cosines are just phase-shifted sines, i.e. ##\cos(x) = \sin(x + \pi/2)##.

But it's not just the mathematics that says you can do this: in nature, periodic waves add to each other in the same fashion as in the math, so you could say that, e.g., human speech is just a combination of (very many) different sine waves.

If you know some serious mathematics, read on:

But the interesting point is that the Fourier series is just one of many different "expansions" you can use to represent a function. The Taylor series, which has polynomials as its basis (a basis being a set of linearly independent functions that span the whole function space, "Hilbert space" if you know linear algebra), is another way to do it. So while you might think it doesn't matter which expansion you choose, in fact it does.
In the case of waves it's MUCH easier to use a Fourier series than a Taylor series. The reason is that the main characteristic of the Fourier series is that its basis functions are periodic sines/cosines, so a Fourier series is the simplest way to model a periodic function. Now, since the vast majority of waves are represented by periodic functions, it's very easy, and indeed natural, to represent these functions mathematically as a sum of sine waves :).
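A quick way to see the difference (a Python sketch; the degree-7 polynomial is just an illustrative choice): a Taylor polynomial for ##\sin(x)## is excellent near its expansion point but useless far away, while the sine itself, the simplest possible "Fourier series", is exact everywhere:

```python
import math

def taylor7(x):
    """Degree-7 Taylor polynomial of sin about x = 0 (a polynomial-basis expansion)."""
    return x - x**3 / 6 + x**5 / 120 - x**7 / 5040

# Near the expansion point the polynomial basis does fine...
err_near = abs(taylor7(0.5) - math.sin(0.5))

# ...but a few periods away it is off by orders of magnitude,
# while the periodic (Fourier) basis never degrades.
err_far = abs(taylor7(10.0) - math.sin(10.0))
```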

Yes, as long as you are allowed to include complex numbers, and you are restricted to 1D. Exponential functions are waves, too, and can be represented as complex sine and cosine waves.
##e^{-x} = \cos(ix)+i \sin(ix)##
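A quick numerical check of this identity (a sketch using Python's `cmath` module):

```python
import cmath

x = 0.7  # any real value works
lhs = cmath.exp(-x)
rhs = cmath.cos(1j * x) + 1j * cmath.sin(1j * x)
# Both sides equal cosh(x) - sinh(x) = e^(-x), up to floating-point error.
```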

However, waves in physics generally refer to functions of multiple space dimensions and time. We can still write the solution in terms of sines and cosines, as long as the boundary conditions permit the spatial dimensions to be separated from each other.

But sometimes, the boundary conditions cause a solution in terms of sines and cosines to be inappropriate. This is often the case for a cylindrical or spherical cavity. In these cases, it is better to write the wave in terms of sums of Bessel functions or spherical Bessel functions, respectively.
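There is no Bessel function in the Python standard library, but as a sketch (assuming SciPy is unavailable; in practice you would use `scipy.special.jv`), ##J_0## can be computed from its integral representation ##J_0(x) = \frac{1}{\pi}\int_0^\pi \cos(x \sin t)\,dt##:

```python
import math

def bessel_j0(x, samples=10000):
    """Approximate J0(x) = (1/pi) * integral_0^pi cos(x*sin(t)) dt
    with a midpoint Riemann sum (a sketch; use scipy.special.jv in practice)."""
    dt = math.pi / samples
    return sum(math.cos(x * math.sin((k + 0.5) * dt))
               for k in range(samples)) * dt / math.pi
```

The zeros of ##J_0## (the first is near 2.4048) set the resonant frequencies of a circular drumhead, just as the zeros of sine set them for a string.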

Anything that vibrates freely is going to be a sine wave, unless something is forcing or damping it, and even then there is usually some sine-wave component in there. Since most waves propagate and vibrate freely, they will be a sum of sine waves.

It strongly depends on what you mean by waves :)

Play around with Excel and plot the waveform produced by adding sine waves of varying amplitudes and frequencies; it is amazing how quickly you get very complex waveforms.

The rough intuitive reason why you can represent functions in terms of sines and cosines requires understanding what normal modes are. That's hard to describe verbally without some sort of pictures, and you'd have to study oscillations a little more to understand it fully. The upshot is that for a vibrating string, under some assumptions (e.g. that the vibrations are small), there are two descriptions. One is the wave equation; the other is normal modes, which in this case are sines and cosines.

If you take an arbitrary differentiable function and move it at a constant speed, it will satisfy the wave equation. From the other point of view, the normal-modes description of the vibrating string says that anything the string can do is some combination of the normal modes. It's not clear from the normal-modes point of view that you can get an arbitrary differentiable function. But assuming both descriptions are correct, they must be equivalent, so putting the two viewpoints together implies that anything the wave equation can do is something the normal modes can do, which suggests the conclusion we are seeking. Something like this seems to be why Bernoulli (not Fourier!) was led to the Fourier series representation of functions.

This is suggestive, but it's not a proof. The rigorous proof is relatively complicated and was developed over hundreds of years. Fourier contributed to the theory in several ways. Firstly, he applied the same idea to develop a theory of heat. In that theory, it's very clear that you can have non-differentiable functions, so maybe the main role that Fourier played was to suggest that you could represent an arbitrary, possibly discontinuous function this way. Fourier also seems to have contributed some arguments in that direction, but it wasn't really fully proven until much later. Dirichlet more or less settled it for nice enough functions, and some people came up with nasty enough functions that couldn't be represented as Fourier series, so it turned out that you need some weak assumptions to get it to work, and you can also get it to work for more functions if you redefine your concept of convergence. Anyway, it was a big inspiration for the development of math in the 19th century, and I don't think the theory was fully fleshed out until the 20th century.

The best book on the subject is Discourse on Fourier Series by Lanczos, but it's relatively advanced (advanced undergraduate level/early graduate level, maybe). You can also gain some insight by reading books about signal processing or the theory of waves and oscillations.

I've been sloppy and haven't talked about the requirement of periodicity, boundary conditions, and so on. For a non-periodic function, you can let the period go to infinity, in some sense, and you arrive at the concept of the Fourier transform. So in some sense it works for non-periodic functions too, but you need an integral rather than a sum: you integrate over continuous frequencies rather than summing over discrete ones.
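The discrete analogue of the Fourier transform makes this concrete. Below is a naive ##O(N^2)## DFT (a sketch of what FFT libraries compute efficiently; the two-tone signal is a made-up test case):

```python
import cmath
import math

def dft(samples):
    """Naive discrete Fourier transform, O(N^2)."""
    N = len(samples)
    return [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

# A signal containing frequencies 3 and 7 (in cycles per window):
N = 64
signal = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 7 * n / N)
          for n in range(N)]

# The magnitude spectrum peaks exactly at bins 3 and 7.
spectrum = [abs(c) for c in dft(signal)]
```

A pure sine of amplitude ##A## shows up in its frequency bin with magnitude ##A \cdot N/2##, so the peaks here come out to 32 and 16.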