mesa said:
So the Fourier Series basically uses 'axes' made up of trigonometric functions (i.e. sin, cos, etc. of πx). They were chosen because they have an infinite number of real roots (at least without performing a transformation); it is at these orthogonal points that the Fourier series places itself.
Which part is wrong?
It's not that. The trig functions aren't significant because of their roots: they matter for everything from geometry, to a continuous analogue of number theory (since they are periodic), to the whole idea of using frequency to analyze things, as is done in many, many transforms.
This Fourier decomposition is an example of an integral transform. Basically what this means is that you have an expression (i.e. a function or a signal), you put it through a black box, and it spits something out: in this case, one number (a coefficient) for each basis vector. Then you have an inverse process which takes the output of the black box and reconstructs the input.
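To make the "one number per basis vector" idea concrete, here's a minimal sketch in Python/NumPy. The signal f(x) = x and the sample count are just illustrative choices; I picked f(x) = x because its sine coefficients are known in closed form, so you can see the black box landing on the right numbers.

```python
import numpy as np

# The "black box" for one basis vector: project the signal onto sin(n*x)
# to get the n-th sine coefficient, b_n = (1/pi) * integral of f(x)*sin(n*x) dx.
# For f(x) = x on [-pi, pi], the known answer is b_n = 2*(-1)^(n+1)/n.
x = np.linspace(-np.pi, np.pi, 200001)
dx = x[1] - x[0]
f = x  # the input signal

def b(n):
    # Riemann-sum approximation of the projection integral
    return np.sum(f * np.sin(n * x)) * dx / np.pi

print(b(1))  # ~ 2.0
print(b(2))  # ~ -1.0
```

Each call to b(n) is one pass through the black box: one basis vector in, one coefficient out.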
There are different kinds of black boxes and they do different things, but each one has a reverse black box: if you chain the reverse box after the normal box, you always get back what you put in initially (provided you meet some conditions).
As it turns out, the reason they are powerful is that they are orthogonal. Orthogonal means completely independent: kind of like how, if you have a vector <x,y,z> and you change y, then x and z don't change at all.
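You can check the orthogonality numerically (a quick sketch; the sample count is arbitrary). The "dot product" of two functions here is the integral of their product over [-pi, pi], and for two different basis functions it comes out zero:

```python
import numpy as np

# Inner product on [-pi, pi], approximated by a Riemann sum.
x = np.linspace(-np.pi, np.pi, 100001)
dx = x[1] - x[0]

def inner(f, g):
    return np.sum(f(x) * g(x)) * dx

print(inner(np.sin, lambda t: np.sin(2 * t)))  # ~ 0: orthogonal
print(inner(np.sin, np.sin))                   # ~ pi: sin's "length" squared
```

That zero is exactly the "change y and x doesn't move" property: the sin(2x) coefficient of a signal doesn't care at all about its sin(x) content.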
There are other reasons too (look into signal processing or pure math to find them), but the idea of taking any function over a finite interval (from -pi to pi in the default case) and turning it into a vector (an infinite-dimensional one) is very powerful, because it's a standard way to take a signal and give it geometry.
The axes correspond to the trigonometric functions, and the way you interpret it is that the value on each axis is the component along that basis vector. For example, <x,y,z> has the value x on the x-axis, y on the y-axis, and z on the z-axis, so yours is a good way of thinking if you think about things in terms of a vector.
If you are interested, there is a whole area called Fourier analysis in which you deal with orthogonal polynomials; these also create vectors (like the one above), but the vectors represent something different.
So we will still get numbers for a basis, but those numbers refer to the basis of our orthogonal polynomials and not to the trig functions.
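For instance (a sketch using NumPy's Legendre module; f(x) = x^3 is just an easy test case): the same projection game with Legendre polynomials on [-1, 1] gives coefficients that only mean something against *that* basis.

```python
import numpy as np
from numpy.polynomial import legendre

# Express f(x) = x^3 in the Legendre basis P_0, P_1, ... on [-1, 1].
# Since P_1 = x and P_3 = (5x^3 - 3x)/2, the exact answer is
# x^3 = 0.6*P_1(x) + 0.4*P_3(x).
x = np.linspace(-1, 1, 1001)
coeffs = legendre.legfit(x, x**3, deg=5)
print(np.round(coeffs, 3))  # 0.6 in slot 1, 0.4 in slot 3, zeros elsewhere
```

Those same numbers (0.6, 0.4) would be meaningless if you read them as sine/cosine components; the coefficient vector always comes stamped with its basis.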
If you want to understand the geometry better, look at a linear algebra textbook to see how all this works for ordinary vectors; you will have to learn the finite-dimensional stuff before the infinite-dimensional stuff, but the ideas are the same in both.