How can Fourier series formulas be derived without just memorizing them?

AI Thread Summary
The discussion focuses on deriving the Fourier series formulas rather than simply memorizing them. The formulas can be understood through the framework of an inner product space, in which the sine and cosine functions form an orthogonal basis. Because the integral of a product of basis functions with different frequencies is zero, each coefficient can be extracted by a single integral. The conversation also notes that working with complex exponentials simplifies the derivation. Understanding these underlying principles clarifies how a Fourier series is constructed.
lukaszh
Hello,
everywhere I look I see this
a_n = \frac{1}{\pi}\int_{-\pi}^\pi f(t) \cos(nt)\, dt
b_n = \frac{1}{\pi}\int_{-\pi}^\pi f(t) \sin(nt)\, dt
etc... but I can't find how to derive these formulas. I'm really tired and a bit confused by them, because I can't see any way to derive them. I don't want to just apply a formula; I want to know what the formula is actually about.
Thank you...
 
If you have an "inner product space", that is, an vector space with an inner product defined on it, together with an orthonormal basis, v_1, v_2, ..., that is such that &lt;v_i, v_j&gt;= 0 if i\ne j and &lt;v_i, v_i&gt;= 1 for all i, and want to write v as a linear combination, v= a_1v_1+ a_2v_2+ ...+ a_nv_n, then [math]a_i= <v, v_i[/itex]. What you have is a vector space with basis cos(nx), sin(nx) with inner product &lt;f, g&gt;= \frac{1}{\pi}\int_{-\pi}^\pi f(t)g(t)dt which leads to the given formulas.
 
It works because the functions sin(nt) for different values of n are orthogonal to each other, that is,

\int^{\pi}_{-\pi} {\sin(nt) \sin (mt) dt} = 0

for n \ne m, and

\int^{\pi}_{-\pi} {\sin^2(nt) dt} = \pi

Likewise for cosines. Try a few examples if you like. Therefore if you have a function

f(t) = b_1 \sin (t) + b_2 \sin (2t) + b_3 \sin (3t) + ...

then, for example, letting n = 2:

\int^{\pi}_{-\pi} {f(t) \sin (2t)\, dt} = b_1 \int^{\pi}_{-\pi} {\sin (t) \sin (2t)\, dt} + b_2 \int^{\pi}_{-\pi} {\sin^2 (2t)\, dt} + b_3 \int^{\pi}_{-\pi} {\sin(3t) \sin (2t)\, dt} + ...

\int^{\pi}_{-\pi} {f(t) \sin (2t) dt} = b_1 \cdot 0 + b_2 \cdot \pi + b_3 \cdot 0 + ...
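If you'd rather check a few of these orthogonality integrals numerically than by hand, here is a quick sketch (again just an illustration, assuming SciPy is available):

Code:
import numpy as np
from scipy.integrate import quad

# integral of sin(n t) * sin(m t) over [-pi, pi]; expect pi if n == m, else 0
def sine_overlap(n, m):
    return quad(lambda t: np.sin(n * t) * np.sin(m * t), -np.pi, np.pi)[0]

for n in range(1, 4):
    for m in range(1, 4):
        print(n, m, round(sine_overlap(n, m), 10))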
 
Thank you. Now I understand. Thanks!
 
It is easier to work with the basis functions

e_n(x) = e^{inx}

and define the inner product as

\langle f, g\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,\overline{g(x)}\,dx
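With that inner product the e_n are orthonormal and the coefficients are again just inner products; for a real-valued f they recombine into the a_n and b_n above (a standard identity, added here for completeness):

\langle e_n, e_m\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{inx}\,\overline{e^{imx}}\,dx = \begin{cases} 1, & n = m \\ 0, & n \ne m \end{cases}

c_n = \langle f, e_n\rangle = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\,e^{-inx}\,dx, \qquad a_n = c_n + c_{-n}, \quad b_n = i\,(c_n - c_{-n})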
 