Why can I express Fourier coefficients as an = An*sin() and bn = An*cos()?

RaduAndrei
Consider the following article:
https://en.wikipedia.org/wiki/Fourier_series

In the definition, they write ##a_n = A_n \sin(\phi_n)## and ##b_n = A_n \cos(\phi_n)##.

With this notation you can go from a sum of sines and cosines to a sum containing only sines, but with initial phases.

Why can I write ##a_n = A_n \sin(\phi_n)## and ##b_n = A_n \cos(\phi_n)##?
It seems to come out of the blue.
 
Substitute the second equation into the first.
 
I know that substitution gets you from one form to the other.
But my question is: why am I allowed to write ##\cos\phi = a/\sqrt{a^2+b^2}## and ##\sin\phi = -b/\sqrt{a^2+b^2}## in the first place?
I can see that ##\cos^2\phi + \sin^2\phi = 1## with these choices, so they are consistent.

But why can I write ##\cos\phi## like that? Once ##\cos\phi## is written that way, the identity ##\cos^2\phi + \sin^2\phi = 1## determines ##\sin\phi##. But why am I allowed to write ##\cos\phi## like that to begin with?

Is it just arbitrary? If I set ##\cos\phi = a## and then solve for ##\sin\phi##, fine, I can see that. But writing it as ##a/\sqrt{a^2+b^2}## does not seem so straightforward. Maybe there is a property saying that for any two numbers ##a, b## I can write ##\cos\phi## in that way; I do not know. Going from the trigonometric Fourier sum to the exponential form, we use Euler's formula to write ##\cos x = \tfrac{1}{2}(e^{ix} + e^{-ix})##, and similarly for ##\sin x##. So there at least I have Euler's formula to lean on.
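As a numerical sanity check of the identity under discussion (a sketch only; the coefficient values ##a = 3##, ##b = -4##, ##n = 2## below are arbitrary picks, not from the article), one can verify that with ##A = \sqrt{a^2+b^2}## and ##\phi## chosen so that ##a = A\sin\phi## and ##b = A\cos\phi##, the combination ##a\cos(nx) + b\sin(nx)## really equals ##A\sin(nx+\phi)##:

```python
import math

# With A = sqrt(a^2 + b^2) and phi chosen so that
#   a = A*sin(phi),  b = A*cos(phi)   (i.e. phi = atan2(a, b)),
# we expect  a*cos(n*x) + b*sin(n*x) == A*sin(n*x + phi)  for all x.
a, b, n = 3.0, -4.0, 2          # arbitrary illustrative coefficients
A = math.hypot(a, b)            # sqrt(a^2 + b^2)
phi = math.atan2(a, b)          # angle with sin(phi)=a/A, cos(phi)=b/A

for x in [0.0, 0.7, 1.3, 2.9]:
    lhs = a * math.cos(n * x) + b * math.sin(n * x)
    rhs = A * math.sin(n * x + phi)
    assert abs(lhs - rhs) < 1e-12
print("identity holds at the sampled points")
```

The key point is that `atan2(a, b)` always returns an angle whose sine and cosine are ##a/A## and ##b/A##, for any pair ##(a, b)## other than ##(0, 0)##.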
 
Expressions like ##A_n \cos \phi_n## depend only on the index ##n##, so there is no harm in writing them more simply as ##a_n##.
RaduAndrei said:
Maybe there is a property saying that for any two numbers ##a, b## I can write ##\cos\phi## in that way.
If you want to picture it that way, you first have to draw a right triangle and define which sides ##a## and ##b## correspond to, and which angle ##\phi## corresponds to.
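To spell the triangle picture out (a sketch of the standard argument, in the notation of this thread): set
$$A_n = \sqrt{a_n^2 + b_n^2}, \qquad \left(\frac{b_n}{A_n}\right)^2 + \left(\frac{a_n}{A_n}\right)^2 = 1,$$
so the point ##(b_n/A_n,\, a_n/A_n)## lies on the unit circle, and therefore some angle ##\phi_n## exists with ##\cos\phi_n = b_n/A_n## and ##\sin\phi_n = a_n/A_n##. That is the only property being used: the definitions are not arbitrary, they are exactly the choice that makes the angle-addition formula collapse the sum,
$$a_n\cos(nx) + b_n\sin(nx) = A_n\bigl(\sin\phi_n\cos(nx) + \cos\phi_n\sin(nx)\bigr) = A_n\sin(nx + \phi_n).$$
(With the other sign convention, ##\sin\phi = -b/A##, the same argument produces a single cosine with a phase instead.)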
 
Ah, OK. Now it makes sense. Thanks.
 