Uncovering the Hidden Power of Fourier Series

AI Thread Summary
Fourier series provide a powerful method for analyzing periodic functions, such as square waves, by expressing them as infinite sums of sine and cosine functions. While simple functions can be represented with conditional statements, this approach complicates integration and differentiation, making Fourier series more practical for these operations. In electronics, modeling a nominal square wave using Fourier series allows for easier analysis of system responses, particularly in linear systems where the response to combined inputs can be calculated straightforwardly. The ability to express arbitrary initial conditions as a Fourier series is essential for solving complex problems, such as those involving heat equations. Ultimately, Fourier series simplify the analysis of periodic functions in various applications, making them a valuable tool in mathematical and engineering contexts.
matqkks
A simple periodic function such as a square wave can be written down quite easily, yet its Fourier series is an infinite sum of sines and cosines. Why bother with that format when we can deal with the given periodic function directly? What is the point of this long calculation?
 
What makes you think "we can quite easily deal with the given periodic function"? Can you give an example?
 
I meant: why write this as an infinite series when it can be expressed quite easily as a waveform? Maclaurin or Taylor series make sense to me because they make some difficult functions, such as sin(sqrt(x)), easier to handle.
 
If all one intends to do with a function is write it down, then it might be simpler to write it as a rule with some "if...then..." conditions in it than as an infinite series. For example, a square wave (as a function of time alone) could be written in a format like: if (n < t < n+1 and n is an even integer) then f(t) = 1; otherwise f(t) = 0. But if you need to integrate or differentiate the function, the "if...then..." conditions become a nuisance, and it may be simpler to deal with the infinite series, which can be integrated or differentiated term by term.
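To make the comparison concrete, here is a small sketch of both representations of that square wave (period 2, values 0 and 1). The Fourier series for this particular wave is the standard one, 1/2 + (2/π) Σ sin(kπt)/k over odd k; the function names are my own.

```python
import math

def square_wave(t):
    # The "if...then..." definition: f(t) = 1 on (n, n+1) for even integer n,
    # and f(t) = 0 otherwise.
    return 1.0 if math.floor(t) % 2 == 0 else 0.0

def fourier_partial_sum(t, n_terms=50):
    # Truncated Fourier series of the same wave:
    #   f(t) = 1/2 + (2/pi) * sum over odd k of sin(k*pi*t) / k
    s = 0.5
    for k in range(1, 2 * n_terms, 2):
        s += (2 / math.pi) * math.sin(k * math.pi * t) / k
    return s
```

Away from the jumps, the partial sum converges to the piecewise definition, and unlike the "if...then..." form it is a sum of terms that can each be differentiated or integrated trivially.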

A true square wave isn't differentiable at the jumps. In a situation (such as in electronics) when we are dealing with a nominal square wave, we could make a realistic model for the nominal square wave by using some "if...then..." conditions to round the shape of the jumps. However, it may be simpler to think of the square wave as an infinite series and then neglect some of the terms of the series in order to achieve the same sort of approximation.

The "response" of some physical systems to an "input" (e.g. the effect of an electronic filter on an input signal) may be simple to analyze when the input is a sine or cosine function. For a linear system, the response to a sum of inputs is the sum of the responses to the individual inputs. Hence the simplest analysis is often to represent an input signal as a sum of sine and cosine functions and compute the response as the sum of the individual responses.
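As an illustration of that idea, here is a sketch (my own example, assuming a first-order RC low-pass filter with time constant tau) of a 0-to-1 square wave passed through the filter harmonic by harmonic. Each sinusoidal input sin(wt) produces a steady-state output G(w)·sin(wt + φ(w)), and by linearity the total response is the sum over the wave's Fourier harmonics.

```python
import math

def filtered_square_wave(t, tau=0.1, n_terms=50):
    # Square wave harmonics: 1/2 + (2/pi) * sum over odd k of sin(k*pi*t)/k.
    # For a first-order low-pass filter with time constant tau, each sinusoid
    # of angular frequency w is scaled by G(w) = 1/sqrt(1 + (w*tau)**2) and
    # phase-shifted by phi(w) = -atan(w*tau).  Sum the individual responses.
    out = 0.5  # the DC term passes through a low-pass filter unchanged
    for k in range(1, 2 * n_terms, 2):
        w = k * math.pi                       # frequency of the k-th harmonic
        gain = 1 / math.sqrt(1 + (w * tau) ** 2)
        phase = -math.atan(w * tau)
        out += (2 / math.pi) / k * gain * math.sin(w * t + phase)
    return out
```

Trying to compute the same response directly from the "if...then..." definition would require solving the filter's differential equation across every jump; the term-by-term approach reduces it to one gain and one phase per harmonic.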
 
Maybe one day, you'll be abducted by aliens who will ask you to solve a "cooling off" problem for a thermally isolated rod.

And then you'd be able to answer them, because you know how to solve the problem when the initial conditions are sinusoidal: sinusoids are the eigenfunctions of the problem, and they die off exponentially (the heat equation doesn't like steep temperature gradients, so it kills them off rapidly). So if you can express the arbitrary initial conditions as a Fourier series, you'd be good to go.
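A minimal sketch of that solution recipe, assuming a rod of length 1 with its ends held at temperature 0 and thermal diffusivity alpha (my own setup for illustration): each sine mode sin(nπx) is an eigenfunction of the heat equation and decays independently, with the steep (high-n) modes dying fastest.

```python
import math

def heat_solution(x, t, coeffs, alpha=1.0):
    # coeffs[n-1] = b_n, the Fourier sine coefficients of the initial
    # temperature profile:  u(x, 0) = sum over n of b_n * sin(n*pi*x).
    # Each mode then decays as exp(-alpha * (n*pi)**2 * t), so steeper
    # (higher-n) modes vanish much faster.
    total = 0.0
    for n, b in enumerate(coeffs, start=1):
        total += b * math.exp(-alpha * (n * math.pi) ** 2 * t) \
                   * math.sin(n * math.pi * x)
    return total
```

So the whole problem reduces to finding the Fourier coefficients of the initial condition; the time evolution of each mode is then just an exponential factor.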
 