Taylor series vs. Fourier series

jaejoon89
Is a Fourier series essentially the analogue of a Taylor series, except that it expresses a function in terms of trigonometric functions rather than polynomials? And like the Taylor series, is it OK only for analytic functions, i.e., functions whose remainder term goes to zero as n -> infinity?
 
A Taylor series has to be expanded around a specific point, and the coefficients consist of the derivatives of the function at that point: in particular, the function must be infinitely differentiable there. Convergence may be limited to a neighborhood of a certain radius around that point.
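To make the limited radius of convergence concrete, here is a quick numerical sketch in plain Python (the function name is mine, not standard): the Taylor series of ln(1+x) about 0 has radius of convergence 1, so its partial sums converge for |x| < 1 but blow up at x = 2 even though ln(3) is perfectly finite.

```python
import math

# Partial sums of the Taylor series of ln(1+x) about x = 0:
# ln(1+x) = sum_{k>=1} (-1)^(k+1) * x^k / k, radius of convergence 1.
def taylor_log1p(x, n):
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

# Inside the radius (|x| < 1) the partial sums converge to the function:
assert abs(taylor_log1p(0.5, 50) - math.log(1.5)) < 1e-12

# Outside the radius (x = 2) the partial sums oscillate with growing
# magnitude, even though ln(3) itself is finite:
print(taylor_log1p(2.0, 10), taylor_log1p(2.0, 50))
```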

The Fourier series for a function is not dependent upon a specific point. A function need not be infinitely differentiable at any point (or even differentiable at all) to have a Fourier series. Every function that is integrable (L^1) has a formal Fourier series, i.e., the coefficients exist.

Mere continuity is sufficient to ensure convergence almost everywhere. More generally, if f is any function in L^p for some p > 1, then the Fourier series of f converges almost everywhere. (This is a very hard result: Carleson proved the L^2 case in 1966, and Hunt extended it to all p > 1 in 1968.) On the other hand, Kolmogorov constructed an L^1 function whose Fourier series diverges at every point.
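Here is a small sketch of that contrast with Taylor series (plain Python; the function name is mine): f(x) = |x| on [-pi, pi] is continuous but not differentiable at 0, so it has no Taylor series there, yet its Fourier series has the well-known closed form |x| = pi/2 - (4/pi) * sum over odd k of cos(kx)/k^2, and the partial sums converge everywhere, including at the corner.

```python
import math

# Fourier partial sums of f(x) = |x| on [-pi, pi], a function that is
# continuous but not differentiable at 0.  Closed-form coefficients:
# |x| = pi/2 - (4/pi) * sum_{k odd} cos(k x) / k^2.
def fourier_abs(x, n):
    s = math.pi / 2
    for k in range(1, n + 1, 2):  # odd harmonics only
        s -= (4 / math.pi) * math.cos(k * x) / k ** 2
    return s

# The partial sums converge even at the corner x = 0:
for n in (1, 5, 51, 501):
    print(n, fourier_abs(0.0, n))  # approaches |0| = 0
```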
 
The Taylor series is essentially the Fourier series on a circle around the point of expansion: if f is analytic and you substitute z = z_0 + r e^(i*theta), the power series sum a_n (z - z_0)^n becomes the Fourier series sum a_n r^n e^(i*n*theta) in theta.
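That correspondence can be checked numerically (a sketch in plain Python; the helper name and sample count are my own choices): for f(z) = 1/(1-z), whose Taylor coefficients are all a_n = 1, the Fourier coefficients of theta -> f(r e^(i*theta)) on the circle of radius r = 0.5 come out to a_n * r^n.

```python
import cmath
import math

# For analytic f, substituting z = r * e^{i t} turns the Taylor series
# sum a_n z^n into the Fourier series sum (a_n r^n) e^{i n t} in t.
def fourier_coeff(f, n, r=0.5, samples=256):
    # c_n = (1/2pi) * integral of f(r e^{it}) e^{-i n t} dt,
    # approximated by the trapezoid rule (exact up to aliasing here).
    total = 0.0
    for j in range(samples):
        t = 2 * math.pi * j / samples
        total += f(r * cmath.exp(1j * t)) * cmath.exp(-1j * n * t)
    return total / samples

f = lambda z: 1 / (1 - z)  # Taylor coefficients a_n = 1 for all n
for n in range(4):
    c = fourier_coeff(f, n)
    print(n, c.real / 0.5 ** n)  # recovers the Taylor coefficient a_n = 1
```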
 