Taylor Series to Approximate Functions

V0ODO0CH1LD
I get the many proofs behind it and all of the mechanics of how to use it. What I don't get is why it works.
What was the thought process of Brook Taylor when he devised his thing? I get that each new term is literally being added to the previous ones along the y-axis to approximate the y value of the original function. But why is it that the coefficient that goes in front of x^n is the nth derivative of the function divided by n factorial? How does that make sense? I know it works, but it seems like magic to me, and I can't help but hate that.
 

My calculus professor said the same thing; he thought it was amazing that local information, like all the derivatives at a single point, could give all the global information about the function.

Here's how you might see why the coefficients should come from the derivatives.

Take y = f(x) = ax^2 + bx + c. Then y'(x) = 2ax + b and y''(x) = 2a, so
y(0) = c, y'(0) = b, y''(0) = 2a.

Solving for the coefficients gives c = y(0), b = y'(0), and a = y''(0)/2!. (Differentiating x^n exactly n times produces the constant n!, which is where the factorial in the denominator comes from.) So if a function can be represented as a power series (an infinite polynomial), then its derivatives match up with the coefficients just so.
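Here's a quick symbolic check of that matching, sketched in Python with sympy (the variable names are mine, just for illustration):

```python
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
f = a*x**2 + b*x + c

# The nth coefficient of a power series about 0 should be f^(n)(0) / n!
for n in range(3):
    coeff = sp.diff(f, x, n).subs(x, 0) / sp.factorial(n)
    print(n, coeff)  # prints c, b, a -- exactly the original coefficients
```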

That's the "algebraic" thinking. Here's some "geometric" thinking. The linear approximation gives the straight line that fits f most closely near the point (same value, same slope), that is

f(x)≈L(x)=f(0)+f'(0)(x-0).

A closer fit than a line is a second degree polynomial, that is, a parabola, call it T_2,

f(x)≈T_2(x)=f(0)+f'(0)(x-0)+f''(0)/2*(x-0)^2.

A cubic polynomial could give a closer fit.

Now let the degree n go to infinity. If f is "analytic", then the Taylor polynomials will converge to f. Most elementary functions (like trig, exp, etc.) are analytic.
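To watch that convergence happen, here's a small numeric sketch (plain Python; taylor_exp is a name I made up) comparing the degree-n Taylor polynomials of e^x at x = 1 with the true value:

```python
import math

def taylor_exp(x, n):
    """Degree-n Taylor polynomial of e^x about 0: sum of x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

x = 1.0
for n in (1, 2, 4, 8):
    err = abs(taylor_exp(x, n) - math.exp(x))
    print(n, taylor_exp(x, n), err)  # the error shrinks fast as n grows
```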

Statistically, most functions are not analytic (discontinuous functions, for instance), and that stays true even if we restrict to infinitely differentiable functions. Take f(x) = e^{-1/x^2} (if we define f(0) = 0, it becomes infinitely differentiable). It turns out all the derivatives at the origin are zero: every derivative of f is of the form (polynomial in 1/x)·e^{-1/x^2}, and the exponential factor dies off faster than any power of 1/x can blow up. So if f were "analytic", then f(x) = f(0) + f'(0)(x-0) + ... = 0 for all x, a contradiction.

Analytic functions can be studied in more detail in a subject called complex analysis, which is a very bizarre subject with a very strange collection of facts; I used to think of it as the black magic subject. I think the wildness (or rather the lack of it) can be understood a little intuitively: it is the study of a very small collection of functions between two planes, those which are conformal, meaning right angles are mapped to right angles (well, almost: except where the derivative is zero, and even there the angles are still mapped in a fairly restricted way).
 
In addition to the previous poster's explanation (which was very good), here is how my own understanding of Taylor polynomials works:

I actually love the idea of Taylor polynomials because in a way they are set up like "it would make sense that this would approximate that function, so let's see if it does" (of course the original development was much more rigorous).

The idea is that if you have a polynomial that is equal to the function at a given point, and whose derivatives are equal to every derivative of that function at the same point, then this polynomial should be able to predict the function with good accuracy in the region around the given point.

Imagine you have a function f(x) with value a at 0, whose derivative f'(x) has value b at 0, and whose second derivative f''(x) has value c at 0.
You start by defining a polynomial p(x) such that p(x) = f(0), or more generally p(x) = a.

Then you say you want p'(x) to equal f'(0).
So setting up that equation: p'(x) = b.
Antidifferentiating gives p(x) = bx + f(0) (just basic antidifferentiation, choosing the constant of integration so that p(0) = f(0)). Remembering that you can switch f(0) with a, this is the 1st degree Taylor polynomial, a + bx.
To get the 2nd degree Taylor polynomial, do the same one level up: set p''(x) = c, antidifferentiate twice (matching p'(0) = b and p(0) = a along the way), and you get p(x) = a + bx + \frac{c}{2}x^2.

Do you see why fixing an arbitrary nth derivative and antidifferentiating backwards gives you the Taylor polynomial?
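To spell out where the factorial comes from in this construction: fix the nth derivative to the constant f^{(n)}(0) and antidifferentiate n times, choosing each constant of integration to match the next lower derivative of f at 0. Each antidifferentiation of a power of x divides by the next integer, so after n of them the leading term has picked up exactly n! in the denominator:

$$ p^{(n)}(x)=f^{(n)}(0) \;\Rightarrow\; p^{(n-1)}(x)=f^{(n)}(0)\,x+f^{(n-1)}(0) \;\Rightarrow\; p^{(n-2)}(x)=f^{(n)}(0)\,\frac{x^2}{2}+f^{(n-1)}(0)\,x+f^{(n-2)}(0) \;\Rightarrow\;\cdots\;\Rightarrow\; p(x)=\sum_{k=0}^{n}\frac{f^{(k)}(0)}{k!}\,x^k. $$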
 
I'm not sure what you mean by "why it works". Works for what? I suspect that what you are marveling at simply is not true. The set of all functions that can be arbitrarily closely approximated by their Taylor polynomials, that is, that are equal to their Taylor series, is a very, very small part of the set of all possible functions.

For one thing, the set of all continuous functions is a very small set, comparatively. The set of all differentiable functions, the set of all twice differentiable functions, ..., the set of all infinitely differentiable functions, are each far smaller than the previous set.

And even for infinitely differentiable functions it is NOT true that they can be arbitrarily closely approximated by their Taylor polynomials. For example, the function f(x) = e^{-1/x^2} if x is not 0, f(0) = 0, is infinitely differentiable, but all of its derivatives are 0 at x = 0. That is, all its Taylor polynomials are identically 0 and so cannot approximate f, no matter how large an n you use.
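Here's a small symbolic check of that counterexample, sketched with sympy (it computes each derivative's limit as x -> 0, which by the standard mean-value argument equals the derivative at 0 once f(0) is set to 0):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(-1/x**2)  # the counterexample, for x != 0

# Every derivative tends to 0 at the origin...
for n in range(4):
    print(n, sp.limit(sp.diff(f, x, n), x, 0))  # all print 0

# ...so every Taylor polynomial at 0 is identically 0, yet f is not:
print(f.subs(x, sp.Rational(1, 2)))  # exp(-4), roughly 0.018
```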

What is true is that functions that can be arbitrarily closely approximated by Taylor polynomials (they are called "analytic functions") are very useful and so we work with them much more than other functions.
 