Taylor Series to Approximate Functions

In summary, the conversation discusses the concept of Taylor polynomials and how they are used to approximate functions. The coefficient in front of x^n is the nth derivative of the function at the expansion point divided by n factorial, which may seem like magic but can be understood through both algebraic and geometric reasoning. However, it is important to note that not all functions are accurately approximated by their Taylor polynomials, as demonstrated by the function e^(-1/x^2). Functions that do equal their Taylor series are called analytic, and the concept of Taylor polynomials is central to understanding and analyzing them.
  • #1
V0ODO0CH1LD
I get the many proofs behind it and all of the mechanics of how to use it. What I don't get is why it works.
What was the thought process of Brook Taylor when he devised his thing? I get that each new term is literally being added to the previous ones along the y-axis to approximate the y value of the original function. But why is it that the coefficient that goes in front of x^n is the nth derivative of the function over n factorial? How does that make sense? I know it works, but it seems like magic to me, and I can't help but hate that.
 
  • #2

My calculus professor said the same thing; he thought it was amazing that local information, like all the derivatives at a single point, could give global information about the function.

Here's how you might see it's okay that the coefficient should be the derivative.

Take y=f(x)=ax^2+bx+c. Then y'(x)=2ax+b, y''(x)=2a, so...
y(0)=c, y'(0)=b, y''(0)=2a.

So if a function can be represented as a power series (an infinite polynomial), then its derivatives match up with the coefficients just so.
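Here is a minimal sketch of that matching, checked with SymPy (my addition, not part of the original post; the quadratic is the same one as above):

[code=python]
import sympy as sp

x, a, b, c = sp.symbols('x a b c')
y = a*x**2 + b*x + c

# Evaluating y and its derivatives at 0 recovers the coefficients,
# scaled by n!: the x^n coefficient is y^(n)(0) / n!.
print(y.subs(x, 0))                 # c
print(sp.diff(y, x).subs(x, 0))     # b
print(sp.diff(y, x, 2).subs(x, 0))  # 2*a, so a = y''(0)/2!
[/code]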

That's the "algebraic" thinking. Here's some "geometric" thinking. The linear approximation gives the closest straight line to f, that is

f(x)≈L(x)=f(0)+f'(0)(x-0).

A closer fit than a line is a second degree polynomial, that is, a parabola, call it T_2,

f(x)≈T_2(x)=f(0)+f'(0)(x-0)+f''(0)/2*(x-0)^2.

A cubic polynomial could give a closer fit.

Now let the degree n go to infinity. If f is "analytic", then the Taylor polynomials will converge to f. Most elementary functions (like trig, exp, etc.) are analytic.
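To see that convergence numerically, here is a small sketch (my addition, using sin as the example; not from the original posts):

[code=python]
import math

def taylor_sin(x, n):
    # Degree-n Taylor polynomial of sin about 0:
    # sum of (-1)^k x^(2k+1) / (2k+1)! for odd powers 2k+1 <= n
    total = 0.0
    for k in range(n // 2 + 1):
        p = 2 * k + 1
        if p > n:
            break
        total += (-1) ** k * x ** p / math.factorial(p)
    return total

x = 2.0
for n in (1, 3, 5, 7, 9):
    print(n, taylor_sin(x, n), abs(taylor_sin(x, n) - math.sin(x)))
# The error shrinks rapidly as the degree grows.
[/code]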

Statistically speaking, most functions are not analytic (discontinuous functions, for instance), and that remains true even among infinitely differentiable functions. Take f(x)=e^(-1/x^2) (if we define f(0)=0, it becomes infinitely differentiable). Then it turns out all the derivatives at the origin are zero. So if it were "analytic", we would have f(x)=f(0)+f'(0)(x-0)+...=0, a contradiction.
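You can watch that flatness numerically; a quick sketch (my addition): near 0 this function is smaller than any power of x, which is exactly why all of its Taylor coefficients at 0 vanish.

[code=python]
import math

def f(x):
    # e^(-1/x^2), extended by f(0) = 0
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

# f(x) shrinks faster than any power of x as x -> 0, so every
# difference quotient, hence every derivative, vanishes at 0.
for x in (0.5, 0.2, 0.1, 0.05):
    print(x, f(x), f(x) / x**10)  # even f(x)/x^10 -> 0
[/code]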

Analytic functions can be studied in more detail in a subject called complex analysis, which is a very bizarre subject with a very strange collection of facts. I used to think of it as the black magic subject. I think the wildness (or lack thereof) can be understood a little intuitively: it is the study of a very small collection of functions between two planes which are conformal, that is, right angles are mapped to right angles (well, almost, except where the derivative is zero, but even there the angles are still mapped in a fairly restricted way).
 
  • #3
To add to the previous poster (who did a very good job), my way of understanding Taylor polynomials works like this:

I actually love the idea of Taylor polynomials because, in a way, they are set up like "it would make sense that this would approximate that function, so let's see if it does" (of course, the actual development was much more rigorous).

The idea is that if you have a polynomial that is equal to the function at a given point, and whose derivatives are equal to every derivative of that function at the same point, then this polynomial should predict the function with good accuracy in the region around the given point.

Imagine you have a function f(x) that has the value a at 0, and f'(x) has value b at 0.
You start by defining a polynomial p(x) such that [itex]p(x)=f(0)[/itex], or more generally [itex]p(x)=a[/itex].

Then you say you want p'(x) to equal f'(x) at 0.
So setting up that equation: [itex]p'(x)=b[/itex]
Then you antidifferentiate to get [itex]p(x)=bx+f(0)[/itex] (just basic antidifferentiation, with the constant of integration chosen so that p(0)=f(0) still holds).
Remembering that you can switch f(0) with a, you get the 1st degree Taylor polynomial. Matching the second derivative, [itex]p''(x)=f''(0)[/itex], and antidifferentiating twice in the same way gives the 2nd degree Taylor polynomial; the antiderivative of x is x^2/2, which is where the 1/2! comes from.

Do you see why assuming the nth derivative and antidifferentiating backwards gives you the Taylor polynomial? Each round of antidifferentiation contributes one factor of the 1/n! (see the sketch below).
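Here is a sketch of that backwards antidifferentiation in SymPy (my addition; cos is just a hypothetical target function):

[code=python]
import sympy as sp

x = sp.symbols('x')
f = sp.cos(x)  # hypothetical example function
n = 4          # degree of the Taylor polynomial to build

# Start from the constant p^(n)(x) = f^(n)(0), then antidifferentiate
# n times, fixing each constant of integration by matching f^(k)(0).
p = sp.diff(f, x, n).subs(x, 0)
for k in range(n - 1, -1, -1):
    p = sp.integrate(p, x) + sp.diff(f, x, k).subs(x, 0)

print(sp.expand(p))                         # 1 - x**2/2 + x**4/24
print(sp.series(f, x, 0, n + 1).removeO())  # same polynomial
[/code]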
 
  • #4
I'm not sure what you mean by "why it works". Works for what? I suspect that what you are marveling at simply is not true. The set of all functions that can be arbitrarily closely approximated by their Taylor polynomials, that is, that are equal to their Taylor series, is a very, very small part of the set of all possible functions.

For one thing, the set of all continuous functions is, comparatively, a very small set. The set of all differentiable functions, the set of all twice differentiable functions, ..., the set of all infinitely differentiable functions, are each far smaller than the previous set.

And even for infinitely differentiable functions it is NOT true that they can be arbitrarily closely approximated by their Taylor polynomials. For example, the function [itex]f(x)= e^{-1/x^2}[/itex] if x is not 0, f(0)= 0, is infinitely differentiable, but all of its derivatives are 0 at x= 0. That is, all of its Taylor polynomials are identically 0 and so cannot approximate f no matter how large an n you use.

What is true is that functions that can be arbitrarily closely approximated by Taylor polynomials (they are called "analytic functions") are very useful and so we work with them much more than other functions.
 
  • #5


The Taylor series is a powerful mathematical tool that allows us to approximate functions with polynomials. It was developed by Brook Taylor in the early 18th century, and it has since become an essential tool in many areas of science and engineering.

The thought process behind Taylor's work was to find a way to represent a function as an infinite sum of polynomial terms. By doing this, he was able to approximate the function near a chosen point by evaluating the polynomial there. This allows for an accurate representation of the function, especially for functions that are not easily expressed as simple equations.

The reason why the coefficient in front of x^n is the nth derivative of the function over n factorial comes from the fundamental idea of calculus: the derivative of a function at a point gives its slope there. The first derivative fixes the slope of the tangent line, the second derivative the curvature of the function, and so on.

By matching higher derivatives and dividing by n factorial (the factor that appears because differentiating x^n a total of n times produces n!), we account for the change in slope, curvature, and higher-order behavior of the function at a particular point. This allows us to create a more accurate polynomial approximation of the function.
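To make the matching explicit (my addition; this is the standard argument): suppose f has a power series representation around a, and differentiate it term by term n times.

[tex]
f(x) = \sum_{k=0}^{\infty} c_k (x-a)^k
\;\Longrightarrow\;
f^{(n)}(x) = \sum_{k=n}^{\infty} k(k-1)\cdots(k-n+1)\, c_k \,(x-a)^{k-n}.
[/tex]

Setting x = a kills every term except k = n, leaving [itex]f^{(n)}(a) = n!\,c_n[/itex], so [itex]c_n = f^{(n)}(a)/n![/itex].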

While it may seem like magic at first, the Taylor series is actually based on solid mathematical principles and has been rigorously proven to work. It is a testament to the power of mathematics and its ability to describe and approximate the world around us.

So, while it may be frustrating to not fully understand the inner workings of the Taylor series, it is important to appreciate the beauty and effectiveness of this mathematical tool. With continued study and practice, the concepts behind it will become clearer, and you will be able to fully appreciate its value in scientific research and applications.
 

What is a Taylor series?

A Taylor series is a mathematical representation of a function as an infinite sum of terms. It is used to approximate a function by adding more terms to the series, which results in a more accurate approximation.

Why do we use Taylor series to approximate functions?

Taylor series are useful because they allow us to approximate complex functions with simpler ones. This can be helpful in situations where we need to evaluate a function at a certain point but do not have a direct way to do so.

What is the formula for a Taylor series?

The formula for a Taylor series is f(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)^2/2! + f'''(a)(x-a)^3/3! + ... where f'(a), f''(a), f'''(a), ... are the derivatives of the function evaluated at the point a.
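As a quick illustration of this formula in practice (my addition), a computer algebra system produces exactly these terms; here is a SymPy sketch:

[code=python]
import sympy as sp

x = sp.symbols('x')
a = 1  # expansion point; any point where the function is smooth works

# Taylor expansion of ln(x) about a = 1, up to the (x-1)^3 term
print(sp.series(sp.log(x), x, a, 4))
# (x - 1) - (x - 1)**2/2 + (x - 1)**3/3 + O((x - 1)**4, (x, 1))
[/code]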

What is the difference between a Taylor series and a Maclaurin series?

A Taylor series is the general representation of a function about an arbitrary point a, while a Maclaurin series is the special case of a Taylor series where the point a is equal to 0. This means that the Maclaurin series is written in powers of x itself rather than powers of (x-a).

How do we know when a Taylor series is a good approximation of a function?

The accuracy of a Taylor series approximation depends on the number of terms used in the series. The more terms we include, the closer the approximation will be to the actual function. We can also use a remainder term to estimate the error in the approximation.
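A small sketch of such an error estimate (my addition): for sin, every derivative is bounded by 1 in absolute value, so the Lagrange remainder satisfies |R_n(x)| <= |x|^(n+1)/(n+1)!.

[code=python]
import math

def taylor_sin(x, n):
    # Degree-n Taylor polynomial of sin about 0
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(n // 2 + 1) if 2*k + 1 <= n)

x, n = 1.5, 7
actual_error = abs(math.sin(x) - taylor_sin(x, n))
# Lagrange bound: |f^(n+1)| <= 1 for sin, so |R_n| <= |x|^(n+1)/(n+1)!
bound = abs(x)**(n + 1) / math.factorial(n + 1)
print(actual_error, bound)  # the actual error stays within the bound
[/code]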
