What Are the Real Benefits of Using Taylor Series for Function Approximations?

mathmathmath
Hi everyone, I am just learning the Taylor series at school, and I am slightly confused.

In my textbook, one of the exercises is to find the nth-degree Taylor polynomial of x^4 about a = -1; n is 4 in this case.

So this gives me a long polynomial. I understand that inputting any x value into this polynomial yields the same result as the x^4 function. However, I am slightly confused about why anyone would do this. My guess is that this question is just for the sake of practice, and ideally Taylor approximations would only be made for more complex functions. Is this true?

One more question: what does "approximation ABOUT a = x" really mean? I've learned linear approximation, so I assume it's somewhat similar, but I am still confused. Thank you very much for your help.
 
mathmathmath said:
In my textbook, one of the exercises is to find the nth-degree Taylor polynomial of x^4 about a = -1; n is 4 in this case.

So this gives me a long polynomial. I understand that inputting any x value into this polynomial yields the same result as the x^4 function. However, I am slightly confused about why anyone would do this. My guess is that this question is just for the sake of practice, and ideally Taylor approximations would only be made for more complex functions. Is this true?

Yes, partially. Observe that your function x^4 has degree 4 and its derivatives are 4x^3, 12x^2, 24x and 24. Every derivative of order 5 or higher is identically zero, so the corresponding terms in the Taylor expansion are zero as well. Your exercise is a special case: a function whose higher-order derivatives all vanish, which is why the Taylor polynomial reproduces the function exactly.
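If it helps to see the bookkeeping, here is a minimal sketch (my own, in Python with SymPy, not part of the thread) that builds the degree-4 Taylor polynomial of x^4 about a = -1 from the derivatives above and checks that it reproduces x^4 exactly:

Code (Python):
# Build sum_{k=0}^{4} f^(k)(a)/k! * (x - a)^k for f(x) = x^4, a = -1,
# then confirm it is the same polynomial as x^4.
import sympy as sp

x = sp.symbols('x')
f = x**4
a = -1

taylor = sum(sp.diff(f, x, k).subs(x, a) / sp.factorial(k) * (x - a)**k
             for k in range(5))

print(sp.expand(taylor))              # -> x**4
print(sp.simplify(taylor - f) == 0)   # -> True: they agree for every x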

In general, one uses Taylor's theorem to build n-th degree polynomial approximations to functions that are represented by power series. Imagine adding to an n-th degree polynomial a leading term of degree n+1, then another term of degree n+2, and so on. As n tends to infinity this is no longer a polynomial (except in special cases such as yours, where the higher-order coefficients are identically zero) but rather a series. In general the higher-order coefficients do not vanish, so the series never terminates. Write down the Taylor series expansions for trigonometric functions like sin(x) and cos(x) and you will see what I mean.
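For instance, here is a small numerical sketch (mine, not from the thread, using the standard Maclaurin series sin(x) = sum over k of (-1)^k x^(2k+1)/(2k+1)!): unlike your x^4 example, the coefficients never become zero, so any truncation is only an approximation that improves as you keep more terms.

Code (Python):
import math

def sin_partial(x, terms):
    """Partial sum of the sine series with the given number of nonzero terms."""
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

x = 1.5
for terms in (1, 2, 3, 5, 8):
    approx = sin_partial(x, terms)
    print(terms, approx, abs(approx - math.sin(x)))
# The error keeps shrinking, but no finite truncation is exact.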

one more question, what does "approximation ABOUT a=x" mean really? I've learned linear approximation so i assume its somewhat similar.. but I am still confused. thank you very much for your help

Part of this question has been answered in the last paragraph above.

Sometimes in applications one needs to judge the behavior of a function in the neighbourhood of a point, i.e. very close to that point. In such cases, the function can be closely approximated in that neighbourhood by a few terms of its Taylor expansion. A linear approximation is the first step; then there are second-order, third-order approximations, and so on. In general, a function with enough well-defined derivatives can be approximated by a Taylor polynomial. The accuracy of the approximation is measured by a remainder term which, for well-behaved functions, can be made as small as desired by increasing the number of terms in the approximation.
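As a concrete illustration (my own sketch, with f(x) = e^x expanded about a = 0 as an arbitrary choice): near the expansion point, adding terms drives the remainder down quickly, which is exactly what "approximating f in a neighbourhood of a" means.

Code (Python):
import math

def exp_taylor(x, a, order):
    """Taylor polynomial of e^x about a, truncated at the given order."""
    # Every derivative of e^x at a is e^a, so each coefficient is e^a / k!.
    return sum(math.exp(a) * (x - a)**k / math.factorial(k)
               for k in range(order + 1))

a, x = 0.0, 0.3          # x is close to the expansion point a
for order in range(1, 6):
    approx = exp_taylor(x, a, order)
    print(order, approx, abs(approx - math.exp(x)))   # remainder shrinks with order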

For a better explanation, you can look at this: http://en.wikipedia.org/wiki/Taylor_series
 
mathmathmath said:
Hi everyone, I am just learning the Taylor series at school, and I am slightly confused.

In my textbook, one of the exercises is to find the nth-degree Taylor polynomial of x^4 about a = -1; n is 4 in this case.

So this gives me a long polynomial. I understand that inputting any x value into this polynomial yields the same result as the x^4 function.
Actually, for this particular exercise the two do agree: x^4 is itself a degree-4 polynomial, so its degree-4 Taylor polynomial about a = -1 reproduces it exactly. For most functions, though, a Taylor polynomial only approximates the function, which is the situation you describe below.

However, I am slightly confused about why anyone would do this. My guess is that this question is just for the sake of practice, and ideally Taylor approximations would only be made for more complex functions. Is this true?
First, you should understand that not all functions have Taylor series (they have to be infinitely differentiable). Second, even if a function has a Taylor series, the series is not necessarily equal to the function. (For most "interesting" functions, the analytic functions, it is.)
There are many reasons why one might want a Taylor series for a function. For one thing, it is the Taylor series for e^x, sin(x), and cos(x) that allow us to prove that e^{ix} = cos(x) + i sin(x). One method for solving differential equations uses the information about the derivatives contained in the differential equation to construct the solution's Taylor series rather than a closed formula for the solution. Finally, Taylor polynomials give a good method of actually calculating values of functions such as e^x, the trig functions, and log(x).
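To make the first point concrete, here is a short sketch (mine, not from the thread) that checks e^{ix} = cos(x) + i sin(x) numerically using nothing but partial sums of the exponential series sum over k of (ix)^k / k!:

Code (Python):
import math

def exp_series(z, terms=40):
    """Partial sum of the exponential series at a complex argument z."""
    return sum(z**k / math.factorial(k) for k in range(terms))

x = 0.7
lhs = exp_series(1j * x)                       # series evaluated at ix
rhs = complex(math.cos(x), math.sin(x))        # cos(x) + i sin(x)
print(lhs, rhs, abs(lhs - rhs))                # difference is at rounding-error level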

One more question: what does "approximation ABOUT a = x" really mean? I've learned linear approximation, so I assume it's somewhat similar, but I am still confused. Thank you very much for your help.
Why would you assume linear? The first-order Taylor polynomial for a differentiable function, f(a) + f'(a)(x - a), is linear of course (it is the equation of the tangent line at a), but higher-order Taylor polynomials are not linear; they are polynomials! Notice, by the way, that the tangent line approximates the function pretty well for x close to a but, in general, gets worse as you move farther from a. If we wanted an approximation that is good close to some other value of x, say b, then we would calculate the Taylor series "about x = b" instead. In general, the error made by approximating f(x) by its n-th order Taylor polynomial is proportional to (x - a)^{n+1}. The farther we are from a, the worse the error, so it is important to find the Taylor series "about x = a" for an a that matters for the problem at hand.
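As a small illustration of that last point (my own sketch, again picking f(x) = e^x as an arbitrary example): the first-order (tangent-line) approximation about a is accurate near a and degrades roughly like (x - a)^2 as you move away, and re-centering the expansion moves the region where it is accurate.

Code (Python):
import math

def tangent_line(x, a):
    """First-order Taylor polynomial of e^x about a: f(a) + f'(a)(x - a)."""
    return math.exp(a) + math.exp(a) * (x - a)

a = 0.0
for x in (0.1, 0.5, 1.0, 2.0):
    err = abs(tangent_line(x, a) - math.exp(x))
    print(x, err)        # the error grows as x moves away from a = 0

# Re-centering at a = 2 makes the approximation good near x = 2 instead.
print(abs(tangent_line(2.0, 2.0) - math.exp(2.0)))   # exact at the expansion point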
 