Calculate error using Lagrange formula

etha
hi everyone! I'm having difficulty figuring this problem out. so here goes:

f(x) = sin(x)

Use the Lagrange formula to find the smallest value of n so that the nth degree Taylor polynomial for f centered at x = 0 approximates f at x = 1 with an error of no more than 0.001.

whatever help anyone can provide would be great
 
Well, the first thing I would do is write out the "Lagrange" formula for the error! Then follow that formula. Knowing that sin(x) and cos(x) are never larger than 1 helps.
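Since every derivative of sin(x) is again ±sin or ±cos, its absolute value is at most 1, so the Lagrange error bound at x = 1 reduces to |R_n(1)| ≤ 1/(n+1)!. A short script (a sketch, not part of the original thread) can search for the smallest n where that bound drops to 0.001 or below:

```python
import math

def smallest_n(x=1.0, tol=1e-3):
    # Lagrange remainder bound for sin(x): |R_n(x)| <= |x|**(n+1) / (n+1)!
    # because every derivative of sin is bounded in absolute value by 1.
    n = 0
    while abs(x) ** (n + 1) / math.factorial(n + 1) > tol:
        n += 1
    return n

print(smallest_n())  # smallest degree whose error bound is <= 0.001
```

Checking by hand: 1/6! = 1/720 ≈ 0.00139 is still too big, while 1/7! = 1/5040 ≈ 0.000198 ≤ 0.001, so the loop stops at n = 6.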
 