# Understanding Lagrange Error Analysis

In summary, the Lagrange error term evaluates the $(n+1)$th derivative of the function at some point $c$ between the center of the expansion and the point being approximated. The existence of such a $c$ follows from the mean value theorem, which is what justifies the remainder term in a Taylor expansion.
jordanl122
In my BC calc class, we just finished working through most of series and sequences. As we were reviewing past free-response questions on the topic, we found that in 2004 they dropped in a Lagrange error analysis problem. I've been looking at different explanations, but I'm not getting the concept. It seems like the error term is just the next term of the series, so how can it account for all of the remaining terms of a polynomial approximation of a function like f(x) = sin(x)? Also, what is the deal with c being between x and a? If anyone could enlighten me as to the meaning of all this, I would be most appreciative. Maybe a sample problem...

I am assuming that by the Lagrange error you mean the expression
$$\frac{f^{(n+1)}(c)}{(n+1)!}(x-x_{0})^{n+1}$$
This is almost the same as the expression for the $(n+1)$th term, but (as you seem to have noticed) the derivative of $f$ is evaluated at some $c$ between $x$ and $x_0$.

Basically, the $c$ comes about as a result of using the mean value theorem to justify the remainder term. Actually finding $c$ may be tricky, but because you know it's between $x$ and $x_{0}$, you can find the maximum value of the error based on that. For example, if you have an expansion which gives an error term $R_{5}$ of
$$\frac{\cos c}{5!}x^{5}$$
you know that the maximum value of the error is
$$\frac{1}{5!}x^{5}$$
since $|\cos c|$ is at most equal to unity. Furthermore, you could consider an expansion where $x$ is only between 0 and 1. Then the biggest the error term could be is
$$\frac{1}{120}$$
Finding the maximum error over a certain range doesn't give you the true remainder for a particular value of x, but it gives a "worst case scenario".
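The worst-case bound above can be checked numerically. Below is a minimal sketch (the function names `taylor_sin` and `lagrange_bound` are my own, not from the thread) comparing the actual error of the degree-4 Maclaurin polynomial for sin against the Lagrange bound $|x|^{5}/5!$:

```python
import math

def taylor_sin(x, n=4):
    """Maclaurin polynomial of sin(x) truncated after the x^n term.
    For n = 4 this is x - x^3/6 (the x^4 coefficient is zero)."""
    total, k = 0.0, 0
    while 2 * k + 1 <= n:
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
        k += 1
    return total

def lagrange_bound(x, n=4):
    """Worst-case |R_n| for sin: |f^(n+1)(c)| <= 1 for every c,
    so the bound is |x|^(n+1) / (n+1)!."""
    return abs(x) ** (n + 1) / math.factorial(n + 1)

x = 0.9
actual_error = abs(math.sin(x) - taylor_sin(x))
bound = lagrange_bound(x)
assert actual_error <= bound  # the bound is a guaranteed worst case
```

Note that at $x = 1$ the bound is exactly $1/120$, matching the "worst case over $[0, 1]$" figure above, even though the true error at any particular $x$ is smaller.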

I hope this makes some sense.

I assume you're familiar with differential approximation and the mean value formula, right?

Differential approximation says:

$$f(y) \approx f(x) + f'(x)(y - x)$$

And the mean value theorem says (after a little rearrangement) that there exists a $c$ in $(x, y)$ such that:

$$f(y) = f(x) + f'(c)(y - x)$$

A Taylor series is just a vast generalization of differential approximation -- it includes higher order derivatives. The error term is simply the corresponding generalization of the mean value theorem.
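The mean value theorem statement above can be verified concretely. A minimal sketch, using $f(x) = e^x$ on $[0, 1]$ because there $c$ can be solved for in closed form:

```python
import math

# Mean value theorem for f(x) = e^x on [0, 1]:
# there exists c in (0, 1) with f'(c) = (f(1) - f(0)) / (1 - 0).
slope = (math.e - 1.0) / 1.0   # average rate of change over [0, 1]
c = math.log(slope)            # f'(c) = e^c, so c = ln(slope)

assert 0 < c < 1                         # c really lies inside the interval
assert math.isclose(math.exp(c), slope)  # f'(c) equals the average slope
```

For most functions $c$ has no closed form, which is exactly why the error analysis only bounds the remainder rather than computing it.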

Argh, I just went to a BC calculus session at my school. We did a Lagrange error problem, and the whole concept baffles me for some reason. I am sure many won't be on the AP test, because I managed a 5 on the practice one; however, I would love to learn to do it. Anyone want to do a sample problem?

## 1. What is Lagrange error analysis?

Lagrange error analysis is a method used to estimate the error in an approximation for a mathematical function. It is named after the mathematician Joseph-Louis Lagrange and is often used in calculus to determine the accuracy of a Taylor series approximation.

## 2. How is Lagrange error analysis calculated?

The Lagrange error bound is calculated by finding the maximum of the absolute value of the (n+1)th derivative of the function over the interval between the center of the expansion, a, and the point being approximated, x, and then multiplying that maximum by |x - a|^(n+1) / (n+1)!.
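As a sketch of that calculation (the helper name `lagrange_error_bound` and the grid-search approach are my own, for illustration), the unknown c is handled by taking the worst case of the derivative over the interval:

```python
import math

def lagrange_error_bound(deriv_np1, a, x, n, samples=1000):
    """Bound |R_n(x)| = |f^(n+1)(c) / (n+1)! * (x - a)^(n+1)|.
    Since c is unknown, maximize |f^(n+1)| on a grid over [a, x]
    and use that worst case."""
    lo, hi = min(a, x), max(a, x)
    m = max(abs(deriv_np1(lo + (hi - lo) * i / samples))
            for i in range(samples + 1))
    return m * abs(x - a) ** (n + 1) / math.factorial(n + 1)

# sin approximated near a = 0 by its degree-4 polynomial: f^(5) = cos,
# and |cos| attains its maximum of 1 at the endpoint 0.
bound = lagrange_error_bound(math.cos, 0.0, 1.0, 4)
```

In textbook problems the maximization step is usually done by inspection (e.g. |cos c| is at most 1); the grid search just makes the same step explicit.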

## 3. What is the purpose of Lagrange error analysis?

The purpose of Lagrange error analysis is to determine the accuracy of an approximation for a mathematical function. It allows us to understand how close our approximation is to the true value and to determine if further improvements are needed.

## 4. What are some practical applications of Lagrange error analysis?

Lagrange error analysis is commonly used in engineering and science fields, such as physics and chemistry, to determine the accuracy of mathematical models and predictions. It is also used in computer science for error analysis in numerical methods and algorithms.

## 5. What are the limitations of Lagrange error analysis?

Lagrange error analysis assumes that the function being approximated has n+1 derivatives that exist and are continuous on the interval in question, and it requires a bound on the (n+1)th derivative over that interval, which may be difficult to obtain. Additionally, it only provides an upper bound on the error and cannot give the exact error in the approximation.
