# Proof involving Taylor Polynomials / Lagrange Error Bound

## Homework Statement

I'm given that the function $f(x)$ is $n$ times differentiable on an interval $I$, and that there exists a polynomial $Q(x)$ of degree less than or equal to $n$ such that
$$\left|f(x) - Q(x)\right| \leq K\left|x - a\right|^{n+1}$$
for some constant $K$ and some $a \in I$.

I am to show that $Q(x)$ is the Taylor polynomial for $f$ at $a$.

## Homework Equations

For $Q(x)$ to be the Taylor polynomial for $f$ at $a$:

$$Q(x) = \sum_{k = 0}^{n}\frac{f^{(k)}(a)}{k!} (x - a)^k$$

The $n^{th}$ remainder term of $Q(x)$ is:
$$R_{n,a}(x) = f(x) - Q(x)$$

The Lagrange error bound is:
$$\left|R_{n,a}(x)\right| = \left|f(x) - Q(x)\right| \leq \frac{M}{(n+1)!}|x - a|^{n+1}$$
where $M = \sup_{t \in I}\left|f^{(n+1)}(t)\right|$
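As a concrete illustration of the bound (my own sanity check, not part of the problem), take $f(x) = \sin x$, $a = 0$, $n = 3$, so $Q(x) = x - x^3/6$. Every derivative of $\sin$ is bounded by $1$, so $M = 1$ and the bound is $|x|^4/4!$:

```python
import math

# Check the Lagrange error bound for f(x) = sin(x), a = 0, n = 3.
# T_3(x) = x - x^3/6, and since |f^(4)(t)| = |sin(t)| <= 1, we have M = 1,
# so the bound is |x|^4 / 4!.
def taylor_sin_3(x):
    return x - x**3 / 6

for x in [0.1, 0.5, 1.0, 2.0]:
    error = abs(math.sin(x) - taylor_sin_3(x))
    bound = abs(x)**4 / math.factorial(4)
    assert error <= bound
    print(f"x = {x}: |error| = {error:.3e} <= bound = {bound:.3e}")
```

The assertion passes at every sample point, as the theorem predicts.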

## The Attempt at a Solution

I think I could easily go the other direction: knowing that $Q(x)$ is the Taylor polynomial of $f$ around $a$, I could prove that there exists a constant $K = \frac{M}{(n+1)!}$ such that $\left|f(x) - Q(x)\right| \leq K\left|x - a\right|^{n+1}$, since this is just the Lagrange error bound theorem and would involve some integration/induction. However, perhaps because I keep thinking about that converse, I'm completely stuck on the actual direction: going from knowing that such a $Q$ and $K$ exist to proving that $Q$ is the Taylor polynomial. I'm positive that I'm over-complicating the problem, but I just cannot figure out where to start. Does anyone have any suggestions or jumping-off points for this proof?

Even just a general suggestion on how you might start trying to deconstruct a proof like this would be greatly appreciated.

Hello, I think I may have found a way to prove it (I also have it for homework :D).
I'm new to this forum, so apologies if the formatting is a bit rough.

I think you solve it by contradiction. Assume $Q(x)$ has the desired property but is *not* the Taylor polynomial $P_n(x) = \sum_{k=0}^{n}\frac{f^{(k)}(a)}{k!}(x-a)^k$. Since $Q$ is a polynomial of degree at most $n$, you can write it as the sum of two polynomials: the Taylor polynomial $P_n$ and a "correction polynomial" $P_c(x) = Q(x) - P_n(x)$, which also has degree at most $n$ and, by hypothesis, has at least one non-zero coefficient.

By Taylor's theorem, $f(x) = P_n(x) + R_n(x)$, where $|R_n(x)| \leq \frac{M}{(n+1)!}|x-a|^{n+1}$ [here $M$ bounds the $(n+1)$-th derivative of $f$ on the interval]. Then
$$\left|f(x) - Q(x)\right| = \left|R_n(x) - P_c(x)\right|,$$
and by the triangle inequality,
$$\left|P_c(x)\right| \leq \left|f(x) - Q(x)\right| + \left|R_n(x)\right| \leq \left(K + \frac{M}{(n+1)!}\right)\left|x-a\right|^{n+1}.$$
Now write $P_c(x) = \sum_{k=m}^{n} c_k (x-a)^k$, where $c_m \neq 0$ is the lowest non-zero coefficient, and divide both sides by $|x-a|^m$. Make $x$ arbitrarily close to $a$: the left side tends to $|c_m| \neq 0$, while the right side tends to $0$, because the $|x-a|^{n+1}$ term shrinks much faster than the correction polynomial (which only has terms of degree at most $n$). That's the desired contradiction, so $Q(x)$ is indeed $P_n(x)$.
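You can see the contradiction numerically (a toy check, not part of the proof): take $f(x) = \sin x$, $a = 0$, $n = 3$, and compare the Taylor polynomial against a hypothetical $Q$ with one perturbed coefficient. The ratio $|f(x) - Q(x)|/|x|^{n+1}$ stays bounded only for the Taylor polynomial:

```python
import math

# For f(x) = sin(x), a = 0, n = 3: the ratio |f(x) - Q(x)| / |x|^4 stays
# bounded when Q is the Taylor polynomial T_3, but blows up as x -> 0 when
# one coefficient of degree <= n is perturbed, so no constant K can exist
# for the perturbed polynomial.
def T3(x):
    return x - x**3 / 6

def Q_perturbed(x):
    return T3(x) + 0.01 * x**2   # wrong quadratic coefficient

for x in [0.1, 0.01, 0.001]:
    r_taylor = abs(math.sin(x) - T3(x)) / x**4
    r_bad = abs(math.sin(x) - Q_perturbed(x)) / x**4
    print(f"x = {x}: Taylor ratio = {r_taylor:.3e}, perturbed ratio = {r_bad:.3e}")
```

As $x \to 0$ the perturbed ratio grows like $0.01/x^2$, exactly the blow-up the contradiction argument exploits.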

At least I think that's a valid proof. I should mention I'm only an undergrad student and that I feel I have a mind much better suited for physics than analysis. Hope that helps, or at the very least spurs you towards finding a better (and possibly more correct) proof.

Hmm, I had tried a proof by contradiction, but I hadn't gotten far. Your proof seems to make perfect sense, though. I'll have to chew it over and try to justify the logic of each step to myself before my assignment is due. Thank goodness I have the next few days off!

Thank you so much for your help!