Determination of error in interpolating polynomial

Summary:
The discussion centers on proving a theorem related to the error in interpolating polynomials in two variables, which was presented in a lecture without proof. The theorem states that for a continuous function f, the error between f and its interpolating polynomial p can be expressed using derivatives of f and products of differences from interpolation points. Participants suggest exploring the multivariate Taylor expansion as a potential approach to derive the proof. There is confusion regarding the notation used in the theorem, particularly concerning the interpretation of derivatives and products. A recommendation is made to consult resources on multidimensional Taylor series for further clarification and guidance on the proof steps.
MAXIM LI
TL;DR: Help needed
My professor showed this result in the lecture without giving any proof (after proving the existence of the interpolating polynomial in two variables). I've been trying to prove it myself, or to find a book where it is proved, but I have failed. This is the theorem:

Let
$$ x_0 < x_1 < \cdots < x_n \in [a, b], \quad y_0 < y_1 < \cdots < y_m \in [c, d],$$

$$ M = \{ (x_i, y_j) : 0 \leq i \leq n, 0 \leq j \leq m \}, \quad f \in \mathcal{C}^{m + n + 2}([a,b] \times [c,d]), $$

$$ p \in \Pi_{n, m} : p(x_i, y_j) = f(x_i, y_j) \quad \forall 0 \leq i \leq n, 0 \leq j \leq m. $$

Then, for all ##(x, y) \in (x_0, x_n) \times (y_0, y_m)## there exist ##\xi, \xi' \in (x_0, x_n), \eta, \eta' \in (y_0, y_m)## such that
$$ f(x, y) - p(x, y) = \frac{1}{(n + 1)!} \frac{\partial^{n + 1} f(\xi, y)}{\partial x^{n + 1}} \prod_{i = 0}^n (x - x_i) $$
$$ + \frac{1}{(m + 1)!} \frac{\partial^{m + 1} f(x, \eta)}{\partial y^{m + 1}} \prod_{j = 0}^m (y - y_j) $$
$$ - \frac{1}{(n + 1)! (m + 1)!} \frac{\partial^{n + m + 2} f(\xi', \eta')}{\partial x^{n + 1} \partial y^{m + 1}} \prod_{i = 0}^n (x - x_i) \prod_{j = 0}^m (y - y_j) $$

I would appreciate any kind of help, even if it is only a hint about where I could start the proof.

Edit 1 (clarification):

$$ \Pi_{n, m} = \{ p(x, y) = \sum_{i = 0}^n \sum_{j = 0}^m a_{i,j} x^i y^j : a_{i, j} \in \mathbb{R} \quad \forall 0 \leq i \leq n, 0 \leq j \leq m \} $$
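
To make the setup concrete, here is a small numerical sketch (the function ##f## and the nodes below are arbitrary choices, just for illustration): the tensor-product interpolant ##p \in \Pi_{n, m}## is built from one-dimensional Lagrange bases, matches ##f## at every node, and differs from ##f## away from the nodes, which is exactly the error the theorem describes.

```python
import numpy as np

# Hypothetical test function and grid (arbitrary choices, only to illustrate the setup).
f = lambda x, y: np.sin(x) * np.exp(y)

xs = np.linspace(0.0, 1.0, 4)   # x_0 < ... < x_n  (n = 3)
ys = np.linspace(0.0, 1.0, 3)   # y_0 < ... < y_m  (m = 2)

def lagrange_basis(nodes, k, t):
    """k-th Lagrange basis polynomial over `nodes`, evaluated at t."""
    terms = [(t - nodes[i]) / (nodes[k] - nodes[i])
             for i in range(len(nodes)) if i != k]
    return np.prod(terms, axis=0)

def p(x, y):
    """Tensor-product interpolant in Pi_{n,m}: sum_{i,j} f(x_i, y_j) L_i(x) M_j(y)."""
    return sum(f(xs[i], ys[j]) * lagrange_basis(xs, i, x) * lagrange_basis(ys, j, y)
               for i in range(len(xs)) for j in range(len(ys)))

# p reproduces f at every grid node (up to rounding).
for xi in xs:
    for yj in ys:
        assert abs(p(xi, yj) - f(xi, yj)) < 1e-12

# Away from the nodes there is an error, which the theorem expresses through
# the (n+1)-st and (m+1)-st partial derivatives of f.
print(f(0.37, 0.58) - p(0.37, 0.58))
```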
 
On a quick view, I would look at the multivariate Taylor expansion and see how far I get, or at a multidimensional version of the intermediate value theorem, possibly applied with a double induction along ##n## and ##m.##

However, I haven't done any of it; these are just ideas.
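
For reference, the standard one-variable error theorem, which your bivariate formula mirrors in each variable, says: if ##g\in\mathcal{C}^{n+1}([a,b])## and ##q\in\Pi_n## interpolates ##g## at ##x_0<\cdots<x_n##, then for every ##x\in(x_0,x_n)## there is a ##\xi\in(x_0,x_n)## with
$$
g(x)-q(x)=\dfrac{g^{(n+1)}(\xi)}{(n+1)!}\prod_{i=0}^{n}(x-x_i).
$$
Applying this in ##x## for fixed ##y##, then in ##y##, and keeping track of the resulting cross term could be another route to the three terms in your formula.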
 
Help?
 
MAXIM LI said:
Help?
I had hoped that you would have started the calculations, especially as ##f(\xi,y)## sits to the right of the differential operator in ##\dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}##, which makes the term
$$
\dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}\prod_{i=0}^n(x-x_i)
$$
confusing and I have to guess whether this means
$$
f(\xi,y)\cdot\dfrac{\partial^{n+1} }{\partial^{n+1} x}\prod_{i=0}^n(x-x_i)
$$

Anyway, your formula is a direct consequence of the multivariate Taylor expansion, here in its bivariate version. Of course, I could type out what my textbook says, but it very likely uses a different notation than yours, so let me instead ask you directly: what do you know about Taylor's theorem for functions ##f\, : \,\mathbb{R}^2\longrightarrow \mathbb{R}##?
 
fresh_42 said:
I had hoped that you would have started the calculations, especially as ##f(\xi,y)## sits to the right of the differential operator in ##\dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}##, which makes the term
$$
\dfrac{\partial^{n+1} f(\xi,y)}{\partial^{n+1} x}\prod_{i=0}^n(x-x_i)
$$
confusing and I have to guess whether this means
$$
f(\xi,y)\cdot\dfrac{\partial^{n+1} }{\partial^{n+1} x}\prod_{i=0}^n(x-x_i)
$$

Anyway, your formula is a direct consequence of the multivariate Taylor expansion, here in its bivariate version. Of course, I could type out what my textbook says, but it very likely uses a different notation than yours, so let me instead ask you directly: what do you know about Taylor's theorem for functions ##f\, : \,\mathbb{R}^2\longrightarrow \mathbb{R}##?

I've found an explanation in this book, in chapter 6 section 6, but it doesn't explain the concrete steps, and I'm not managing to figure them out.
 
I do not have this book, and it takes a lot of notation to write down what I have. It is all a rephrasing of the Taylor expansion in two variables
$$
f(x+\xi,y+\eta)=\sum_{|\alpha|\leq k} \dfrac{D^\alpha f(x,y)}{\alpha!} (\xi,\eta)^\alpha +\sum_{|\alpha|= k+1} \dfrac{D^\alpha f\bigl((x,y)+\vartheta (\xi,\eta)\bigr)}{\alpha!} (\xi,\eta)^\alpha
$$
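Here ##\alpha=(\alpha_1,\alpha_2)## is a multi-index with the usual conventions
$$
|\alpha|=\alpha_1+\alpha_2,\qquad \alpha!=\alpha_1!\,\alpha_2!,\qquad D^\alpha f=\dfrac{\partial^{|\alpha|} f}{\partial x^{\alpha_1}\partial y^{\alpha_2}},\qquad (\xi,\eta)^\alpha=\xi^{\alpha_1}\eta^{\alpha_2},
$$
and ##\vartheta\in(0,1)##.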
From there it is a few steps to
$$
f(x+\xi,y+\eta)=f(x,y)+\bigl\langle \nabla f(x,y)\, , \,(\xi,\eta) \bigr\rangle +\dfrac{1}{2}\sum_{i,j=1}^{2} \dfrac{\partial^{2} f(x,y)}{\partial z_i\,\partial z_j}\,\zeta_i\zeta_j+o\left(\|(\xi,\eta)\|^{2}\right)
$$
where ##(z_1,z_2)=(x,y)## and ##(\zeta_1,\zeta_2)=(\xi,\eta)##. Now set ##\xi=x_i-x## and ##\eta=y_j-y##, and the derivatives become the coefficients ##a_{i,j}## of the three homogeneous polynomials of degree ##0,1,## and ##2.##
The actual formulas and the steps between them are in my book, which is unfortunately not in English. Hence my recommendation is to search for the multi-dimensional (or at least two-dimensional) version of Taylor's theorem and to see what happens if you only consider the first three terms, up to degree two.

https://teaching.smp.uq.edu.au/scims/Num_analysis/Taylor.html
is an example I found when searching for "multidimensional Taylor series".
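
As a quick sanity check on the expansion above, here is a small numerical sketch (the test function and expansion point are arbitrary choices, not from the discussion): the remainder after the degree-2 terms shrinks roughly like the cube of the step, consistent with the ##o\left(\|(\xi,\eta)\|^2\right)## term.

```python
import numpy as np

# Hypothetical example function; its partial derivatives are known in closed
# form, so the two-variable Taylor expansion can be checked directly.
f   = lambda x, y: np.sin(x) * np.exp(y)
fx  = lambda x, y: np.cos(x) * np.exp(y)
fy  = lambda x, y: np.sin(x) * np.exp(y)
fxx = lambda x, y: -np.sin(x) * np.exp(y)
fxy = lambda x, y: np.cos(x) * np.exp(y)
fyy = lambda x, y: np.sin(x) * np.exp(y)

x0, y0 = 0.5, 0.2   # expansion point (arbitrary)

def taylor2(xi, eta):
    """Degree-2 Taylor polynomial of f about (x0, y0), evaluated at the offset (xi, eta)."""
    return (f(x0, y0)
            + fx(x0, y0) * xi + fy(x0, y0) * eta
            + 0.5 * (fxx(x0, y0) * xi**2
                     + 2 * fxy(x0, y0) * xi * eta
                     + fyy(x0, y0) * eta**2))

# The remainder should shrink faster than ||(xi, eta)||^2, i.e. roughly like h^3 here.
for h in (1e-1, 1e-2, 1e-3):
    err = abs(f(x0 + h, y0 + h) - taylor2(h, h))
    print(f"h = {h:.0e}   remainder = {err:.3e}   remainder / h^3 = {err / h**3:.3f}")
```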
 