# Newton Divided Difference Interpolation Polynomial

##f(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1)##

I am having a hard time understanding the intuition of ##(x-x_0)(x-x_1)## being multiplied by the coefficient ##a_2##. For example, if I added ##a_3## to the equation, I would have to multiply ##a_3## by ##(x-x_0)(x-x_1)(x-x_2)##. I've researched the Mean Value Theorem and can't make the connection. I don't understand how the products keep growing. I understand that it needs 4 points to compute the 3rd divided difference, but I can't seem to grasp how it works. I am a visual learner, but can slowly reason through arithmetic. Thanks for your help!

Stephen Tashi
The best explanation I've found on the web is https://mat.iitm.ac.in/home/sryedida/public_html/caimna/interpolation/nddf.html . However, the typography on that page is horribly mangled when I view it.

There are two concepts involved in this topic. The first concept is the "divided difference". The second concept is how Newton's polynomial formula is a consequence of the properties of divided differences. (The formula for Newton's polynomial is not the formula that defines a "divided difference".)

Let's devote this post just to the topic of "divided differences".

You can think of "divided differences" as finite approximations to derivatives.

The "first divided difference" of ##f## evaluated at ##x_0, x_1## is denoted by ##f[x_0,x_1]##. It is defined as ##f[x_0,x_1] = \frac{ f(x_1) - f(x_0)}{x_1 - x_0}##.

To make the analogy with a derivative clear, think of "##x_1##" as "##x_0 + h##" and "##x_1 - x_0##" as "##h##". Then ##f[x_0,x_1] = \frac{ f(x_0+h) - f(x_0)}{h}##. If we took ##\lim_{h\rightarrow 0}## of that algebraic expression, we would have ##f'(x_0)##.
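The analogy is easy to check numerically. Here is a small Python sketch (the helper name `divided_diff1` and the test function ##x^3## are just my choices for illustration):

```python
# First divided difference: f[x0, x1] = (f(x1) - f(x0)) / (x1 - x0).
# As x1 approaches x0, this approaches the derivative f'(x0).
def divided_diff1(f, x0, x1):
    return (f(x1) - f(x0)) / (x1 - x0)

f = lambda x: x**3   # example function; exact derivative is f'(x) = 3x^2
x0 = 2.0             # so f'(2) = 12
for h in (1.0, 0.1, 0.01):
    # the divided difference with x1 = x0 + h approaches 12 as h shrinks
    print(h, divided_diff1(f, x0, x0 + h))
```

Running this, each successive value of `divided_diff1` lands closer to the true derivative 12.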

The second divided difference ##f[x_0, x_1, x_2]## can be regarded as an approximation related to the second derivative ##f''(x_0)## (strictly, it approximates ##f''(x_0)/2!##), computed as the approximate rate of change between two approximations for the first derivative.

##f[x_0, x_1, x_2] = \left( \frac{ f(x_2) - f(x_1) }{x_2 - x_1} - \frac{f(x_1) - f(x_0)}{x_1 - x_0} \right) / (x_2 - x_0) ##

It's more concise to denote that expression as ##f[x_0, x_1, x_2] = \frac{ f[x_1,x_2] - f[x_0,x_1] }{x_2 - x_0}##. (Divided differences are symmetric in their arguments, so ##f[x_2,x_1] = f[x_1,x_2]##.)
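The "difference of two slopes, divided by the spread" structure translates directly into code. A sketch (the helper names `dd1`/`dd2` and the quadratic test function are my own illustration):

```python
def dd1(f, a, b):
    # first divided difference f[a, b]
    return (f(b) - f(a)) / (b - a)

def dd2(f, x0, x1, x2):
    # second divided difference: f[x0, x1, x2] = (f[x1, x2] - f[x0, x1]) / (x2 - x0)
    return (dd1(f, x1, x2) - dd1(f, x0, x1)) / (x2 - x0)

g = lambda x: x**2   # quadratic; g''(x) = 2, and g''/2! = 1
print(dd2(g, 0.0, 1.0, 3.0))    # 1.0
print(dd2(g, -5.0, 2.0, 10.0))  # 1.0 again, even with unevenly spaced points
```

Note that for the quadratic the answer is the same no matter which three points you pick, which foreshadows the "constant divided difference" fact below.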

The third divided difference can be regarded as an approximation related to a third derivative (it approximates ##f'''(x_0)/3!##). If we wrote out the algebra, it would be the rate of change between two approximations for second derivatives, and the approximations for each of the second derivatives would involve two approximations for first derivatives. So it's more humane to define the third divided difference using the notation ##f[x_0,x_1,x_2,x_3] = \frac{ f[x_1,x_2,x_3] - f[x_0,x_1,x_2]}{x_3 - x_0}##.
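The recursive definition works for any order, so rather than writing the algebra out, one short recursive function covers all of the cases above. A sketch (the function name and the cubic ##x^3## are my choices, not from the thread):

```python
def divided_difference(f, xs):
    # f[x_i] = f(x_i)
    # f[x_0, ..., x_k] = (f[x_1, ..., x_k] - f[x_0, ..., x_{k-1}]) / (x_k - x_0)
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

h = lambda x: x**3
# third divided difference of a cubic, using 4 points
print(divided_difference(h, [0.0, 1.0, 2.0, 4.0]))   # 1.0, the leading coefficient
```

This also shows concretely why 4 points are needed for the 3rd divided difference: each recursion level drops one point, and the bottom level needs at least one point left to evaluate ##f##.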

As you recall, the first derivative of a first-degree polynomial is a constant, the second derivative of a second-degree polynomial is a constant, and, in general, the n-th derivative of an n-th degree polynomial is a constant.

It's an amazing fact that an analogous statement holds true for divided differences of polynomial functions, even though the divided differences do not involve taking any limits. The n-th divided difference of an n-th degree polynomial ##f(x)## is a constant (namely, the leading coefficient ##a_n##). This is true even if we choose the points ##x_0, x_1, x_2, \dots, x_n## to be widely separated.

Another way of asserting the same claim is to say that the n-th divided difference of an n-th degree polynomial is an "invariant" of the polynomial. If we have two sets of points ##\{a_0, a_1, \dots, a_n\}## and ##\{b_0, b_1, \dots, b_n\}##, then ##f[a_0,a_1,a_2,\dots,a_n] = f[b_0,b_1,b_2,\dots,b_n]## for an n-th degree polynomial ##f(x)##, because both sides of the equation are equal to the same constant.
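You can watch this invariance happen numerically. A sketch, reusing a generic recursive divided-difference helper (the cubic ##2x^3 - x + 5## and both point sets are arbitrary choices of mine):

```python
def divided_difference(f, xs):
    # recursive definition: f[x_0..x_k] = (f[x_1..x_k] - f[x_0..x_{k-1}]) / (x_k - x_0)
    if len(xs) == 1:
        return f(xs[0])
    return (divided_difference(f, xs[1:]) - divided_difference(f, xs[:-1])) / (xs[-1] - xs[0])

p = lambda x: 2*x**3 - x + 5   # cubic with leading coefficient 2

# two completely different sets of 4 points give the same 3rd divided difference
print(divided_difference(p, [0.0, 1.0, 2.0, 3.0]))
print(divided_difference(p, [-10.0, 0.5, 7.0, 100.0]))   # widely separated points
```

Both prints give 2, the leading coefficient, despite the wildly different spacings and with no limits taken anywhere.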