How can I accurately interpolate a C1 function with infinite second derivative?

bruno67
I have to interpolate a function through a small number of points ##n## (say, 3-5), at which I know both the value of the function and its first derivative. Normally, this would be a good candidate for Hermite (osculating) polynomial interpolation.

The only problem is that at the first point the function is only once differentiable (its second derivative is infinite), while it is perfectly smooth in the rest of the interval. How can I get an estimate of the interpolation error? The usual estimate found in textbooks requires the function to have at least n+1 continuous derivatives in the closed interval in which one is interpolating.
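For context, the textbook estimate being alluded to (stated here for Hermite interpolation at ##n## nodes, matching values and first derivatives with a polynomial of degree ##2n-1##) is

$$f(x) - p(x) = \frac{f^{(2n)}(\xi)}{(2n)!}\prod_{i=1}^{n}(x-x_i)^2,$$

which requires ##f \in C^{2n}## on the interval. With the second derivative already unbounded at one node, this bound gives no information.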

Thanks.
 
If you know the second derivative is infinite at one end of the range, you should use that fact to choose the form of your interpolation function; otherwise you will introduce systematic errors and reduce the order of accuracy.

Presumably you know the general form of the function near the end point, so you should include it in your interpolation, for example

$$y = ax^{3/2} + b + cx + dx^2 + \cdots$$

or
$$y = \sqrt{x}\,(ax + bx^2 + cx^3 + \cdots)$$
or whatever.

The square roots are just an example of a function that gives an infinite second derivative. Use the function that fits the physics of your situation. That may be a "special function" like a Bessel function etc, not a polynomial.
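To make the idea concrete, here is a small sketch (my own example, not the asker's actual function): a Hermite-type fit whose polynomial basis is augmented with an ##x^{3/2}## term, assuming the singularity sits at ##x = 0## and behaves like ##x^{3/2}## (so the function is ##C^1## there but its second derivative blows up). With 3 nodes and values plus first derivatives, there are 6 conditions, so 6 basis functions:

```python
import numpy as np

def fit_singular_hermite(xs, fs, dfs):
    """Fit c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4 + c5*x^(3/2)
    to values fs and derivatives dfs at 3 nodes xs (6 conditions)."""
    rows, rhs = [], []
    for x, f, df in zip(xs, fs, dfs):
        rows.append([1.0, x, x**2, x**3, x**4, x**1.5])        # value condition
        rhs.append(f)
        rows.append([0.0, 1.0, 2*x, 3*x**2, 4*x**3, 1.5*x**0.5])  # derivative condition
        rhs.append(df)
    return np.linalg.solve(np.array(rows), np.array(rhs))

def evaluate(c, x):
    return c[0] + c[1]*x + c[2]*x**2 + c[3]*x**3 + c[4]*x**4 + c[5]*x**1.5

# Test function (an assumption for illustration): f(x) = x^(3/2) + sin(x),
# which is C^1 on [0, 1] with an unbounded second derivative at x = 0.
f  = lambda x: x**1.5 + np.sin(x)
df = lambda x: 1.5*np.sqrt(x) + np.cos(x)

xs = np.array([0.0, 0.5, 1.0])
c = fit_singular_hermite(xs, f(xs), df(xs))

xq = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(evaluate(c, xq) - f(xq)))
print(f"max error on [0,1]: {err:.2e}")
```

Because ##x^{3/2}## is in the basis, the singular part is captured exactly and only the smooth remainder has to be approximated by the polynomial terms, so the error stays small; a plain degree-5 polynomial Hermite fit to the same data would lose accuracy near the singular endpoint.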
 