# Error estimate for integral of interpolated function

1. May 2, 2015

### maka89

Hi,
I have a project at uni where I am making an approximate analytical expression for a parameter and comparing the result with more rigorous simulations. The thing is: the parameter I am comparing is the 2d-integral of the results from the simulations over some area, and the simulation results comes out on a regular grid.

I have made a script in python and my procedure is:

1. Acquire a function by interpolating the output of the simulation (which is on a 101x101 grid).
2. Integrate this function over the integration area using an adaptive integration routine.
3. Compare result with the analytical expression.
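The three steps above can be sketched roughly like this. This is only an illustration with a made-up smooth test function in place of the real simulation output; it uses `scipy.interpolate.RectBivariateSpline` (a gridded spline interpolant) and `scipy.integrate.dblquad` (an adaptive 2d routine), which may differ from the exact functions used in the actual script:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline
from scipy.integrate import dblquad

# Hypothetical stand-in for the simulation output: a smooth test
# function sampled on a regular 101x101 grid.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.exp(-(X**2 + Y**2))

# Step 1: build an interpolant from the gridded data (bicubic spline).
spline = RectBivariateSpline(x, y, Z)

# Step 2: integrate the interpolant over the area with an adaptive
# routine. dblquad expects func(y, x), hence the argument order.
result, abserr = dblquad(lambda yy, xx: spline(xx, yy)[0, 0],
                         0.0, 1.0,
                         lambda xx: 0.0, lambda xx: 1.0)

# Step 3: compare against a reference value. Here the spline's own
# exact integral serves as a cross-check on the adaptive routine.
exact_spline = spline.integral(0.0, 1.0, 0.0, 1.0)
print(result, exact_spline)
```

For this separable test function the true integral is $\left(\int_0^1 e^{-t^2}\,dt\right)^2 \approx 0.55775$, so the spline-based result can also be checked against an analytical value, mirroring step 3.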

Now, I would like to find out how much error there is in the integral due to the interpolation. Is there any way to do this?

Thanks =)

2. May 2, 2015

### Staff: Mentor

Do you have different interpolation algorithms available?
Can you run the interpolation on a subset (like 51x51, taking every second entry in each dimension)?
Is it possible to run the simulation with more data points (at least once or twice)?

As an upper limit on the error, a simple "add all 101*101 values" sum should give an approximation that is worse than any of the interpolations but not completely wrong.
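As a sketch of that crude upper-bound idea (again with a hypothetical test function standing in for the simulation data, and `RectBivariateSpline` as an assumed interpolant for comparison): summing all grid values times the cell area is a plain Riemann sum, and its distance from the interpolated integral gives a pessimistic error scale.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical test data on the same kind of regular 101x101 grid.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.exp(-(X**2 + Y**2))

# Crude "add all values" estimate: a Riemann sum, one cell per sample.
dx = x[1] - x[0]
dy = y[1] - y[0]
riemann = Z.sum() * dx * dy

# Spline-based integral of the same data for comparison.
spline = RectBivariateSpline(x, y, Z)
spline_integral = spline.integral(x[0], x[-1], y[0], y[-1])

# The gap between the two gives a rough, pessimistic error scale.
print(abs(riemann - spline_integral))
```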

3. May 2, 2015

### maka89

I use this python function at the moment (it uses spline interpolation):
http://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.interpolate.interp2d.html

Yes, I can run the interpolation on a subset, but can't get new simulations with more points atm.

So basically compute a simpler estimate (like a midpoint/Riemann sum) and use the difference as an error estimate? I guess that should give some estimate of the error. That was indeed a simple, yet good idea that I hadn't considered. Thanks =)
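The other suggestion from above, interpolating a 51x51 subset (every second entry in each dimension) and comparing integrals, can be sketched like this. Again the grid data is a hypothetical test function and `RectBivariateSpline` is an assumed stand-in for the interpolation actually used:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Hypothetical test data standing in for the 101x101 simulation grid.
x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing="ij")
Z = np.exp(-(X**2 + Y**2))

def grid_integral(xv, yv, zv):
    """Fit a spline to the given grid and integrate over its extent."""
    s = RectBivariateSpline(xv, yv, zv)
    return s.integral(xv[0], xv[-1], yv[0], yv[-1])

full = grid_integral(x, y, Z)
# Every second sample in each dimension gives a 51x51 subset.
coarse = grid_integral(x[::2], y[::2], Z[::2, ::2])

# The difference between fine- and coarse-grid results is a rough
# estimate of the interpolation error on the full grid.
print(abs(full - coarse))
```

If the difference shrinks rapidly as the grid is refined, that is a good sign the interpolation error on the full 101x101 grid is small compared to the quantity being compared.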

Last edited: May 2, 2015