Wasn't sure where to ask this, but here goes: suppose one needs to work out the area under an experimental peak using numerical integration, and every data point has an error in y. How do you go about putting a sensible error on the integrated area? My current thinking is that the error from the numerical integration itself is much smaller than the error on the data points, so the uncertainty could be estimated in one of two ways. First: add the errors to the data points, numerically integrate to get a maximum area, and take 2*(max_area - area) as the error; but since it is unlikely that all data points sit at their maximum error simultaneously, multiply this by roughly 0.68 (one standard deviation's coverage). Alternatively, can you just numerically integrate the error bars themselves? Does anyone know the proper way of doing this?
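To make the two proposed estimates concrete, here is a minimal sketch of both on synthetic data. The Gaussian peak, the x grid, and the constant per-point error `sigma_y` are all made-up placeholders, and the trapezoidal rule stands in for whatever numerical integration scheme is actually in use:

```python
import numpy as np


def trapezoid(y, x):
    """Composite trapezoidal rule: sum of (y[i] + y[i+1]) / 2 * dx."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))


# Hypothetical data: a Gaussian peak sampled on a grid, with an assumed
# constant error bar on every y value (purely for illustration).
x = np.linspace(-5.0, 5.0, 101)
y = np.exp(-x**2 / 2.0)
sigma_y = np.full_like(y, 0.02)

# Baseline area under the peak.
area = trapezoid(y, x)

# Option 1: shift every point up by its error, integrate the "maximum"
# area, then scale 2*(max_area - area) down toward one-sigma coverage.
max_area = trapezoid(y + sigma_y, x)
err_option1 = 2.0 * (max_area - area) * 0.68

# Option 2: numerically integrate the error bars themselves.
err_option2 = trapezoid(sigma_y, x)

print(area, err_option1, err_option2)
```

Note that with constant error bars the two options differ only by the factor 2 * 0.68, since shifting every point up by sigma changes the area by exactly the integral of sigma; with varying error bars they still track each other closely, which suggests neither accounts for how independent point errors partially cancel.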