# How to calculate error propagation for several measurements?

jinawee
I'm having trouble with error-propagation analysis. When you make a single measurement of several variables, say (x,y,z), and you calculate a function f(x,y,z), you just apply the usual error-propagation formula:

$$\sigma_f(x,y,z)=\sqrt{\left( \frac{\partial f}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \sigma_y^2 + \left( \frac{\partial f}{\partial z} \right)^2 \sigma_z^2}$$

But I don't know what to do when you have many values for each (x,y,z).

Suppose you measure m and x for a spring to calculate the elasticity constant via k(x,m)=mg/x. The variables m and x are measured with the intrinsic error of the apparatus. You repeat the measurement several times leaving the variables fixed. The errors could be combined using the quadratic sum (adding in quadrature).

Now, you change m, so x varies too, and you repeat the same process as above. This introduces a dispersion in k.

What is the error of k in this case? It seems that possible solutions could be Deming regression, total least squares, Monte Carlo methods... but I'm quite lost; a simple example would be helpful.

To give some numbers (totally invented):

Measurement #1: m=10.0,10.1 and x=5.3,5.0.

Measurement #2: m=21.0,20.2 and x=10.4,10.2.
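To make the Monte Carlo option concrete, here is a minimal sketch (Python/NumPy) that propagates apparatus uncertainties through k = mg/x by sampling. The uncertainties σ_m = σ_x = 0.1 are invented for illustration, and each (m, x) pair below is one of the four readings:

```python
import numpy as np

rng = np.random.default_rng(0)
g = 9.81

# Assumed apparatus uncertainties (invented, for illustration only)
sigma_m, sigma_x = 0.1, 0.1

def k_monte_carlo(m, x, n=100_000):
    """Propagate uncertainties in m and x through k = m*g/x by sampling."""
    m_samples = rng.normal(m, sigma_m, n)
    x_samples = rng.normal(x, sigma_x, n)
    k = m_samples * g / x_samples
    return k.mean(), k.std()

# The invented readings from above, paired in order
for m, x in [(10.0, 5.3), (10.1, 5.0), (21.0, 10.4), (20.2, 10.2)]:
    mean_k, std_k = k_monte_carlo(m, x)
    print(f"m={m}, x={x}: k = {mean_k:.2f} +/- {std_k:.2f}")
```

The spread of the sampled k values plays the role of σ_k; for small relative uncertainties it agrees with the analytic propagation formula.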

Gold Member
OK, first: the formula you posted applies only when there is some kind of statistical error. You need some level of confidence that the variation in your results is statistical, at least in some sense. That is, you need to know that the sigmas really are statistical rather than systematic.

If you think about the uncertainty due to measurement, this may or may not be suitable. For example, consider a ruler whose smallest division is 1 mm. You might quote +/- 0.5 mm as the error. That might be suitable to use as a sigma in the formula you used.

Now, you did four measurements altogether. That is not enough to reasonably estimate a sigma, so you may want to substitute the measurement error for sigma. That may be all you have.

So take K = mg/x as your function f: K(x, m) plays the role of f(x,y,z), and (x, m) play the role of (x, y, z). Compute the partial derivatives of K with respect to x and m, and use the measurement uncertainties for sigma x and sigma m.

But you have four independent measurements, and there is no reason to think that the uncertainty with a mass of 10 (or so) is the same as with a mass of 20 (or so). So you have to apply the formula four times, which gives you four different estimates of sigma K. And that's all you can do, since there isn't any reason to expect them to be the same. Hint: what is the partial of m/x with respect to x?

You should find that the two in Measurement #1 are similar to each other, the two in #2 similar to each other, but the two groups are different.
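A minimal sketch (Python) of those four separate calculations, using the invented numbers from the question and an assumed apparatus uncertainty of 0.1 on both m and x:

```python
import math

g = 9.81
# Assumed apparatus uncertainties (not given in the thread):
sigma_m, sigma_x = 0.1, 0.1

def sigma_k(m, x):
    """Propagated uncertainty of k = m*g/x from the standard formula."""
    dk_dm = g / x            # partial of k with respect to m
    dk_dx = -m * g / x**2    # partial of k with respect to x
    return math.sqrt((dk_dm * sigma_m)**2 + (dk_dx * sigma_x)**2)

# One sigma_k estimate per reading pair
for m, x in [(10.0, 5.3), (10.1, 5.0), (21.0, 10.4), (20.2, 10.2)]:
    print(f"m={m}, x={x}: k = {m*g/x:.2f} +/- {sigma_k(m, x):.2f}")
```

Running this shows the pattern described above: the two estimates within each measurement set are close to each other, while the two sets differ.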

Mentor
With two measurements per set we certainly need some external source to estimate the uncertainties on the mass and distance measurements. Usually they can be split into two components, a statistical one (different in each measurement, like reading values off a scale) and a systematic one (the same for all - calibration errors and similar things).
With the statistical uncertainty and your measurements you can compute an average (of just two values in this case), calculate k in both cases, and then average those two values of k.

The systematic uncertainties are a bit trickier (if they are significant). A miscalibrated scale will usually be wrong in a similar way at both 10 kg and 20 kg, and effects like not placing the ruler at the correct zero position might be the same for both sets of measurements as well. For a lab course, it is probably sufficient to assume systematic uncertainties apply to both sets of measurements in the same way, so you can add them afterwards (e.g. a 1% mass-scale uncertainty leads to a 1% relative uncertainty on the final result).
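A minimal sketch of that recipe for set #1, with a crude spread-based statistical uncertainty and an assumed 1% systematic from the mass scale (all numbers illustrative):

```python
import math

g = 9.81
# Set #1 from the question; the pairing of readings is assumed.
m_vals, x_vals = [10.0, 10.1], [5.3, 5.0]

# k from each reading pair, then averaged
k_vals = [m * g / x for m, x in zip(m_vals, x_vals)]
k_bar = sum(k_vals) / len(k_vals)

# Statistical uncertainty: half the spread of the two k values (crude with n=2)
sigma_stat = abs(k_vals[0] - k_vals[1]) / 2

# Assumed 1% systematic from the mass scale, applied to the final result
sigma_syst = 0.01 * k_bar

# Combined in quadrature
sigma_total = math.sqrt(sigma_stat**2 + sigma_syst**2)
print(f"k = {k_bar:.2f} +/- {sigma_stat:.2f} (stat) +/- {sigma_syst:.2f} (syst)")
```

Quoting the statistical and systematic parts separately, as in the print statement, is common practice, since readers may want to combine them differently.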

jinawee
Thanks for the answers. But if one should leave both values of k separate, wouldn't it become very messy if there are many measurements, say 100? What is done in serious experiments?

Btw, for simplicity, I was assuming no correlations between variables and no systematic error.

Mentor
In serious experiments, each k would certainly be evaluated separately, as the spring might show deviations from Hooke's law. If it seems constant enough to make an average (within a certain range), then correlations between the individual values are taken into account.

Computer programs do the individual calculations, of course. Whether you have 2 or 100 sets of measurements does not matter much.

> But if one should leave both values of k separate, wouldn't it become very messy if there are many measurements, say 100?

What objection would you have to estimating the standard deviation of k from the 100 calculated values of mg/x?
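For illustration, a sketch of that approach on simulated data (the true values and spreads are invented; with real data you would just plug in the 100 measured pairs):

```python
import numpy as np

rng = np.random.default_rng(1)
g = 9.81

# Simulated example: 100 repeated readings at one fixed load (assumed values)
true_m, true_x = 10.0, 5.3
m = rng.normal(true_m, 0.1, 100)
x = rng.normal(true_x, 0.1, 100)

# One k per reading pair; the sample spread estimates the uncertainty
k = m * g / x
k_mean = k.mean()
k_err = k.std(ddof=1) / np.sqrt(len(k))   # standard error of the mean

print(f"k = {k_mean:.2f} +/- {k_err:.2f}")
```

Note this treats all 100 values as repeated measurements of the same quantity, which is exactly the assumption questioned above (a nonlinear spring or correlated errors would violate it).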

> What is done in serious experiments?

Applying statistics to real life problems is subjective. Serious experimenters in different fields may use different methods. You should look at published reports or ask experimenters in your field of interest what they do. It can be more a cultural matter than a mathematical problem.

For example, as DEvans points out, the formula:

$$\sigma_f(x,y,z)=\sqrt{\left( \frac{\partial f}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \sigma_y^2 + \left( \frac{\partial f}{\partial z} \right)^2 \sigma_z^2}$$

is an approximation, and one that holds only under several assumptions. You say:

> When you make a single measurement of several variables, say (x,y,z), and you calculate a function f(x,y,z), you just apply the usual error-propagation formula:

That suggests the formula is traditional in your field of work, perhaps traditional in most physical-science experiments. But someone working in the social sciences or testing pharmaceuticals might follow a different tradition.

Mentor
> What objection would you have to estimating the standard deviation of k from the 100 calculated values of mg/x?
Nonlinearity of the spring and correlated uncertainties.
If you measure the same point 100 times, you get a good estimate of your statistical uncertainty, of course (neglecting issues like temperature drift over time, which could also have an influence - and tons of other things we haven't thought of so far).
> That suggests the formula is traditional in your field of work, perhaps traditional in most physical-science experiments. But someone working in the social sciences or testing pharmaceuticals might follow a different tradition.

*must resist commenting on the quality of statistics in those fields*