How to calculate error propagation for several measurements?

In summary, the conversation discusses error propagation analysis: how to calculate the uncertainty of a function that depends on multiple variables. The standard propagation formula is given, but the original poster is unsure how to apply it when there are many values for each variable. They give the example of calculating the elasticity constant of a spring and ask for a simple worked example. The conversation also touches on the importance of distinguishing statistical from systematic uncertainties in experiments and how this is handled in different fields.
  • #1
jinawee
I'm having trouble with error propagation analysis. When you make a single measurement of several variables, say (x,y,z) and you calculate a function f(x,y,z), you just have to apply the common formula of error propagation:

$$\sigma_f(x,y,z)=\sqrt{\left( \frac{\partial f}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \sigma_y^2 + \left( \frac{\partial f}{\partial z} \right)^2 \sigma_z^2}$$

But I don't know what to do when you have many values for each (x, y, z). Suppose you measure m and x for a spring to calculate the elasticity constant via k(x,m) = mg/x. The variables m and x are measured with the intrinsic error of the apparatus. You repeat the measurement several times leaving the variables fixed. The errors could be combined using the quadratic sum.

Now, you change m, so x varies too. You repeat the same process above. This would introduce a dispersion in k.

What is the error of k in this case? It seems that possible solutions could be Deming regression, total least squares, Monte Carlo methods... but I'm quite lost; a simple example would be helpful.

To give some numbers (totally invented):

Measurement #1: m=10.0,10.1 and x=5.3,5.0.

Measurement #2: m=21.0,20.2 and x=10.4,10.2.
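For concreteness, here is a minimal sketch of the propagation formula applied to k = mg/x. The partials are ∂k/∂m = g/x and ∂k/∂x = -mg/x². The instrument uncertainties are not given in the thread, so sigma_m = sigma_x = 0.1 below are invented for illustration:

```python
import math

def sigma_k(m, x, sigma_m, sigma_x, g=9.81):
    """Propagate uncertainties in m and x through k = m*g/x.

    Partials: dk/dm = g/x, dk/dx = -m*g/x**2.
    """
    dk_dm = g / x
    dk_dx = -m * g / x ** 2
    return math.sqrt((dk_dm * sigma_m) ** 2 + (dk_dx * sigma_x) ** 2)

# first invented (m, x) pair from above, with assumed uncertainties
print(sigma_k(10.0, 5.3, 0.1, 0.1))
```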
 
  • #2
OK, first, the formula you have posted applies only when there is some kind of statistical error. You need to have some level of confidence that the variation in your results is statistical, at least in some sense. That is, you need to know that the sigmas really are statistical as opposed to systematic.

If you think about the uncertainty due to measurement this may, or may not, be suitable. For example, consider a ruler with a smallest division of 1 mm. You might quote +/- 0.5 mm as the error. This might be suitable to quote as a sigma for the formula you used.

Now you did four measurements altogether. That is not enough to reasonably estimate a sigma. So it may be that you want to substitute the measurement error for sigma. That may be all you have.

So you can take K = mg/x to define your function f. Take K as a function of m and x and plug it into your formula. So instead of f you have K(x,m). Instead of x, y, z you have x, m. And you need to get the partial of K with respect to x and then m, and use measurement uncertainties for sigma x and sigma m.

But you have four independent measurements. There is no reason to think that the uncertainty with a mass of 10 (or so) is the same as with a mass of 20 (or so). So you have to do this formula four times. This will give you four different estimates of sigma K. And that's all you can do, since there isn't any reason to expect them to be the same. Hint: What is the partial of m/x with respect to x?

You should find that the two in Measurement #1 are similar to each other, the two in #2 similar to each other, but the two groups are different.
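A sketch of doing the formula four times, once per (m, x) pair from post #1. The uncertainties sigma_m = sigma_x = 0.1 are assumed, not given in the thread:

```python
import math

g = 9.81
# the four invented (m, x) pairs from post #1
data = [(10.0, 5.3), (10.1, 5.0), (21.0, 10.4), (20.2, 10.2)]
sigma_m, sigma_x = 0.1, 0.1  # assumed instrument uncertainties

results = []
for m, x in data:
    k = m * g / x
    # quadrature sum of the two propagated terms
    sk = math.sqrt((g / x * sigma_m) ** 2 + (m * g / x ** 2 * sigma_x) ** 2)
    results.append((k, sk))
    print(f"k = {k:.2f} +/- {sk:.2f}")
```

With these numbers the two sigma_k estimates within each measurement set come out close to each other, while the two sets differ, as described above.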
 
  • #3
With two measurements per set we certainly need some external source to estimate the uncertainties on the mass and distance measurements. Usually they can be split into two components, a statistical one (different in each measurement, like reading values off a scale) and a systematic one (the same for all - calibration errors and similar things).
With the statistical uncertainty and your measurements you can get an average (of just two values in this case), calculate k in both cases and then make an average of those two. The systematic uncertainties are a bit more tricky (if they are significant). A miscalibrated scale will usually be wrong both at 10kg and 20kg in a similar way, and effects like "not putting the ruler at the correct zero position" might be the same for both sets of measurements as well. For a lab course, it is probably sufficient to assume systematic uncertainties apply to both sets of measurements in the same way, so you can add them afterwards (e.g. 1% mass scale uncertainty leads to 1% relative uncertainty on the final result).
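The procedure described above might be sketched like this, with invented numbers throughout: average k within each set (statistical part), then fold in a fully correlated 1% scale uncertainty at the end (systematic part). The value of sigma_stat here is an assumption, not derived from the data:

```python
import math

g = 9.81
# mean k from each pair of invented measurements in post #1
k_set1 = (10.0 * g / 5.3 + 10.1 * g / 5.0) / 2
k_set2 = (21.0 * g / 10.4 + 20.2 * g / 10.2) / 2
k = (k_set1 + k_set2) / 2

sigma_stat = 0.3        # assumed statistical uncertainty on k
sigma_syst = 0.01 * k   # 1% scale uncertainty, same for both sets
sigma_total = math.sqrt(sigma_stat ** 2 + sigma_syst ** 2)
print(f"k = {k:.2f} +/- {sigma_total:.2f}")
```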
 
  • #4
Thanks for the answers. But if one should leave both values of k separated, wouldn't it become very messy if there are many measurements, say 100? What is done in serious experiments?

Btw, for simplicity, I was supposing no variable correlations and no systematic error.
 
  • #5
In serious experiments, each k would certainly be evaluated separately as the spring might show deviations from Hooke's law. If it seems constant enough to make an average (within a certain range), then correlations between the individual values are taken into account.

Computer programs do the individual calculations, of course. Whether you have 2 or 100 sets of measurements does not matter much.
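If the individual k estimates do look consistent and correlations are negligible, the standard way to combine them is an inverse-variance weighted average. The k values and uncertainties below are invented for illustration:

```python
# independent k estimates and their uncertainties (invented numbers)
ks     = [18.5, 19.8, 19.8, 19.4]
sigmas = [0.4, 0.4, 0.2, 0.2]

# weight each estimate by 1/sigma^2
weights = [1.0 / s ** 2 for s in sigmas]
k_avg = sum(w * k for w, k in zip(weights, ks)) / sum(weights)
sigma_avg = sum(weights) ** -0.5  # uncertainty of the weighted mean
print(f"k = {k_avg:.2f} +/- {sigma_avg:.2f}")
```

Note the combined uncertainty is smaller than any individual one, which only holds if the inputs really are independent.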
 
  • #6
jinawee said:
But if one should leave both values of k separated, wouldn't it become very messy if there are many measurements, say 100?

What objection would you have to estimating the standard deviation of k from the 100 calculated values of mg/x ?
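That estimate is straightforward to sketch. The 100 (m, x) readings below are simulated around nominal values with invented spreads, purely to illustrate taking the standard deviation of the calculated k values:

```python
import math
import random
import statistics

random.seed(1)
g = 9.81
# 100 simulated (m, x) readings around nominal values (invented spreads)
ks = [random.gauss(10.0, 0.1) * g / random.gauss(5.0, 0.1)
      for _ in range(100)]

k_mean = statistics.mean(ks)
k_std = statistics.stdev(ks)        # spread of individual k values
k_err = k_std / math.sqrt(len(ks))  # uncertainty on the mean
print(f"k = {k_mean:.2f} +/- {k_err:.2f}")
```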

What is done in serious experiments?

Applying statistics to real life problems is subjective. Serious experimenters in different fields may use different methods. You should look at published reports or ask experimenters in your field of interest what they do. It can be more a cultural matter than a mathematical problem.

For example, as DEvans points out, the formula:

$$\sigma_f(x,y,z)=\sqrt{\left( \frac{\partial f}{\partial x} \right)^2 \sigma_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 \sigma_y^2 + \left( \frac{\partial f}{\partial z} \right)^2 \sigma_z^2}$$

is an approximation, and one that holds only under several assumptions. You say

When you make a single measurement of several variables, say (x,y,z) and you calculate a function f(x,y,z), you just have to apply the common formula of error propagation:

That reveals that the formula is traditional in your field of work, perhaps traditional in most physical science experiments. But someone working in the social sciences or testing pharmaceuticals might follow a different tradition.
 
  • #7
Stephen Tashi said:
What objection would you have to estimating the standard deviation of k from the 100 calculated values of mg/x ?
Nonlinearity of the spring and correlated uncertainties.
If you measure the same point 100 times, you get a good estimate for your statistical uncertainty, of course (neglecting issues like temperature drift over time that could also have an influence - and tons of other things we did not think of so far).
Stephen Tashi said:
That reveals that the formula is traditional in your field of work, perhaps a traditional in most physical science experiments. But someone working in the social sciences or testing pharmaceuticals might follow a different tradition.
*must resist commenting on the quality of statistics in those fields*
 

1. What is error propagation and why is it important?

Error propagation is the process of quantifying how uncertainties in measured quantities affect the uncertainty in a calculated result. It is important because it helps to determine the overall accuracy and reliability of a measurement or calculation.

2. What are the main sources of error in measurement?

The main sources of error in measurement include systematic errors, random errors, and human error. Systematic errors are consistent and can be caused by flawed equipment or incorrect procedures. Random errors are unpredictable and can be caused by environmental factors or limitations of the measurement tools. Human error refers to mistakes made by the person conducting the measurement.

3. How do you determine the error propagation for two measurements using addition or subtraction?

To determine the error propagation for two measurements combined by addition or subtraction, you combine the absolute uncertainties of the two measurements. Adding them directly gives a worst-case bound; for independent random errors, the standard practice is to add them in quadrature (the square root of the sum of the squares). The absolute uncertainty of each measurement can be determined from the precision of the measuring tool used.

4. How do you determine the error propagation for two measurements using multiplication or division?

To determine the error propagation for two measurements combined by multiplication or division, you combine the relative uncertainties of the two measurements, again in quadrature for independent errors. The relative uncertainty is the ratio of the absolute uncertainty to the measured value. The resulting relative uncertainty of the calculated result can be converted to an absolute uncertainty by multiplying it by the calculated result.

5. Can error propagation be used for more than two measurements?

Yes, error propagation can be used for any number of measurements. The general rule is to combine the absolute uncertainties for addition and subtraction, and the relative uncertainties for multiplication and division, in quadrature when the errors are independent. For more complex calculations involving multiple measurements, the error propagation formula can be applied iteratively to determine the overall uncertainty of the final result.
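The two rules above can be sketched as small helper functions; both are the cases the general propagation formula reduces to for independent errors:

```python
import math

def sigma_sum(sa, sb):
    """Uncertainty of a + b (or a - b) for independent a, b:
    absolute uncertainties add in quadrature."""
    return math.sqrt(sa ** 2 + sb ** 2)

def sigma_product(a, sa, b, sb):
    """Uncertainty of a * b for independent a, b:
    relative uncertainties add in quadrature."""
    rel = math.sqrt((sa / a) ** 2 + (sb / b) ** 2)
    return abs(a * b) * rel

print(sigma_sum(3.0, 4.0))             # absolute uncertainty of a sum
print(sigma_product(10.0, 0.3, 1.0, 0.04))
```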
