Finding Uncertainty of Average Values

  • Context: Undergrad
  • Thread starter: marvolo1300
  • Tags: Average, Uncertainty
SUMMARY

The discussion focuses on calculating the uncertainty of the average of three measurements: 3.30±0.1, 3.32±0.1, and 3.31±0.1. It establishes that errors add when measurements are summed, giving a conservative uncertainty of ±0.1 on the average, or about ±0.06 if the errors are independent and Gaussian (root-sum-square). The conversation also introduces weighted averages for incorporating a more precise measurement, 3.303±0.010, which yields a new average of 3.3032±0.0098. The importance of distinguishing precision from uncertainty in engineering contexts is emphasized, particularly when designing safety-critical structures.

PREREQUISITES
  • Understanding of basic statistics, particularly mean and error propagation.
  • Familiarity with the concepts of independent and correlated measurements.
  • Knowledge of weighted averages and their application in data analysis.
  • Basic principles of uncertainty in measurements and their implications in engineering.
NEXT STEPS
  • Study the propagation of uncertainty in measurements using the formula for error propagation.
  • Learn about weighted averages and how to apply them in statistical analysis.
  • Explore the differences between precision and accuracy in measurement contexts.
  • Investigate engineering safety standards and how uncertainty impacts design decisions.
USEFUL FOR

Engineers, data analysts, and researchers involved in experimental design and measurement analysis, particularly those focused on precision and uncertainty in engineering applications.

marvolo1300
Let's say I have three values: 3.30±0.1, 3.32±0.1, and 3.31±0.1.

How would I find the uncertainty of the average of these values?
 
There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?
 
Vanadium 50 said:
There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?

Sorry, I'm not sure what you mean. These measurements are the same length recorded 3 times.
 
There is an engineering "rule of thumb": when measurements add, their errors add; when measurements multiply, their relative errors add.

That's because if U = f + g, then dU = df + dg, but if U = fg, then dU = f dg + g df, so that, dividing by fg = U, dU/U = dg/g + df/f.

Having said all of that, you are adding the three measurements, so their errors add (the 3 you divide by to get the average has no error, so it doesn't count).

Here, the error for each measurement is .01 so the error in the sum is .03 and, dividing by 3, the error in the average is .01 again. That should be no surprise.

The average of the three values is, of course, 3.31 ± 0.01.

A direct way to see the same thing is to argue that the largest the three numbers could be is 3.30 + .01 = 3.31, 3.31 + .01 = 3.32, and 3.32 + .01 = 3.33, so the largest their sum could be is 3.31 + 3.32 + 3.33 = 9.96 and the largest the average could be is 9.96/3 = 3.32. The smallest the three numbers could be is 3.30 - .01 = 3.29, 3.31 - .01 = 3.30, and 3.32 - .01 = 3.31. The smallest the sum could be is 3.29 + 3.30 + 3.31 = 9.90, so the smallest the average could be is 9.90/3 = 3.30. That is, the average could be as large as 3.31 + .01 and as small as 3.31 - .01. That means the correct value lies in the range 3.31 ± .01.
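For concreteness, here is a minimal Python sketch of this worst-case bound, treating the quoted uncertainties as hard limits. It uses the ±0.1 from the original question (a later post notes the 0.01 figures above are a typo for 0.1).

```python
# Worst-case bound on the average, treating the quoted uncertainties as
# hard limits. Uses the ±0.1 from the original question.
values = [3.30, 3.32, 3.31]
err = 0.1

mean = sum(values) / len(values)
low  = sum(v - err for v in values) / len(values)   # smallest possible average
high = sum(v + err for v in values) / len(values)   # largest possible average

print(f"average = {mean:.2f}, range [{low:.2f}, {high:.2f}]")  # 3.31, [3.21, 3.41]
```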
 
Assuming the measurements truly are independent and that the errors are Gaussian, the standard approach is that the error of a sum of numbers is the square root of the sum of the squares of the individual errors (RSS). The arithmetic mean is the sum divided by the number of samples, so one likewise divides the RSS error by the number of samples to get an estimate of the error in the average.

In this case, all of the errors are equal (0.1). The RSS error is 0.1√3. Dividing by 3 yields 0.1/√3, or about 0.06.
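As a check, a short Python sketch of this root-sum-square calculation, assuming independent Gaussian errors as stated:

```python
import math

# Error of the mean for independent measurements with Gaussian errors:
# RSS of the individual errors, divided by the number of samples.
values = [3.30, 3.32, 3.31]
errors = [0.1, 0.1, 0.1]

mean = sum(values) / len(values)
rss = math.sqrt(sum(e ** 2 for e in errors))   # 0.1 * sqrt(3) ≈ 0.173
mean_err = rss / len(errors)                   # 0.1 / sqrt(3) ≈ 0.058

print(f"{mean:.2f} ± {mean_err:.2f}")          # 3.31 ± 0.06
```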
 
For a function of three variables f(x,y,z), the propagation of errors formula is:
\sigma_f^2 = \left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 + \left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 + \left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy} + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz} + 2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \sigma_{yz}

http://en.wikipedia.org/wiki/Propagation_of_uncertainty

Assuming that the covariances (\sigma_{ij}) are all zero, you get the result D H showed.

The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation that engineers use to conservatively approximate the errors. For engineers, the conservative part of the approximation is important, i.e. when designing some structure or device that may injure people, it is better to overestimate your errors.
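To illustrate, here is a small NumPy sketch of that propagation formula applied to the average f(x, y, z) = (x + y + z)/3; the correlated case shows why the conservative "errors add" rule is the safe choice when the covariances are unknown. (Illustrative only; the numbers are the ±0.1 from the original question.)

```python
import numpy as np

# Propagation of errors for f(x, y, z) = (x + y + z)/3:
# sigma_f^2 = J @ Sigma @ J, where J holds the partial derivatives (all 1/3)
# and Sigma is the covariance matrix of the measurements.
sigma = 0.1
J = np.array([1/3, 1/3, 1/3])                # partial derivatives of the average

cov_indep = np.diag([sigma**2] * 3)          # independent: zero covariances
print(np.sqrt(J @ cov_indep @ J))            # ≈ 0.0577 = 0.1 / sqrt(3)

cov_corr = np.full((3, 3), sigma**2)         # fully correlated measurements
print(np.sqrt(J @ cov_corr @ J))             # = 0.1, the "errors add" result
```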
 
DaleSpam said:
For a function of three variables f(x,y,z), the propagation of errors formula is:
\sigma_f^2 = \left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 + \left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 + \left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy} + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz} + 2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \sigma_{yz}

http://en.wikipedia.org/wiki/Propagation_of_uncertainty

Assuming that the covariances (\sigma_{ij}) are all zero, you get the result D H showed.
That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.

DaleSpam said:
The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation that engineers use to conservatively approximate the errors. For engineers, the conservative part of the approximation is important, i.e. when designing some structure or device that may injure people, it is better to overestimate your errors.
Side note: I think Halls' 0.01 figure is a (repeated) typo. He obviously meant 0.1 rather than 0.01; his 0.01 figure is not a conservative error estimate.


-----------------------------------------------------

One last item on this: Suppose you made a fourth measurement, but now use more precise instrumentation. Suppose the measurement is 3.303±0.010. How to combine this more precise estimate with those less precise measurements, and how to compute the error in this new average?

You don't want to use a simple arithmetic mean any more. That more precise measurement should have more weight. What you want is a weighted average, and when you have error estimates on hand, the "best" weight from either a maximum likelihood estimator (MLE) or a best linear unbiased estimator (BLUE) perspective is the inverse of the square of the uncertainty. Once again assuming independent, unbiased, and Gaussian measurements,
\bar x = \frac{\sum_i \frac{x_i}{\sigma_i^2}} {\sum_i \frac{1}{\sigma_i^2}}
The best estimate of the error is
\sigma^2 = \frac{1} {\sum_i \frac{1}{\sigma_i^2}}

The weighted average with our new, ultra-precise measurement becomes 3.3032±0.0098. The new measurement dominates over the old ones, as it should. The weighted average is almost the same as the value yielded by this single measurement, and those less precise measurements didn't do much to decrease the uncertainty in the weighted average.
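A minimal Python sketch of that weighted average, using the formulas above with the four measurements:

```python
# Inverse-variance weighted average of the three original measurements and
# the more precise fourth one, following the formulas above.
values = [3.30, 3.32, 3.31, 3.303]
sigmas = [0.1, 0.1, 0.1, 0.010]

weights = [1 / s**2 for s in sigmas]
xbar = sum(w * x for w, x in zip(weights, values)) / sum(weights)
sigma_bar = (1 / sum(weights)) ** 0.5

print(f"{xbar:.4f} ± {sigma_bar:.5f}")   # 3.3032 ± 0.00985
```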
 
It makes no sense to quote a number as 3.32 ± 0.1. That says the value lies somewhere between 3.22 and 3.42, so the extra 0.02 of quoted precision cannot be justified.
Writing 3.32 means you know it is not 3.33 or 3.31; otherwise you would have recorded it as such.
3.32 implies ±0.01.
 
Sure it does. Look at the fine structure constant, http://physics.nist.gov/cgi-bin/cuu/Value?alph:
α = 7.2973525698 × 10⁻³ ± 0.0000000024 × 10⁻³.

You need to understand the difference between precision and uncertainty. This instrument apparently has a precision of 1/100 but an uncertainty of 1/10.
 
D H said:
That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.
It's not just the risk of a correlation. For engineering purposes, you may want to allow for the worst case. If the given ranges of the individual values represent hard limits, not merely some number of standard deviations, then adding those ranges may be entirely appropriate. Converting to standard deviations can give inadequate safety margins if too few standard deviations are used, or grossly excessive ones if enough standard deviations are taken to reach, say, 99% confidence under a normal distribution.
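To make the trade-off concrete, a small sketch comparing the hard-limit sum of three ±0.1 ranges with k-sigma margins from the RSS combination (illustrative numbers only):

```python
import math

# Error on the sum of three measurements, each quoted as ±0.1:
# hard-limit (ranges add) versus k-sigma margins from the RSS combination.
errs = [0.1, 0.1, 0.1]

hard_limit = sum(errs)                        # 0.30: worst case, errors add
rss = math.sqrt(sum(e**2 for e in errs))      # ≈ 0.17: one-sigma RSS

for k in (1, 2, 3):
    print(f"{k} sigma: {k * rss:.2f}   (hard limit: {hard_limit:.2f})")
# 1 sigma falls short of the hard limit; 3 sigma overshoots it.
```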
 
