
Uncertainty of an Average

  1. Jun 9, 2012 #1
    Let's say I have three values: 3.30±0.1, 3.32±0.1, and 3.31±0.1.

    How would I find the uncertainty of the average of these values?
  3. Jun 9, 2012 #2

    Vanadium 50

    User Avatar
    Staff Emeritus
    Science Advisor
    Education Advisor

    There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?
  4. Jun 9, 2012 #3
    Sorry, I'm not sure what you mean. These measurements are the same length recorded 3 times.
  5. Jun 9, 2012 #4


    HallsofIvy

    User Avatar
    Staff Emeritus
    Science Advisor

    There is an engineering "rule of thumb" that "When measurements add, their errors add. When measurements multiply, their relative errors add."

    That's because if U = f + g, then dU = df + dg; but if U = fg, then dU = f dg + g df, so that, dividing by fg = U, we get dU/U = df/f + dg/g.

    Having said all of that, you are adding the three measurements, so their errors add (the 3 you divide by to get the average has no error, so it doesn't count).

    Here, the error for each measurement is .01 so the error in the sum is .03 and, dividing by 3, the error in the average is .01 again. That should be no surprise.

    The average of the three values is, of course, [itex]3.31\pm 0.01[/itex].

    A direct way to see the same thing is to argue as follows. The largest the three numbers could be is 3.30 + .01 = 3.31, 3.31 + .01 = 3.32, and 3.32 + .01 = 3.33, so the largest their sum could be is 3.31 + 3.32 + 3.33 = 9.96, and the largest the average could be is 9.96/3 = 3.32. The smallest the three numbers could be is 3.30 - .01 = 3.29, 3.31 - .01 = 3.30, and 3.32 - .01 = 3.31, so the smallest the sum could be is 3.29 + 3.30 + 3.31 = 9.90, and the smallest the average could be is 9.90/3 = 3.30. That is, the average could be as large as 3.31 + .01 and as small as 3.31 - .01, which means the correct value lies in the range [itex]3.31\pm .01[/itex].
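
    The same worst-case argument is easy to check numerically. Here is a minimal Python sketch using the ±0.1 half-widths quoted in the original post (the post above works with ±0.01; see D H's note below):

    [code]
    # Worst-case (interval) bounds on the average -- a check of the argument above.
    # Values from the original post, with the quoted +/-0.1 half-width.
    values = [3.30, 3.32, 3.31]
    h = 0.1

    n = len(values)
    mean = sum(values) / n
    lo = sum(v - h for v in values) / n   # every measurement at its smallest
    hi = sum(v + h for v in values) / n   # every measurement at its largest

    print(f"mean = {mean:.2f}, worst-case range = [{lo:.2f}, {hi:.2f}]")
    # mean = 3.31, worst-case range = [3.21, 3.41], i.e. 3.31 +/- 0.1
    [/code]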
  6. Jun 9, 2012 #5

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    Assuming the measurements truly are independent and that the errors are Gaussian, the standard approach is that the error of a sum of numbers is the square root of the sum of the squares of the individual errors (the root-sum-square, or RSS). The arithmetic mean is the sum divided by the number of samples. Similarly, one divides the RSS error by the number of samples to get an estimate of the error in the average.

    In this case, all of the errors are equal (0.1). The RSS error is 0.1√3. Dividing by 3 yields 0.1/√3, or about 0.06.
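
    A minimal Python sketch of this quadrature rule, assuming independent Gaussian errors as stated:

    [code]
    import math

    # Error of the mean under independent, Gaussian errors:
    # combine in quadrature (root-sum-square), then divide by N.
    sigmas = [0.1, 0.1, 0.1]
    n = len(sigmas)

    rss = math.sqrt(sum(s**2 for s in sigmas))  # error of the sum: 0.1*sqrt(3)
    sigma_mean = rss / n                        # error of the mean: 0.1/sqrt(3)

    print(f"sigma of sum  = {rss:.4f}")         # ~0.1732
    print(f"sigma of mean = {sigma_mean:.4f}")  # ~0.0577, i.e. about 0.06
    [/code]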
  7. Jun 9, 2012 #6


    Staff: Mentor

    For a function of three variables f(x,y,z), the propagation of errors formula is:
    [tex]\sigma_f^2 = \left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 +
    \left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 +
    \left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 +
    2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy} +
    2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz} +
    2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \sigma_{yz}[/tex]


    If the covariances ([itex]\sigma_{ij}[/itex]) are all zero, then you get the result D H showed.
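
    For the mean of three values this reduces to the RSS result. A minimal numpy sketch, writing the formula above in the standard matrix form [itex]\sigma_f^2 = g^T C g[/itex] (with gradient g and covariance matrix C; the off-diagonal entries of C carry the covariances):

    [code]
    import numpy as np

    # First-order error propagation: sigma_f^2 = g^T C g.
    # For the average f(x, y, z) = (x + y + z)/3, each partial derivative is 1/3.
    grad = np.array([1/3, 1/3, 1/3])

    # Diagonal covariance matrix: independent measurements, sigma = 0.1 each.
    C = np.diag([0.1**2, 0.1**2, 0.1**2])

    sigma_f = np.sqrt(grad @ C @ grad)
    print(f"sigma of mean = {sigma_f:.4f}")  # ~0.0577, matching 0.1/sqrt(3)
    [/code]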

    The engineering rule of thumb given by HallsofIvy is exactly that: an easy calculation used by engineers to conservatively approximate the errors. For engineers, the conservative part of the approximation is important; when designing a structure or device that may injure people, it is better to overestimate your errors.
    Last edited: Jun 9, 2012
  8. Jun 9, 2012 #7

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.

    Side note: I think Halls' 0.01 figure is a (repeated) typo. He obviously meant 0.1 rather than 0.01. His 0.01 figure is not a conservative error estimate.


    One last item on this: Suppose you made a fourth measurement, but now use more precise instrumentation. Suppose the measurement is 3.303±0.010. How to combine this more precise estimate with those less precise measurements, and how to compute the error in this new average?

    You don't want to use a simple arithmetic mean any more. That more precise measurement should have more weight. What you want is a weighted average, and when you have error estimates on hand, the "best" weight from either a maximum likelihood estimator (MLE) or a best linear unbiased estimator (BLUE) perspective is the inverse of the square of the uncertainty. Once again assuming independent, unbiased, and Gaussian measurements,
    [tex]\bar x = \frac{\sum_i \frac{x_i}{\sigma_i^2}} {\sum_i \frac{1}{\sigma_i^2}}[/tex]
    The best estimate of the error is
    [tex]\sigma^2 = \frac{1} {\sum_i \frac{1}{\sigma_i^2}}[/tex]

    The weighted average with our new, ultra-precise measurement becomes 3.3032±0.0098. The new measurement dominates over the old ones, as it should. The weighted average is almost the same as the value yielded by this single measurement, and those less precise measurements didn't do much to decrease the uncertainty in the weighted average.
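
    A minimal Python sketch of these two formulas, reproducing the numbers above:

    [code]
    import math

    # Inverse-variance weighted mean: weight each value by 1/sigma^2.
    values = [3.30, 3.32, 3.31, 3.303]
    sigmas = [0.1, 0.1, 0.1, 0.010]

    weights = [1 / s**2 for s in sigmas]
    xbar = sum(w * x for w, x in zip(weights, values)) / sum(weights)
    sigma = math.sqrt(1 / sum(weights))

    print(f"weighted mean = {xbar:.4f}")   # 3.3032
    print(f"sigma         = {sigma:.5f}")  # 0.00985, i.e. the +/-0.0098 above
    [/code]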
  9. Jun 9, 2012 #8
    It makes no sense to quote a number as 3.32 ± 0.1. That says the number lies somewhere between 3.22 and 3.42, so you cannot justify the 0.02.
    Writing 3.32 means you know it is not 3.33 or 3.31; otherwise you would have recorded it as such.
    That is, 3.32 implies ±0.01.
  10. Jun 9, 2012 #9

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    Sure it does. Look at the fine structure constant, http://physics.nist.gov/cgi-bin/cuu/Value?alph:

    You need to understand the difference between precision and uncertainty. This instrument apparently has a precision of 1/100 but an uncertainty of 1/10.
  11. Jun 9, 2012 #10


    User Avatar
    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    It's not just the risk of a correlation. For engineering purposes, you may want to allow for the worst case. Assuming that the given ranges of the individual values represent hard limits, not merely some number of standard deviations, adding those ranges may be entirely appropriate. Converting to standard deviations may lead to inadequate safeguards if too few standard deviations are taken, or to grossly excessive ones if enough standard deviations are taken to achieve, say, 99% confidence under a normal distribution.
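
    To make the contrast concrete, here is a minimal sketch comparing the two conventions for the error of the sum of the three measurements (treating the ±0.1 figures as hard limits in one case and as one-sigma values in the other):

    [code]
    import math

    # Worst-case (hard-limit) error vs. statistical (quadrature) error
    # for the sum of three measurements with half-width / sigma = 0.1 each.
    errs = [0.1, 0.1, 0.1]

    worst_case = sum(errs)                            # 0.30: ranges add
    quadrature = math.sqrt(sum(e**2 for e in errs))   # ~0.17: independent Gaussians

    print(f"worst case = {worst_case:.2f}, quadrature = {quadrature:.2f}")
    [/code]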
