Finding Uncertainty of Average Values

In summary, the conversation discusses how to find the uncertainty of the average of several measurements, including the case where one measurement is more precise than the others. The recommended approach there is a weighted average, with the more precise measurement given a higher weight, which results in a smaller uncertainty for the weighted average. The difference between precision and uncertainty is also explained, using the fine structure constant as an example.
  • #1
marvolo1300
Let's say I have three values: 3.30±0.1, 3.32±0.1, and 3.31±0.1.

How would I find the uncertainty of the average of these values?
 
  • #2
There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?
 
  • #3
Vanadium 50 said:
There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?

Sorry, I'm not sure what you mean. These measurements are the same length recorded 3 times.
 
  • #4
There is an engineering "rule of thumb": "When measurements add, their errors add. When measurements multiply, their relative errors add."

That's because if U = f + g, then dU = df + dg, but if U = fg, then dU = f dg + g df, so that, dividing by U = fg, dU/U = df/f + dg/g.

Having said all of that, you are adding the three measurements, so their errors add (the 3 you divide by to get the average has no error, so it doesn't count).

Here, the error for each measurement is .01 so the error in the sum is .03 and, dividing by 3, the error in the average is .01 again. That should be no surprise.

The average of the three values is, of course, [itex]3.31\pm 0.01[/itex].

A direct way to see the same thing is to argue that the largest the three numbers could be is 3.30+.01= 3.31, 3.31+ .01= 3.32, and 3.32+ .01= 3.33 so the largest their sum could be is 3.31+ 3.32+ 3.33= 9.96 and the largest the average could be is 9.96/3= 3.32. The smallest the three numbers could be is 3.30- .01= 3.29, 3.31- .01= 3.30, and 3.32- .01= 3.31. The smallest the sum could be is 3.29+ 3.30+ 3.31= 9.90 so the smallest the average could be is 9.90/3= 3.30. That is, the average could be as large as 3.31+ .01 and the smallest is 3.31- .01. That means the correct value lies in the range [itex]3.31\pm .01[/itex].
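
A minimal sketch of the same worst-case (interval) argument in Python, using the ±0.1 uncertainties quoted in post #1 (the numbers and variable names here are just for illustration):

[code]
# Worst-case (interval) bound on the average: when values add, their errors add.
values = [3.30, 3.32, 3.31]
errors = [0.1, 0.1, 0.1]      # per-measurement uncertainties from post #1

n = len(values)
mean = sum(values) / n
lo = sum(v - e for v, e in zip(values, errors)) / n   # smallest possible average
hi = sum(v + e for v, e in zip(values, errors)) / n   # largest possible average

print(f"average = {mean:.2f}, worst-case range = [{lo:.2f}, {hi:.2f}]")
# average = 3.31, worst-case range = [3.21, 3.41], i.e. 3.31 +/- 0.1
[/code]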
 
  • #5
Assuming the measurements truly are independent and that the errors are Gaussian, the standard approach is that the error of a sum of numbers is the square root of the sum of the squares of the individual errors (the RSS, or root-sum-square, error). The arithmetic mean is the sum divided by the number of samples, so one likewise divides the RSS error by the number of samples to get an estimate of the error in that average.

In this case, all of the errors are equal (0.1). The RSS error is 0.1√3. Dividing by 3 yields 0.1/√3, or about 0.06.
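
A quick numerical check of that result, as a Python sketch (assuming, as above, three independent measurements sharing a known Gaussian error of 0.1):

[code]
import math

# Error of the mean for n independent measurements with the same known error sigma:
# RSS error of the sum is sigma * sqrt(n); dividing by n gives sigma / sqrt(n).
values = [3.30, 3.32, 3.31]
sigma = 0.1                     # per-measurement uncertainty
n = len(values)

mean = sum(values) / n
sigma_mean = math.sqrt(n * sigma**2) / n    # equals sigma / sqrt(n)

print(f"{mean:.2f} +/- {sigma_mean:.2f}")   # prints 3.31 +/- 0.06
[/code]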
 
  • #6
For a function of three variables f(x,y,z), the propagation of errors formula is:
[tex]\sigma_f^2=
\left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 +
\left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 +
\left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 +
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy}+
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz}+
2 \frac{\partial f}{\partial z} \frac{\partial f}{\partial y} \sigma_{zy}[/tex]

http://en.wikipedia.org/wiki/Propagation_of_uncertainty

Assuming that the covariances ([itex]\sigma_{ij}[/itex]) are all zero then you get the result D H showed.
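
To connect this to the present problem: for the mean [itex]f(x,y,z)=(x+y+z)/3[/itex] each partial derivative equals 1/3, so with zero covariances and equal uncertainties [itex]\sigma_x=\sigma_y=\sigma_z=\sigma=0.1[/itex],
[tex]\sigma_f^2 = 3\left(\frac{1}{3}\right)^2\sigma^2 = \frac{\sigma^2}{3},\qquad \sigma_f=\frac{\sigma}{\sqrt{3}}=\frac{0.1}{\sqrt{3}}\approx 0.06.[/tex]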

The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation used by engineers to conservatively approximate the errors. For engineers, the conservative part of the approximation is important, i.e., when designing some structure or device that may injure people, it is better to overestimate your errors.
 
  • #7
DaleSpam said:
For a function of three variables f(x,y,z), the propagation of errors formula is:
[tex]\sigma_f^2=
\left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 +
\left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 +
\left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 +
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy}+
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz}+
2 \frac{\partial f}{\partial z} \frac{\partial f}{\partial y} \sigma_{zy}[/tex]

http://en.wikipedia.org/wiki/Propagation_of_uncertainty

Assuming that the covariances ([itex]\sigma_{ij}[/itex]) are all zero then you get the result D H showed.
That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.

DaleSpam said:
The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation used by engineers to conservatively approximate the errors. For engineers, the conservative part of the approximation is important, i.e., when designing some structure or device that may injure people, it is better to overestimate your errors.
Side note: I think Halls' 0.01 figure is a (repeated) typo. He obviously meant 0.1 rather than 0.01. His 0.01 figure is not a conservative error estimate.


-----------------------------------------------------

One last item on this: Suppose you made a fourth measurement, but now using more precise instrumentation, and suppose that measurement is 3.303±0.010. How do you combine this more precise measurement with the less precise ones, and how do you compute the error in the new average?

You don't want to use a simple arithmetic mean any more. That more precise measurement should have more weight. What you want is a weighted average, and when you have error estimates on hand, the "best" weight from either a maximum likelihood estimator (MLE) or a best linear unbiased estimator (BLUE) perspective is the inverse of the square of the uncertainty. Once again assuming independent, unbiased, and Gaussian measurements,
[tex]\bar x = \frac{\sum_i \frac{x_i}{\sigma_i^2}} {\sum_i \frac{1}{\sigma_i^2}}[/tex]
The best estimate of the error is
[tex]\sigma^2 = \frac{1} {\sum_i \frac{1}{\sigma_i^2}}[/tex]

The weighted average with our new, more precise measurement becomes 3.3032±0.0098. The new measurement dominates the old ones, as it should: the weighted average is almost the same as the value yielded by this single measurement, and the less precise measurements did little to reduce the uncertainty of the weighted average.
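
As a sketch, the same inverse-variance weighting in Python (measurement values as above, assuming independence):

[code]
import math

# Inverse-variance weighted average of measurements with individual uncertainties.
values = [3.30, 3.32, 3.31, 3.303]
sigmas = [0.1, 0.1, 0.1, 0.010]

weights = [1 / s**2 for s in sigmas]
x_bar = sum(w * x for w, x in zip(weights, values)) / sum(weights)
sigma_bar = math.sqrt(1 / sum(weights))

print(f"x_bar = {x_bar:.4f}, sigma = {sigma_bar:.5f}")   # x_bar = 3.3032, sigma = 0.00985
[/code]

which agrees, up to rounding, with the 3.3032±0.0098 figure quoted above.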
 
  • #8
It makes no sense to quote a number as 3.32±0.1. That says the number lies anywhere between 3.22 and 3.42, so you cannot justify the 0.02.
Writing 3.32 means you know it is not 3.33 or 3.31; otherwise you would have recorded it as such.
3.32 implies ±0.01.
 
  • #9
Sure it does. Look at the fine structure constant, http://physics.nist.gov/cgi-bin/cuu/Value?alph:
[itex]\alpha=7.2973525698\times10^{-3}\pm0.0000000024\times10^{-3}[/itex].

You need to understand the difference between precision and uncertainty. This instrument apparently has a precision of 1/100 but an uncertainty of 1/10.
 
  • #10
D H said:
That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.
It's not just the risk of a correlation. For engineering purposes, you may want to allow for the worst case. If the given ranges of the individual values represent hard limits, not merely some number of standard deviations, then adding those ranges may be entirely appropriate. Converting them to standard deviations may lead to inadequate safeguards if too few standard deviations are taken, or to grossly excessive ones if enough standard deviations are taken to achieve (under a normal distribution) 99% confidence.
 

What is the definition of "Uncertainty of an Average"?

The uncertainty of an average refers to the range of possible values in which the true average of a set of measurements may fall. It takes into account the errors and variations in the measurements and provides a measure of how confident we can be in the calculated average.

How is the uncertainty of an average calculated?

The uncertainty of an average is typically calculated by taking the standard deviation of the measurements and dividing it by the square root of the sample size. This value represents the standard error of the mean and provides an estimate of the uncertainty in the calculated average.
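
For instance, here is a short Python sketch of that calculation, applied to the three measurements from this thread (the spread is estimated from the data themselves rather than from the quoted ±0.1 instrument uncertainty):

[code]
import math
from statistics import mean, stdev

measurements = [3.30, 3.32, 3.31]

sem = stdev(measurements) / math.sqrt(len(measurements))   # standard error of the mean
print(f"{mean(measurements):.3f} +/- {sem:.3f}")           # prints 3.310 +/- 0.006
[/code]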

Why is it important to consider the uncertainty of an average?

Considering the uncertainty of an average is important because it allows us to understand the reliability and precision of our measurements. It also helps us to make informed decisions and draw accurate conclusions based on the data.

How does sample size affect the uncertainty of an average?

The uncertainty of an average decreases as the sample size increases. This is because a larger sample size provides more data points, reducing the impact of individual measurement errors and resulting in a more accurate average with lower uncertainty.

What are some methods for reducing the uncertainty of an average?

Some methods for reducing the uncertainty of an average include increasing the sample size, improving the precision of the measuring instruments, and conducting multiple trials. Additionally, carefully controlling and minimizing sources of error can also help to reduce uncertainty.
