marvolo1300 said:
Let's say I have three values: 3.30±0.1, 3.32±0.1, and 3.31±0.1.
How would I find the uncertainty of the average of these values?
Vanadium 50 said: There's not enough information here. Are the measurements independent? Correlated? Anticorrelated?
D H said: That of course is a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.

DaleSpam said: For a function of three variables f(x,y,z), the propagation of errors formula is:
[tex]\sigma_f^2=
\left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 +
\left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 +
\left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 +
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy}+
2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz}+
2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \sigma_{yz}[/tex]
http://en.wikipedia.org/wiki/Propagation_of_uncertainty
Assuming that the covariances ([itex]\sigma_{ij}[/itex]) are all zero, you get the result D H showed.
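For the average of the three values in the opening post, f = (x+y+z)/3 has partial derivatives of 1/3 each, so with zero covariances the formula reduces to [itex]\sigma_f = \sqrt{\sigma_x^2+\sigma_y^2+\sigma_z^2}/3[/itex]. A minimal sketch of that arithmetic:

```python
import math

# Three measurements with equal, independent uncertainties (from the opening post)
values = [3.30, 3.32, 3.31]
sigmas = [0.1, 0.1, 0.1]

n = len(values)
mean = sum(values) / n

# For f = (x + y + z)/n each partial derivative is 1/n, so with zero
# covariances the propagation formula reduces to:
#   sigma_f = sqrt(sigma_x^2 + sigma_y^2 + sigma_z^2) / n
sigma_mean = math.sqrt(sum(s ** 2 for s in sigmas)) / n

print(f"{mean:.3f} +/- {sigma_mean:.3f}")  # 3.310 +/- 0.058
```

Note that equal uncertainties make this [itex]0.1/\sqrt{3} \approx 0.058[/itex], smaller than any single measurement's uncertainty.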
The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation used by engineers to conservatively approximate errors. For engineers, the conservative part of the approximation is important; when designing a structure or device that may injure people, it is better to overestimate your errors.
Side note: I think Halls's 0.01 figure is a (repeated) typo. He obviously meant 0.1 rather than 0.01; 0.01 is not a conservative error estimate.

It's not just the risk of a correlation. For engineering purposes, you may want to allow for the worst case. Assuming that the given ranges of the individual values represent hard limits, not merely some number of standard deviations, adding those ranges may be entirely appropriate. Converting to standard deviations may lead to inadequate safeguards if too few standard deviations are taken, or to grossly excessive ones if enough standard deviations are used to achieve (under a normal distribution) 99% confidence.
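Under that hard-limits reading, the worst case is every measurement erring in the same direction, so the half-widths add before averaging: the mean of three values each known to ±0.1 is itself known only to ±0.1. A sketch of that comparison, assuming the ±0.1 ranges are hard limits:

```python
values = [3.30, 3.32, 3.31]
half_widths = [0.1, 0.1, 0.1]

n = len(values)
mean = sum(values) / n

# Worst case: all errors push the same way, so the half-widths add
# before dividing by n -- here (0.1 * 3) / 3 = 0.1.
worst_case = sum(half_widths) / n

print(f"{mean:.3f} +/- {worst_case:.3f}")  # 3.310 +/- 0.100
```

With equal half-widths the average gains nothing in the worst case, which is exactly why this estimate is conservative.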
The uncertainty of an average refers to the range of possible values in which the true average of a set of measurements may fall. It takes into account the errors and variations in the measurements and provides a measure of how confident we can be in the calculated average.
The uncertainty of an average is typically calculated by taking the standard deviation of the measurements and dividing it by the square root of the sample size. This value represents the standard error of the mean and provides an estimate of the uncertainty in the calculated average.
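That calculation can be sketched with Python's statistics module, using the three measurements from the opening post:

```python
import statistics

measurements = [3.30, 3.32, 3.31]

# Sample standard deviation of the scatter actually seen in the data
s = statistics.stdev(measurements)

# Standard error of the mean: s / sqrt(n)
sem = s / len(measurements) ** 0.5

print(f"mean = {statistics.mean(measurements):.3f}, SEM = {sem:.4f}")
# mean = 3.310, SEM = 0.0058
```

Note that this estimates the uncertainty from the observed scatter (here 0.01) rather than from the stated ±0.1 instrument uncertainties, so it answers a slightly different question than the propagation formula above.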
Considering the uncertainty of an average is important because it allows us to understand the reliability and precision of our measurements. It also helps us to make informed decisions and draw accurate conclusions based on the data.
The uncertainty of an average decreases as the sample size increases. This is because a larger sample size provides more data points, reducing the impact of individual measurement errors and resulting in a more accurate average with lower uncertainty.
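A quick simulation illustrates the [itex]1/\sqrt{n}[/itex] scaling, assuming independent, normally distributed errors about a hypothetical true value:

```python
import random
import statistics

random.seed(0)  # reproducible demo
true_value, sigma = 3.31, 0.1  # hypothetical values for illustration

sems = []
for n in (10, 100, 1000):
    # Simulate n independent measurements with normal errors
    sample = [random.gauss(true_value, sigma) for _ in range(n)]
    sems.append(statistics.stdev(sample) / n ** 0.5)

# The standard error of the mean shrinks roughly tenfold for every
# hundredfold increase in sample size
print([round(x, 4) for x in sems])
```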
Some methods for reducing the uncertainty of an average include increasing the sample size, improving the precision of the measuring instruments, and conducting multiple trials. Additionally, carefully controlling and minimizing sources of error can also help to reduce uncertainty.