DaleSpam said:
For a function of three variables f(x,y,z), the propagation of errors formula is:
\sigma_f^2 = \left| \frac{\partial f}{\partial x} \right|^2 \sigma_x^2 + \left| \frac{\partial f}{\partial y} \right|^2 \sigma_y^2 + \left| \frac{\partial f}{\partial z} \right|^2 \sigma_z^2 + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial y} \sigma_{xy} + 2 \frac{\partial f}{\partial x} \frac{\partial f}{\partial z} \sigma_{xz} + 2 \frac{\partial f}{\partial y} \frac{\partial f}{\partial z} \sigma_{yz}
http://en.wikipedia.org/wiki/Propagation_of_uncertainty
Assuming that the covariances (\sigma_{ij}) are all zero, you get the result D H showed.
That is, of course, a big if, and it is why engineers use a rule of thumb that tends to overestimate errors. Better safe than sorry.
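As a minimal numeric sketch of the zero-covariance case, here is the quadrature formula applied to a hypothetical function f(x, y, z) = xy/z (the function and all numbers below are made up for illustration):

```python
import math

# Hypothetical function f(x, y, z) = x * y / z with independent
# (zero-covariance) measurement errors
x, y, z = 2.0, 3.0, 4.0
sx, sy, sz = 0.1, 0.2, 0.05

f = x * y / z          # 1.5

# Partial derivatives evaluated at the measured values
dfdx = y / z           #  0.75
dfdy = x / z           #  0.5
dfdz = -x * y / z**2   # -0.375

# With all covariances zero, only the quadratic terms survive
sf = math.sqrt((dfdx * sx)**2 + (dfdy * sy)**2 + (dfdz * sz)**2)
print(f"f = {f} ± {sf:.4f}")  # f = 1.5 ± 0.1264
```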
The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation used by engineers to conservatively approximate errors. For engineers, the conservative part of the approximation is important, i.e., when designing a structure or device that may injure people, it is better to overestimate your errors.
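To make the "conservative" point concrete, here is a sketch comparing the quadrature result with the linear worst-case rule for a simple sum f = x + y + z (the uncertainties below are hypothetical):

```python
import math

# Hypothetical: f = x + y + z, each term measured with uncertainty 0.1,
# so every partial derivative is 1
sigmas = [0.1, 0.1, 0.1]

quadrature = math.sqrt(sum(s**2 for s in sigmas))  # ≈ 0.173, assumes independence
worst_case = sum(sigmas)                           # 0.3, the conservative rule
print(quadrature, worst_case)
```

The linear rule always bounds the quadrature result from above, which is why it is the safer choice when underestimating an error could cause harm.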
DaleSpam said:
The engineering rule of thumb given by HallsOfIvy is exactly that: an easy calculation used by engineers to conservatively approximate errors. For engineers, the conservative part of the approximation is important, i.e., when designing a structure or device that may injure people, it is better to overestimate your errors.
Side note: I think Halls' 0.01 figure is a (repeated) typo. He obviously meant 0.1 rather than 0.01; 0.01 is not a conservative error estimate.
-----------------------------------------------------
One last item on this: Suppose you made a fourth measurement, but now with more precise instrumentation, and suppose that measurement is 3.303±0.010. How do you combine this more precise estimate with the less precise measurements, and how do you compute the error in the new average?
You don't want to use a simple arithmetic mean any more. That more precise measurement should have more weight. What you want is a weighted average, and when you have error estimates on hand, the "best" weight from either a maximum likelihood estimator (MLE) or a best linear unbiased estimator (BLUE) perspective is the inverse of the square of the uncertainty. Once again assuming independent, unbiased, and Gaussian measurements,
\bar x = \frac{\sum_i \frac{x_i}{\sigma_i^2}} {\sum_i \frac{1}{\sigma_i^2}}
The best estimate of the error is
\sigma^2 = \frac{1} {\sum_i \frac{1}{\sigma_i^2}}
The weighted average with our new, ultra-precise measurement becomes 3.3032±0.0098. The new measurement dominates over the old ones, as it should. The weighted average is almost the same as the value yielded by this single measurement, and those less precise measurements didn't do much to decrease the uncertainty in the weighted average.
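The two formulas above can be sketched in a few lines. The measurements here are hypothetical stand-ins (three coarse readings at ±0.05 plus the precise one), so the numbers will differ slightly from those quoted in the thread:

```python
# Hypothetical data: three measurements at ±0.05 plus the precise 3.303±0.010
xs     = [3.30, 3.31, 3.29, 3.303]
sigmas = [0.05, 0.05, 0.05, 0.010]

# Inverse-variance weights (the MLE/BLUE choice for independent,
# unbiased Gaussian measurements)
weights = [1.0 / s**2 for s in sigmas]

xbar  = sum(w * x for w, x in zip(weights, xs)) / sum(weights)
sigma = (1.0 / sum(weights)) ** 0.5

print(f"{xbar:.4f} ± {sigma:.4f}")
# The precise measurement dominates the weighted average, and sigma
# comes out slightly below 0.010 -- the coarse measurements barely
# tighten the uncertainty at all.
```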