brainpushups said:
Ok. So the OP's method (which I've seen called quadrature addition of uncertainty) estimates the uncertainty at about 7 cubic millimeters. If we accept that the uncertain digit is also counted as significant, then there are four significant digits.
The quadrature method guards against the overestimation of uncertainty that comes from the general idea you presented (finding the highest and lowest possible values). The big idea is that, provided we assume the uncertainties are completely random, there is a good probability of a net canceling effect (one value being above the true value and another being below it). Thus, if you use the absolute highest and lowest values you have often overestimated the overall uncertainty. See, for example, An Introduction to Error Analysis by John Taylor for an excellent, readable reference.
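To make that concrete, here is a minimal sketch comparing the two estimates for a volume ##V = abc##. The measurements and the ##\pm 0.01## reading uncertainty below are hypothetical stand-ins, since the actual values aren't quoted in this exchange.

```python
import math

# Hypothetical measurements (the original values aren't quoted here),
# each read to +/- 0.01 of the same length unit.
a, b, c = 1.52, 2.03, 1.88
delta = 0.01

V = a * b * c

# Quadrature ("add in quadrature") estimate: combine the first-order
# contributions of each measurement's uncertainty to the volume.
dV_quad = math.sqrt((b * c * delta) ** 2 +
                    (a * c * delta) ** 2 +
                    (a * b * delta) ** 2)

# Worst-case estimate: push every measurement to its high extreme.
dV_worst = (a + delta) * (b + delta) * (c + delta) - V

print(f"V                      = {V:.4f}")
print(f"quadrature uncertainty ~ {dV_quad:.4f}")
print(f"worst-case uncertainty ~ {dV_worst:.4f}")  # noticeably larger
```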
Of course, in reality the errors are random, with unknown probability distributions. Hopefully the means (expected values) of the individual errors equal zero, so the "central" value of the error bar is the actual mean of the data distribution, and the part after the ##\pm## sign is a measure of spread, such as a simple multiple of the standard deviation. Typically one would expect the error distribution to peak at zero and fall off as we move away from 0 on either side, so having errors near the ends of the stated interval is unlikely. And, of course, in order to have ABC near the theoretical lower bound we would need all three of A, B and C to be at their lower bounds, or nearly so; since each is below its true value with probability about 1/2, the probability of all three being low is about 1/8. It is more likely that we get partial cancellation, with some of the factors being too high and others too low. So, yes, indeed, the theoretical lower and upper bounds I gave are not likely to be realistic.
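A quick simulation illustrates the point. The "true" dimensions and the uniform error model below are assumptions made only for this sketch; the claim being checked is that all-three-low happens about one time in eight, while landing near the worst-case bound is far rarer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical true dimensions and uniform errors on [-0.01, 0.01]
# (the +/- 0.01 reading interval) -- both are assumptions.
alpha, beta, gamma = 1.52, 2.03, 1.88
delta = 0.01
n = 1_000_000

ea, eb, ec = (rng.uniform(-delta, delta, n) for _ in range(3))
V = (alpha + ea) * (beta + eb) * (gamma + ec)

V_true = alpha * beta * gamma
V_low = (alpha - delta) * (beta - delta) * (gamma - delta)  # worst-case lower bound

all_low = np.mean((ea < 0) & (eb < 0) & (ec < 0))
near_bound = np.mean(V < V_true - 0.9 * (V_true - V_low))  # bottom 10% of the interval

print("P(all three errors low)        ~", all_low)     # close to 1/8
print("P(V within 10% of lower bound) ~", near_bound)   # much smaller
```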
I realize that the "usual" way of estimating these things is to assume that ##A = \alpha+\epsilon_a##, ##B = \beta + \epsilon_b## and ##C = \gamma + \epsilon_c##, where ##\alpha, \beta, \gamma## are the unknown "true" values and the ##\epsilon##s are independent random errors with ##E(\epsilon_j) = 0## and some standard deviations ##\sigma_j## that are related to ##\pm 0.01## in some way; typically, ##.01 = k \sigma## for some ##k## near 1 (maybe a bit < 1 or a bit > 1). Then, keeping only first-order terms, we have (approximately) ##V \equiv ABC = \alpha \beta \gamma + \epsilon_v##, where ##E(\epsilon_v) = 0## and the variance ##\sigma_v^2## of ##\epsilon_v## is
$$ \sigma_v^2 = \sigma_1^2 + \sigma_2^2 + \sigma_3^2. $$
If all three ##\sigma##s are equal (as they are in this case), then ##\sigma_v = \sqrt{3} \sigma##, so the appropriate error would be ##\pm \sqrt{3} (.01) \doteq 0.017##.
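A side note on the first-order step, since it isn't written out above: for a product the sensitivity factors come along with each error term, so in general

$$ \epsilon_v \approx \beta\gamma\,\epsilon_a + \alpha\gamma\,\epsilon_b + \alpha\beta\,\epsilon_c \quad\Longrightarrow\quad \sigma_v^2 = (\beta\gamma)^2\sigma_a^2 + (\alpha\gamma)^2\sigma_b^2 + (\alpha\beta)^2\sigma_c^2, $$

or, in the relative form familiar from Taylor's book,

$$ \left(\frac{\sigma_v}{\alpha\beta\gamma}\right)^2 = \left(\frac{\sigma_a}{\alpha}\right)^2 + \left(\frac{\sigma_b}{\beta}\right)^2 + \left(\frac{\sigma_c}{\gamma}\right)^2. $$

The simpler ##\sigma_v = \sqrt{3}\,\sigma## form corresponds to the case where the factors ##\beta\gamma, \alpha\gamma, \alpha\beta## are all close to one in the working units; I'm assuming that is roughly the situation here, since the actual measurements aren't quoted.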
I know all that, but I still do not get the four-significant-figure count that you gave. I am perfectly happy to go with three significant figures, however.
BTW: if we do not take the first-order approximation in ##V## then we have quadratic and cubic terms in the ##\epsilon_k##, so the variance of ##\epsilon_v## is no longer given exactly by the simple sum-of-variances formula (and for a general nonlinear combination ##\epsilon_v## need not even have mean 0). If we wanted a good picture of the actual probability distribution of ##\epsilon_v##, we might have a sufficiently complicated and difficult probability problem that the best approach would be to use Monte-Carlo simulation.
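In that spirit, here is a minimal Monte-Carlo sketch. The "true" dimensions and the normal error model with ##\sigma = 0.01## are assumptions made only for illustration; the point is to compare the simulated spread of ##\epsilon_v## with the first-order prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" dimensions and a normal error model with
# sigma = 0.01 -- both are assumptions, since the original
# measurements and error distribution aren't specified.
alpha, beta, gamma = 1.52, 2.03, 1.88
sigma = 0.01
n = 1_000_000

eps_a = rng.normal(0.0, sigma, n)
eps_b = rng.normal(0.0, sigma, n)
eps_c = rng.normal(0.0, sigma, n)

V = (alpha + eps_a) * (beta + eps_b) * (gamma + eps_c)
eps_v = V - alpha * beta * gamma

# First-order (linearized) prediction for the spread of eps_v.
sigma_v_lin = sigma * np.sqrt((beta * gamma) ** 2 +
                              (alpha * gamma) ** 2 +
                              (alpha * beta) ** 2)

# For independent zero-mean errors the exact mean of eps_v is 0, so the
# sample mean should be zero up to Monte-Carlo noise.
print("mean of eps_v      :", eps_v.mean())
print("std of eps_v       :", eps_v.std())
print("first-order sigma_v:", sigma_v_lin)   # should agree closely with the std above
```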