SamBam77 said:
From learning about error analysis years ago, I vaguely remember something about weighting the measurement by the error.
Such a method is used. First, let's be clear on what the method calculates! Terms like "mean" and "standard deviation" are ambiguous. We have to distinguish between "mean of the population", "mean of the sample", and "an estimator of the mean of the population computed from the sample". Similar distinctions are required for the term "standard deviation". (Using unambiguous language is cumbersome, and even experts prefer ambiguity. They rely on context to make precise meanings clear.)
There are "estimators for the mean of the population" that are formulas which weight values of the sample by the reciprocals of the known population standard deviations of different processes that measure them.
The term "error" is vague. If you state a numerical "error" (like .002 cm) for the population mean of W, you might mean it is "an estimate of the standard deviation of an estimator for the mean of W". Alternatively, you might mean the "error" to be an absolute guarantee. For example "plus or minus .002 cm" might assert that the reported value is within that span from the actual value (with probability 1).
In your case, you haven't said that different measuring equipment is used on different objects, so I don't know how you would assign a different estimate of "error" (however we define it) to different measurements. If a measurement device has an "error" that is stated as a percent of its reported measurement, then bigger measurements tend to have bigger absolute errors. That would be a reason for asserting that different measurements come from populations that have different standard deviations.
Since the goal is to estimate the population mean of W from a formula W(a,b), one thing to consider is how to get an "unbiased" estimate. In mathematical statistics, a "statistic" is an algorithm or formula whose input is values from the sample. Following the tradition of using ambiguous language, a particular value of a statistic (like 2.43 children per family) is also called a "statistic". An "estimator" is a statistic in the sense of an algorithm. (Strictly speaking, a specific value of an estimator, like 2.43, should be called an "estimate", not an "estimator".) Since an estimator depends on the random values in a sample, it is a random variable and has its own mean and standard deviation, which are not necessarily equal to the mean and standard deviation of the population from which the samples are taken. In the happy situation where the mean of the estimator is equal to the mean of the population, we call the estimator "unbiased".
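To make "an estimator is a random variable" concrete, here is a small simulation sketch (my own illustration, with made-up population parameters): repeatedly drawing samples and computing the sample mean gives a distribution of estimates whose mean matches the population mean, which is exactly what "unbiased" asserts.

```python
import random
import statistics

random.seed(0)
POP_MEAN, POP_SD, N = 5.0, 2.0, 25  # assumed population and sample size

estimates = []
for _ in range(10_000):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    estimates.append(statistics.mean(sample))  # one value of the estimator

print(statistics.mean(estimates))   # close to POP_MEAN: unbiased
print(statistics.stdev(estimates))  # close to POP_SD / sqrt(N), not POP_SD
```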
In your example, it would be nice to know that the measurements a and b are unbiased estimators. For example, think of the measuring device as an estimator that performs some process on the sample. It would be nice to know that (theoretically) the mean value of the population of all possible measurements of a single given actual value of a is equal to that value - i.e. that the errors "cancel out".
Suppose the measurements of a and b are unbiased estimators of their respective population means. This does not imply that the estimator of the population mean of W based on the function W(a,b) is also unbiased. If W(a,b) is a linear function of its arguments, then a straightforward average of values of W(a,b) will be an unbiased estimator of the population mean of W. But if W(a,b) is not linear (for example, if it involves products ab, powers a^2, trig functions, etc.) then W(a,b) might be a biased estimator, as the sketch below illustrates. It is possible to create unbiased estimators from biased estimators, if enough specifics are known.
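Here is a sketch of how nonlinearity produces bias (my own example, assuming unbiased Gaussian measurement noise and W(a) = a^2): since E[a^2] = (E[a])^2 + Var(a), averaging the squared measurements overshoots the true square by the measurement variance.

```python
import random

random.seed(0)
TRUE_A, SIGMA = 3.0, 0.5  # assumed true value and measurement noise

measurements = [random.gauss(TRUE_A, SIGMA) for _ in range(100_000)]
naive = sum(a**2 for a in measurements) / len(measurements)

print(naive)                 # about 9.25, not the true 9.0
print(TRUE_A**2 + SIGMA**2)  # the predicted biased value
# If SIGMA is known, subtracting SIGMA**2 from each a**2 restores an
# unbiased estimator - one way to "create unbiased estimators from
# biased estimators" when enough specifics are known.
```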
From a mathematical point of view, there are many interesting aspects to real-world problems involving relatively straightforward physical measurements. However, I don't recall reading any article that pursued them to my taste. I suppose people dealing with those problems don't write for an audience fascinated by probability theory.