Error Propagation and Uncertainties in Parameters

SUMMARY

The discussion centers on the complexities of error propagation and uncertainty in calculating a derived quantity (W) from measured parameters (a and b). It highlights that while uncertainties in a and b can be quantified, the variability in W due to material imperfections complicates the estimation of an overall average value for W. The conversation emphasizes the importance of distinguishing between different types of means and standard deviations, and the necessity of understanding whether the measurements of a and b are unbiased, especially when W is a non-linear function of these parameters.

PREREQUISITES
  • Understanding of error propagation in measurements
  • Familiarity with statistical concepts such as mean, standard deviation, and estimators
  • Knowledge of non-linear functions and their implications in statistical analysis
  • Experience with measurement uncertainty and calibration of instruments
NEXT STEPS
  • Study the principles of error analysis in physical measurements
  • Learn about unbiased estimators and their significance in statistical inference
  • Explore methods for calculating weighted averages in the presence of measurement uncertainty
  • Investigate the implications of non-linear relationships in statistical modeling
USEFUL FOR

Researchers, statisticians, and engineers involved in experimental design and data analysis, particularly those dealing with measurement uncertainties and error propagation in derived quantities.

SamBam77
There is a quantity (W) that I would like to calculate that is, ultimately, a function of parameters that I can measure directly (a and b),
$$W = W(a, b)$$
But I cannot measure a and b perfectly, there will be some uncertainty in these measurements. This uncertainty will propagate into my calculated value of W. This part, I can do.
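For reference, the propagation step I mean is the usual first-order formula, assuming the uncertainties in a and b are small and independent:

$$\sigma_W^2 \approx \left(\frac{\partial W}{\partial a}\right)^2 \sigma_a^2 + \left(\frac{\partial W}{\partial b}\right)^2 \sigma_b^2$$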

My problem is that W, itself, will vary slightly over the sample being measured. There is uncertainty, so to speak, in W because of imperfections in the material unrelated to my measurements. Therefore, I can measure a and b once and get values a1 and b1, and calculate W1, with their respective errors. But then I can measure a and b again and get a2, b2, and then calculate a W2 ≠ W1.

If I repeat the measures of a and b many times, I will get many values for W. How, then, can I combine all these values for W into a single value and express it with some uncertainty on the overall average W?
 
The format of your data isn't clear. Let's say we have a population of objects and, for simplicity, say that each object has a single value of W. You are interested in the mean value of W for the population. Do your data include repeated measurements of the same object as well as measurements of different objects? What do you know about the statistical properties of the instruments used to measure a and b?
 
Stephen Tashi said:
The format of your data isn't clear. Let's say we have a population of objects and, for simplicity, say that each object has a single value of W. You are interested in the mean value of W for the population. Do your data include repeated measurements of the same object as well as measurements of different objects? What do you know about the statistical properties of the instruments used to measure a and b?
If we assume that each object only has a single W, then no, this would not include multiple measurements on the same object.
In reality, however, W can vary across a single object, and multiple measurements can be made on the same object to assess this possibility. But, for simplicity, we'll assume that different regions of the same 'real' object can be thought of as separate objects, each with a single W, and that I only perform a single measurement on each.

For the sake of discussion, let's say the two parameters are measured as follows:
Parameter a: the volume of a fluid measured in a graduated cylinder, with the volume estimated to one decimal place greater than the graduations printed on the device; there is uncertainty in the last decimal place.
Parameter b: the pH of a solution, measured with pH (litmus) paper, with the value of the pH estimated by comparing the color of the paper to a reference chart. The uncertainty of qualitatively comparing colors makes it easier to say that your value lies within a certain range.

From learning about error analysis years ago, I vaguely remember something about weighting the measurement by the error.
 
SamBam77 said:
From learning about error analysis years ago, I vaguely remember something about weighting the measurement by the error.

Such a method is used. First, let's be clear on what the method calculates! Terms like "mean" and "standard deviation" are ambiguous. We have to distinguish between "mean of the population", "mean of the sample", and "an estimator of the mean of the population computed from the sample". Similar distinctions are required for the term "standard deviation". (Using unambiguous language is cumbersome, and even experts prefer ambiguity; they rely on context to make precise meanings clear.)

There are "estimators for the mean of the population" that are formulas which weight values of the sample by the reciprocals of the known population standard deviations of different processes that measure them.

The term "error" is vague. If you state a numerical "error" (like .002 cm) for the population mean of W, you might mean it is "an estimate of the standard deviation of an estimator for the mean of W". Alternatively, you might mean the "error" to be an absolute guarantee. For example "plus or minus .002 cm" might assert that the reported value is within that span from the actual value (with probability 1).

In your case, you haven't said that different measuring equipment is used on different objects, so I don't know how you would assign a different estimate of "error" (however we define it) to different measurements. If a measurement device has an "error" that is stated as a percent of its reported measurement, then bigger measurements tend to have bigger absolute errors. That would be a reason for asserting that different measurements come from populations that have different standard deviations.

Since the goal is to estimate the population mean of W from a formula W(a,b), one thing to consider is how to get an "unbiased" estimate. In mathematical statistics, a "statistic" is an algorithm or formula whose input is values from the sample. Following the tradition of using ambiguous language, a particular value of a statistic (like 2.43 children per family) is also called a "statistic". An "estimator" is a statistic in the sense of an algorithm. (Strictly speaking, a specific value for an estimator, like 2.43, should be called an "estimate", not an "estimator".) Since an estimator depends on the random values in a sample, it is a random variable and has its own mean and standard deviation, which are not necessarily equal to the mean and standard deviation of the population from which the samples are taken. In the happy situation where the mean of the estimator is equal to the mean of the population, we call the estimator "unbiased".
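As a standard illustration of that distinction: if the estimator is the plain average $\bar{x}$ of $n$ independent values drawn from a population with mean $\mu$ and standard deviation $\sigma$, then

$$E[\bar{x}] = \mu, \qquad \operatorname{SD}(\bar{x}) = \frac{\sigma}{\sqrt{n}},$$

so the sample mean is an unbiased estimator of the population mean, and it has its own standard deviation, which shrinks as the sample grows.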

In your example, it would be nice to know that the measurements of a and b are unbiased estimators. For example, think of the measuring device as an estimator that performs some process on the sample. It would be nice to know that (theoretically) the mean value of the population of all possible measurements of a single given actual value of a is equal to that value - i.e. that errors "cancel out".

Suppose the measurements of a and b are unbiased estimators of their respective population means. This does not imply that the estimator of the population mean of W based on the function W(a,b) is also unbiased. If W(a,b) is a linear function of its arguments, then a straightforward average of values of W(a,b) will be an unbiased estimator for the population mean of W. But if W(a,b) is not linear (for example, if it involves products ab, powers a^2, trig functions, etc.) then W(a,b) might be a biased estimator. It is possible to create unbiased estimators from biased estimators, if enough specifics are known.
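To make the possible bias concrete, here is a toy Monte Carlo sketch (the form W = a^2 is chosen purely for illustration, not your actual W). The measurements of a are unbiased, yet the plain average of the computed values of W comes out high by about $\sigma_a^2$:

Code:
import numpy as np

# Toy model: the true a is 2, so the true W = a^2 = 4.
# Each measurement of a is unbiased but carries noise with sigma = 0.5.
rng = np.random.default_rng(0)
true_a, sigma = 2.0, 0.5
a_measured = true_a + sigma * rng.normal(size=100_000)

w_values = a_measured ** 2       # W computed from each noisy measurement
print(w_values.mean())           # roughly 4.25, not 4.0
print(true_a**2 + sigma**2)      # 4.25: the average is biased by about sigma^2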

From a mathematical point of view, there are many interesting aspects to real-world problems involving relatively straightforward physical measurements. However, I don't recall reading any article that pursued them to my taste. I suppose people dealing with those problems don't write for an audience fascinated by probability theory.
 
To be sure, the vocabulary can be confusing and I am sure I am misusing it plenty in this discussion.
There are "estimators for the mean of the population" that are formulas which weight values of the sample by the reciprocals of the known population standard deviations of different processes that measure them.

Well it sounds like this is what I want. I want to be able to report a value (estimate) for W, along with an “error” in this value, which I arrive at after studying many samples. Other people who see my value will know to expect to measure something close to my W on their own (completely independent) sample(s), possibly with a discrepancy of around my reported “error.” Either an “error” in the sense of an estimate of the standard deviation of the estimator of the mean of W, or a range of values into which a measurement of W is guaranteed to fall, is fine with me, as long as I know which one I am getting.

How do I know if the measurements of a and b are unbiased?
The estimator of W is not a linear function of a and b; there are some non-linear elements in it.

So I guess what you are saying is that this is actually a complicated thing to do correctly?
 
SamBam77 said:
Well it sounds like this is what I want. I want to be able to report a value (estimate) for W, along with an “error” in this value, which I arrive at after studying many samples. Other people who see my value will know to expect to measure something close to my W on their own (completely independent) sample(s), possibly with a discrepancy of around my reported “error.”

I like the way you defined your goal because, whether you intended to or not, it emphasizes that you want a psychological outcome - "people's expectations". Applying statistics to real life problems is highly subjective. If you want to persuade an audience of people (including yourself) then you should study what they accept as evidence. For example, if you are writing a report for a journal, or thesis committee, or company management, you should look at other publications they have approved or written themselves.

Either an “error” in the sense of an estimate of the standard deviation of the estimator of the mean of W, or a range of values into which a measurement of W is guaranteed to fall, is fine with me, as long as I know which one I am getting.

Which of those you can do in a mathematical fashion depends on what you know about errors in measuring a and b. The typical situation is that there are some manufacturers specifications for measuring equipment and some "calibration sticker". It's also typical that nobody can tell you exactly what these specifications mean - are the errors they report estimates of standard deviations or are they absolute guarantees?

How do I know if the measurements of a and b are unbiased?

The sociology of this is that most technicians assume that properly functioning measuring equipment provides unbiased estimates. Taking a scientific approach depends on what you are measuring - on what tests of the equipment you could make yourself.

The estimator of W is not a linear function of a and b; there are some non-linear elements in it.

We'd have to look at the specific form of W to see if it needs "unbiasing". For example, if the errors in measuring a and b are uncorrelated then the product term ab wouldn't introduce any bias.
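In symbols, writing the measurements as $a + \varepsilon_a$ and $b + \varepsilon_b$ with zero-mean, uncorrelated errors:

$$E[(a + \varepsilon_a)(b + \varepsilon_b)] = ab + a\,E[\varepsilon_b] + b\,E[\varepsilon_a] + E[\varepsilon_a \varepsilon_b] = ab,$$

so averaging measured products converges to the true product. A squared term such as $a^2$, by contrast, picks up an extra $E[\varepsilon_a^2] = \sigma_a^2$ and would need a correction.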

So I guess what you are saying is that this is actually a complicated thing to do correctly?

It's complicated to do correctly to the standard of mathematical precision, but most people don't use that standard when facing practical problems involving probability.
 
