How do we account for multiple sources of uncertainty in a measurement?

  • Thread starter: i_love_science
  • Tags: Uncertainties

Discussion Overview

The discussion revolves around how to account for multiple sources of uncertainty in measurements, specifically focusing on temperature measurements using a thermometer with a defined precision. Participants explore the implications of significant figures in reporting uncertainties and the nature of measurement errors.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants suggest that the absolute uncertainty in a temperature increase measured with a thermometer should be reported to one significant figure, typically as +/- 1 degree Celsius.
  • Others argue that the significant figures in uncertainties may depend on the first digit of the uncertainty, referencing Benford's Law as a potential influence.
  • One participant emphasizes that the measured temperature should be reported to the same number of decimal places as the uncertainty, which could affect how the temperature change is expressed.
  • Another participant discusses the implications of significant figures, noting that reporting a temperature with a decimal point implies a different level of precision than without it, which could lead to ambiguity in the measurement's significance.
  • Some participants raise the idea of interpreting the uncertainty in terms of standard deviation, questioning whether the uncertainty could be treated as a normal distribution or if it reflects quantization errors due to rounding.
  • There is acknowledgment that real-world measurements often involve multiple, complex errors that may not be easily characterized or independent.

Areas of Agreement / Disagreement

Participants express differing views on how to report uncertainties and the nature of measurement errors. There is no consensus on the best approach to account for multiple sources of uncertainty, and the discussion remains unresolved.

Contextual Notes

Participants note that the treatment of uncertainties may depend on specific definitions and assumptions about the nature of the measurement errors, which are not fully resolved in the discussion.

i_love_science
A thermometer which can be read to a precision of +/- 0.5 degrees Celsius is used to measure a temperature increase from 30.0 degrees Celsius to 50.0 degrees Celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sig-fig rules for addition and subtraction also apply to uncertainties?
For the example above, would the uncertainty be +/- 1 degree Celsius (retaining one sig fig only, i.e. not applying sig-fig rules to the uncertainty), or would it be +/- 1.0 degrees Celsius (retaining 2 sig figs / 1 decimal place, i.e. applying sig-fig/decimal-place rules to the uncertainty)?

Thank you.
 
The uncertainty you quote with the measurement is just an estimate of the true uncertainty in the measurement, so it should only be given to 1 significant figure in most cases.

There is a slight exception if the first digit of the uncertainty begins with a ##1## (or sometimes a ##2##), in which case you might sometimes include a second significant figure in the quoted uncertainty (this is a consequence of Benford's Law).

Here I would probably use ##\pm 1\ ^\circ C##.

Edit: Also, N.B. that the measurement should be quoted to the same number of decimal places as the uncertainty!
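The rounding convention described above (one significant figure in the uncertainty, two if its leading digit is a 1, with the value rounded to the matching decimal place) can be sketched as a small helper. This is an illustrative function of my own; the name and interface are not from the thread:

```python
from math import floor, log10

def round_uncertainty(value, unc):
    """Round an uncertainty to 1 significant figure (2 if it starts
    with a 1), then round the value to the same decimal place.
    Hypothetical helper, not from the thread."""
    exp = floor(log10(abs(unc)))            # order of magnitude of the uncertainty
    first_digit = int(abs(unc) / 10**exp)   # leading digit of the uncertainty
    sig = 2 if first_digit == 1 else 1      # keep an extra figure if it leads with 1
    decimals = -(exp - (sig - 1))           # decimal place to round to
    return round(value, decimals), round(unc, decimals), decimals

# A raw result of 123.456 +/- 0.237 would be quoted as 123.5 +/- 0.2
print(round_uncertainty(123.456, 0.237))
```

Note that by this convention the thread's combined uncertainty of 1.0 (leading digit 1) keeps its second figure, which is why one sometimes sees ##\pm 1.0## rather than ##\pm 1##.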
 
i_love_science said:
A thermometer which can be read to a precision of +/- 0.5 degrees Celsius is used to measure a temperature increase from 30.0 degrees Celsius to 50.0 degrees Celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sig-fig rules for addition and subtraction also apply to uncertainties?
For the example above, would the uncertainty be +/- 1 degree Celsius (retaining one sig fig only, i.e. not applying sig-fig rules to the uncertainty), or would it be +/- 1.0 degrees Celsius (retaining 2 sig figs / 1 decimal place, i.e. applying sig-fig/decimal-place rules to the uncertainty)?

Thank you.

Thank you.
When you say the thermometer can only be read to +/- 0.5 degrees, then you can only report the measured temperature to the nearest whole degree. In this case you would report the temperature change as from 30. degrees to 50. degrees, or a temperature change of 20. degrees. The decimal point makes the trailing zero significant. If you add the next zero then you are implying that the precision is +/- 0.05 degrees. Without the decimal point the trailing zero is ambiguous and would not be considered significant.

Looking further at this case, the lower measured temperature of 30. degrees implies that the actual temperature is somewhere between 29.5 and 30.5 degrees, and the 50. degree measurement implies an actual temperature between 49.5 and 50.5 degrees. So the largest possible temperature change is from 29.5 to 50.5, or 21 degrees, and the smallest possible change is from 30.5 to 49.5, or 19 degrees, for a total uncertainty of 2 degrees, i.e. +/- 1 degree.
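The bracketing argument above can be checked with a tiny interval computation (a sketch; the variable names are mine, not from the thread):

```python
# Each reading implies a half-degree window on either side of the quoted value.
t_low = (29.5, 30.5)     # interval containing the true lower temperature
t_high = (49.5, 50.5)    # interval containing the true higher temperature

# Smallest and largest possible temperature changes
min_change = t_high[0] - t_low[1]   # 49.5 - 30.5 = 19.0
max_change = t_high[1] - t_low[0]   # 50.5 - 29.5 = 21.0

half_width = (max_change - min_change) / 2   # +/- 1 degree
print(min_change, max_change, half_width)   # 19.0 21.0 1.0
```

This is just the familiar rule that absolute uncertainties add under subtraction: 0.5 + 0.5 = 1.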
 
I'm an adult physics student and am just now learning probability, so everything looks like a probability question, especially because you used the word "uncertainty".

Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2*sd = 2*0.25 degrees? So that, at a 95% confidence level (alpha = 0.05), the measurement 30 will be within 29.5 to 30.5? Too deep for me.

I'm sure I'm getting things wrong here. In real life, they tell you which calibrator to use to check that your thermometer isn't getting damaged, and there is a specified period, I think once a year, when you have to recalibrate your thermometers.

Then you read your thermometer at the last little hash mark carved into the side and trust your eyes: is it closer to 30 than to 29 or 31?

It's said to be 30. In practice, lab professionals who use their results for treating patients just say 30. Doctors say 98.6.
 
fdegregorio said:
Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2*sd = 2*0.25 degrees?
It depends.

If the error were a random measurement error characterized by a normal distribution, for instance, then we might take the 0.5 degree error as indicating the standard deviation of that error distribution.

But it seems far more likely that we are talking about a quantization error involved in rounding the actual measurement to the nearest whole degree. In this case the error will have a flat (uniform) distribution with a cut-off at either end. Meanwhile, under this same assumption (and assuming independence), the error distribution for the difference is going to be triangular, with a peak in the middle. [The assumption of independence is questionable here.]

Of course, the real world truth is somewhat messier. Often, one has multiple errors that are not all well known, individually identified or equipped with simple or independent distributions.
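The quantization picture described above can be illustrated with a quick Monte Carlo simulation: two independent rounding errors, each uniform on [-0.5, 0.5], give a difference whose distribution piles up triangularly around zero. This is only a sketch under the independence assumption the post itself flags as questionable:

```python
import random

random.seed(0)
N = 100_000

# Independent quantization (rounding) errors, each uniform on [-0.5, 0.5]
err_low = [random.uniform(-0.5, 0.5) for _ in range(N)]
err_high = [random.uniform(-0.5, 0.5) for _ in range(N)]

# Error in the temperature *difference*: spans [-1, 1], peaked at 0
diff_err = [h - l for h, l in zip(err_high, err_low)]

# Crude check of the triangular shape: the density 1 - |x| predicts
# ~19% of samples with |error| < 0.1 but only ~1% with |error| > 0.9.
near_zero = sum(1 for d in diff_err if abs(d) < 0.1)
near_edge = sum(1 for d in diff_err if abs(d) > 0.9)
print(near_zero, near_edge)
```

If the two rounding errors were instead perfectly correlated (say, a miscalibrated scale), the errors would cancel in the difference, which is one way the independence assumption matters.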
 
