Sigfigs and Uncertainties

  • #1
A thermometer which can be read to a precision of +/- 0.5 degrees Celsius is used to measure a temperature increase from 30.0 degrees Celsius to 50.0 degrees Celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sigfig rules for addition and subtraction also apply to uncertainties?
For the example above, would the uncertainty be +/- 1 degree Celsius (retaining one sigfig only, i.e. not applying sigfig rules to the uncertainty), or would it be +/- 1.0 degrees Celsius (retaining 2 sigfigs / 1 decimal place, i.e. applying sigfig/decimal-place rules to the uncertainty)?

Thank you.
 

Answers and Replies

  • #2
etotheipi
The uncertainty you quote with a measurement is itself just an estimate of the true uncertainty in the measurement, so in most cases it should be given to only one significant figure.

There is a slight exception when the first digit of the uncertainty is a ##1## (or sometimes a ##2##), in which case you might include a second significant figure in the quoted uncertainty (this is a consequence of Benford's Law).

Here I would probably use ##\pm 1 ^o C##.

Edit: Also, N.B. that the measurement should be quoted to the same number of decimal places as the uncertainty!
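The rounding convention described above (uncertainty to one significant figure, value to the same decimal place) can be sketched in a few lines of Python. The helper name `quote` and the example numbers are made up for illustration, and the leading-digit-##1## exception is deliberately not handled:

```python
import math

def quote(value, uncertainty):
    """Hypothetical helper: round the uncertainty to one significant
    figure, then round the value to the same decimal place."""
    exp = math.floor(math.log10(abs(uncertainty)))  # place of the leading digit
    decimals = max(0, -exp)                         # decimal places to print
    u = round(uncertainty, -exp)                    # 1 significant figure
    v = round(value, -exp)                          # matching decimal place
    return f"{v:.{decimals}f} +/- {u:.{decimals}f}"

print(quote(20.0, 1.0))       # -> 20 +/- 1
print(quote(3.14159, 0.023))  # -> 3.14 +/- 0.02
```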
 
Last edited:
  • #3
When you say the thermometer can only be read to +/- 0.5 degrees, then you can only report the measured temperature to the nearest whole degree. In this case you would report the temperature change as from 30. degrees to 50. degrees, or a temperature change of 20. degrees. The decimal point makes the trailing zero significant. If you add the next zero then you are implying that the precision is +/- 0.05 degrees. Without the decimal point the trailing zero is ambiguous and would not be considered significant.

Looking further at this case, the lower measured temperature of 30. degrees implies that the actual temperature is somewhere between 29.5 and 30.5 degrees and the 50. degree measurement implies an actual temperature between 49.5 and 50.5 degrees. So the actual temperature change could be a maximum of 29.5 to 50.5 or 21 degrees and the minimum possible change would be from 30.5 to 49.5 or 19 degrees for a total uncertainty of 2 degrees or +/- 1 degree.
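The interval reasoning above can be checked directly in a few lines of Python (a sketch only, using the numbers from the example):

```python
# Each reading of 30. and 50. degrees stands for an interval of +/- 0.5 degrees.
low1, high1 = 30.0 - 0.5, 30.0 + 0.5   # true initial temperature lies here
low2, high2 = 50.0 - 0.5, 50.0 + 0.5   # true final temperature lies here

max_change = high2 - low1               # 50.5 - 29.5 = 21.0 degrees
min_change = low2 - high1               # 49.5 - 30.5 = 19.0 degrees
half_width = (max_change - min_change) / 2   # +/- 1.0 degree

print(max_change, min_change, half_width)    # -> 21.0 19.0 1.0
```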
 
  • #4
I'm an adult physics student and am just now learning probability, so everything looks like a probability question, especially because you used the word "uncertainty".

Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2*sd = 2*0.25 degrees? So that, at a significance level of alpha = 0.05, the measurement of 30 will fall within 29.5 to 30.5 about 95% of the time? Too deep for me.

I'm sure I'm getting things wrong here. In real life, they tell you which calibrator to use to check that your thermometer isn't getting damaged. There is a specified time period, I think once a year, when you have to recalibrate your thermometers.

Then you read your thermometer at the last little hash mark carved into the side and trust your eyes: if it's closer to 30 than to 29 or 31, it's said to be 30. In practice, lab professionals who use their results treating patients just say 30, and doctors say 98.6.
 
  • #5
jbriggs444
Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2*sd = 2*0.25 degrees?
It depends.

If the error were a random measurement error characterized by a normal distribution, for instance, then we might take the 0.5 degree error as indicating the standard deviation of that error distribution.

But it seems far more likely that we are talking about a quantization error involved in rounding the actual measurement to the nearest whole degree. In that case the error distribution is flat, with a cut-off at either end. Meanwhile, under this same assumption (and assuming independence), the error distribution for the difference is going to be a triangular shape with a peak in the middle. [The assumption of independence is questionable here.]

Of course, the real world truth is somewhat messier. Often, one has multiple errors that are not all well known, individually identified or equipped with simple or independent distributions.
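A quick Monte Carlo sketch (assuming, as above, independent quantization errors, each uniform on [-0.5, 0.5]) shows the difference of two such errors piling up in the middle rather than staying flat:

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
N = 100_000

# Difference of two independent uniform(-0.5, 0.5) quantization errors.
diffs = [random.uniform(-0.5, 0.5) - random.uniform(-0.5, 0.5)
         for _ in range(N)]

# The difference spans [-1, 1]. A flat distribution on that range would
# put 25% of samples in |d| < 0.25; the triangular peak puts more mass
# near zero (exactly 43.75% in the limit).
near_zero = sum(abs(d) < 0.25 for d in diffs) / N
print(round(near_zero, 2))  # roughly 0.44 (a flat distribution would give 0.25)
```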
 
