Sigfigs and Uncertainties

A thermometer that can be read to a precision of +/- 0.5 degrees Celsius is used to measure a temperature increase from 30.0 degrees Celsius to 50.0 degrees Celsius.
What is the absolute uncertainty in the measurement of the temperature increase?

Do sig-fig rules for addition and subtraction also apply to uncertainties?
For the example above, would the uncertainty be +/- 1 degree Celsius (retaining one sig fig only, i.e. not applying sig-fig rules to the uncertainty), or would it be +/- 1.0 degrees Celsius (retaining two sig figs / one decimal place, i.e. applying sig-fig/decimal-place rules to the uncertainty)?

Thank you.

etotheipi
The uncertainty you quote with a measurement is itself only an estimate of the true uncertainty, so in most cases it should be given to just one significant figure.

There is a slight exception if the first digit of the uncertainty begins with a ##1## (or sometimes a ##2##), in which case you might sometimes include a second significant figure in the quoted uncertainty (this is a consequence of Benford's Law).

Here I would probably use ##\pm 1 ^o C##.

Edit: Also, N.B. that the measurement should be quoted to the same number of decimal places as the uncertainty!
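That convention (round the uncertainty to one significant figure, then round the value to the same decimal place) is mechanical enough to sketch in code. This is only an illustration; `quote_measurement` is a hypothetical helper, not a standard function:

```python
import math

def quote_measurement(value, uncertainty, sig_figs=1):
    """Round the uncertainty to sig_figs significant figures,
    then round the value to the same decimal place."""
    exponent = math.floor(math.log10(abs(uncertainty)))
    decimals = sig_figs - 1 - exponent  # decimal places to keep
    u = round(uncertainty, decimals)
    v = round(value, decimals)
    if decimals <= 0:
        return f"{v:.0f} +/- {u:.0f}"
    return f"{v:.{decimals}f} +/- {u:.{decimals}f}"

print(quote_measurement(30.0, 0.5))  # -> 30.0 +/- 0.5
print(quote_measurement(20.0, 1.0))  # -> 20 +/- 1
```

Note how a +/- 1 uncertainty forces the value to be quoted to the nearest whole degree, exactly as the edit above says.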

When you say the thermometer can only be read to +/- 0.5 degrees, then you can only report the measured temperature to the nearest whole degree. In that case you would report the temperature change as from 30. degrees to 50. degrees, i.e. a change of 20. degrees. The decimal point makes the trailing zero significant. If you added the next zero, you would be implying that the precision is +/- 0.05 degrees. Without the decimal point the trailing zero is ambiguous and would not be considered significant.

Looking further at this case: the lower measured temperature of 30. degrees implies that the actual temperature is somewhere between 29.5 and 30.5 degrees, and the 50. degree measurement implies an actual temperature between 49.5 and 50.5 degrees. So the actual temperature change could be at most 50.5 - 29.5 = 21 degrees, and at least 49.5 - 30.5 = 19 degrees, for a total spread of 2 degrees, i.e. +/- 1 degree.
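That min/max interval reasoning can be checked with a short Python sketch (the variable names are just for illustration):

```python
def interval(x, u):
    # A reading x with uncertainty +/- u spans [x - u, x + u].
    return (x - u, x + u)

lo1, hi1 = interval(30.0, 0.5)  # initial temperature
lo2, hi2 = interval(50.0, 0.5)  # final temperature

# Largest possible rise: highest final minus lowest initial.
max_rise = hi2 - lo1   # 50.5 - 29.5 = 21.0
# Smallest possible rise: lowest final minus highest initial.
min_rise = lo2 - hi1   # 49.5 - 30.5 = 19.0

half_width = (max_rise - min_rise) / 2
print(max_rise, min_rise, half_width)  # 21.0 19.0 1.0
```

The half-width of the possible range is 1 degree, which is why the individual +/- 0.5 uncertainties add to +/- 1 for the difference.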

I'm an adult physics student and am just now learning probability, so everything looks like a probability question, especially because you used the word "uncertainty".

Could 30 degrees +/- 0.5 degrees on a well-calibrated thermometer mean 30 degrees +/- 2 sd, with sd = 0.25 degrees? So that, at a confidence level of alpha = 0.05, i.e. 95% of the time, the measurement 30 will be within 29.5 to 30.5? Too deep for me.
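That 2-sigma reading can at least be sanity-checked numerically. Assuming, as a loose model (the thread itself doesn't establish this), that the +/- 0.5 degree bound is a 2-sigma limit on a normally distributed reading error, the coverage probability follows from the error function:

```python
import math

def coverage(k):
    """Probability that a normally distributed variable falls
    within k standard deviations of its mean: erf(k / sqrt(2))."""
    return math.erf(k / math.sqrt(2))

sd = 0.25             # assumed: +/- 0.5 interpreted as a 2-sigma bound
print(coverage(2.0))  # ~0.954, close to the 95% mentioned above
```

So a 2-sigma interval actually covers about 95.4%, not exactly 95% (the exact 95% interval is about +/- 1.96 sd); under this assumed model the two statements are roughly, not precisely, equivalent.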

I'm sure I'm getting things wrong here. In real life, they tell you which calibrator to use to check that your thermometer isn't getting damaged. There is a specified time period, I think once a year, when you have to recalibrate your thermometers.

Then you read your thermometer against the little hash marks carved into the side and trust your eyes: if it's closer to 30 than to 29 or 31, it's said to be 30. In practice, lab professionals who use their results to treat patients just say 30, and doctors say 98.6.

jbriggs444