
fog37


I was recently pondering significant figures and uncertainty, reminding myself that there is no perfect measurement: every measurement carries an error introduced by the instrument and/or the operator.

A measurement should be repeated as many times as possible, not performed just once. The arithmetic average of the repeated measurements gives the best value, and the standard deviation of the collected measurements becomes the error.

**(Caveat: when all the collected measurements are identical, the instrument sensitivity is used as the error instead of the standard deviation.)** Fundamentally, a measurement is properly represented as an interval of possible values: $$A\pm \Delta{A}$$
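To make this concrete, here is a minimal sketch of the procedure above, using made-up readings and an assumed instrument sensitivity:

```python
import statistics

# Hypothetical repeated length measurements, in cm (illustrative values)
readings = [12.3, 12.5, 12.4, 12.6, 12.4]

best_value = statistics.mean(readings)    # arithmetic average = best value
uncertainty = statistics.stdev(readings)  # sample standard deviation = error

# If every reading were identical, the standard deviation would be zero;
# fall back to the instrument sensitivity (assumed 0.1 cm here) as the error.
INSTRUMENT_SENSITIVITY = 0.1
if uncertainty == 0:
    uncertainty = INSTRUMENT_SENSITIVITY

print(f"A = {best_value:.2f} ± {uncertainty:.2f} cm")
```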

**Example:** we want to add two measurements, ##A\pm\Delta{A}## and ##B\pm\Delta{B}##. The final answer should be ##(A+B) \pm (\Delta{A}+\Delta{B})##: the uncertainties simply add in this case. When we compute ##(A+B)##, the answer should have as many decimals as the addend with the fewest decimals, correct?
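As a sketch, the worst-case (interval) addition rule above looks like this, with made-up numbers:

```python
def add_measurements(a, da, b, db):
    """Add A ± dA and B ± dB; in the worst case the uncertainties add."""
    return a + b, da + db

# Hypothetical measurements: A = 12.44 ± 0.11 and B = 5.2 ± 0.3
value, err = add_measurements(12.44, 0.11, 5.2, 0.3)

# B has only one decimal, so the sum is quoted to one decimal as well.
print(f"(A + B) = {value:.1f} ± {err:.1f}")
```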

When significant figures are first introduced in physics and chemistry books, we learn the general rules for addition, subtraction, multiplication, and division, which tell us how many sig figs and decimals the final answer should have. But there is no discussion of how to handle and manipulate the uncertainties associated with the numbers involved. Why? The measurements are presented without their uncertainty term ##\pm \Delta {A}##; we only learn that the rightmost digit is significant but also doubtful and uncertain. Any number should always be accompanied by its uncertainty term ##\pm \Delta {A}##. Is the assumption that the uncertainties are baked into the last significant figure? I guess those rules just give us a way to combine the numbers but completely neglect the uncertainty of each measurement and the uncertainty of the final answer...
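One way to phrase the "baked-in" reading of the question above: a bare number is often taken to mean ± half a unit of its last quoted digit (e.g. 5.2 means 5.2 ± 0.05). Under that assumption, a small sketch shows why the decimals rule for addition lines up with propagating the implicit uncertainties:

```python
# Assumption (not from the original post): a bare number carries an implicit
# uncertainty of ± half a unit in its last quoted decimal place.

def implicit_uncertainty(decimals):
    """Half a unit in the last quoted decimal place."""
    return 0.5 * 10 ** (-decimals)

a, a_decimals = 12.44, 2   # 12.44 -> implicitly ± 0.005
b, b_decimals = 5.2, 1     # 5.2   -> implicitly ± 0.05

total = a + b
total_err = implicit_uncertainty(a_decimals) + implicit_uncertainty(b_decimals)

# The propagated error (0.055) sits in the first decimal place, which is
# exactly where the sig-fig rule tells us to stop: keep one decimal.
print(f"(A + B) = {total:.1f} ± {total_err:.3f}")
```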