# Significant figures and uncertainty in measurements

Hello,
I was recently pondering significant figures and uncertainty, reminding myself that there is no perfect measurement: every measurement involves an error caused by the instrument and/or the operator.

A measurement should be executed as many times as possible, not just once. The arithmetic average of those measurements gives the best value, and the standard deviation of the collected measurements becomes the error (caveat: the error can also be the instrument sensitivity instead of the standard deviation, when all the collected measurements are the same). Fundamentally, a measurement is properly represented as an interval of possible values: $$A\pm \Delta{A}$$
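The procedure described above (repeat the measurement, report the mean plus or minus the spread) can be sketched in a few lines of Python; the readings below are invented example data, not real measurements:

```python
import statistics

# Minimal sketch of "mean ± standard deviation" for repeated measurements.
# The readings (in mm) are invented for illustration.
readings = [102.1, 101.9, 102.0, 102.2, 101.8]
best = statistics.mean(readings)
err = statistics.stdev(readings)  # sample standard deviation
print(f"{best:.1f} ± {err:.1f} mm")  # prints "102.0 ± 0.2 mm"
```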
Example: we want to add two measurements, ##A\pm\Delta{A}## and ##B\pm\Delta{B}##. The final answer should be ##(A+B) \pm (\Delta{A}+\Delta{B})##. The uncertainties simply add in this case. When we add ##(A+B)##, the answer should have as many decimals as the addend with the least number of decimals, correct?

When significant figures are first introduced in physics and chemistry books, we learn the general rules for addition, subtraction, multiplication, and division, which tell us how many sig figs and decimals the final answer should have. But there is no discussion of how to handle and manipulate the uncertainties associated with the involved numbers. Why? The measurements are presented without their uncertainty term ##\pm \Delta {A}##. We only learn that the rightmost digit is significant but also doubtful and uncertain. Any number should always be accompanied by its uncertainty term ##\pm \Delta {A}##. Is the assumption that the uncertainty is baked into the last significant figure? I guess those rules just provide us with a way to combine the numbers but completely neglect the uncertainty of each measurement and the uncertainty of the final answer...

.Scott
Homework Helper
There are some serious problems with your description of how measurement error is handled and described.
The topic is called "Measurement System Analysis". See https://en.wikipedia.org/wiki/Measurement_system_analysis
That will give you other links. Of particular interest is ANOVA:
https://en.wikipedia.org/wiki/ANOVA_gauge_R&R
https://www.spss-tutorials.com/anova-what-is-it/

ANOVA deals with the measurement of items that are expected to be the same - such as products coming off an assembly line. So it is a more general case than the measurements you described. But it certainly shows the shortcomings of describing a measurement simply with a number of significant digits or within a range.

Describing the uncertainty as ##\pm\Delta A## can provide more information than just the number of significant digits, but it does not provide a full description of the error. Better is to indicate the range within a specified standard deviation - but even that may not be appropriate in cases where the error distribution is not Gaussian.

Measurement set result 1: ##A \pm\Delta A##
Measurement set result 2: ##B \pm\Delta B##
Total of A and B: ??
The ##\pm\Delta##'s likely represent the range where some percentage of the measurements will lie. You cannot simply add ##\Delta A## and ##\Delta B## together.

anorlunda
The arithmetic average of those measurements gives the best value. The standard deviation of the collected measurements becomes the error
Actually, the standard deviation of the sample indicates the error in each measurement, not the error in the best value as indicated by the mean. With a few constraints on the underlying distribution which are almost always met (first and second moments exist; see the central limit theorem), the standard deviation of the mean is smaller by the square root of the number of measurements.
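The distinction above can be made concrete in a couple of lines; the readings are invented example data:

```python
import statistics

# The uncertainty of the *mean* is the sample standard deviation
# divided by sqrt(n), the standard error of the mean.
readings = [102.1, 101.9, 102.0, 102.2, 101.8]  # invented example data
n = len(readings)
sd = statistics.stdev(readings)  # spread of a single measurement
sem = sd / n ** 0.5              # standard error of the mean
print(sd, sem)
```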

Example: we want to add two measurements, ##A\pm\Delta{A}## and ##B\pm\Delta{B}##. The final answer should be ##(A+B) \pm (\Delta{A}+\Delta{B})##. The uncertainties simply add in this case.

No, that is not correct.

First, I should note that your premise implies that the error of each measurement is an interval over which the probability is uniform. That is rarely the case. However that’s ok. It could be true, and it is terrifically useful for explaining why the uncertainties don’t just add. However, I want to note that this explanation generalizes to other probability functions.

So, suppose the probabilities of A and B are uniform as you indicate. You are correct that the possible values for the sum range from ##-(\Delta A + \Delta B)## to ##+(\Delta A + \Delta B)##, but are all those values equally probable? For the sum to be off by ##\Delta A + \Delta B##, A had to be off by ##+\Delta A## and B by ##+\Delta B##. There is exactly one way to get that result. However, for the sum to be off by 0, A could have been off by ##+\Delta A## and B by ##-\Delta B##, or A by ##-\Delta A## and B by ##+\Delta B##, and similar pairs exist at every value in between. (In that argument I assumed ##\Delta A## and ##\Delta B## are the same, but it can be generalized.) There are many more ways for zero error to happen than for the extremes to happen. The probability distribution for A+B is not uniform - in fact it's a triangle. Nor would you say its width is ##\Delta A + \Delta B##; not being uniform, it's tough to compare the widths. What should we use? Full width at half maximum? By that measure the error in the sum is no bigger than in each measurement. However, if you care to calculate the standard deviation, you will find it is ##\sqrt{2}## wider. The errors don't just add; they add in quadrature. How can we be sure? See the central limit theorem.
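The argument above is easy to check by simulation; the half-widths below are arbitrary illustration values:

```python
import random
import statistics

# Sum two uniform errors and watch the standard deviation grow by
# sqrt(2), not by 2. Half-widths dA = dB = 1 are arbitrary choices.
N = 100_000
dA = dB = 1.0
errs_A = [random.uniform(-dA, dA) for _ in range(N)]
errs_B = [random.uniform(-dB, dB) for _ in range(N)]
errs_sum = [a + b for a, b in zip(errs_A, errs_B)]  # triangular distribution

sd_A = statistics.stdev(errs_A)      # ≈ dA / sqrt(3) for a uniform error
sd_sum = statistics.stdev(errs_sum)
print(sd_sum / sd_A)                 # ≈ 1.414, i.e. sqrt(2)
```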

Also note that the uniform distribution became a triangle. Add in some more measurements and the probability distribution quickly transforms into a Gaussian. This is why the Gaussian is the standard distribution: when averaging a bunch of things, the distribution of the result becomes Gaussian regardless of the underlying distribution (with a few caveats), and almost everything we measure is really an average of a lot of underlying variations.

BTW, pseudo-random number generators produce uniformly distributed numbers on the (0,1) interval. If you ever need Gaussian-distributed random numbers, just average 8 or 10 of them.
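A common variant of that central-limit trick sums 12 uniform numbers and subtracts 6, which gives mean 0 and variance exactly 1 (since a uniform(0,1) variate has variance 1/12):

```python
import random
import statistics

# Approximate a standard Gaussian by summing k uniform(0,1) numbers.
# With k = 12, the sum has mean 6 and variance 12 * (1/12) = 1,
# so subtracting k/2 centers it at 0 with unit variance.
def approx_gauss(k=12):
    return sum(random.random() for _ in range(k)) - k / 2

samples = [approx_gauss() for _ in range(100_000)]
print(statistics.mean(samples), statistics.stdev(samples))  # ≈ 0, ≈ 1
```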

But there is no discussion on how to handle and manipulate the uncertainties associated to the involved numbers. Why?

Significant digits are an approximate way to do exactly what you are asking about. The assumption is that the number also represents the uncertainty: the uncertainty is assumed to be ##\pm## 1/2 of the final digit. Add or subtract, and proper error propagation would say the error is ##\sqrt{2}## bigger, but that is still in the same digit, so keeping the same digit is about right. When multiplying, if one value is much less precise than the other, proper error propagation gives the same result as significant digits; if they are the same order, proper error propagation would only increase the error bar by ##\sqrt{2}##. Etc. So for simple calculations, significant digits are a pretty good first approximation of proper error propagation.

Stephen Tashi
Hello,
I was recently pondering on significant figures and uncertainty reminding myself that there is no perfect measurement: every measurement involves an error caused by the instrument and/or the operator.

If you are formulating a mathematical model of how to represent errors in measurements, then (as the saying goes) you are "opening a can of worms". It's certainly possible to invent reasonable models of errors in measurements. The complications come when your model collides with different world views.

A basic problem is terminology. To different people, words such as "error bar", "uncertainty", "significant figures" , "##\pm 0.48##" can have different interpretations and connections to mathematical models - or, often, no specified connection to a mathematical model.

For example, it appears you yourself use the notation "##\triangle A##" ambiguously. On the one hand it may indicate the standard deviation of a random variable; on the other, it may indicate a guaranteed limit of accuracy for the measuring device.

In discussing calculations, at the extreme end of the spectrum you run into those who take a bureaucratic point of view - namely that procedures are defined by certain documented standards - e.g. https://www.nist.gov/services-resources/standards-and-measurements.

If you want a set of procedures based on a mathematical model, you will have to be specific about the meaning of ##\triangle##. For example, if we interpret the ##\triangle## to mean the standard deviation of a random variable, then the situation for two independent random variables is ##( \triangle (A+B))^2 = (\triangle A)^2 + (\triangle B)^2 ##. It is the variances that are additive, not the standard deviations.
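The quadrature rule above can be verified numerically; the sigma values 0.3 and 0.4 are arbitrary illustration choices:

```python
import random
import statistics

# Check that variances, not standard deviations, add for independent
# errors: sigma_{A+B}^2 = sigma_A^2 + sigma_B^2.
sA, sB = 0.3, 0.4  # arbitrary illustration values
N = 200_000
sums = [random.gauss(0, sA) + random.gauss(0, sB) for _ in range(N)]
print(statistics.stdev(sums))  # ≈ sqrt(0.3**2 + 0.4**2) = 0.5, not 0.7
```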

Mathematical statistics is a conceptually complicated topic. For example, you say "The arithmetic average of those measurements gives the best value". How will you define "best" value mathematically? You are entering into the field of statistical estimation and "estimators". (E.g. various concepts of "best" - unbiased estimators, least squares estimators, minimum variance estimators)

Mister T
Gold Member
When significant figures are first introduced in physics and chemistry books, we learn the general rules for addition, subtraction, multiplication, division teaching us how many sig figs and decimals the final answer should have. But there is no discussion on how to handle and manipulate the uncertainties associated to the involved numbers. Why?

To keep things simple. These discussions of sig figs in introductory textbooks didn't start to appear in those books until around the 1970's, when the calculator replaced the slide rule as the weapon of choice for students enrolled in those classes. Professors reacted to answers on tests and homework that included strings of 10 digits, because that's what the calculator displays.

If you want to do a better error analysis you start, in my opinion, by looking at the relative error. For example, ##\frac{\Delta A}{A}##. You add relative errors to get the total relative error. So, for example, if a rectangle measures ##l## by ##w##, the area ##A## equals ##lw## and the relative errors add, so that ##\frac{\Delta A}{A}=\frac{\Delta l}{l}+\frac{\Delta w}{w}##.
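The relative-error rule for a product is easy to sanity-check; the lengths and uncertainties below are made up for illustration:

```python
# Check the relative-error rule for a product A = l * w.
l, dl = 12.0, 0.1    # length and its uncertainty (invented values)
w, dw = 5.0, 0.05    # width and its uncertainty (invented values)
A = l * w
dA = (l + dl) * (w + dw) - A  # worst-case error in the area
rel_A = dA / A
rel_sum = dl / l + dw / w     # sum of the relative errors
print(rel_A, rel_sum)  # nearly equal; they differ only by dl*dw/(l*w)
```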

.Scott
Homework Helper
To keep things simple. These discussions of sig figs in introductory textbooks didn't start to appear in those books until around the 1970's, when the calculator replaced the slide rule as the weapon of choice for students enrolled in those classes. Professors reacted to answers on tests and homework that included strings of 10 digits, because that's what the calculator displays.
My 6th grade class was drilled on significant decimal places in 1964. That predates Wang's introduction of the nixie tube calculators. There were other devices at that time, mostly mechanical, but they were not commonly available to students.

At the time, most student calculations were done by hand - with some assistance from log tables and slide rules.
In that environment, significant digits were even more important - because it told you how much effort was required to produce an answer of appropriate precision.

Stephen Tashi
My 6th grade class was drilled on significant decimal places in 1964. That predates Wang's introduction of the nixie tube calculators. There were other devices at that time, mostly mechanical, but they were not commonly available to students.

At the time, most student calculations were done by hand - with some assistance from log tables and slide rules.
In that environment, significant digits were even more important - because it told you how much effort was required to produce an answer of appropriate precision.

I'm curious how you were drilled on significant figures vis-à-vis tables. If a problem involved a 2.5 ft ladder leaning against a wall at a 70 deg angle, how many significant figures did you use from a table of sin(x)? All the digits that happened to be given in the table?

Thanks Everyone,

If you don't mind, let me back up for a minute on a few points:
• Performing calculations with sig figs by hand results in answers with fewer sig figs than when we use a calculator. Why? Why does the calculator produce so many sig figs in its answer? What causes that?
• So, is it procedurally correct, when measuring a length of an object, for example the length of a sheet of paper with a ruler (smallest division is mm), to first collect multiple measurements of that length and eventually express it as ##average \pm uncertainty##, where the average is the arithmetic average? How do we obtain the uncertainty? Is it the standard deviation or the instrument sensitivity? For me, the terms uncertainty and error are synonyms.
Thank you!

jtbell
Mentor
Why does the calculator produce so many sig figs in its answer?
It's because a calculator is stupid and doesn't know the rules for sig figs. You have to apply the rules yourself.

My calculator (an ancient HP 11C) allows me to set the number of decimal places that it displays. In scientific-notation mode, that determines the number of sig figs displayed.

Stephen Tashi
• So, is it procedurally correct, when measuring a length of an object,
Correct by what criteria?

• How do we obtain the uncertainty? Is it the standard deviation or the instrument sensitivity?

As mentioned previously, "uncertainty" does not have a universally accepted and standard definition.
• For me, the terms uncertainty and error are synonyms.

That would pair the ambiguous term "error" with the ambiguous term "uncertainty". Perhaps you want to say that the sample standard deviation is your definition of the "uncertainty".

A mathematical model of the procedure you describe depends on how the measurement is modeled. Does the person making the measurement only record the result to the nearest mm? - or do they make a guess like 102.25 mm vs 102.75 mm? - or use notation like 102-103 mm to indicate an interval?

Dale
Mentor
2020 Award
How do we obtain the uncertainty? Is it the standard deviation or the instrument sensitivity? For me, the terms uncertainty and error are synonyms.