# Why do we use significant figures in calculations instead of the rate of Uncertainty?

## Main Question or Discussion Point

Why do we use significant figures in calculations instead of the rate of Uncertainty?

$$(2.5 \pm 0.4000) \times 4.000 = (10 \pm 1.6).$$
The number 2.5 has 2 significant figures, which is the same number we would write in the answer, while the answer should have only one significant figure if we take the relative uncertainty into account.

If the absolute uncertainty grows when you multiply by numbers bigger than 1,
then why don't we carry the uncertainty itself through the calculations?

My point is that it seems stupid to use significant figures instead of the uncertainty in multiplication, as the real answer could be far from the answer you would get with significant figures. You won't know how far off your answer could be, and sometimes that's necessary knowledge... am I wrong?
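For concreteness, the arithmetic in the example above can be sketched in Python (an illustration only, not part of the original question):

```python
# Multiplying a measured value by an exact constant scales both
# the value and its absolute uncertainty by that constant.
value, uncertainty = 2.5, 0.4
constant = 4.000  # exact, so it contributes no uncertainty

result = value * constant            # 10.0
result_unc = uncertainty * constant  # 1.6

# The relative uncertainty is unchanged by the multiplication:
rel_unc = result_unc / result        # 0.16, i.e. 16%
print(f"{result} +/- {result_unc}  ({rel_unc:.0%} relative)")
```

A 16% relative uncertainty on 10 means only the first digit is really trustworthy, which is the mismatch being asked about.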

I'm a little confused about this, so please tell me if I'm wrong somewhere and why. Thank you :)


Borek
Mentor

Depends on whom you ask. In general, significant figures are a poor man's version of expressing uncertainty and shouldn't be taken too seriously. The only place they are treated seriously is chemistry, but even there many think that those putting too much weight on them are in error. That's the source of a heated discussion that starts about once a year on the otherwise very good discussion list for chemistry educators. Don't worry about it too much.

> My point is that it seems stupid to use significant figures instead of the uncertainty in multiplication, as the real answer could be far from the answer you would get with significant figures. You won't know how far off your answer could be, and sometimes that's necessary knowledge... am I wrong?

Staff Emeritus
2019 Award

In addition to Borek's reply, to correctly propagate uncertainties requires calculus. We need some way of handling precision in pre-calculus classes: hence significant figures. You can't teach everything at once.

> In addition to Borek's reply, to correctly propagate uncertainties requires calculus. We need some way of handling precision in pre-calculus classes: hence significant figures. You can't teach everything at once.
Thank you. Then at least I was not wrong?

jtbell
Mentor

> to correctly propagate uncertainties requires calculus.
Actually, you don't need calculus. Suppose you have a quantity f which depends on two measured quantities x and y, with uncertainties $\Delta x$ and $\Delta y$. Assuming x and y are independent (not correlated), first calculate

$$\Delta_x f = f(x+\Delta x, y) - f(x,y)$$

$$\Delta_y f = f(x, y+\Delta y) - f(x,y)$$

that is, the variation that the uncertainties in x and y each produce in f, assuming the other quantity is held constant. Then combine these in quadrature:

$$\Delta f = \sqrt{(\Delta_x f)^2 + (\Delta_y f)^2}$$

This is easily generalized for more than two independently measured quantities.
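This recipe can be sketched directly in Python (an illustration, not from the original post; the rectangle example is hypothetical):

```python
import math

def propagate(f, x, dx, y, dy):
    """Finite-difference propagation for f(x, y), assuming the
    measurements x and y are independent."""
    base = f(x, y)
    dfx = f(x + dx, y) - base  # variation in f due to x alone
    dfy = f(x, y + dy) - base  # variation in f due to y alone
    return base, math.hypot(dfx, dfy)  # combine in quadrature

# Hypothetical example: area of a rectangle with uncertain sides
area, d_area = propagate(lambda x, y: x * y, 3.0, 0.1, 4.0, 0.2)
print(f"area = {area} +/- {d_area:.2f}")
```

Any function of the two measured quantities can be passed in, which is what makes the method general.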

The formula that I've seen with derivatives calculates the differentials using the first term of a Taylor series expansion:

$$f(x+\Delta x, y) = f(x,y)+\frac{\partial f}{\partial x} \Delta x +...$$
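For small uncertainties the two approaches agree closely; here is a sketch of the derivative-based version using numerical partial derivatives (again my own illustration, with the same hypothetical rectangle):

```python
import math

def propagate_taylor(f, x, dx, y, dy, h=1e-6):
    """First-order (Taylor) propagation: estimate the partial
    derivatives numerically, then scale by the uncertainties."""
    dfdx = (f(x + h, y) - f(x, y)) / h
    dfdy = (f(x, y + h) - f(x, y)) / h
    return math.hypot(dfdx * dx, dfdy * dy)

# Same hypothetical rectangle: f = x*y, df/dx = y = 4, df/dy = x = 3
unc = propagate_taylor(lambda x, y: x * y, 3.0, 0.1, 4.0, 0.2)
print(f"uncertainty = {unc:.3f}")  # about 0.721, matching quadrature
```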

> Actually, you don't need calculus. Suppose you have a quantity f which depends on two measured quantities x and y, with uncertainties $\Delta x$ and $\Delta y$. Assuming x and y are independent (not correlated), first calculate
>
> $$\Delta_x f = f(x+\Delta x, y) - f(x,y)$$
>
> $$\Delta_y f = f(x, y+\Delta y) - f(x,y)$$
>
> that is, the variation that the uncertainties in x and y each produce in f, assuming the other quantity is held constant. Then combine these in quadrature:
>
> $$\Delta f = \sqrt{(\Delta_x f)^2 + (\Delta_y f)^2}$$
>
> This is easily generalized for more than two independently measured quantities.
I'm sorry, but I don't think that I understand it; could you give me an example? And I guess my problem is why the equation $$\Delta_x f = f(x+\Delta x, y) - f(x,y)$$ is correct. I mean, why
is $$\Delta_x f$$ equal to the expression on the right-hand side?

> that is, the variation that the uncertainties in x and y each produce in f, assuming the other quantity is held constant.
Assuming that what quantity is constant? The quantity f?

I have not read about Taylor series expansions yet, but I think I can save that for later if it's not too important a part. I'm also having trouble writing all the symbols on this site, by the way; I don't know how it works yet.

jtbell
Mentor

> I'm sorry, but I don't think that I understand it; could you give me an example?
Suppose you want to find the acceleration of gravity, g, by measuring the time t it takes for an object to fall a distance h from rest, so

$$g = \frac{2h}{t^2}$$

You measure h = 2.75 ± 0.01 m, and t = 0.75 ± 0.01 s. Without taking uncertainties into account, you get g = 2(2.75)/0.75^2 = 9.778 m/s^2.

Now change h by its uncertainty, keep t at its original value, and calculate a "varied" value of g = 2(2.76)/0.75^2 = 9.813 m/s^2. The difference from the original value of g is 0.035.

Now change t by its uncertainty, keep h at its original value and calculate another "varied" value of g = 2(2.75)/0.76^2 = 9.522 m/s^2. The difference from the original value of g is -0.256.

Combine the two differences in quadrature:

$$\Delta g = \sqrt{0.035^2 + (-0.256)^2} = 0.258$$

Rounding g and its uncertainty to the same number of decimal places, you would write your final result as g = 9.8 ± 0.3, or maybe g = 9.78 ± 0.26 m/s^2.
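The arithmetic above can be checked line by line with a short script (a sketch reproducing the numbers in this example):

```python
import math

h, dh = 2.75, 0.01  # metres
t, dt = 0.75, 0.01  # seconds

g = 2 * h / t**2                # 9.778 m/s^2
dg_h = 2 * (h + dh) / t**2 - g  # about +0.036, from h alone
dg_t = 2 * h / (t + dt)**2 - g  # about -0.256, from t alone
dg = math.hypot(dg_h, dg_t)     # about 0.258

print(f"g = {g:.2f} +/- {dg:.2f} m/s^2")
```

Note how the timing uncertainty dominates: t enters squared, so its relative uncertainty counts twice.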

> why is $$\Delta_x f$$ equal to that which is in the equation?
This is the amount by which f changes when you change x by $\Delta x$, keeping y constant. If y were an exactly-known value, $\Delta_x f$ would be the uncertainty in f.

Similarly, $\Delta_y f$ is the amount by which f changes when you change y by $\Delta y$, keeping x constant. If x were an exactly-known value, $\Delta_y f$ would be the uncertainty in f.

If both x and y are uncertain, then their two uncertainties might act either in the same or opposite directions on the value of f, so you can't simply add $\Delta_x f$ and $\Delta_y f$ together. If you assume that the random measurement errors are distributed "normally" (i.e. according to a Gaussian distribution), then it's possible to prove that

$$(\Delta f)^2 = (\Delta_x f)^2 + (\Delta_y f)^2$$

(See a textbook on probability and statistics.)
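One way to see why quadrature is the right combination rule is a Monte Carlo check: draw many simulated measurements with Gaussian errors and look at the spread of the results. A sketch using the g example above (my own illustration; the sample size is arbitrary):

```python
import math
import random

random.seed(0)  # reproducible draws

h, dh = 2.75, 0.01
t, dt = 0.75, 0.01

# Simulate many (h, t) measurements with Gaussian errors and
# compute g for each pair.
samples = [2 * random.gauss(h, dh) / random.gauss(t, dt) ** 2
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

print(f"g = {mean:.2f} +/- {std:.2f}")  # close to 9.78 +/- 0.26
```

The simulated spread agrees with the quadrature formula, while simply adding $\Delta_h g$ and $\Delta_t g$ would overestimate it.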

For how to write equations like I did above, see here: