Do Extra Sig Figs Affect Uncertainty in Calculation Results?

  • Context: Undergrad 
  • Thread starter: billabuwl50

Discussion Overview

The discussion revolves around the implications of significant figures (sig figs) on the uncertainty of calculated results, particularly in the context of averaging measurements. Participants explore whether the presence of extra sig figs affects the uncertainty of a quantity or if it is merely a mathematical convention.

Discussion Character

  • Debate/contested
  • Mathematical reasoning
  • Conceptual clarification

Main Points Raised

  • Some participants suggest that significant figures are not a strict mathematical rule but rather a concept from inductive sciences that reflects measurement precision.
  • One participant questions whether having an extra sig fig after averaging indicates a change in uncertainty or is simply a mathematical fact.
  • Another participant argues that one should not end up with extra sig figs that exceed the precision of the original measurements.
  • A specific example is provided where averaging several measurements results in a number with more sig figs than the original, prompting questions about the implications for uncertainty.
  • It is noted that significant figures are intended to provide a rough estimate of error, with a general assumption that a number is accurate to within half of a unit in the last place.
  • One participant explains that the real average should lie within a specific interval based on the precision of the data, indicating that certain results may not be justified.
  • There is a discussion about whether to drop extra sig figs after calculations, with differing opinions on the matter.

Areas of Agreement / Disagreement

Participants express differing views on the role and interpretation of significant figures in relation to uncertainty. There is no consensus on whether extra sig figs indicate a change in uncertainty or if they should be retained in the final result.

Contextual Notes

The discussion highlights limitations in understanding how significant figures relate to measurement uncertainty, particularly regarding the assumptions made in calculations and the potential for errors to cancel out.

billabuwl50
When you're doing sig figs and you perform an operation such as averaging densities, and you end up with an extra sig fig, is there an actual change in the uncertainty of the quantity, or is it just a rule of math?

If I were forced to guess, I would say it was just a mathematical rule.
 
Sig figs aren't actually a math rule, since math doesn't deal with sig figs. They're a concept derived in inductive sciences, where measurements aren't 100% precise. Using significant figures, you give others an idea as to how precise your measurements are.

You shouldn't end up with extra sig figs that imply more precision than your original measurements... i.e., if your measurements looked like:

2.78
3.14

then calculating and coming up with a number such as

14.63

can be allowed, depending on the operation (there is an extra sig fig there), but something like:

1.463

usually shouldn't show up.
 
For example: I have 2.73, 2.73, 2.78, 2.74, and 2.54, and I want to average them using sig figs. Each currently has three. Adding them up gives 13.52, and dividing by five gives 2.704, still following sig fig rules.

Would this indicate an actual change in the uncertainty of the quantity, or is this just a mathematical fact?
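As a quick check, the arithmetic in the example above can be reproduced with a short Python sketch (the variable names here are illustrative, not from the thread):

```python
# Measurements from the example, each quoted to three sig figs
data = [2.73, 2.73, 2.78, 2.74, 2.54]

total = sum(data)         # 13.52 (four sig figs from the addition rule)
mean = total / len(data)  # 2.704: one more digit than the data started with

print(total, mean)
```

Note that dividing by the exact count 5 introduces no new uncertainty of its own; the extra digit comes purely from carrying the sum's four figures through the division.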
 
Don't you drop the extra once you're done?

So your final would be 2.70?
 
Nope, it stays. I just don't know what it means.
 
In general, significant figures (which aren't really math at all) are supposed to give a rough bound on the error. In particular, a number is presumed accurate to within 1/2 of a unit in the last place in general. With that assumption in place for the data, it's easy to see that the real average must then lie in the interval [2.699, 2.709]. Thus 2.70 is justified because the error is within 0.9 ULP (not as good as the 0.5 ULP of the data, but good enough). 2.704 isn't justified at all, since then the error would be within 5 ULPs, which is pretty bad.

Of course these are worst-case bounds; the errors usually partially cancel, giving better precision than interval arithmetic would suggest.
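The worst-case bound described above can be sketched with simple interval arithmetic in Python, assuming (as the post does) that each datum is accurate to within half a unit in the last place, i.e. ±0.005:

```python
data = [2.73, 2.73, 2.78, 2.74, 2.54]
half_ulp = 0.005  # each value presumed accurate to +/- half a unit in the last place

# Averaging the lower and upper endpoints bounds the true mean
lo = sum(x - half_ulp for x in data) / len(data)  # approx 2.699
hi = sum(x + half_ulp for x in data) / len(data)  # approx 2.709

print(lo, hi)
# 2.70 lies within 0.009 (0.9 ULP) of every point in [lo, hi],
# while quoting 2.704 would claim precision the data cannot support.
```

Because the exact divisor 5 is error-free, the ±0.025 uncertainty in the sum shrinks to ±0.005 in the mean, which is exactly the interval width computed here.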
 