Why is one significant figure the norm in uncertainties?

In summary: the use of only one significant figure for uncertainty is taught as a general rule, but it is not always the best approach. Some suggest more nuanced rules for choosing the number of significant figures in an uncertainty, taking the precision of the measurement into account. In most cases, however, the uncertainty is unlikely to be known significantly better than the central value, so quoting it to one significant figure is a reasonable approximation.
  • #1
TheCanadian
Why is it that only one significant figure for any uncertainty is taught? It seems like a nice general rule, but isn't it an unnecessary constraint and ultimately a poor rule in many cases? For example ## 5 \pm 1 ## could mean an error of either 0.5 or 1.5, which is a very large range. Wouldn't it be best to keep an additional significant figure in this case? If not, could you possibly expand on why? I have also seen more than one significant figure widely reported in research, and it doesn't seem to be too problematic in publications, yet I see this commonly taught at the high school and undergraduate level. Any advice would be great!
 
  • #2
TheCanadian said:
Why is it that only one significant figure for any uncertainty is taught? It seems like a nice general rule, but isn't it an unnecessary constraint and ultimately a poor rule in many cases? For example ## 5 \pm 1 ## could mean an error of either 0.5 or 1.5, which is a very large range. Wouldn't it be best to keep an additional significant figure in this case? If not, could you possibly expand on why? I have also seen more than one significant figure widely reported in research, and it doesn't seem to be too problematic in publications, yet I see this commonly taught at the high school and undergraduate level. Any advice would be great!

Er... ## 5 \pm 1 ## means that the value could be in the range of 4 to 6. Not sure where you get that the error is ".. either 0.5 or 1.5.." here.

There is no set standard for uncertainty. It depends on the precision of the measurement or instrument. Maybe what you are being taught is the concept of uncertainty and errors, rather than an actual measurement. So to make it easier to understand the concept, you are being taught simple stuff. And having just one significant digit in the uncertainty looks simple to me.

Zz.
 
  • #3
You are right that there is a big gap between ±1 and ±2, although there is not such a big gap between ±0.9 (also one digit) and ±1. Some people use more complex rules, for example, here is what the Particle Data Group does:

The basic rule states that if the three highest order digits of the error lie between 100 and 354, we round to two significant digits. If they lie between 355 and 949, we round to one significant digit. Finally, if they lie between 950 and 999, we round up to 1000 and keep two significant digits. In all cases, the central value is given with a precision that matches that of the error.

In your example, they would use 5.0 ± 1.0.

The problem they are trying to avoid is a case like 500 ± 101. With an error of 101 and not 100, we are saying we know the uncertainty to about a percent. However, we only know the central value to 20%. It is highly unlikely we will know the uncertainty 20x better than the central value.
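As a sketch, the PDG prescription quoted above can be written out in code. Only the 100-354 / 355-949 / 950-999 thresholds come from the quoted rule; the function name and structure here are my own:

```python
import math

def pdg_round(value, error):
    """Round (value, error) following the PDG convention described above.

    The three highest-order digits of the error decide the precision:
      100-354 -> keep two significant digits
      355-949 -> keep one significant digit
      950-999 -> round the error up to 1000, keep two significant digits
    The central value is rounded to the same decimal place as the error.
    """
    if error <= 0:
        raise ValueError("error must be positive")
    exp = math.floor(math.log10(error))     # exponent of the error's leading digit
    lead3 = round(error / 10 ** (exp - 2))  # three highest-order digits, 100-999
    if lead3 >= 950:                        # e.g. 0.97 rounds up to 1.0
        n_sig, error, exp = 2, 10.0 ** (exp + 1), exp + 1
    elif lead3 >= 355:                      # one significant digit
        n_sig = 1
    else:                                   # 100-354: two significant digits
        n_sig = 2
    decimals = n_sig - 1 - exp              # decimal place to round both numbers to
    return round(value, decimals), round(error, decimals)
```

With these thresholds, `pdg_round(5.0, 1.0)` gives `(5.0, 1.0)` and `pdg_round(500, 101)` gives `(500, 100)`, matching the examples in this post.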
 
  • #4
ZapperZ said:
Er... ## 5 \pm 1 ## means that the value could be in the range of 4 to 6. Not sure where you get that the error is ".. either 0.5 or 1.5.." here.

There is no set standard for uncertainty. It depends on the precision of the measurement or instrument. Maybe what you are being taught is the concept of uncertainty and errors, rather than an actual measurement. So to make it easier to understand the concept, you are being taught simple stuff. And having just one significant digit in the uncertainty looks simple to me.

Zz.

I guess I was just trying to say: relative to the error (i.e. 1), if the known error is actually 1.4, then there is quite a big difference between ## 5.0 \pm 1 ## and ## 5.0 \pm 1.4 ##. This difference is magnified if a lower central value had been measured instead of 5.0 (e.g. 1.5 or 0.9); in those cases, the relative error introduced by quoting 1 instead of 1.4 is quite significant. From what I see, this could easily be resolved by reporting the error at its true value instead of rounding it further.

Fair enough.
 
  • #5
Vanadium 50 said:
You are right that there is a big gap between ±1 and ±2, although there is not such a big gap between ±0.9 (also one digit) and ±1. Some people use more complex rules, for example, here is what the Particle Data Group does:
In your example, they would use 5.0 ± 1.0.

The problem they are trying to avoid is a case like 500 ± 101. With an error of 101 and not 100, we are saying we know the uncertainty to about a percent. However, we only know the central value to 20%. It is highly unlikely we will know the uncertainty 20x better than the central value.

Thank you. Just to be clear, although unlikely, it is still possible to know the uncertainty better than the central value, right?
 
  • #6
Possible? Sure. Likely? No. I can't think of any time I have had that happen.
 

1. Why is one significant figure the norm in uncertainties?

One significant figure is the norm in uncertainties because the uncertainty itself is rarely known to better than roughly 10-20% of its own value. Quoting additional digits would imply a precision in the error estimate that usually does not exist, and one significant figure is typically sufficient for most scientific experiments and calculations.

2. How is one significant figure determined in uncertainties?

One significant figure is determined by looking at the uncertainty or margin of error in a measurement or calculation. The leading nonzero digit of the uncertainty is the significant figure, and any further digits are rounded away. For example, an uncertainty of 0.034 would round to 0.03.
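This rounding can be illustrated with a minimal sketch (the helper name is my own):

```python
import math

def round_to_one_sig_fig(err):
    """Round a positive uncertainty to its leading significant figure."""
    exp = math.floor(math.log10(err))  # position of the leading nonzero digit
    return round(err, -exp)
```

For example, `round_to_one_sig_fig(0.034)` gives `0.03`, and `round_to_one_sig_fig(1.4)` gives `1.0`.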

3. What is the significance of using one significant figure in uncertainties?

Using one significant figure in uncertainties allows for a consistent and standardized way of reporting the level of precision in a measurement or calculation. It also helps to avoid misleading others by presenting a false sense of accuracy.

4. Are there any situations where more than one significant figure is used in uncertainties?

Yes, in some cases more than one significant figure is used. When the leading digit of the uncertainty is small (1, 2, or 3), conventions such as the Particle Data Group's keep two significant figures, since rounding an error like 1.4 down to 1 discards a large fraction of it. For routine work, however, rounding the uncertainty to one significant figure remains standard.

5. How does using one significant figure in uncertainties affect the overall accuracy of a measurement or calculation?

Using one significant figure in uncertainties does not necessarily affect the overall accuracy of a measurement or calculation. It simply represents the level of precision or margin of error in the value. It is essential to note that accuracy and precision are two different concepts, and using one significant figure does not guarantee accuracy.
