
Why is one significant figure the norm in uncertainties?

  1. Sep 14, 2015 #1
    Why is it that only one significant figure for any uncertainty is taught? It seems like a nice general rule, but isn't it an unnecessary constraint and ultimately a poor rule in many cases? For example, ## 5 \pm 1 ## could mean an error of either 0.5 or 1.5 (or anything in between, since rounding to one significant figure maps that whole range onto ±1), which is a very large range. Wouldn't it be best to keep an additional significant figure in this case? If not, could you possibly expand on why? I have also seen more than one significant figure widely reported in research, and it doesn't seem to be problematic in publications, yet the one-figure rule is what is commonly taught at the high school and undergraduate level. Any advice would be great!
     
  3. Sep 14, 2015 #2

    ZapperZ


    Er... ## 5 \pm 1 ## means that the value could be in the range of 4 to 6. Not sure where you get that the error is ".. either 0.5 or 1.5.." here.

    There is no set standard for uncertainty. It depends on the precision of the measurement or instrument. Maybe what you are being taught is the concept of uncertainty and errors, rather than an actual measurement. So to make it easier to understand the concept, you are being taught simple stuff. And having just one significant digit in the uncertainty looks simple to me.

    Zz.
     
  4. Sep 14, 2015 #3

    Vanadium 50


    You are right that there is a big gap between ±1 and ±2, although there is not such a big gap between ±0.9 (also one digit) and ±1. Some people use more complex rules; for example, here is what the Particle Data Group does: if the three highest-order digits of the error lie between 100 and 354, round to two significant figures; if they lie between 355 and 949, round to one significant figure; and if they lie between 950 and 999, round the error up to 1000 and keep one significant figure.

    In your example, they would use 5.0 ± 1.0.

    The problem they are trying to avoid is a case like 500 ± 101. With an error of 101 and not 100, we are saying we know the uncertainty to about a percent. However, we only know the central value to 20%. It is highly unlikely we will know the uncertainty 20x better than the central value.
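    That convention can be sketched in a few lines of Python. This is a hedged illustration, not code from the thread: it assumes the PDG-style rule that errors whose three leading digits fall between 100 and 354 keep two significant figures, 355–949 keep one, and 950–999 round up to the next power of ten.

    ```python
    import math

    def round_error_pdg(err):
        """Round an uncertainty following a PDG-style convention (an
        assumption of this sketch): leading digits 100-354 -> two
        significant figures, 355-949 -> one, 950-999 -> round up."""
        if err <= 0:
            raise ValueError("uncertainty must be positive")
        exponent = math.floor(math.log10(err))           # order of magnitude
        lead = int(round(err * 10.0 ** (2 - exponent)))  # three highest-order digits
        if lead <= 354:
            sig = 2        # keep two significant figures
        elif lead <= 949:
            sig = 1        # keep one significant figure
        else:
            return 10.0 ** (exponent + 1)   # e.g. 0.99 rounds up to 1.0
        return round(err, -(exponent - sig + 1))
    ```

    Under this rule, an error of 1.4 stays 1.4 (two figures), 101 becomes 100, and 367 becomes 400, which matches why the example in this thread would be quoted as 5.0 ± 1.0.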
     
  5. Sep 14, 2015 #4
    I guess I was just trying to say: if the known error is actually 1.4, then there is quite a big difference between ## 5.0 \pm 1 ## and ## 5.0 \pm 1.4 ##. The effect is magnified if the measured value were smaller than 5.0 (e.g. 1.5 or 0.9); in those cases, the relative difference between quoting 1 and quoting 1.4 is quite significant. From what I can see, this could be resolved simply by reporting the error at its true value instead of rounding it further.

    Fair enough.
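    To put rough numbers on that point, here is a quick illustration using the values from the post above (nothing beyond simple arithmetic is assumed):

    ```python
    # How much does rounding the error to one significant figure
    # understate it in the example from this thread?
    true_err = 1.4
    reported_err = 1.0          # 1.4 rounded to one significant figure
    understatement = (true_err - reported_err) / true_err

    # The quoted interval 5.0 +/- 1 spans [4.0, 6.0], while the true
    # interval 5.0 +/- 1.4 spans [3.6, 6.4]: the reported error bar is
    # about 29% narrower than the true one, whatever the central value.
    ```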
     
  6. Sep 14, 2015 #5
    Thank you. Just to be clear, although unlikely, it is still possible to know the uncertainty better than the central value, right?
     
  7. Sep 14, 2015 #6

    Vanadium 50


    Possible? Sure. Likely? No. I can't think of any time I have had that happen.
     