Question on estimated uncertainty and significant digits

AI Thread Summary
The discussion centers on how to determine significant digits in a measurement of 136.52480 with an estimated uncertainty of 2. It explores the relationship between estimated uncertainty and confidence intervals, questioning whether the uncertainty reflects a 1σ confidence level or a different interpretation. The consensus leans towards using four significant digits, as rounding to fewer digits would unnecessarily compromise accuracy. Participants emphasize the importance of distinguishing between estimated uncertainty and standard deviation, noting that practical measurements often involve subjective judgment. Ultimately, the conversation highlights the complexities of accurately conveying measurement precision and uncertainty in scientific contexts.
JDoolin

Homework Statement

The value is 136.52480, and the "estimated uncertainty" is 2. How many digits should be included as significant?

Homework Equations

Not sure whether "estimated uncertainty" refers to a 1σ (68.27%) confidence interval or something else.
Rounding implies a 100% confidence interval, because you're taking a known number in [0.5, 1.5) and rounding it to 1.

Edit: I should also mention that stating one significant digit after rounding, for instance 137 ∈ [136.5, 137.5), implies a uniform distribution, so the likelihood of the actual value being 136.6 is the same as the chance of having a value of 137, whereas 136.52480 ± 2 tells you that your point estimate for the value is 136.52480, and the likelihood of the actual value being 136.6 is greater than the likelihood of it being 137.
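The contrast between the two readings can be sketched numerically. This is a minimal illustration, assuming the ± 2 denotes one standard deviation of a normal distribution centered on the point estimate (an assumption, since the problem never says which it is):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std dev sigma."""
    return math.exp(-((x - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

mu, sigma = 136.52480, 2.0  # point estimate and estimated uncertainty (1-sigma assumed)

# Under 136.52480 +/- 2 read as a normal, 136.6 is more likely than 137,
# because it sits closer to the point estimate:
assert normal_pdf(136.6, mu, sigma) > normal_pdf(137.0, mu, sigma)

# Under "round to 137" read as uniform on [136.5, 137.5), both values
# carry the same constant density of 1.0 per unit -- no preference at all.
```

The rounded form throws away exactly the information that the ± notation preserves: where inside the interval the best estimate sits.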

The Attempt at a Solution

The confidence interval based on the estimated uncertainty is (134.52480, 138.52480), so I'm guessing there is a 68.27% chance the true value is within that region. If so, the 95% CI (taking roughly ± 2σ) is (132.52480, 140.52480).

1 sig fig: x ≈ 100 → x ∈ [50, 150). The interval contains the 95% CI but is roughly ten times too wide.
2 sig figs: x ≈ 140 → x ∈ [135, 145). The interval contains the 1σ CI but misses part of the 95% CI; its width is close to the width of the 95% confidence interval.
3 sig figs: x ≈ 137 → x ∈ [136.5, 137.5).
4 sig figs: x ≈ 136.5 → x ∈ [136.45, 136.55). This interval contains only a tiny fraction of the confidence interval.
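The comparison can be made quantitative. Here is a rough sketch, again assuming the ± 2 denotes one standard deviation of a normal distribution, of how much of that distribution each rounding interval actually captures:

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative normal distribution via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 136.52480, 2.0

# Rounding interval [rounded - half, rounded + half) for each sig-fig choice:
cases = {1: (100.0, 50.0), 2: (140.0, 5.0), 3: (137.0, 0.5), 4: (136.5, 0.05)}

coverage = {}
for sf, (rounded, half) in cases.items():
    lo, hi = rounded - half, rounded + half
    coverage[sf] = normal_cdf(hi, mu, sigma) - normal_cdf(lo, mu, sigma)
    print(f"{sf} sig figs: [{lo}, {hi}) captures {coverage[sf]:.1%} of the distribution")
```

The narrower the rounding interval, the less of the distribution it captures, which is exactly why the uncertainty must be reported alongside the rounded value rather than encoded in the digit count alone.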

I think the best answer is that 4 digits are significant, because very little information is lost if we say our measurement is 136.5 ± 2. Any other answer would be rounding to a point where your result is unnecessarily inaccurate.

I wonder if there is a general rule? Though this question appears in the homework, I cannot find any reference to "estimated uncertainty" in the examples or text.

Additional information
I am a math and physics professor, so I can probably get hold of the answer key next week. I'll be curious what the book's answer is, but I wonder whether it is wise to mix the concept of rounding with the idea of "estimated uncertainty," or to treat it so nonchalantly, as if this question actually had a single multiple-choice answer.
 


JDoolin said:
Not sure whether estimated uncertainty refers to a 1-standard deviation CI

I can't contribute anything to your interesting musings. But I would hope that estimated uncertainty does not relate to confidence intervals, because in practice that's not how it is used. If I use a rule (i.e., a ruler) to measure a distance and determine that distance to be 14.4 ± 0.2 cm, I don't intend that anyone believe that distance could be 15 or even 20 cm, however unlikely.
 


NascentOxygen said:
I can't contribute anything to your interesting musings. But I would hope that estimated uncertainty does not relate to confidence intervals, because in practice that's not how it is used. If I use a rule (i.e., a ruler) to measure a distance and determine that distance to be 14.4 ± 0.2 cm, I don't intend that anyone believe that distance could be 15 or even 20 cm, however unlikely.

Thank you for replying. That's a good point. How are you finding your estimated error? Is it really estimated (just a guess at how far off your measurement is likely to be), or is it calculated from the sample standard deviation of several measurements?

And the basic question: when you give your ± 0.2 cm, do you mean you are 68% certain the value lies in that range? 95% certain? 99% certain? Or even 100% certain?

Some of this stuff below I learned from statistics class rather than physics class. It seems like statistics and physics sometimes speak different languages, but I'm trying to make the two match up.

If you use 68%, 95%, or 99%, those are called your confidence level (CL), and from it you can determine ##\alpha = 1 - CL##, which is called the significance level. Then ##\alpha/2## is the chance that the true value falls below your estimated range, and ##\alpha/2## is the chance that it falls above.

If you're calculating the sample standard deviation of your data and calling that the uncertainty, that's about 68% confidence. I think, if you're measuring a quantity that should be the same every time, you can take ##s_x/\sqrt{n}## as your uncertainty (because each trial is triangulating on the same actual result), but if you're measuring a quantity that is probably different every time, you should just take ##s_x## as your uncertainty (because you want to account for real variance in the quantity). One physics lab book I had called the quantity $$s_x/\sqrt{n} = \text{"standard error"}$$
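The distinction can be shown with the standard library alone. The measurement values below are purely hypothetical, chosen only to illustrate the two quantities:

```python
import math
import statistics

# Hypothetical repeated measurements of one fixed quantity (illustrative numbers):
trials = [14.2, 14.5, 14.4, 14.6, 14.3]

s_x = statistics.stdev(trials)       # sample standard deviation (n - 1 in the denominator)
n = len(trials)
standard_error = s_x / math.sqrt(n)  # uncertainty of the *mean* of the repeated trials

# Report s_x when the quantity genuinely varies between trials,
# and s_x / sqrt(n) when each trial re-measures a single fixed value.
print(f"s_x = {s_x:.4f}, standard error = {standard_error:.4f}")
```

The standard error shrinks as more trials are taken, while ##s_x## itself settles toward the true spread of the quantity.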
I'm not sure any physics texts do it, but to make things more in line with statistics, I'm thinking of suggesting to my physics students that they take their sample standard deviation and multiply it by invT(0.8413, n-1). (invT is a function on the TI-83/84/89 calculators: the inverse CDF of the Student t distribution.)

(0.8413 is the area under the normal curve to the left of one standard deviation, and n is the number of trials in the test.)

I'm not exactly sure what the Student-t factor does, but I know it somehow accounts for having only a small number of data points, when the sample standard deviation is itself an uncertain estimate of the true σ.

For instance with 2 data points, invT(.8413,1)=1.8367.
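Assuming invT is the inverse CDF of the Student t distribution, the two-data-point case can be checked with nothing but the closed form for one degree of freedom, where the t distribution reduces to the Cauchy distribution:

```python
import math

def t_ppf_df1(p):
    """Quantile function of the Student t distribution with 1 degree of
    freedom, which is exactly the Cauchy distribution."""
    return math.tan(math.pi * (p - 0.5))

# 0.8413 is the standard-normal area to the left of +1 sigma, so this is
# the factor that widens a "1-sigma" interval when sigma is estimated
# from only two data points (n - 1 = 1 degree of freedom):
factor = t_ppf_df1(0.8413)
print(round(factor, 4))  # ~1.8367, matching the calculator's invT(0.8413, 1)
```

For larger n the factor shrinks back toward 1, reflecting the growing trust in the estimated standard deviation.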

(And by the way, with two data points, ALL the data is at the extreme ends of the tails, and no distribution can possibly look less normal than that.)

There's also some gobbledygook in the stats book I was using last semester warning people how *not* to interpret a confidence interval. It's sort of mysterious, though: it gives examples of ways not to use it without explaining precisely why those examples are wrong, and it even encourages the student to come up with more creative ways to misuse the confidence interval. I suppose I'm running the danger of coming up with one of those creative misuses, but it would be nice to establish the one correct interpretation rather than making a list of incorrect ones.
 


After some further experience in the lab, I noticed that sometimes you might just say you read off 4.9 centimeters, but you're only certain to within a millimeter or so. That doesn't really have anything to do with standard deviation at all; it's just a judgment call. Besides that, a lot of the time there's probably some source of error in your measurement that you're not taking into account anyway.
 


JDoolin said:
I think the best answer is that 4 digits are significant, because very little information is lost if we say our measurement is 136.5 ± 2. Any other answer would be rounding to a point where your result is unnecessarily inaccurate.

In practice, it is unusual to see a value expressed to a precision greater than its accuracy, e.g., your example of 136.5 ± 2.

So unusual that I can recall first encountering such a case: it caused me to pause for thought before I recognized that it can, in exceptional circumstances, be perfectly sensible. The estimate of the Hubble constant, for example, can be expressed as ##71.9\,_{-2.7}^{+2.6}##. My editor does not allow any extended-ASCII characters, so your LaTeX generator just came in handy: http://www.codecogs.com/latex/eqneditor.php
 


Well, let me give an example that is a little more basic.

Using "air tracks" (if you don't know what that is: a nearly frictionless ramp that floats a glider on a layer of air, similar in principle to air hockey), we were measuring the height, distance, and time as the glider slid down the ramp.

We were measuring a height, from the low end to the high end of the track, of around 3.9 cm, maybe ± 0.2 cm as a guesstimate of how carelessly we were making the measurement. (This is not at all similar to a standard deviation.) The measurement was made by taking the distance from the table to the bottom of the air track at the top end and at the bottom end of the path.

Well, in the end, we did some trigonometry and found that, according to our calculation, the gravitational acceleration was 12.5 m/s².

We wondered whether somehow the air was pushing the glider, or if there were some springy action somewhere, but we couldn't figure out where a 30% error ((12.5 - 9.8)/9.8 ≈ 0.28) was coming from.

However, the next day, we tried setting up the track completely level with the table, and found that there was a 1° - 2° slope in the table itself! Which means that our measurement in the Δh of the track could be off by as much as 2 or 3 cm!

So where we originally said Δh was 3.9 +/-0.2 cm, it turned out to be maybe 6 +/- 2 cm.

In any case, the "estimated uncertainty" here really is just that: an estimate, or maybe even a wild guess. We could get a nice mathematical number (a standard deviation) by using statistics, but if we're making the same mistake in every measurement, that standard deviation is going to be far smaller than our actual error.
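A back-of-the-envelope check is possible here. The exact trig from the lab isn't spelled out above; this sketch assumes the common small-angle relation a = g·h/L along a track of length L, so for a fixed measured acceleration the inferred g scales as 1/h:

```python
# Under a = g * h / L, the computed g is inversely proportional to the
# measured height h, so a height error maps directly onto a g error.
g_true = 9.8       # m/s^2, accepted value
h_measured = 3.9   # cm, the height as originally read off the track
g_computed = 12.5  # m/s^2, the value the lab actually produced

# The height that would reconcile the computed g with the accepted g:
h_implied = h_measured * g_computed / g_true
print(round(h_implied, 2))  # ~4.97 cm -- inside the revised 6 +/- 2 cm estimate
```

The implied true height of about 5 cm lands comfortably inside the 6 ± 2 cm figure found after leveling the track, which is consistent with the tilted table being the dominant error source.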
 