# Homework Help: Question on estimated uncertainty and significant digits

1. Aug 28, 2011

### JDoolin

Question on "estimated uncertainty" and significant digits

1. The problem statement, all variables and given/known data

The value is 136.52480, and the "estimated uncertainty" is 2. How many digits should be included as significant?

2. Relevant equations

Not sure whether estimated uncertainty refers to a 1σ (68.26%) confidence interval or what.
Rounding implies a 100% confidence interval, because you're taking a known number between [.5, 1.5) and rounding it to 1.

Edit: I should also mention that stating one significant digit after rounding, for instance 137 ϵ [136.5,137.5), implies a uniform distribution, so the likelihood of the actual value being 136.6 is the same as the chance of it being 137; whereas 136.52480 ± 2 tells you that your point estimate for the value is 136.52480, and the likelihood of the actual value being 136.6 is greater than the likelihood of it being 137.

3. The attempt at a solution

The confidence interval based on the estimated uncertainty is (134.52480, 138.52480), so there is, I'm guessing, a 68.26% chance the data is within that region. If so, the 95% CI is (132.52480, 140.52480).

1 sig fig: x≈100 -> x ϵ [50,150); The interval contains the 95% CI but is roughly 10 times too wide.
2 sig figs: x≈140 -> x ϵ [135,145). The interval contains the 1σ CI, but misses part of the 95% CI. The width of this interval is close to the width of the 95% Confidence interval.
3 sig figs: x≈137 -> x ϵ [136.5,137.5);
4 sig figs: x≈136.5 -> x ϵ [136.45,136.55); This interval contains only a tiny fraction of the confidence interval.

I think the best answer is that 4 digits are significant, because very little information is lost if we say our measurement is 136.5 ± 2. Any other answer would be rounding to a point where your result is unnecessarily inaccurate.
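The interval comparison above can be checked numerically. Here's a sketch (my own illustration, not from any text): for each number of significant figures, compute the half-open interval of values that would round to the same result, and compare its width to the stated ± 2 band.

```python
import math

def rounding_interval(x, sig_figs):
    """Return [lo, hi): the set of values that round to x at sig_figs digits."""
    # position of the last kept digit relative to the decimal point
    exponent = math.floor(math.log10(abs(x))) - (sig_figs - 1)
    half_step = 0.5 * 10 ** exponent
    rounded = round(x, -exponent)
    return rounded - half_step, rounded + half_step

x, u = 136.52480, 2.0
for k in range(1, 5):
    lo, hi = rounding_interval(x, k)
    print(f"{k} sig figs: [{lo}, {hi}), width {hi - lo} vs full uncertainty band {2 * u}")
```

With 4 significant figures the rounding interval [136.45, 136.55) is only a sliver of the ± 2 band, which matches the point that little information is lost by writing 136.5 ± 2.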

I wonder if there is a general rule? Though this question appears in the homework, I cannot find any reference to "estimated uncertainty" in the examples or text.

I am a math and physics professor, so I can probably get hold of the answer key next week. I'll be curious what the book's answer is, but I wonder if it is wise to mix the concepts of rounding with the idea of "estimated uncertainty," or whether it should be treated so nonchalantly, like this question actually has a multiple choice answer.

Last edited: Aug 28, 2011
2. Aug 30, 2011

### Staff: Mentor

Re: Question on "estimated uncertainty" and significant digits

I can't contribute anything to your interesting musings. But I would hope that estimated uncertainty does not relate to confidence intervals, because in practice that's not how it is used. If I use a rule (aka, a ruler) to measure a distance, and I determine that distance to be 14.4 +/- 0.2 cm, I don't intend that anyone believe that distance could be 15 or even 20 cm, however unlikely.

Last edited: Aug 30, 2011
3. Aug 30, 2011

### JDoolin

Re: Question on "estimated uncertainty" and significant digits

Thank you for replying. That's a good point. How are you finding your estimated error? Is it really estimated (just taking a guess at how far off your measurement is likely to be) or is it calculated from the sample-standard-deviation of several measurements?

And the basic question: when you give your +/- 0.2 cm, do you mean you are 68% certain that the value is within that range? 95% certain? 99% certain? Or even 100% certain?

Some of this stuff below I learned from statistics class rather than physics class. It seems like stats and physics are sometimes speaking different languages, but I'm trying to make the two match up.

If you use 68%, 95%, or 99%, that is called your confidence level (CL), and from it you can determine $$\alpha = 1 - CL$$ which is called the significance level; $$\alpha/2$$ is the chance of your estimated range being too low, and $$\alpha/2$$ is the chance of it being too high.
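As a quick numeric check of that relationship (my own illustrative snippet, using only the standard library), here are the significance level, the two tail areas, and the corresponding normal critical value z* for each common confidence level:

```python
from statistics import NormalDist  # standard library, Python 3.8+

for cl in (0.6826, 0.95, 0.99):
    alpha = 1 - cl
    # z* cuts off alpha/2 in each tail of the standard normal distribution
    z_star = NormalDist().inv_cdf(1 - alpha / 2)
    print(f"CL = {cl:.2%}: alpha = {alpha:.4f}, each tail = {alpha / 2:.4f}, z* = {z_star:.3f}")
```

Note that for CL = 68.26% the upper-tail cutoff is at area 0.8413, the same number that shows up below in the invT discussion, and z* comes out to 1 standard deviation.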

If you're calculating the sample standard deviation of your data and calling that the uncertainty, that's about 68% confidence. I think that if you're measuring a quantity that should be the same every time, you can take $s_x /\sqrt n$ as your uncertainty (because each trial is triangulating on the actual result), but if you're measuring a quantity that is probably different every time, you should just take $s_x$ as your uncertainty (because you want to account for real variance in the quantity). One physics lab book I had called the quantity
$s_x /\sqrt n = \mathrm{"standard error"}$​
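For concreteness, here's how those two quantities come out of a set of repeated measurements (the readings are made up purely for illustration):

```python
from math import sqrt
from statistics import mean, stdev

readings = [3.9, 4.1, 4.0, 3.8, 4.2]   # hypothetical repeated measurements, in cm
s_x = stdev(readings)                  # sample standard deviation (n - 1 in the denominator)
std_err = s_x / sqrt(len(readings))    # "standard error" of the mean
print(f"mean = {mean(readings):.2f}, s_x = {s_x:.3f}, s_x/sqrt(n) = {std_err:.3f}")
```

Which of the two you quote depends on the distinction above: s_x if the quantity itself varies from trial to trial, s_x/√n if every trial is re-measuring the same fixed value.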
I'm not sure any physics texts do it, but to make things more in line with statistics, I'm thinking of suggesting to my physics students that they take their sample standard deviation and multiply it by invT(0.8413, n-1). (invT is a function on the TI-83/84/89 calculators that inverts the cumulative Student t distribution.)

(.8413 is the area under the normal curve to the left of 1 standard deviation, and n is the number of trials in the test. )

I'm not exactly sure what the Student-t factor does, but I know it somehow accounts for having only a small number of data points, when the sampling distribution is not quite normal.

For instance with 2 data points, invT(.8413,1)=1.8367.

(and by the way, with two data-points, ALL the data is in the extreme end of the tails, and no distribution can possibly be less normal than that.)
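That invT value for n = 2 can actually be checked by hand: with one degree of freedom, Student's t reduces to the Cauchy distribution, whose quantile function has a closed form. A sketch in Python (no calculator or stats library needed):

```python
from math import pi, tan

def t_quantile_df1(p):
    """Quantile of Student's t with 1 degree of freedom (the Cauchy distribution)."""
    return tan(pi * (p - 0.5))

# Reproduces the calculator's invT(0.8413, 1) quoted above
print(round(t_quantile_df1(0.8413), 4))  # → 1.8367
```

So with only two data points, the ± band would be stretched by a factor of about 1.84 relative to the raw sample standard deviation.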

There's also some gobbledy-gook in the stats book I was using last semester warning people how "not" to interpret a confidence interval. Sort of mysterious, though: it gives examples of ways not to use it without explaining precisely why the examples are wrong, and even encourages the student to come up with more creative ways to misuse the confidence interval. I suppose I'm running the danger of coming up with one of those creative misuses, but it would be nice to establish the one correct interpretation rather than making a list of incorrect ones.

4. Aug 30, 2011

### JDoolin

Re: Question on "estimated uncertainty" and significant digits

After some further experience in the lab, I noticed that sometimes you might just say you read off 4.9 centimeters, but you're only certain to within a millimeter or so. That doesn't really have anything to do with standard deviation at all; it's just a judgment call. Besides that, a lot of the time there's probably some source of error in your measurement that you're not taking into account anyway.

Last edited: Aug 30, 2011
5. Sep 1, 2011

### Staff: Mentor

Re: Question on "estimated uncertainty" and significant digits

In practice, it is unusual to see a value expressed to a precision greater than its accuracy, e.g., your example of 136.5 +/- 2

So unusual, in fact, that I can recall first encountering such a value: it caused me to pause for thought before recognizing that it can, in exceptional circumstances, be perfectly sensible. The estimate of the Hubble constant can be expressed as, e.g., $$71.9\;_{-2.7}^{+2.6}$$ My editor does not allow any extended-ASCII chars, so your LaTeX generator just came in handy: http://www.codecogs.com/latex/eqneditor.php

Last edited by a moderator: May 5, 2017
6. Sep 1, 2011

### JDoolin

Re: Question on "estimated uncertainty" and significant digits

Well, let me give an example that is a little more basic.

Using "air tracks" (if you don't know what that is, it's designed to be a nearly frictionless ramp by using a layer of air under the glider. Similar in principle to air hockey) We were measuring the height, distance and time as the glider slid down the ramp.

We were measuring a height, from the low end to the high end of the track, of around 3.9 cm, maybe +/- 0.2 cm as a guesstimate of how carelessly we were making the measurement. (This is not at all similar to a standard deviation.) The measurement was made by taking the distance from the table to the bottom of the air track at each end of the path.

Well, in the end, we did some trig and found that, according to our calculation, the acceleration due to gravity was 12.5 m/s².

We wondered whether somehow the air was pushing the glider, or if there were some springy action somewhere, but we couldn't figure out where a roughly 28% error, (12.5 − 9.8)/9.8 ≈ 0.28, was coming from.

However, the next day, we tried setting up the track completely level with the table and found that there was a 1°–2° slope in the table itself! That means our measurement of the Δh of the track could be off by as much as 2 or 3 cm!

So where we originally said Δh was 3.9 +/-0.2 cm, it turned out to be maybe 6 +/- 2 cm.

In any case, the "estimated uncertainty" here is really just that; an estimate, or maybe even a wild guess. We could get some nice mathematical (standard deviation) number by using statistics, but if we're making the same mistake in every measurement, that standard deviation is going to be way smaller than our actual error.
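The effect of that systematic Δh error on the inferred g can be sketched numerically. On an incline, a = g·(Δh/L), so for a fixed measured acceleration the inferred g scales as 1/Δh, and the track length L cancels out of the rescaling. This is only an illustration under that simple model, using the numbers quoted above:

```python
# The measured acceleration is fixed, so the inferred g rescales
# by (mismeasured dh) / (corrected dh).
g_inferred = 12.5   # m/s^2, as computed with the mismeasured height
dh_old = 3.9        # cm, original height measurement
dh_corrected = 6.0  # cm, after finding the table's own slope
g_corrected = g_inferred * dh_old / dh_corrected
print(f"corrected g ≈ {g_corrected:.2f} m/s^2")
```

The corrected value lands much closer to 9.8 m/s² than 12.5 was, which is consistent with the table's slope being the dominant systematic error.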