# Error in the average of same-value measures

## Main Question or Discussion Point

My question is simple, and I'm only asking it because most places talk about more advanced problems than this one:

I've measured the radius of a sphere (a very regular one) with a micrometer of 0.01 mm resolution.

I took 3 measurements (rotating the sphere between each one), and all of them gave exactly the same value: 19.23 mm. Not only was the value the same, but the scale lines coincided perfectly on all three.

Each measurement is noted as 19.230 mm ± 0.005 mm. However, when averaging the three values (which gives the same value as the result), what should the error of the average be?

If I sum up the three measurements (propagating the error in the sum) and then divide the result by 3, I get an error of ±0.002886751 mm (±0.003 mm when rounded to a single significant figure).
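That propagation can be checked in a few lines (a sketch; the 0.005 mm figure is half the micrometer's 0.01 mm resolution):

```python
import math

sigma = 0.005  # mm, per-measurement reading error (half the resolution)

# Error of the sum of three measurements, added in quadrature,
# then divided by 3 along with the sum itself.
sigma_sum = math.sqrt(3 * sigma ** 2)
sigma_mean = sigma_sum / 3

print(round(sigma_mean, 9))  # 0.002886751
```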

Is this correct?

Thanks!

I think you are correct, since the ±0.003 mm you computed is honestly extremely close to your per-measurement error of ±0.005 mm anyway.

jbriggs444 (Homework Helper)
> If I sum up the three measurements (propagating the error in the sum) and then divide the result by 3, I get an error of ±0.002886751 mm (±0.003 mm when rounded to a single significant figure).
I am no expert, but I disagree. If you were adding up three figures with independent random errors then the relative error in the mean would (roughly speaking) be estimated by the relative error per measurement divided by the square root of three. But we are talking about quantization errors here. Are those independent? If the object is truly a sphere and the measurements are truly accurate to the precision of the instrument then the answer is no. They are not independent random errors. An error estimate that treats them as if they were is incorrect.
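The distinction can be illustrated with a quick simulation (a sketch; the true radius here is an arbitrary assumed value): repeated readings of the same object through a quantizing instrument are identical, so averaging them cannot shrink the error the way it does for independent random noise.

```python
import random

def quantize(x, step=0.01):
    """Round a reading to the instrument's 0.01 mm resolution."""
    return round(x / step) * step

true_r = 19.2312  # mm, arbitrary assumed true value

# Quantization error: every repeat of the same object reads the same,
# so the sample standard deviation is 0 and averaging gains nothing.
readings = [quantize(true_r) for _ in range(1000)]
print(len(set(readings)))  # 1 -- all readings identical

# Independent random errors, by contrast, do average down.
random.seed(0)
noisy = [true_r + random.gauss(0, 0.005) for _ in range(1000)]
mean_err = abs(sum(noisy) / len(noisy) - true_r)
print(mean_err < 0.005)  # the mean lands well within one sigma
```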

Staff Emeritus
> with a micrometer of 0.01 mm resolution.
Then you are not going to get better than 0.01 mm by measuring multiple times. The only thing multiple measurements will help constrain is the average diameter for a sphere with measurable non-sphericity.

Dale (Mentor)
> My question is simple, and I'm only asking it because most places talk about more advanced problems than this one:
This is what I would call a deceptively simple question! It is simple, but it is easy to get it wrong. I would say that the definitive reference is the NIST guideline on uncertainty:

https://www.nist.gov/sites/default/files/documents/2017/05/09/tn1297s.pdf

However, if this is for a class and your teacher uses a different method then by all means learn your teacher’s method.

> Each measurement is noted as 19.230 mm ± 0.005 mm. However, when averaging the three values (which gives the same value as the result), what should the error of the average be?
So, if you read the NIST reference then you see that you will have a type A uncertainty that you obtain through statistical methods and a type B uncertainty obtained through knowledge of the device characteristics.

The type A uncertainty is given by the sample standard deviation, which is 0. So the type A standard uncertainty is 0.

For the type B uncertainty your +-0.005 value means that your measuring device will report 19.23 for any value from 19.225 to 19.235. So the appropriate model is a uniform distribution over that range. So according to the NIST reference that gives a type B standard uncertainty of 0.005/sqrt(3). Note, the factor of sqrt(3) has nothing to do with the three samples you took, it is just due to the uniform distribution of your type B uncertainty.
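As a sketch of where that sqrt(3) comes from: the standard deviation of a uniform distribution of half-width a is a/sqrt(3), which a quick Monte Carlo check reproduces (the sample size below is chosen arbitrarily):

```python
import math
import random

a = 0.005  # mm, half-width of the quantization interval
u_b = a / math.sqrt(3)  # type B standard uncertainty, ~0.00289 mm

# Sample standard deviation of draws from Uniform(-a, a) should agree.
random.seed(1)
draws = [random.uniform(-a, a) for _ in range(200_000)]
mean = sum(draws) / len(draws)
sd = math.sqrt(sum((x - mean) ** 2 for x in draws) / (len(draws) - 1))

print(abs(sd - u_b) / u_b < 0.01)  # within 1% of a/sqrt(3)
```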

To get your total estimated standard uncertainty you add the type A and type B uncertainties using the square root of the sum of squares. Since type A is 0, then this is just equal to the type B.

> To get your total estimated standard uncertainty you add the type A and type B uncertainties using the square root of the sum of squares. Since type A is 0, then this is just equal to the type B.
Interestingly, in my case, if I sum all the measurements (propagating the error into the sum) and then divide everything by 3 (both the value and the error), I arrive at exactly the same value of 0.005/sqrt(3), because done that way the error is sqrt(3·0.005²)/3 = 0.005/sqrt(3).

Just a coincidence, though: if I tried the same method with a different number of measurements, it wouldn't give the same result.
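The coincidence is easy to check numerically (a sketch): sum-then-divide gives sigma/sqrt(n), which matches the n-independent type B value of sigma/sqrt(3) only when n = 3.

```python
import math

sigma = 0.005  # mm, per-reading half-width
type_b = sigma / math.sqrt(3)  # NIST type B value; independent of n

for n in (2, 3, 5):
    propagated = math.sqrt(n * sigma ** 2) / n  # = sigma / sqrt(n)
    print(n, math.isclose(propagated, type_b))
# 2 False
# 3 True
# 5 False
```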

Dale (Mentor)
> Just a coincidence, though: if I tried the same method with a different number of measurements, it wouldn't give the same result.
Yes, exactly!