How to work out the uncertainty of some measurements?

1. Nov 19, 2008

a66as

1. The problem statement, all variables and given/known data
I am trying to work out the uncertainty of some measurements, but I don't know how. I tried finding some info online but couldn't. These are my 10 measurements:

9.13mm
9.12mm
9.13mm
9.12mm
9.12mm
9.13mm
9.13mm
9.12mm
9.13mm
9.12mm

2. Relevant equations

3. The attempt at a solution

I have worked out that the average is 9.125 mm, but I don't have a clue how to work out the uncertainty. Any help would be appreciated. You don't have to give the answer, just tell me what I need to calculate to work it out.

2. Nov 19, 2008

2ltben

The uncertainty is the standard deviation. Subtract the mean from each value, square each result, add them together, divide by n − 1 (9 in this case), and take the square root of that.
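The steps above can be sketched in a few lines of Python, using the ten measurements from the first post:

```python
import statistics

# The ten measurements from the first post, in mm
data = [9.13, 9.12, 9.13, 9.12, 9.12, 9.13, 9.13, 9.12, 9.13, 9.12]

mean = statistics.mean(data)
# Sample standard deviation: squared deviations from the mean are summed,
# divided by n - 1, then square-rooted (exactly the recipe above)
stdev = statistics.stdev(data)

print(f"mean  = {mean:.4f} mm")     # 9.1250 mm
print(f"stdev = {stdev:.4f} mm")    # about 0.0053 mm
```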

It's generally written with a plus-or-minus sign, but that doesn't mean the true value is guaranteed to fall within the range created by that value; that's a different calculation involving confidence intervals.
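As a rough illustration of the confidence-interval point (assuming the readings are independent and roughly normally distributed; the value 2.262 is the 97.5 % quantile of the t distribution with 9 degrees of freedom):

```python
import math
import statistics

data = [9.13, 9.12, 9.13, 9.12, 9.12, 9.13, 9.13, 9.12, 9.13, 9.12]

s = statistics.stdev(data)            # sample standard deviation
sem = s / math.sqrt(len(data))        # standard error of the mean
half_width = 2.262 * sem              # 95% confidence half-width (t, 9 dof)

print(f"mean = {statistics.mean(data):.4f} +/- {half_width:.4f} mm (95% CI)")
```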

3. Nov 19, 2008

dlgoff

When you use an instrument to make measurements, your uncertainty depends on its accuracy.
http://en.wikipedia.org/wiki/Measurement_uncertainty

Last edited by a moderator: May 3, 2017
4. Nov 19, 2008

a66as

After doing some research online I found this formula:

[image of a formula, not shown; from the calculation below it appears to be (maximum − minimum) ÷ number of measurements]

So if I did 9.13 − 9.12 and divided it by 10, I would get 1×10⁻³, i.e. 0.001. Is this the correct answer?
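For what it's worth, that arithmetic can be checked in a couple of lines; this just reproduces the (max − min)/n calculation implied above, whatever the original image showed:

```python
data = [9.13, 9.12, 9.13, 9.12, 9.12, 9.13, 9.13, 9.12, 9.13, 9.12]

# Range of the readings divided by the number of readings
value = (max(data) - min(data)) / len(data)
print(value)   # about 0.001, i.e. 1 x 10^-3
```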

5. Nov 19, 2008

andrewm

Typically, scientists use the standard deviation of a set of measurements to quantify the uncertainty. To give a meaningful uncertainty in practice, however, a scientist should also include device precision and reading error.

6. Nov 20, 2008

fluidistic

Exactly. If some formula gives you an uncertainty smaller than the accuracy of your instrument, then the uncertainty is simply the accuracy of your instrument.
Using such a formula you can get a very small uncertainty if you have a lot of values. Say you measured a building with a metre rule and got an uncertainty of 1 nanometre: you would realise it's meaningless to say the uncertainty is smaller than the accuracy of your instrument.
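This rule of thumb can be written down directly. Here the 0.005 mm instrument figure is only an assumed example (half the 0.01 mm least count of the readings above), not something stated in the thread:

```python
import statistics

data = [9.13, 9.12, 9.13, 9.12, 9.12, 9.13, 9.13, 9.12, 9.13, 9.12]

INSTRUMENT_UNCERTAINTY = 0.005   # assumed: half the 0.01 mm resolution

statistical = statistics.stdev(data)   # about 0.0053 mm for this data
# Never quote an uncertainty smaller than what the instrument can resolve
uncertainty = max(statistical, INSTRUMENT_UNCERTAINTY)
print(f"quoted uncertainty = {uncertainty:.4f} mm")
```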

7. Nov 20, 2008

naresh

Note the term "random uncertainty". It refers to multiple measurements where the data and/or the measurement changed because of randomness, and it assumes that your measurement instrument is infinitely accurate, as others have pointed out. Your measurements have only two significant digits after the decimal point, so an uncertainty of 0.001 does not really make sense. It would help to know what your measurements are of and how they were made.

Assuming a perfect (theoretical) measurement and a large number of readings, the standard deviation mentioned by 2ltben is a better measure of the random uncertainty than the formula you have.