# Physics lab work - calculating % errors

I have a table of these 10 readings:

149.6
150.9
149.7
147.9
147.7
152.4
149.8
152.2
153.2
148.9

They were taken on an optical bench where the smallest unit of measurement was 1 mm. I'm trying to calculate the ± error but I'm not sure how to. I've come across 3 methods so far:

1. Standard deviation

2. $\frac{\max - \min}{\text{average}} \times 100 \times 0.5$

3. $\frac{\max - \min}{\text{no. of values}}$

Which should I use? Also, since the smallest unit of measurement was 1mm, should I round each of my readings to the nearest mm?

gneill
Mentor
You probably want the Standard Error in the Mean. When all $N$ samples have the same error $\Delta x$, the standard error would be $\Delta x / \sqrt{N}$.

Take a look here at the section on the Standard Error in the Mean.
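In case it helps, here is a quick sketch (standard-library Python only, variable names my own) of computing the sample standard deviation and the standard error in the mean for the readings posted above:

```python
import math
import statistics

# The ten optical-bench readings posted above (in mm)
readings = [149.6, 150.9, 149.7, 147.9, 147.7,
            152.4, 149.8, 152.2, 153.2, 148.9]

n = len(readings)
mean = statistics.mean(readings)   # arithmetic mean
s = statistics.stdev(readings)     # sample standard deviation (divides by n - 1)
sem = s / math.sqrt(n)             # standard error in the mean

print(f"mean = {mean:.2f} mm")     # 150.23 mm
print(f"s    = {s:.2f} mm")        # 1.90 mm
print(f"SEM  = {sem:.2f} mm")      # 0.60 mm
```

On that basis one reasonable way to quote the result would be 150.2 ± 0.6 mm.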