arierreF

Problem: A student measures a time interval 100 times.

Method 1:

He calculates the mean [itex]\bar{X}[/itex] and the standard deviation [itex]\sigma[/itex] of all 100 measurements.

Method 2:

Now the student divides the 100 measurements into 10 groups of 10.

He calculates the mean of each group, and then the standard deviation of those 10 group means.
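The two procedures can be sketched on simulated data (a minimal sketch, assuming normally distributed measurements; the true mean and spread here are made-up numbers, not from the actual experiment):

```python
import numpy as np

# 100 simulated time measurements (made-up true mean 10.0, spread 0.5)
rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=0.5, size=100)

# Method 1: mean and standard deviation of all 100 measurements
mean_all = x.mean()
sigma_all = x.std(ddof=1)

# Method 2: split into 10 groups of 10, take each group's mean,
# then the standard deviation of those 10 group means
group_means = x.reshape(10, 10).mean(axis=1)
sigma_groups = group_means.std(ddof=1)

print(sigma_all, sigma_groups)  # sigma_groups comes out smaller
```

Running this reproduces the observation in the attempt below: the spread of the group means is noticeably smaller than the spread of the raw measurements.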

Question:

Why is method 2 more precise than method 1?

Attempt:

In my experimental results, I observe that method 2 gives a smaller standard deviation. So I can conclude that method 2 is more precise, because the uncertainty is smaller.

But why does this happen? If we divided the measurements into four groups instead, the standard deviation would be even smaller. But why?
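To see the effect numerically, here is a minimal sketch (again assuming normally distributed measurements with a made-up spread [itex]\sigma[/itex]): the standard deviation of the group means shrinks roughly like [itex]\sigma/\sqrt{n}[/itex], where n is the group size, which is why fewer, larger groups give an even smaller number.

```python
import numpy as np

# Made-up example: 100 measurements with true spread sigma = 0.5
rng = np.random.default_rng(1)
sigma = 0.5
x = rng.normal(10.0, sigma, size=100)

# Split the same data into groups of different sizes and compare the
# spread of the group means with the prediction sigma / sqrt(group size)
spread = {}
for n_groups, group_size in [(25, 4), (10, 10), (4, 25)]:
    means = x.reshape(n_groups, group_size).mean(axis=1)
    spread[group_size] = means.std(ddof=1)
    print(group_size, spread[group_size], sigma / np.sqrt(group_size))
```

The printed spreads track the [itex]\sigma/\sqrt{n}[/itex] prediction: larger groups average away more of the random error, so their means scatter less.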