# Error analysis

1. Dec 29, 2012

### arierreF

First of all, as you can see I'm new to the forum. Sorry if I'm posting in the wrong section.

Problem: A student measures a time interval 100 times.

Method 1:
He calculates the mean $X$ and the standard deviation $\sigma$ of the 100 measurements.

Method 2:
The student divides the 100 measurements into 10 groups.
He calculates the mean of each group, then the standard deviation of the 10 group means.

Question:

Why is method 2 more precise than method 1?

Attempt:

In my experimental results, I observe that method 2 gives a smaller standard deviation. So I can conclude that method 2 is more precise, because the uncertainty is smaller.

But why does this happen? If we divided the measurements into even fewer groups, say four, the standard deviation would be smaller still. But why??

2. Dec 30, 2012

### rude man

Method 2 is not more precise than method 1.

Let $\sigma_1$ be the standard deviation of the 100 individual measurements (method 1). Each group mean is an average of 10 measurements, so the standard deviation of a single group mean is

$\sigma_2 = \sigma_1/\sqrt{10}$.

The spread of the ten group means, $\sigma_3$, is just an estimate of $\sigma_2$:

$\sigma_3 \approx \sigma_2 = (1/\sqrt{10})\,\sigma_1$.

So the spread is smaller only because a mean of 10 measurements fluctuates less than a single measurement; it is the scatter of a different quantity. The uncertainty of the final result is the same either way: the standard error of the overall mean is $\sigma_1/\sqrt{100} = \sigma_1/10$ in both methods.
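The argument above is easy to check numerically. Here is a minimal sketch with synthetic timing data (the mean of 10.0 and true spread of 0.5 are arbitrary assumptions, not values from the thread):

```python
import random
import statistics

random.seed(42)

# Synthetic "timing" data: 100 measurements, true std 0.5 (assumed values).
data = [random.gauss(10.0, 0.5) for _ in range(100)]

# Method 1: standard deviation of all 100 measurements.
sigma1 = statistics.stdev(data)

# Method 2: split into 10 groups of 10, take each group's mean,
# then the standard deviation of those 10 means.
group_means = [statistics.mean(data[i:i + 10]) for i in range(0, 100, 10)]
sigma3 = statistics.stdev(group_means)

print(f"sigma1 (method 1)  = {sigma1:.3f}")
print(f"sigma3 (method 2)  = {sigma3:.3f}")
# sigma3 should come out close to sigma1/sqrt(10): the group means
# scatter less than individual measurements, but the standard error
# of the final mean is sigma1/10 in both methods.
print(f"sigma1 / sqrt(10)  = {sigma1 / 10 ** 0.5:.3f}")
```

Re-running with different seeds, σ3 hovers around σ1/√10 with the statistical fluctuation you would expect from only 10 group means.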

3. Dec 30, 2012

### arierreF

With my experimental results, using method 1 I calculated the standard deviation σ of the 100 measurements.

With method 2, I divided the 100 measurements into 10 random groups and obtained 10 group means.

The standard deviation of the 10 mean values is smaller than the standard deviation of the 100 measurements.