# Find number of trials to be done to get specified uncertainty?

1. Feb 28, 2013

### sloane729

1. The problem statement, all variables and given/known data
This has been bothering me for quite a while. I'm trying to work out how many measurements I need to make to get my uncertainty under a predetermined value. If, say, I want the fractional uncertainty $$\frac{\delta T}{T}$$ for some timed event to be at or below some value, how would I calculate the number of trials needed to reach that uncertainty?

2. Relevant equations
the standard deviation of some quantity to be measured T is
$$s = \left( \frac{1}{N-1}\sum_i (T_i - \overline{T})^2 \right)^{1/2}$$
then
$$\delta T = u = \frac{s}{\sqrt{N}}$$

3. The attempt at a solution
To calculate the uncertainty $$\delta T$$ I would first need to find the average of all the values $$T_i$$, then compute the standard deviation and divide by the square root of the number of trials, which gives $$\delta T$$. But the number of trials is exactly what I'm trying to find, and I can't know it without first taking the data, which in turn requires knowing the number of trials, and so on. It seems like a circular problem, if I'm not mistaken.
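The two formulas above can be sketched in a few lines of Python. This is a minimal illustration with hypothetical timing data (the values are made up, not from the thread): it computes the mean, the sample standard deviation $$s$$ with the $$N-1$$ correction, and the standard error of the mean $$\delta T = s/\sqrt{N}$$.

```python
import math

def standard_error(times):
    """Return (mean, s, delta_T) for a list of repeated measurements."""
    n = len(times)
    mean = sum(times) / n
    # sample standard deviation with the N-1 (Bessel) correction
    s = math.sqrt(sum((t - mean) ** 2 for t in times) / (n - 1))
    # standard error of the mean: delta_T = s / sqrt(N)
    return mean, s, s / math.sqrt(n)

trials = [2.31, 2.35, 2.29, 2.33, 2.36]  # hypothetical timings, in seconds
mean, s, dT = standard_error(trials)
```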

2. Feb 28, 2013

### rude man

If you make N measurements of T then your error reduces to δT = (1/√N) times the error you get from making just 1 measurement.

So the answer to your question depends on how small you want the final δT to be. The error will continue to drop as 1/√N from your first measurement error:

$$(\delta T)_N = \frac{(\delta T)_1}{\sqrt{N}}$$
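Solving that relation for N gives the answer to the original question directly: if a single measurement has error s and you want the error of the mean at or below some target u, you need N ≥ (s/u)². A small sketch (the function name and example numbers are mine, not from the thread):

```python
import math

def trials_needed(s_single, u_target):
    """Smallest N with s_single / sqrt(N) <= u_target."""
    return math.ceil((s_single / u_target) ** 2)

# e.g. a single-shot error of 4.0 and a target of 1.0 give N = 16
n = trials_needed(4.0, 1.0)
```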

3. Feb 28, 2013

### sloane729

so s (from above) is the error from making just a single measurement not the standard deviation of all the T_i's?

4. Feb 28, 2013

### Staff: Mentor

s is the estimated* standard deviation of a single measurement.
*as you use your own data to estimate this deviation.

If you know nothing about your measurements, then no, you cannot determine the required number of measurements in advance. If you have some way to estimate the mean and standard deviation (because you already took some data, took similar data in the past, have a theoretical prediction, or whatever), you can use the formula above with those estimates.
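One way to sketch this suggestion in code: take a small pilot run, estimate the mean and s from it, and solve $$\delta T = s/\sqrt{N}$$ for the N that meets a target fractional uncertainty. The function name and pilot data below are hypothetical, assumed for illustration only:

```python
import math

def trials_for_fractional_uncertainty(pilot, frac_target):
    """Estimate N so that (s / sqrt(N)) / mean <= frac_target,
    using pilot data to estimate the mean and s."""
    n = len(pilot)
    mean = sum(pilot) / n
    # sample standard deviation with the N-1 correction
    s = math.sqrt(sum((t - mean) ** 2 for t in pilot) / (n - 1))
    u_target = frac_target * mean          # absolute target uncertainty
    # never report fewer trials than the pilot run itself
    return max(n, math.ceil((s / u_target) ** 2))

pilot_times = [2.31, 2.35, 2.29, 2.33, 2.36]  # hypothetical pilot run, seconds
n_required = trials_for_fractional_uncertainty(pilot_times, 0.001)
```

Note the estimate is only as good as the pilot data: a pilot s that is too small will understate the required N, so it is safest to treat the result as a lower bound.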

5. Feb 28, 2013

### sloane729

Thanks for the help. I have one final question: $$(\delta T)_1$$ is the random error of a single measurement, but what if I wanted to include systematic error? Would I need to square both errors and then take the square root of the sum, since the above equations assume only statistical fluctuations of T?

6. Mar 1, 2013

### Staff: Mentor

Systematic errors that are the same for all data points? In that case, yes, the method you described (adding in quadrature) works both for the uncertainty of a single point and for the uncertainty of the mean.