Find number of trials to be done to get specified uncertainty?


Homework Help Overview

The discussion revolves around determining the number of trials needed to achieve a specified uncertainty in measurements of a timed event. The original poster is exploring how to calculate the number of measurements required to ensure that the fractional uncertainty remains below a predetermined threshold.

Discussion Character

  • Exploratory, Assumption checking, Conceptual clarification

Approaches and Questions Raised

  • Participants discuss the relationship between the number of measurements and the resulting uncertainty, with some questioning the definitions of standard deviation and error in the context of single versus multiple measurements. Others explore the implications of including systematic errors alongside random errors.

Discussion Status

The conversation is ongoing, with participants providing insights into how the uncertainty decreases with more measurements and discussing the conditions under which the number of trials can be estimated. There is recognition of the complexity involved in determining the required number of trials without prior data.

Contextual Notes

Participants note that without prior knowledge of measurements or estimates of mean and standard deviation, it is challenging to determine the necessary number of trials. The discussion also touches on the distinction between random and systematic errors in the context of uncertainty calculations.

sloane729

Homework Statement


This has been bothering me for quite a while. I'm trying to work out how many measurements I will need to make to get my uncertainty under a predetermined value. If I want the fractional uncertainty \frac{\delta T}{T} to be equal to or under some value for some timed event, how would I calculate the number of trials I will need to make to get that uncertainty?

Homework Equations


the standard deviation of some quantity to be measured T is
s = \left( \frac{1}{N-1}\sum_i (T_i - \overline{T})^2 \right)^{1/2}
then
\delta T = u = \frac{s}{\sqrt{N}}
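As a minimal numeric sketch of these two formulas (the timing data here is hypothetical, just to make the computation concrete):

```python
import math

def sample_stats(values):
    """Return (s, delta_T): the estimated standard deviation s
    (with the N-1 divisor) and the standard error of the mean,
    delta_T = s / sqrt(N)."""
    n = len(values)
    mean = sum(values) / n
    s = math.sqrt(sum((t - mean) ** 2 for t in values) / (n - 1))
    return s, s / math.sqrt(n)

# Hypothetical timing data (seconds):
times = [2.31, 2.29, 2.35, 2.28, 2.33]
s, delta_T = sample_stats(times)
```

Note that s estimates the spread of the individual T_i, while delta_T is the smaller uncertainty on their mean.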

The Attempt at a Solution


To calculate the uncertainty \delta T I would first need to find the average of all values of T_i, then find the standard deviation divided by the square root of the number of trials, which is equal to \delta T. But the number of trials is what I need to find, and I can't know it without first finding the average value, which is not possible because I need the number of trials, etc. It seems like a circular problem if I'm not mistaken.
 
If you make N measurements of T then your error reduces to δT = (1/√N) times the error you get from making just 1 measurement.

So the answer to your question depends on how small you want the final δT to be. The error will continue to drop as 1/√N from your first measurement error:

(δT)N = (δT)1/√N.
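Inverting this relation gives the required N directly: solving s₁/√N ≤ target for N and rounding up. A small sketch (the error values are made up for illustration):

```python
import math

def trials_needed(s1, target):
    """Smallest N such that s1 / sqrt(N) <= target, where s1 is the
    (estimated) error of a single measurement."""
    return math.ceil((s1 / target) ** 2)

# e.g. single-measurement error 0.05 s, want delta_T <= 0.01 s:
n = trials_needed(0.05, 0.01)  # (0.05 / 0.01)**2 = 25 trials
```

The quadratic dependence is the practical catch: halving the target uncertainty quadruples the number of trials.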
 
so s (from above) is the error from making just a single measurement, not the standard deviation of all the T_i's?
 
s is the estimated* standard deviation of a single measurement.
*as you use your own data to estimate this deviation.

But the number of trials is what I need to find but I can't know it without first finding the average value which is not possible because I need the number of trials etc. It seems like a round about problem if I'm not mistaken
If you know nothing about your measurements, you cannot determine the required number of measurements, right. If you have some way to estimate the mean and standard deviation (because you already took some data, or took similar data in the past, or have some theory prediction or whatever), you can use this formula with those estimates for mean and standard deviation.
 
thanks for the help. I have one final question: (δT)1 is the random error of a single measurement, but what if I wanted to include systematic error? Would I need to take the square of both errors and then take the square root of the sum, since the above equations assume only statistical fluctuations of T?
 
Systematic errors which are the same for all data points? In that case, the method you described works both for the uncertainty of a single point and the uncertainty of the mean.
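A sketch of the quadrature combination described above (the numbers are hypothetical); note that only the random part shrinks with more trials, while the systematic part sets a floor:

```python
import math

def total_uncertainty(random_err, systematic_err):
    """Combine independent random and systematic errors in quadrature."""
    return math.sqrt(random_err ** 2 + systematic_err ** 2)

# random error of the mean after N trials, plus a fixed systematic error:
s1, n, sys_err = 0.05, 25, 0.02
delta_T = total_uncertainty(s1 / math.sqrt(n), sys_err)
```

Because the systematic term does not decrease with N, no number of trials can push the total uncertainty below it.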
 