
Homework Help: Find number of trials to be done to get specified uncertainty?

  1. Feb 28, 2013 #1
    1. The problem statement, all variables and given/known data
    This has been bothering me for quite a while. I'm trying to work out how many measurements I need to make to get my uncertainty under a predetermined value. If, say, I want the fractional uncertainty [tex]\frac{\delta T}{T}[/tex] of some timed event to be at or under some value, how would I calculate the number of trials needed to reach that uncertainty?

    2. Relevant equations
    The standard deviation of some measured quantity T is
    [tex] s = \left( \frac{1}{N-1}\sum_i (T_i - \bar{T})^2 \right)^{1/2} [/tex]
    [tex] \delta T = u = \frac{s}{\sqrt{N}}[/tex]

    3. The attempt at a solution
    To calculate the uncertainty [tex]\delta T[/tex] I would first need to find the average of all values [tex]T_i[/tex], then divide the standard deviation by the square root of the number of trials, which gives [tex]\delta T[/tex]. But the number of trials is exactly what I need to find, and I can't find it without first knowing the average, which in turn requires the number of trials, and so on. It seems like a circular problem, if I'm not mistaken.
  3. Feb 28, 2013 #2

    rude man

    Homework Helper
    Gold Member

    If you make N measurements of T then your error reduces to δT = (1/√N) times the error you get from making just 1 measurement.

    So the answer to your question depends on how small you want the final δT to be. The error will continue to drop as 1/√N from your first measurement error:

    [tex](\delta T)_N = \frac{(\delta T)_1}{\sqrt{N}}[/tex]
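    That relation can be inverted to give the number of trials needed for a target error: N = ((δT)₁/target)². A minimal sketch of the arithmetic (the function name and numbers below are just illustrative, not from the thread):

```python
import math

def trials_needed(single_error, target_error):
    """Solve (dT)_N = (dT)_1 / sqrt(N) for N, rounding up to a whole trial."""
    return math.ceil((single_error / target_error) ** 2)

# A single timing with 0.2 s error, target error 0.05 s:
# (0.2 / 0.05)^2 = 16 trials.
print(trials_needed(0.2, 0.05))
```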
  4. Feb 28, 2013 #3
    So s (from above) is the error from making just a single measurement, not the standard deviation of all the T_i's?
  5. Feb 28, 2013 #4



    Staff: Mentor

    s is the estimated* standard deviation of a single measurement.
    *as you use your own data to estimate this deviation.

    If you know nothing about your measurements, then no, you cannot determine the required number of measurements in advance. If you have some way to estimate the mean and standard deviation (because you already took some data, took similar data in the past, have a theoretical prediction, or whatever), you can use the formula above with those estimates for the mean and standard deviation.
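    As a sketch of that approach, using a small hypothetical pilot sample to estimate the mean and standard deviation (all names and numbers below are made up for illustration):

```python
import math
import statistics

def trials_for_fractional_uncertainty(pilot, target_frac):
    """Estimate the number of trials N so that the fractional uncertainty
    (s / sqrt(N)) / mean is at or below target_frac, where mean and s
    are estimated from a small pilot sample."""
    mean = statistics.mean(pilot)
    s = statistics.stdev(pilot)  # sample std dev, N-1 in the denominator
    return math.ceil((s / (target_frac * mean)) ** 2)

# Hypothetical pilot timings of the event, in seconds:
pilot_times = [2.05, 1.98, 2.10, 2.02, 1.95]
print(trials_for_fractional_uncertainty(pilot_times, 0.01))
```

    The estimate is only as good as the pilot sample, so in practice one would take the pilot measurements, compute N, and then re-check the uncertainty once the full data set is in.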
  6. Feb 28, 2013 #5
    Thanks for the help. I have one final question: (δT)1 is the random error of a single measurement, but what if I wanted to include systematic error? Would I need to square both errors and then take the square root of the sum, since the equations above assume only statistical fluctuations of T?
  7. Mar 1, 2013 #6



    Staff: Mentor

    Systematic errors which are the same for all data points? In that case, the method you described works both for the uncertainty of a single point and the uncertainty of the mean.
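    A quick sketch of the quadrature combination discussed above, with hypothetical numbers; note the systematic term is constant across data points, so it does not shrink with √N:

```python
import math

def total_uncertainty(random_err_single, n_trials, systematic_err):
    """Combine the averaged-down random error with a constant systematic
    error in quadrature: sqrt(random^2 + systematic^2)."""
    random_err = random_err_single / math.sqrt(n_trials)  # shrinks as 1/sqrt(N)
    return math.sqrt(random_err**2 + systematic_err**2)   # systematic does not

# e.g. 0.3 s random error per trial, 9 trials, 0.04 s systematic error:
print(total_uncertainty(0.3, 9, 0.04))
```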