
Need help quick with error weirdness

  1. Nov 20, 2005 #1
    Sorry, wrong forum I think.

    Please delete.
     
    Last edited: Nov 20, 2005
  3. Nov 20, 2005 #2
    Right, I'm doing a physics project and I get a set of results.

    Say I'm measuring height and the error is ±1 cm (easy number).

    Now say I repeat this and get 10 results. To get the mean I have to add them all together, which according to the error rules means I have to add the errors together, so I have an error of ±10 cm. Then I divide by 10 to get the mean, and because 10 doesn't have an error, nothing happens to the error.

    This means that using the mean of 10 results gives me a greater error than using just one,
    which seems to go against the whole point of using repeats.

    Is there a different rule I should use when working out a mean?

    I've looked in my textbook and scanned the internet but can't find anything. Thank you for any help.
     
  4. Nov 20, 2005 #3

    daniel_i_l

    Gold Member

    Usually when you add all of the ±1 errors together you don't get 10 or −10; rather, some of the +1s and −1s cancel out, so you get a number smaller than 10 and bigger than −10. Divide that number by 10 and you get a number less than 1 and bigger than −1. The more repeats you use, the better the chance that the total will be very close to 0, so the error will be very small.
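    Here is a minimal numerical sketch of that cancellation (in Python, assuming each reading is off by a random amount between −1 cm and +1 cm, drawn uniformly):

[code]
import random

# Give each of N readings a random error between -1 cm and +1 cm
# (assumed uniform reading error) and see how much error is left
# in the mean, averaged over many simulated experiments.
def typical_error_of_mean(n_readings, n_trials=20_000):
    total = 0.0
    for _ in range(n_trials):
        errors = [random.uniform(-1.0, 1.0) for _ in range(n_readings)]
        total += abs(sum(errors)) / n_readings
    return total / n_trials

for n in (1, 10, 100):
    print(f"{n:3d} readings -> typical error in the mean: {typical_error_of_mean(n):.3f} cm")
# The leftover error shrinks roughly like 1 over the square root of N.
[/code]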
     
  5. Nov 20, 2005 #4
    All of the errors are the same because they're rounding errors (say it's impossible to read off closer than 1 cm).

    What you said makes perfect sense, but is there a mathematical way to calculate this depending on the number of samples?

    I found this, but I'm not sure if it applies. Any ideas? http://www.ruf.rice.edu/~lane/hyperstat/A103735.html
     
  6. Nov 20, 2005 #5
    There are many ways to generate an estimation of error. First, a plug.

    If you have an interest in experimental physics, or are an aspiring theoretician who wants to understand the voodoo that goes on in a laboratory, you should pick up a copy of "An Introduction to Error Analysis" by Taylor. It's one of the friendliest, best-written, most concise and accessible books I've ever seen. It's written in the style of Div Grad Curl, or a David Griffiths textbook.

    Let's agree on some terminology. When you report a measurement, you should always report it as:

    [tex]
    x = x_{\hbox{\small best}} \pm \Delta x
    [/tex]

    where [itex]x_{\hbox{\small best}}[/itex] is your best estimate for the value of x (it would almost always be the mean of your measurements), and [itex]\Delta x[/itex] gives an estimate of the error in your best guess. There are many, many ways of calculating error. However, for repeated independent trials, you should be using the standard deviation of the mean (SDOM) for [itex]\Delta x[/itex].

    What you'll find is that by using SDOM, the more measurements you make, the smaller the SDOM will be. If it doesn't get smaller, you're most likely doing something very wrong.
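    For reference, the SDOM is just the standard deviation of your N individual measurements divided by the square root of N:

    [tex]
    \Delta x = \frac{\sigma_x}{\sqrt{N}}
    [/tex]

    so quadrupling the number of measurements halves the uncertainty in the mean (assuming the spread of the individual readings stays about the same).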

    You should be aware that there are two very different things here:

    1. generating an uncertainty for a single quantity due to repeated measurements.

    For example, you take 10 measurements of the length of a table. That's what you're doing. One good way of generating an error estimate would be the SDOM.

    2. generating an uncertainty for a function of several measurements.

    For example, the table is much longer than your ruler, and so you need to take 3 sets of measurements. Each set consists of 10 "trials". Hmmm... lame example, but you get the idea. A better example would be measuring the length, width, and height of a box to calculate its volume. The rule you stated in your question is appropriate for generating uncertainty for THIS kind of measurement. But you should also know, it's a very bad rule. To see why, consider:

    [tex]
    f(x,y) = x_{\hbox{\small best}} + y_{\hbox{\small best}} \pm (\Delta x + \Delta y)
    [/tex]

    This assumes the WORST CASE SCENARIO. It assumes that you made a measurement of x, and got the worst possible measurement. But even worse than that, you made a measurement of y and also got the worst possible measurement. But even worse still, your errors in x and y were in "the same direction". There's a good chance that x might have been an overestimate, and y an underestimate, producing cancelling errors. But by adding [itex]\Delta x + \Delta y[/itex], you've assumed that both were complete overestimates. Or both were complete underestimates, with no cancellation of error.

    For this reason, when computing the sum or difference of measured quantities, you should add the error estimates in quadrature, not direct summation.
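    In symbols, for independent errors that means

    [tex]
    \Delta f = \sqrt{(\Delta x)^2 + (\Delta y)^2}
    [/tex]

    which is never larger than the worst-case [itex]\Delta x + \Delta y[/itex].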

    I've given you more than you've asked for. Hope it didn't confuse. Do look into the Taylor book. It's really, really good.
     
    Last edited by a moderator: Nov 20, 2005
  7. Nov 20, 2005 #6
    Let's call the ith measurement [itex]x_i[/itex]; then you're calculating

    [tex] \bar{x} = \frac{\sum _{i=1} ^N x_i}{N} [/tex]

    The uncertainty is then

    [tex] u = \frac{\sqrt{\sum _{i=1} ^N (u_i)^2}}{N} [/tex]

    where [itex]u_i[/itex] is the uncertainty in the ith measurement. This is actually lower than your original uncertainty.
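    Plugging in the numbers from the first post (ten measurements, each read to ±1 cm), this gives

    [tex] u = \frac{\sqrt{10 \times (1\ \mathrm{cm})^2}}{10} = \frac{1\ \mathrm{cm}}{\sqrt{10}} \approx 0.32\ \mathrm{cm} [/tex]

    so the mean is about three times more precise than any single reading.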
     
  8. Nov 20, 2005 #7
    Right, I think I understand.

    Rather than saying I think there will be an error of ±1,

    I use this: http://www.ruf.rice.edu/~lane/hyperstat/A103735.html
    I know it's not the standard deviation of the mean, but it seems to be what I'm after,

    which will give me an estimate of the error of a mean from my 3 measurements of the length of the table,

    and then I can use the very 'bad' rule to find the worst-case scenario for things calculated from these means.
     
    Last edited: Nov 20, 2005
  9. Nov 20, 2005 #8
    You got it! :)


    Maybe a very nice way of thinking of this is that you have two things here (just thinking out loud in case I ever have to teach this stuff):

    1. The uncertainty in your measurement.

    2. The uncertainty in the thing you're trying to measure.

    The uncertainty in your measurement is ALWAYS ±1 cm. That's why, even if you take a million billion measurements, the uncertainty in each individual reading will still be ±1 cm. If you can only read to the nearest half tick mark on the first measurement, you can only read to the nearest half tick mark on your billionth measurement. Repeated measurements won't change that.

    The uncertainty in what you're trying to measure, however, is different. It's related to, but different from, the uncertainty in the measurement. Your estimate becomes better as more and more trials cluster around the mean.

    The difference between error in a measurement and error in the quantity you're trying to measure is subtle, but important.
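    A quick numerical sketch of this distinction (made-up numbers: a true height of 150 cm, readings rounded to the nearest centimetre, and an assumed 0.8 cm of random scatter per reading):

[code]
import random
import statistics

TRUE_HEIGHT = 150.0   # hypothetical true value, in cm
SCATTER = 0.8         # assumed random scatter of each reading, in cm

def reading():
    # Each reading is scattered around the true value, then rounded to
    # the nearest cm -- the per-reading resolution never improves.
    return round(random.gauss(TRUE_HEIGHT, SCATTER))

for n in (5, 50, 500):
    data = [reading() for _ in range(n)]
    sdom = statistics.stdev(data) / n ** 0.5
    print(f"N = {n:3d}: mean = {statistics.mean(data):.2f} cm, SDOM = {sdom:.2f} cm")
# The SDOM keeps shrinking even though every reading is still only good to the nearest cm.
[/code]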
     
  10. Nov 20, 2005 #9
    Right.

    Um (and I know I'm starting to sound a little stupid here, but maths isn't my strong point, thinking is :) )

    I have to include in my project that I have thought about error and followed it through the formulas I applied.

    Now, I believe this is talking about the first type of error, because it's just assumed that the error in the second case is reduced by the repeats (you just say "I will repeat blah blah blah to reduce error"),

    whereas what I think it's mainly after is how far out your final result could be because of the inherent errors of the measuring.

    E.g. "I know there's an error measuring time with a camera, because there are only 24 frames a second, so I know there is an error of 1/24th of a second."

    You say that I should just say that the mean time also has an error of
    1/24th of a second, but my teacher mentioned that there is a formula for working out the mean's error, and I don't think he meant the one we have already talked about.

    So you see, I repeat the test to reduce the second type of error, but this might increase the first, i.e. I might read 1 cm too high on all of them. But it is likely they will cancel out, so the more I do, the more will cancel out and the lower the error will get, and it's a formula for this that I can't find.

    Sorry to keep going on.
     
    Last edited: Nov 20, 2005
  11. Nov 20, 2005 #10

    mathman

    Science Advisor
    Gold Member

    The sample standard deviation s is given by

    [tex] s^2 = \frac{\sum_{i=1}^N (x_i - \bar{x})^2}{N - 1} [/tex]

    (N − 1 is used rather than N because [itex]\bar{x}[/itex] itself is estimated from the data). The estimate of the error in the sample average [itex]\bar{x}[/itex] is then [itex]s/\sqrt{N}[/itex]. Statistically, the true mean will be within [itex]s/\sqrt{N}[/itex] of [itex]\bar{x}[/itex] about 68% of the time, and within [itex]2s/\sqrt{N}[/itex] about 95% of the time.
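    As a quick illustration of the mechanics (the readings below are made-up numbers):

[code]
import statistics

# Hypothetical set of repeated height readings, in cm (made-up data).
readings = [152, 151, 153, 152, 150, 152, 151, 153, 152, 151]

mean = statistics.mean(readings)
s = statistics.stdev(readings)       # sample standard deviation (divides by N - 1)
sdom = s / len(readings) ** 0.5      # standard deviation of the mean

print(f"height = {mean:.1f} +/- {sdom:.1f} cm")
[/code]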
     