
Data analysis question

  Feb 13, 2009 #1
    1. The problem statement, all variables and given/known data

    Say I perform an experiment in which I make a number of measurements over a given interval (e.g. t = 0 s to t = 10 s, every 1 s), and I repeat this experiment many times.

    Now, let's say I make a plot of data vs. time, and I want to find the time at which the data peaks, on average.

    Which measurement of the peak time would provide more accuracy: if I take the average of the individual data points over all the runs and then determine the peak time of the averaged curve, or if I determine the peak time within each run and then average those peak times over all the runs?

    2. Relevant equations

    ?


    3. The attempt at a solution

    I don't even know where to start... there doesn't seem to be any simple relation between the two, at least not that I know of.
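
    To make the two options concrete, here is a minimal sketch (Python; the Gaussian pulse shape, peak time, noise level, and run count below are made-up numbers, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 11, dtype=float)         # samples at t = 0, 1, ..., 10 s
true_peak = 4.7                           # assumed true peak time (s)
signal = np.exp(-0.5 * ((t - true_peak) / 2.0) ** 2)   # assumed pulse shape

# 50 repetitions of the experiment, each with independent noise
data = signal + 0.1 * rng.standard_normal((50, t.size))

# Option 1: average the runs point-by-point, then read off the peak time
peak_time_1 = t[np.argmax(data.mean(axis=0))]

# Option 2: read off the peak time in each run, then average those times
peak_time_2 = t[np.argmax(data, axis=1)].mean()

print(peak_time_1, peak_time_2)
```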
     
  Feb 13, 2009 #2

    Delphi51

    Homework Helper

    Well, I would find the peak in each run first. I'm not sure just why, but for example you might have two points that are equally high and notice that the peak must lie between them, likely midway. So you would gain a bit of accuracy that way.
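
    For example, one standard way to act on that observation (this is three-point parabolic interpolation, a common refinement rather than anything specific to this problem; a sketch only) is:

```python
import numpy as np

def refined_peak_time(t, y):
    """Peak time of samples y(t), refined by fitting a parabola through
    the highest sample and its two neighbours. If two adjacent samples
    are equal and highest, this returns the time midway between them."""
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return float(t[i])             # peak at an endpoint: cannot refine
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2.0 * y1 + y2         # curvature; non-positive at a maximum
    if denom == 0.0:
        return float(t[i])             # locally flat: keep the sample time
    dt = t[1] - t[0]                   # assumes uniform sample spacing
    return float(t[i] + 0.5 * dt * (y0 - y2) / denom)
```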
     
  Feb 13, 2009 #3

    Redbelly98

    Staff Emeritus
    Science Advisor
    Homework Helper

    I was thinking the same thing. In different runs, the peak heights or widths could be different, and averaging all the data runs before locating the peak would effectively weight the various runs differently.
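
    A toy, noise-free illustration of that weighting effect (made-up curves): two runs peak at different times with different heights, and averaging the curves first lets the taller run pull the peak toward itself:

```python
import numpy as np

t = np.arange(0, 11, dtype=float)
run1 = 1.0 * np.exp(-0.5 * ((t - 3.0) / 1.5) ** 2)   # peak at t = 3 s, height 1
run2 = 3.0 * np.exp(-0.5 * ((t - 7.0) / 1.5) ** 2)   # peak at t = 7 s, height 3

averaged = 0.5 * (run1 + run2)
print(t[np.argmax(averaged)])   # 7.0 -- the taller run dominates the average
print(0.5 * (3.0 + 7.0))        # 5.0 -- the mean of the per-run peak times
```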
     
  Feb 14, 2009 #4
    Thanks for the replies...
    Would there be some analytical way to determine which method is more efficient?

    I guess I can intuitively see how finding the peak for each run and then taking the average would be the better way: each individual data point may have random errors, but if you wait until the end of the run, the randomness may "smooth out", and the results will converge to the (presumably) correct value...

    But if you have a distinct peak, wouldn't that also be apparent from taking the average of the individual data points?

    I understand what you're both saying, but is there a way to determine which method has the greater uncertainty? I can probably just figure it out computationally, but I was hoping there was an analytical method that might make things a bit more...concrete.
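
    For what it's worth, here is the kind of quick computational check mentioned above: simulate many repetitions of the whole experiment and compare the spread (standard deviation) of the two estimates. Every number below (pulse shape, noise level, counts) is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 11, dtype=float)
true_peak, width, noise = 4.7, 2.0, 0.1     # all assumed for illustration
n_runs, n_trials = 20, 2000

signal = np.exp(-0.5 * ((t - true_peak) / width) ** 2)
est1, est2 = [], []
for _ in range(n_trials):
    data = signal + noise * rng.standard_normal((n_runs, t.size))
    est1.append(t[np.argmax(data.mean(axis=0))])     # average, then peak
    est2.append(t[np.argmax(data, axis=1)].mean())   # peak, then average

print("average-then-peak: mean %.2f  std %.2f" % (np.mean(est1), np.std(est1)))
print("peak-then-average: mean %.2f  std %.2f" % (np.mean(est2), np.std(est2)))
```

    Whichever method shows the smaller standard deviation under your actual pulse shape and noise level is the one with the smaller uncertainty.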
     