Which method is more accurate for determining peak time in data analysis?

AI Thread Summary
Determining the most accurate method for finding peak time in data analysis involves comparing two approaches: averaging individual data points before identifying the peak versus finding the peak for each run and then averaging those peaks. The consensus suggests that finding the peak in each run first may yield better accuracy, as it accounts for variations in peak heights and widths across different runs. This method helps to mitigate random errors that could skew results if averaged prematurely. However, there is a desire for an analytical method to quantify the uncertainty associated with each approach. Ultimately, computational analysis may be necessary to provide a clearer understanding of the accuracy and uncertainty of each method.
marcusesses

Homework Statement



Say I perform an experiment, and I make a number of measurements over a given interval (e.g. t = 0 s to t = 10 s, every 1 s), and I perform this experiment many times.

Now, let's say I make a plot of data vs. time, and I want to find when the data peaks in time on average.

Which measurement of the peak time would be more accurate: if I average the individual data points over all the runs and then determine the peak time, or if I determine the peak time within each run and then average those peak times over all the runs?

Homework Equations



?


The Attempt at a Solution



I don't even know where to start... there doesn't seem to be any simple relation between the two... at least, not that I know of...
 
Well, I would find the peak in each run first. I'm not sure exactly why, but for example you might have two points that are equally high and notice that the peak must lie between them, likely midway. So you would gain a bit of accuracy that way.
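For concreteness, here is a minimal sketch of that sub-sample refinement (Python; the uniform time grid and the parabolic fit are assumptions for illustration, not something given in the problem): fit a parabola through the largest sample and its two neighbours and take the parabola's vertex as the peak time.

```python
import numpy as np

def refined_peak_time(t, y):
    """Estimate the peak time from one run of sampled data.

    Takes the largest sample and refines it by fitting a parabola through
    that sample and its two neighbours (assumes a uniform time grid).
    """
    t, y = np.asarray(t, dtype=float), np.asarray(y, dtype=float)
    i = int(np.argmax(y))
    if i == 0 or i == len(y) - 1:
        return t[i]                      # peak at an edge: nothing to interpolate with
    y0, y1, y2 = y[i - 1], y[i], y[i + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return t[i]                      # flat top: fall back to the sample time
    delta = 0.5 * (y0 - y2) / denom      # vertex offset, in units of the sample spacing
    return t[i] + delta * (t[1] - t[0])
```

With two equally high neighbouring samples this lands exactly midway between them, which is the case described above.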
 
Delphi51 said:
Well, I would find the peak in each run first.

I was thinking the same thing. In different runs, the peak heights or widths could be different, and averaging all the data runs before finding the peak would effectively weight the various runs differently.
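A small, made-up illustration of that weighting effect (the Gaussian shapes and all numbers below are assumptions, not from the problem): with two runs whose peaks differ in height, position, and width, averaging the curves first lets the taller run dominate, while averaging the per-run peak times treats the runs equally.

```python
import numpy as np

t = np.arange(0.0, 11.0, 1.0)                            # t = 0..10 s, every 1 s
run_a = 5.0 * np.exp(-0.5 * ((t - 4.0) / 1.0) ** 2)      # tall, narrow run peaking at 4 s
run_b = 2.0 * np.exp(-0.5 * ((t - 6.0) / 2.5) ** 2)      # short, broad run peaking at 6 s

# Method 1: average the curves first, then find the peak time
avg_first = t[np.argmax((run_a + run_b) / 2.0)]          # -> 4.0 s (dominated by run_a)

# Method 2: find each run's peak time, then average those
peak_first = np.mean([t[np.argmax(run_a)], t[np.argmax(run_b)]])   # -> 5.0 s

print(avg_first, peak_first)
```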
 
Thanks for the replies...
Would there be some analytical way to determine which way is more efficient?

I guess I can intuitively see how finding the peak for each run and then taking the average would be the better way: each individual data point may have random errors, but if you wait until the end of a run to identify its peak, the randomness may "smooth out", and the averaged peak times will converge to the (presumably) correct value...

But if you have a distinct peak, wouldn't that also be apparent by taking the average of the individual data points?

I understand what you're both saying, but is there a way to determine which method has the greater uncertainty? I can probably just figure it out computationally, but I was hoping there was an analytical method that might make things a bit more...concrete.
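One way to figure it out computationally, sketched under assumed conditions (Gaussian-shaped runs, uniform 1 s sampling, and the made-up noise and run-to-run jitter values below): repeat the whole experiment many times, compute both estimators each time, and compare their spreads.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 11.0, 1.0)             # t = 0..10 s, every 1 s (from the problem)
true_peak, width, amp = 5.0, 1.5, 1.0     # assumed signal shape
noise, jitter = 0.1, 0.5                  # assumed per-point noise and run-to-run peak jitter
n_runs, n_trials = 20, 2000               # runs per experiment, repeated experiments

est_avg_first = np.empty(n_trials)        # method 1: average the data, then find the peak
est_peak_first = np.empty(n_trials)       # method 2: find each run's peak, then average

for k in range(n_trials):
    centers = true_peak + jitter * rng.standard_normal(n_runs)   # each run's true peak time
    runs = amp * np.exp(-0.5 * ((t[None, :] - centers[:, None]) / width) ** 2)
    runs += noise * rng.standard_normal((n_runs, t.size))
    est_avg_first[k] = t[np.argmax(runs.mean(axis=0))]
    est_peak_first[k] = t[np.argmax(runs, axis=1)].mean()

print("average first : mean %.3f  std %.3f" % (est_avg_first.mean(), est_avg_first.std()))
print("peak per run  : mean %.3f  std %.3f" % (est_peak_first.mean(), est_peak_first.std()))
```

The standard deviations printed at the end are the quantities to compare: whichever estimator has the smaller spread is the lower-uncertainty one under those assumptions, and the answer can shift as `noise` and `jitter` are varied.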
 