Which method is more accurate for determining peak time in data analysis?

SUMMARY

The discussion centers on determining the most accurate method for identifying peak time in data analysis from multiple experimental runs. Participants agree that calculating the peak for each individual run and then averaging these peaks yields greater accuracy compared to averaging all data points before determining the peak. This approach minimizes the impact of random errors present in individual measurements. The conversation also touches on the need for analytical methods to assess uncertainty in both approaches.

PREREQUISITES
  • Understanding of statistical averaging techniques
  • Familiarity with peak detection methods in data analysis
  • Knowledge of error analysis in experimental data
  • Basic proficiency in computational data analysis tools
NEXT STEPS
  • Research statistical methods for peak detection in time series data
  • Learn about error propagation and uncertainty quantification
  • Explore computational techniques for analyzing experimental data
  • Investigate software tools for data visualization and analysis, such as Python's SciPy library
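As a starting point for the last item, here is a minimal sketch of peak detection with SciPy's `find_peaks`; the signal shape, noise level, and seed are illustrative assumptions, not data from the thread:

```python
import numpy as np
from scipy.signal import find_peaks

# One illustrative run: samples every 1 s from t = 0 s to t = 10 s,
# a smooth bump peaking near t = 4 s, plus small random noise.
t = np.arange(0.0, 11.0, 1.0)
rng = np.random.default_rng(0)
y = np.exp(-0.5 * ((t - 4.0) / 1.5) ** 2) + rng.normal(0.0, 0.05, t.size)

peaks, _ = find_peaks(y)                    # indices of local maxima
peak_time = t[peaks[np.argmax(y[peaks])]]   # time of the tallest one
print(peak_time)
```

`find_peaks` returns every sample that is higher than both neighbours, so taking the tallest of those gives the peak time for a single run.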
USEFUL FOR

Data analysts, researchers in experimental physics, and anyone involved in quantitative data analysis seeking to improve accuracy in peak detection methods.

marcusesses

Homework Statement



Say I perform an experiment, and I make a number measurements over a given interval (e.g t=0s to t = 10s, every 1s), and I perform this experiment many times.

Now, let's say I make a plot of data vs. time, and I want to find when the data peaks in time on average.

Which measurement of the peak time would provide more accuracy: if I average the individual data points over all the runs and then determine the peak time, or if I determine the peak time within each run and then average those peak times over all the runs?

Homework Equations



?


The Attempt at a Solution



I don't even know where to start...there doesn't seem to be any simple relation between the two...at least, not that I know of...
 
Well, I would find the peak in each run first. Not sure just why, but for example you might have two points equally high and notice that the peak must be in between them - likely midway. So you would gain a bit of accuracy that way.
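The "midway between two equal points" idea generalizes to fitting a parabola through the highest sample and its two neighbours. A minimal sketch (the function name and sample data here are illustrative, not from the thread):

```python
def parabolic_peak(t, y):
    """Refine a sampled peak location by fitting a parabola through the
    highest interior sample and its two neighbours. Assumes uniform
    sample spacing and a peak away from the endpoints."""
    i = max(range(1, len(y) - 1), key=lambda k: y[k])  # interior max index
    denom = y[i - 1] - 2 * y[i] + y[i + 1]
    if denom == 0:                       # perfectly flat top
        return t[i]
    # Vertex offset of the fitted parabola, in units of the sample spacing
    delta = 0.5 * (y[i - 1] - y[i + 1]) / denom
    dt = t[1] - t[0]
    return t[i] + delta * dt

# Two equally high points at t = 4 and t = 5: the vertex lands midway.
print(parabolic_peak([3, 4, 5, 6], [0.5, 1.0, 1.0, 0.5]))  # prints 4.5
```

This recovers a peak time between grid points, which is exactly the extra accuracy the two-equal-points observation hints at.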
 
Delphi51 said:
Well, I would find the peak in each run first.

I was thinking the same thing. In different runs, the peak heights or widths could be different, and averaging all the data runs before finding the peak would effectively weight the various runs differently.
 
Thanks for the replies...
Would there be some analytical way to determine which method is more accurate?

I guess I can intuitively see how finding the peak for each run and then taking the average would be a better way, since for each individual data point, there may be random errors, but if you wait until the end of the run, the randomness may "smooth out", and the results will converge to the (presumably) correct value...
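That "smoothing out" intuition can be made quantitative under a simple assumed model: if each run yields an independent peak-time estimate $t_i$ with the same standard deviation $\sigma$, then averaging the per-run estimates shrinks the spread as $1/\sqrt{N}$:

$$\bar{t}_{\text{peak}} = \frac{1}{N}\sum_{i=1}^{N} t_i, \qquad \sigma_{\bar{t}} = \frac{\sigma}{\sqrt{N}}$$

What this does not settle is how the single-run $\sigma$ compares with the uncertainty of a peak found from the averaged curve, which is the crux of the question.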

But if you have a distinct peak, wouldn't that also be apparent by taking the average of the individual data points?

I understand what you're both saying, but is there a way to determine which method has the greater uncertainty? I can probably just figure it out computationally, but I was hoping there was an analytical method that might make things a bit more...concrete.
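The computational check mentioned above can be sketched directly. Everything here (signal shape, noise level, run counts) is an assumed toy model, and which estimator comes out ahead depends on those assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0.0, 11.0, 1.0)                   # t = 0..10 s, every 1 s
signal = np.exp(-0.5 * ((t - 5.0) / 2.0) ** 2)  # true peak at t = 5 s

def run_experiment(n_runs, noise=0.1):
    """n_runs noisy repetitions of the same underlying signal."""
    return signal + rng.normal(0.0, noise, (n_runs, t.size))

n_trials, n_runs = 1000, 20
avg_then_peak = np.empty(n_trials)  # average all runs, then locate the peak
peak_then_avg = np.empty(n_trials)  # locate each run's peak, then average
for k in range(n_trials):
    data = run_experiment(n_runs)
    avg_then_peak[k] = t[np.argmax(data.mean(axis=0))]
    peak_then_avg[k] = t[np.argmax(data, axis=1)].mean()

# The spread over many trials is an empirical uncertainty for each method.
print("average-then-peak std:", avg_then_peak.std())
print("peak-then-average std:", peak_then_avg.std())
```

Note that when every run has an identical underlying signal, averaging first suppresses noise before the argmax, so neither ordering is universally better; the argument for peak-then-average in this thread applies when runs differ in peak height, width, or position.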
 
