Optimal Approach for Analyzing Non-Smooth Experimental Data

SUMMARY

The discussion centers on the challenges of analyzing non-smooth experimental data. The user attempted polynomial fitting and smoothing techniques but found them inadequate. Key insights include the importance of validating data analysis methods and the suggestion to use a portion of the dataset for fitting while reserving another portion for validation. It is emphasized that multiple failed attempts at fitting can undermine the credibility of the analysis.

PREREQUISITES
  • Understanding of polynomial fitting techniques
  • Familiarity with data smoothing methods
  • Knowledge of validation techniques in data analysis
  • Experience with exponential and logarithmic functions in modeling
NEXT STEPS
  • Research advanced data fitting techniques for non-smooth datasets
  • Learn about validation methods such as cross-validation and bootstrapping
  • Explore the use of spline interpolation for data analysis
  • Investigate the application of exponential and logarithmic models in experimental data
USEFUL FOR

Data analysts, researchers in experimental sciences, and statisticians dealing with complex datasets that exhibit non-smooth characteristics.
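As a concrete starting point for the validation methods listed under Next Steps, here is a hedged sketch of k-fold cross-validation used to compare polynomial fits of different degrees. The data, degrees, and fold count are all placeholders; substitute the real experimental measurements for `x` and `y`.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder noisy data; replace with the experimental data set.
x = np.linspace(-1.0, 1.0, 120)
y = x**3 - x + rng.normal(scale=0.1, size=x.size)

def cv_rmse(x, y, degree, k=5):
    """Mean held-out RMSE of a degree-`degree` polynomial over k folds."""
    idx = rng.permutation(x.size)
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)
        resid = y[test] - np.polyval(coeffs, x[test])
        errs.append(np.sqrt(np.mean(resid**2)))
    return np.mean(errs)

# Compare candidate degrees by their held-out error, not the in-sample fit.
for d in (1, 3, 9):
    print(f"degree {d}: CV RMSE = {cv_rmse(x, y, d):.3f}")
```

The degree with the lowest cross-validated error is the one that generalizes best; a high-degree polynomial will often have a smaller in-sample residual but a larger held-out error, which is exactly the overfitting trap discussed in this thread.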

t.m.p.c
Hi,

I need your help. From my experiments I obtained a data set that I need to analyze. The problem is that the data is not smooth. I tried to fit it with a polynomial equation, but the fit was not good enough. I also tried smoothing and splines, but the different methods gave very different final results. Can anyone tell me which approach would be the best (most credible) way to analyze my data?
Thanks in advance.

Regards,

tmpc
 
Question #1: Does theoretically analyzing the data source suggest how the results should be distributed?

Question #2: How are you judging whether or not the fit is good enough?

Question #3: Do you have all of the variables accounted for?

Question #4: Have you tried anything exponential or logarithmic?
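For what it's worth, a minimal sketch of what Question #4 might look like in practice, using a made-up exponential model y = a·exp(b·x) + c and synthetic data (the real data may instead call for a logarithmic form such as a·log(x) + b):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical exponential model; swap in whatever form theory suggests.
def model(x, a, b, c):
    return a * np.exp(b * x) + c

rng = np.random.default_rng(1)
# Synthetic stand-in data: a decaying exponential plus noise.
x = np.linspace(0.0, 4.0, 50)
y = 2.0 * np.exp(-0.8 * x) + 0.5 + rng.normal(scale=0.05, size=x.size)

# p0 gives the optimizer a starting point; exponential fits often
# fail to converge without a sensible initial guess.
params, cov = curve_fit(model, x, y, p0=(1.0, -1.0, 0.0))
perr = np.sqrt(np.diag(cov))  # one-sigma parameter uncertainties
print("a, b, c =", np.round(params, 3))
print("uncertainties =", np.round(perr, 3))
```

The parameter uncertainties from the covariance matrix are one rough way to judge whether the fitted form is meaningful rather than eyeballing the curve.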


Some bad news: at this point, since you've been trying lots of things, it might not be possible to credibly analyze your data*. Finding a fit after a dozen false starts is far less significant than finding a fit on the first try. The best you may be able to do is find something that fits your current data, then run a new experiment to (in)validate how well it works.

Maybe you can do some tricks to salvage this dataset, like using 75% of the data to find a fit and 25% to (in)validate it... but I'm not the person who can judge such things.
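A rough sketch of that 75/25 trick, with a cubic polynomial as a placeholder candidate model and synthetic data standing in for the real measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data; replace x, y with the real measurements.
x = np.linspace(0.0, 10.0, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

# Randomly assign 75% of the points to fitting, 25% to validation.
idx = rng.permutation(x.size)
n_fit = int(0.75 * x.size)
fit_idx, val_idx = idx[:n_fit], idx[n_fit:]

# Fit the candidate model on the fitting subset only.
coeffs = np.polyfit(x[fit_idx], y[fit_idx], deg=3)

# Judge it by the error on held-out points the fit never saw.
resid = y[val_idx] - np.polyval(coeffs, x[val_idx])
rmse = np.sqrt(np.mean(resid**2))
print(f"held-out RMSE: {rmse:.3f}")
```

The random split matters: taking the first 75% of points in order would let any trend over the measurement range leak into the comparison.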



*: More accurately, it might not be possible to draw any credible conclusions. The conclusion "we tried X, Y, and Z to analyze the data, without success" is definitely credible and accurate. It is important to know things like "these variables aren't linearly related", even though it's not a flashy result.
 
