Shaky model in least squares fit

SUMMARY

The discussion centers on the challenges of achieving a robust least squares fit to spectroscopic data. The user observes deviations between measurement and physical model that are much larger than the statistical errors, indicating that the model is incomplete, and asks how to quantify the robustness of individual fit parameters. The F-value, the ratio of the variance of the systematic errors to the noise variance, is introduced to assess how significant the lack of fit is, and the Akaike Information Criterion (AIC) is suggested as a tool for comparing candidate models.

PREREQUISITES
  • Understanding of least squares fitting techniques
  • Familiarity with statistical concepts such as variance and systematic error
  • Knowledge of the F-test and its application in regression analysis
  • Awareness of model selection criteria, specifically the Akaike Information Criterion (AIC)
NEXT STEPS
  • Research the application of the F-test in regression analysis
  • Explore the Akaike Information Criterion (AIC) for model comparison
  • Investigate methods for quantifying robustness in statistical models
  • Learn about perturbation techniques for assessing model sensitivity
USEFUL FOR

Researchers and data analysts working with spectroscopic data, statisticians involved in model fitting, and anyone interested in improving the robustness of least squares fits in their analyses.

Gigaz:
I've come across a problem with my least squares fits and I think someone else must have analyzed this, but I don't know where to find it.

I have a converged least squares fit of my spectroscopic data. Unfortunately, the physical model, on which the fit is based, is mediocre. The deviations between measurement and model are much larger than the statistical errors at each data point. There is almost certainly nothing I can do about that. The fit reproduces the data reasonably well, but the model is incomplete.

I know that there are some parameters inside the model, which do not seem to be very robust. If I fit only half of my data (only s or only p polarization), they always come out differently. Other parameters remain totally unchanged.

I'm looking basically for an idea on how I could quantify this "robustness". It can probably be done based on some sort of artificial perturbation function, but I haven't seen anything like that.
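A minimal sketch of that split-data check, with a generic stand-in model and placeholder data arrays for the two polarizations (none of these names come from the thread): fit the s- and p-polarization halves separately and compare each parameter's shift with its standard error.

```python
# Hypothetical split-data robustness check; model(), x_s/y_s, x_p/y_p are placeholders.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    # Stand-in for the actual spectroscopic model
    return a * np.exp(-b * x) + c

def fit(x, y):
    popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.1, 0.0])
    return popt, np.sqrt(np.diag(pcov))

popt_s, perr_s = fit(x_s, y_s)   # s-polarization subset
popt_p, perr_p = fit(x_p, y_p)   # p-polarization subset

# Parameters whose shift between the two subsets is large compared to the
# combined standard error are the non-robust ones.
shift = np.abs(popt_s - popt_p) / np.hypot(perr_s, perr_p)
for name, z in zip(["a", "b", "c"], shift):
    print(f"{name}: shift = {z:.1f} sigma")
```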
 
Hi Gigaz, welcome to PF,

Apparently your systematic error (error with respect to the proposed model) is much larger than the error due to noise.
To quantify that, we can divide the variance of the systematic errors by the variance of the noise; this ratio is the F-value.
We can then use the F-test to check how significant it is, under the hypothesis that the proposed regression model fits the data well.

For the record, a least squares method assumes that the errors are independent, normally distributed, have equal variance everywhere, and have expectation zero.
The F-test can verify part of those assumptions.
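As a rough sketch of that F-value, assuming the per-point statistical errors are known (y, y_fit, sigma, and n_params below are placeholders, not names from the thread):

```python
# Hypothetical F-value computation: residual variance vs. known noise variance.
import numpy as np
from scipy import stats

residuals = y - y_fit
dof = len(y) - n_params

var_systematic = np.sum(residuals**2) / dof   # variance of the fit residuals
var_noise = np.mean(sigma**2)                 # variance expected from noise alone
F = var_systematic / var_noise
print(f"F-value: {F:.2f}")

# With a known noise variance this is closely related to a chi-squared test
# of the residuals against the stated per-point errors:
chi2 = np.sum((residuals / sigma)**2)
p_value = stats.chi2.sf(chi2, dof)
print(f"reduced chi^2 = {chi2 / dof:.2f}, p = {p_value:.3g}")
```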
 
A case of data in search of a model, it seems. You can apply the same data to different models, and see how that changes the fit. For example, you can fix one coefficient at a time, and re-estimate the remaining parameters.
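A sketch of that fix-one-coefficient-at-a-time idea, again with a placeholder model and data (x, y) that do not come from the thread:

```python
# Hypothetical example: hold each parameter fixed at its best-fit value,
# refit the others, and watch how the fit quality and parameters move.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b, c):
    return a * np.exp(-b * x) + c

param_names = ["a", "b", "c"]
best_fit, _ = curve_fit(model, x, y, p0=[1.0, 0.1, 0.0])

for i, name in enumerate(param_names):
    def reduced_model(x, *free, i=i):
        # Re-insert the fixed parameter into the full parameter list
        full = list(free)
        full.insert(i, best_fit[i])
        return model(x, *full)

    p0 = np.delete(best_fit, i)
    popt, _ = curve_fit(reduced_model, x, y, p0=p0)
    rss = np.sum((y - reduced_model(x, *popt))**2)
    print(f"fixed {name}: refit params = {popt}, RSS = {rss:.3g}")
```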
 
Many thanks for those suggestions. I will try it and see what comes out :)
 
Thanks, chiro. Apparently, what I need is listed in your link: the Akaike information criterion (AIC).
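For reference, a minimal sketch of an AIC comparison for least squares fits, using the Gaussian form AIC = n*ln(RSS/n) + 2k; the candidate model functions, start values, and data (x, y) below are placeholders:

```python
# Hypothetical AIC comparison of two candidate models on the same data (x, y).
import numpy as np
from scipy.optimize import curve_fit

def aic(y, y_fit, k):
    n = len(y)
    rss = np.sum((y - y_fit)**2)
    return n * np.log(rss / n) + 2 * k

candidates = {
    "simple": (model_simple, p0_simple),        # placeholder model and start values
    "extended": (model_extended, p0_extended),  # placeholder model and start values
}

for name, (f, p0) in candidates.items():
    popt, _ = curve_fit(f, x, y, p0=p0)
    print(f"{name}: AIC = {aic(y, f(x, *popt), len(popt)):.1f}")

# The model with the lowest AIC is preferred; differences of about 2 or less
# are usually not considered decisive.
```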
 
