Judgement of good fit: where to stop?


Discussion Overview

The discussion revolves around the challenge of fitting a curve to predict Y based on X using a nonlinear model. Participants explore methods for determining when to stop refining the fit, particularly in the context of residual sum of squares (rss) and the significance of model terms. The conversation includes technical reasoning and various approaches to model fitting.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant describes a nonlinear model for Y involving multiple terms and expresses concern about the lack of stabilization in the rss value during fitting.
  • Another participant suggests that there are several plausible explanations for the variance in the data and proposes starting with a simple model and adding complexity incrementally.
  • An alternative approach mentioned involves starting with a full model and systematically removing terms that contribute least to reducing residual variance.
  • Some participants inquire about the (x,y) data for further exploration and experimentation with software tools, but the original poster indicates the data is confidential.
  • One participant notes that the approach of using orthogonal polynomials is not applicable in this case, emphasizing the need for an objective measure of accuracy instead.

Areas of Agreement / Disagreement

Participants express differing views on the best approach to model fitting and the criteria for determining a satisfactory fit. There is no consensus on a specific method or cutoff value for rss or R^2.

Contextual Notes

The discussion highlights the complexities of nonlinear modeling, including the interdependence of model terms and the challenges posed by measurement noise. Specific limitations regarding the applicability of certain methods, such as orthogonal polynomials, are noted.

ssd
I have to fit a curve to predict Y on the basis of X. 50 pairs of (x, y) are given. A freehand plot of the curve shows (and it is otherwise known) that Y and X have a 1:1 relationship. The basic movement of Y is quite composite: it involves x^a, sin(c+bx), d^x, an additive constant, and different constant multipliers of the first three terms. All constants are real valued.
That is,
Y = p + q·x^a + r·sin(c + bx) + s·d^x
What I did was start with arbitrary values of a, b, c, d, find p, q, r, s by least squares, and calculate the residual sum of squares (rss).
Then I varied 'a' and compared rss until it was minimal. I repeated the same with the others, one at a time, came back to 'a', and so on. This resulted in a nice fit, but the rss value never seems to stabilize: it keeps decreasing (though of course it does not reach 0).

My question is: when should I stop? Is there any objective method, or a value of rss (or of R^2), that can be used as a cutoff beyond which I can say the fit is satisfactory?
PS: Frequency distributions cannot be formed to test goodness of fit.

Any idea is appreciated.
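The one-parameter-at-a-time search described above can be sketched in a few lines. The useful split is that for any fixed (a, b, c, d) the model is linear in (p, q, r, s), so those four follow exactly from ordinary least squares; only the four nonlinear constants need searching. Everything below (step sizes, tolerances, NumPy as the tool, the function names) is an illustrative assumption, not the poster's actual code:

```python
import numpy as np

def rss_for(theta, x, y):
    """Residual sum of squares for fixed nonlinear parameters.

    theta = (a, b, c, d); the linear coefficients p, q, r, s are
    recovered exactly by ordinary least squares, as in the post.
    """
    a, b, c, d = theta
    # Design matrix: columns for the constant, x^a, sin(c + b*x), d^x
    X = np.column_stack([np.ones_like(x), x**a, np.sin(c + b * x), d**x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ coef
    return float(r @ r), coef

def coordinate_search(x, y, theta0, step=0.1, iters=50):
    """One-parameter-at-a-time search over (a, b, c, d), as described."""
    theta = np.array(theta0, dtype=float)
    best, _ = rss_for(theta, x, y)
    for _ in range(iters):
        improved = False
        for i in range(4):
            for delta in (step, -step):
                trial = theta.copy()
                trial[i] += delta
                val, _ = rss_for(trial, x, y)
                if val < best - 1e-12:
                    theta, best, improved = trial, val, True
        if not improved:
            step *= 0.5          # refine the grid when no move helps
            if step < 1e-6:
                break
    return theta, best
```

This kind of coordinate search shares the poster's problem: rss keeps creeping down as the grid refines, which is why it needs an external stopping rule rather than waiting for rss to "stabilize".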
 
A good article relating to your question is at http://www.aip.org/tip/INPHFA/vol-9/iss-2/p24.html. See especially the sections "Nonlinear Models" and "Interpreting Results".

BTW - there are a few pretty good free shareware packages that can do nonlinear curve fitting. See, for example, http://www.prz.rzeszow.pl/~janand/.

jf
 
BTW - just for fun, could you provide the (x,y) data? This would be an interesting and educational (for me, at least) exercise for playing and tweaking with Mathematica's "Non-linear Regression" package. I'm kind of curious to fool around with it in OriginPro, too. I've never messed with their non-linear stuff.

jf
 
You have a number of plausible explanations of the variance in the data. The difficulties you are facing are (1) that these explanations are not independent of one another, and (2) the models are non-linear.

One approach to the problem is to start with nothing (i.e., assume the data are just random numbers). One of the plausible explanations will most likely do a better job than any of the others at reducing the residual variance. Keep repeating this step -- i.e., add one explanation at a time to your overall model. Stop when the residual variance can be attributed to measurement noise with a high degree of confidence. Note well: this means you need some kind of model of the measurement process.

Another approach is to start with a full model. Throw the kitchen sink at the problem. The first approach worked by adding terms step-by-step. This approach works by subtracting terms. Find the model term such that removing it does the least damage to the residual variance. Suppose that the residual variance can still be attributed with high confidence to measurement noise after removing this least-powerful term. That means that this term is not significant. Throw it out. Repeat throwing out terms until you finally come up against a term that is significant.
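The backward-elimination idea above can be made concrete with a partial F test: drop the term whose removal hurts the residual sum of squares least, as long as that term is not significant. This sketch assumes a model that is linear in the candidate terms (each term a column of a design matrix) and uses a rough critical value; `backward_eliminate` and `f_crit` are illustrative names, not something from the thread:

```python
import numpy as np

def backward_eliminate(X, y, f_crit=4.0):
    """Backward elimination sketch: repeatedly drop the column whose
    removal increases the residual sum of squares least, while the
    partial F statistic for that column stays below f_crit.

    f_crit ~ 4 roughly approximates the 5% point of F(1, n-p) for
    moderate n; scipy.stats.f.ppf would give an exact critical value.
    """
    cols = list(range(X.shape[1]))

    def rss(idx):
        beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
        r = y - X[:, idx] @ beta
        return float(r @ r)

    while len(cols) > 1:
        full = rss(cols)
        n, p = len(y), len(cols)
        # Candidate to drop: the column whose removal costs the least
        reduced, j = min((rss([c for c in cols if c != j]), j) for j in cols)
        f_stat = (reduced - full) / (full / (n - p))
        if f_stat >= f_crit:      # the weakest remaining term is significant
            break
        cols.remove(j)
    return cols
```

For the model in this thread the terms are nonlinear in a, b, c, d, so each candidate submodel would have to be refitted; the significance logic, however, carries over unchanged.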
 
jackiefrost said:
BTW - just for fun, could you provide the (x,y) data? This would be an interesting and educational (for me, at least) exercise for playing and tweaking with Mathematica's "Non-linear Regression" package. I'm kind of curious to fool around with it in OriginPro, too. I've never messed with their non-linear stuff.

jf

Sorry, that cannot be done: the data are not public and are kept confidential from anyone not permitted to use them.
 
D H said:
You have a number of plausible explanations of the variance in the data. The difficulties you are facing are (1) that these explanations are not independent of one another, and (2) the models are non-linear.

One approach to the problem is to start with nothing (i.e., assume the data are just random numbers). One of the plausible explanations will most likely do a better job than any of the others at reducing the residual variance. Keep repeating this step -- i.e., add one explanation at a time to your overall model. Stop when the residual variance can be attributed to measurement noise with a high degree of confidence. Note well: this means you need some kind of model of the measurement process.

Another approach is to start with a full model. Throw the kitchen sink at the problem. The first approach worked by adding terms step-by-step. This approach works by subtracting terms. Find the model term such that removing it does the least damage to the residual variance. Suppose that the residual variance can still be attributed with high confidence to measurement noise after removing this least-powerful term. That means that this term is not significant. Throw it out. Repeat throwing out terms until you finally come up against a term that is significant.

Thanks for your opinion, but approaches like orthogonal polynomials do not work here. The basic shape has already been identified in the form I mentioned; the question is only about a satisfactory (objective) degree of accuracy.
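One standard objective answer to "when should I stop?" that needs no universal rss or R^2 cutoff is an information criterion such as AIC: each extra fitted constant must buy enough rss reduction to pay a fixed penalty, and you stop refining when the criterion stops falling. A minimal sketch under the usual Gaussian-error assumption (the specific formula, and choosing AIC rather than, say, BIC, are my assumptions, not something proposed in the thread):

```python
import math

def aic(rss, n, k):
    """Akaike information criterion for a least-squares fit with
    Gaussian errors: n*ln(rss/n) + 2k, where k counts the fitted
    parameters (here the eight constants p, q, r, s, a, b, c, d)."""
    return n * math.log(rss / n) + 2 * k

def better(rss1, k1, rss2, k2, n):
    """Return 1 if model 1 is preferred (lower AIC), else 2.
    A smaller rss only 'wins' if it justifies its extra parameters."""
    return 1 if aic(rss1, n, k1) < aic(rss2, n, k2) else 2
```

With n = 50 points this gives a concrete stopping rule: a refinement that shrinks rss without lowering the AIC is fitting noise, not signal.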
 
