MikeyW said:
but can anyone point me in the right direction to find errors for A,B,C? Is this even possible for non-linear fitting?
I think the search keywords you want are "asymptotic linearized confidence interval". I recall reading about them, but tonight I haven't found a good link that explains the topic.
Saying that you want the "errors" or "error bars" in the parameters is not specific. Perhaps you want to find the standard deviations of the parameters A,B,C about their means. We have no data to compute this (even in linear curve fitting). After all, your data consists of samples of (x,y) not samples of A,B,C, so how can we say A,B,C have a mean or variance? Yet curve fitting software packages claim to give such information for parameters. How do they do it?
I'm not certain. I'll make a conjecture based on reading about "asymptotic linearized confidence intervals".
Express the value of each parameter as a known function of the data. For example, when we do the least squares fit of a linear function, the slope and intercept are computed as a function of the data values.
Let's call the parameter p and say
p = F(X_1, X_2, ..., X_n, Y_1, Y_2, ..., Y_n), where the (X_i, Y_i) are the data.
You may not know the symbolic expression for F , but you have a numerical method for computing it, namely your curve fitting algorithm. So you could approximate the partial derivatives of F numerically.
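As an illustration (mine, not from the original discussion), here is a minimal Python sketch of that idea. The fitting algorithm plays the role of F; for concreteness I use the closed-form least-squares slope, whose partials with respect to the Y_i are known exactly, so the finite-difference approximation can be checked. The names `slope` and `partials_wrt_y` are just illustrative.

```python
# Illustrative sketch: treat the fitted parameter as a function
# F(X_1..X_n, Y_1..Y_n) of the data and approximate its partial
# derivatives by re-running the fit on perturbed data.

def slope(xs, ys):
    # Closed-form least-squares slope; stands in for any fitting routine.
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    return sxy / sxx

def partials_wrt_y(fit, xs, ys, eps=1e-6):
    # Finite-difference estimate of dF/dY_i: bump one Y_i, re-fit, compare.
    p0 = fit(xs, ys)
    return [(fit(xs, ys[:i] + [ys[i] + eps] + ys[i + 1:]) - p0) / eps
            for i in range(len(ys))]

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 0.9, 2.1, 2.9]
grads = partials_wrt_y(slope, xs, ys)
# For the least-squares slope, dF/dY_i = (x_i - xbar) / sxx,
# which here is [-0.3, -0.1, 0.1, 0.3].
```

For a genuinely nonlinear fit you would replace `slope` with your actual fitting routine and perturb each data value the same way.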
Let's say that your particular curve fit found that p = p_0 when the specific data was X_i = x_i, Y_i = y_i.
Find (symbolically or numerically) the differential expression that approximates a change in p_0 as a function of changes in the x_i, y_i.
p_0 + \delta p = F(x_1, x_2, ...) + \delta x_1 \frac{\partial F}{\partial X_1} + \delta x_2 \frac{\partial F}{\partial X_2} + ... + \delta y_1 \frac{\partial F}{\partial Y_1} + \delta y_2 \frac{\partial F}{\partial Y_2} + ...
\delta p = \delta x_1 \frac{\partial F}{\partial X_1} + \delta x_2 \frac{\partial F}{\partial X_2} + ... + \delta y_1 \frac{\partial F}{\partial Y_1} + \delta y_2 \frac{\partial F}{\partial Y_2} + ...
Assume p_0 = F(x_1, x_2, ..., y_1, y_2, ...) is a good estimate of the mean value of p.
Assume the \delta x_i are independent, identically distributed, mean-zero Gaussian random errors, and assume the same for the \delta y_i. The above approximation expresses the random variable \delta p as a linear function of the independent mean-zero normal random variables \delta x_i, \delta y_i. You can compute the variance of \delta p if you know the variances of the \delta x_i and the \delta y_i.
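A sketch of that variance computation (my own illustration, under the stated independence assumptions): for a linear combination of independent, mean-zero errors, the variance is the sum of squared partial derivatives weighted by the error variances.

```python
def param_variance(dF_dx, dF_dy, var_x, var_y):
    # Variance of a linear combination of independent, mean-zero errors:
    # each squared partial derivative is weighted by its error variance.
    return (var_x * sum(g * g for g in dF_dx)
            + var_y * sum(g * g for g in dF_dy))

# E.g., with no X error and the slope partials from a 4-point fit:
var_p = param_variance([0.0] * 4, [-0.3, -0.1, 0.1, 0.3], 0.0, 0.04)
# var_p = 0.04 * (0.09 + 0.01 + 0.01 + 0.09) = 0.008
```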
Let's assume the \delta y_i have a variance that is estimated by the variance of the residuals.
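That estimate might look like the following sketch (mine; dividing by n minus the number of fitted parameters is the usual degrees-of-freedom correction, though one could also divide by n as the plain "variance of the residuals"):

```python
def residual_variance(ys, fitted, n_params):
    # Estimate Var(delta y) from the fit residuals, with a
    # degrees-of-freedom correction for the fitted parameters.
    resid = [y - f for y, f in zip(ys, fitted)]
    return sum(r * r for r in resid) / (len(resid) - n_params)

s2 = residual_variance([1.0, 2.0, 3.0, 5.0], [1.0, 2.0, 3.0, 3.0], n_params=2)
# residuals are [0, 0, 0, 2], so s2 = 4 / (4 - 2) = 2.0
```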
How do we find the variance of the \delta x_i? You could assume there are no measurement errors in the X_i and set the \delta x_i = 0. If you can't assume that, perhaps we can use the linear approximation trick again (though I'm not really sure this makes sense). The curve fit (using the specific values of the parameters) expresses the prediction of Y_i as a function of the X_i, so Y_i = G(X_1, X_2, ...).
Approximate using:
y_i + \delta y_i = G(x_1, x_2, ...) + \delta x_1 \frac{\partial G}{\partial X_1} + \delta x_2 \frac{\partial G}{\partial X_2} + ...
\delta y_i = \delta x_1 \frac{\partial G}{\partial X_1} + \delta x_2 \frac{\partial G}{\partial X_2} + ...
We have assumed the variance of the \delta y_i is the variance of the residuals. Use the above equation to solve for the variance of the \delta x_i.
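Under those (admittedly shaky) assumptions, with all the \delta x_j sharing one common variance and the \delta y_i variance already estimated from the residuals, the equation above can be inverted for Var(\delta x). A hypothetical sketch:

```python
def var_x_from_var_y(dG_dx, var_y):
    # If the delta x_j are independent with a common variance var_x, then
    # Var(delta y_i) = sum_j (dG/dX_j)^2 * var_x; invert for var_x.
    return var_y / sum(g * g for g in dG_dx)

vx = var_x_from_var_y([1.0, 2.0], 5.0)  # 5 / (1 + 4) = 1.0
```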
To me, the above process is rather circular and suspicious. It involves many assumptions, and I'm not sure I stated all of them. However, it's the best I can do to reconstruct how standard deviations could be estimated for use in "asymptotic linearized confidence intervals" for parameters in a curve fit. If anyone knows better, please comment!
-----