Uncertainty of a nonlinear least squares fit

  1. Jan 6, 2012 #1
    Hi,

    I have some experimental data as a function of time t and temperature T. I have done a least squares fit of the data with a function f = f(a1, a2, t, T) (the function is nonlinear in a1 and a2!). Minimizing e^2 = sum((yi - f(a1, a2, ti, Ti))^2) with Matlab's fminsearch gave me a1, a2 and the residual sum of squares e^2.
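    For concreteness, a minimal sketch of such a fit in Matlab might look like the following. The model f here is only a made-up placeholder, and t, T and y are assumed to be equal-length data vectors (one entry per measurement):

    f = @(a, t, T) a(1) * exp(-a(2) * t ./ T);    % placeholder model, replace with the real f
    sse = @(a) sum((y - f(a, t, T)).^2);          % e^2 as a function of the parameters
    a0 = [1; 1];                                  % starting guess for [a1; a2]
    [ahat, e2] = fminsearch(sse, a0);             % ahat = fitted [a1; a2], e2 = residual sum of squares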

    Now I need some estimation of the quality of the fit (something comparable to R^2 in linear regressions). What can I use for this purpose?

    I think I can remember that e^2/(n-2) (n = sample size) can be used to estimate the uncertainty of the fit. Am I right? If so, what is this quantity called and how can I interpret it (e.g. what statistical test is applicable)?

    Or is it necessary/better to calculate the covariance matrix?
    I found somewhere that it can be calculated with:
    e^2/(n-2) * C^-1, with Cij = sum(df/da_i * df/da_j).

    I guess I have to evaluate the partial derivatives df(t,T)/da1 and df(t,T)/da2 at the optimized a1 and a2 and sum over all combinations of ti and Ti: Cij = sum(sum(df(ti,Ti)/da_i * df(ti,Ti)/da_j)); right? (A rough numerical sketch of what I mean is below.)
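    Something like this, continuing the placeholder model and the vectors t, T from the sketch above; the finite-difference step h and the loop are just one possible way to get the derivatives at the fitted parameters:

    h = 1e-6;
    n = numel(t);
    J = zeros(n, 2);
    for k = 1:2
        ap = ahat;
        ap(k) = ap(k) + h;
        J(:, k) = (f(ap, t, T) - f(ahat, t, T)) / h;   % df/da_k at every data point (t_i, T_i)
    end
    C = J' * J;                                        % C(i,j) = sum over data points of df/da_i * df/da_j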



    Thanks for any help !
     
  3. Jan 7, 2012 #2

    Stephen Tashi

    User Avatar
    Science Advisor

    (e^2)/(n-1) is the usual "unbiased estimator of the population variance", if we assume your errors are independent random samples from a normal distribution.

    Using a statistical test would be appropriate if you were making some sort of decision, but you haven't said what decision that would be.

    I don't know what procedure you are referring to. Post a link to something like it and perhaps someone can comment.



     
  4. Jan 8, 2012 #3
    I just want a simple measure of the uncertainty of the fit. I am actually interested in only one of the two parameters, and I want something like a1 = xx ± error.

    I have to calculate e^2 anyway, so I thought I could use it for this purpose. But it doesn't even have the same units as the parameter a1, and I have no idea how to interpret it.

    So possibly it is better to calculate the covariance matrix cov = e^2/(n-2) * C^-1, with C = J'*J, where J is the Jacobian matrix (http://www.orbitals.com/self/least/least.htm). The calculation should be correct, but again I don't know exactly what to do with it. Can I simply use the diagonal element of cov, i.e. a1 = xx ± √cov(1,1)?
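    In Matlab, and reusing the quantities from the sketches above (J, e2, n, and two fitted parameters), what I have in mind is roughly:

    dof  = n - 2;                    % degrees of freedom
    s2   = e2 / dof;                 % estimate of the error variance
    covA = s2 * inv(J' * J);         % approximate covariance matrix of [a1; a2]
    se1  = sqrt(covA(1, 1));         % standard error of a1, i.e. a1 = ahat(1) +/- se1 (one sigma)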
     
  5. Jan 8, 2012 #4

    Stephen Tashi

    User Avatar
    Science Advisor

    I looked at the paper in that link. In my opinion, the bottom line is that you do have to compute Jacobians and covariance matrices to get intervals for the parameters you are estimating, if you follow the method it prescribes.

    I don't know whether you are rushing to produce a report or whether you have the time and inclination to understand exactly what you are doing. Actually understanding what is going on in nonlinear least squares curve fitting is complicated, and I don't claim to have a comprehensive grasp of it. (From the point of view of explaining the probability and statistics involved, the paper in that link isn't well written.)

    Some elementary points:

    1. It's best to use unambiguous terminology. In some contexts "uncertainty" amounts to the standard deviation of a random variable. In others it refers to entropy. It isn't clear what you mean by the "uncertainty of the fit". Likewise, although "confidence" has a technical definition in statistics, most laymen are thinking of their own misinterpretation of that word when they use it. For example, if we say that 1.90 plus or minus 1.3 is a "90% confidence interval" for a parameter, this does NOT mean that there is a 90% chance that the true value of the parameter is in that interval.

    2. As I understand the paper in the link, it is not computing "confidence intervals" in the ordinary sense of that terminology. It is computing "asymptotic linearized confidence intervals". It doesn't bother to include those adjectives or explain the ideas behind them. (A rough sketch of how such an interval is built follows after this list.)

    3. When a common sense person has data, he naturally wants information about the probability that certain ideas are true or the probability that the true values of things are in certain intervals. Unless he makes enough assumptions to use Bayesian statistics, he can never get such answers. The usual kind of statistics ("frequentist" statistics) does not solve that type of problem. It doesn't quantify the probability of some idea given the data. Instead it computes the probability of the data given some idea. The terminology of frequentist statistics ("confidence", "significance", etc.) strongly suggests to laymen that they are getting answers about the probability of various ideas or the location of various parameters given the data they have. In fact, they are actually getting numbers based on computing the probability of data when assuming certain ideas and locations are true (i.e. true with probability 1). There is a distinction between "the probability of A given B" versus "the probability of B given A", which I hope is obvious.
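    For what it's worth, the asymptotic linearized interval in point 2 is typically built from the covariance sketch in the earlier posts, roughly like this in Matlab (tinv requires the Statistics Toolbox; covA, ahat and n are the quantities from those sketches):

    alpha = 0.10;                                        % for a 90% interval
    tcrit = tinv(1 - alpha/2, n - 2);                    % Student-t critical value
    ci_a1 = ahat(1) + tcrit * sqrt(covA(1, 1)) * [-1, 1];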
     