
Confidence interval from least squares fitting?

  1. Aug 17, 2012 #1
    Hello,

    Let me get right to my problem. I have an experimental distribution and a single-parameter theoretical distribution.

I want to find the parameter value for which the theoretical distribution best agrees with my experimental data over the bulk of the distributions (the tails of my distributions differ substantially).

I isolate the same region of both distributions and compute the sum of the squared differences between the two distributions over that region, i.e. a least-squares approach:
    [itex] R^2 = \sum_i \left[\, exp_i - theo_i(x) \,\right]^2 [/itex]

I do this for several values that I have chosen for the parameter of the theoretical distribution and obtain a unique value (which I call x = ζ) that minimizes the sum of squares, which is exactly what I want. Every other parameter value gives a larger sum of squares.
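For concreteness, here is a minimal sketch of that scan (the exponential form, grid, bulk window, and trial values are all toy assumptions, not my actual distributions):

[code]
import numpy as np

# Toy stand-in (assumption): a one-parameter exponential density plays the
# theoretical distribution; a noisy copy of it plays the experimental one.
def theo(x, zeta):
    return np.exp(-x / zeta) / zeta

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 5.0, 200)        # common abscissa for both curves
exp_vals = theo(grid, 1.0) + rng.normal(0.0, 0.01, grid.size)

# Compare only over the bulk; the tails are deliberately excluded.
bulk = (grid > 0.2) & (grid < 3.0)

def r2(zeta):
    d = exp_vals[bulk] - theo(grid[bulk], zeta)
    return np.sum(d**2)

zetas = np.linspace(0.8, 1.2, 41)        # chosen trial parameter values
r2_vals = np.array([r2(z) for z in zetas])
zeta_best = zetas[np.argmin(r2_vals)]    # the minimizing value, zeta
[/code]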

I do not know if this is necessary, but I can also plot all of the parameter values I tested against [itex] R^2 [/itex]. The points fall on a parabola, [itex] R^2 = ax^2 + bx + c [/itex], and taking the derivative of the parabola gives its minimum, which again occurs at ζ. I have attached a pdf of this.
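Continuing the sketch above, the parabola fit and its vertex would look like:

[code]
# Fit the (zeta, R^2) points with a parabola a*z**2 + b*z + c and set its
# derivative 2*a*z + b to zero; the vertex reproduces the minimizing zeta.
a, b, c = np.polyfit(zetas, r2_vals, 2)
zeta_vertex = -b / (2.0 * a)
[/code]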

My question is: how do I find a confidence interval for this? I am looking for [itex]\sigma[/itex] in ζ ± [itex]\sigma[/itex]. Do I use the formula for the parabola to find it? Do I use [itex] R^2 [/itex]?

I have looked through some books and online but have been unable to find how to compute this [itex]\sigma[/itex]. Any help is appreciated; a reference book would be of great help too. Thanks!
     

  3. Aug 17, 2012 #2

    Stephen Tashi

    Science Advisor

I may (or may not!) understand the usual method for getting confidence intervals for the parameters of a curve fit; I've remarked about it in several posts and not gotten any corrections, so we'll see if my luck holds.

It involves some commonly made and implausible assumptions, and it goes something like this. The value of the parameter is a known function of the data. I don't want to call the parameter "x", so let's call it p and say
[itex] p = F(X_1,X_2,\dots,X_n) [/itex], where the [itex] X_i [/itex] are the data.

You may not know the symbolic expression for [itex] F [/itex], but you have a numerical method for computing it, namely your curve-fitting algorithm.

    Let's say that your particular curve fit found that [itex] p = p_0 [/itex] when the specific data was [itex] X_1 = x_1, X_2 = x_2,... X_n = x_n [/itex].

Find (symbolically or numerically) the differential expression that approximates a change in [itex] p_0 [/itex] as a function of changes in the [itex] x_i [/itex]:

[itex] p_0 + \delta p = F(x_1,x_2,\dots) + \delta x_1 \frac{\partial F}{\partial X_1} + \delta x_2 \frac{\partial F}{\partial X_2} + \dots [/itex]

[itex] \delta p = \delta x_1 \frac{\partial F}{\partial X_1} + \delta x_2 \frac{\partial F}{\partial X_2} + \dots [/itex]

Think of the [itex] \delta x_i [/itex] as the random errors in measuring each of the [itex] x_i [/itex]. Assume these are independent, normally distributed random variables with mean zero, and estimate their standard deviations. (You might have some way of doing this that is independent of the curve fit. I suppose some people use the "residuals" (i.e. the "error" between the values predicted by the curve and the actual data) to do this estimation; I'm not sure how that can be justified by logical reasoning!)

The above approximation expresses the random variable [itex] \delta p [/itex] as a linear function of the independent, mean-zero normal random variables [itex] \delta x_i [/itex], which have known standard deviations. From this you can compute the standard deviation of [itex] \delta p [/itex] and say things about confidence intervals around [itex] p_0 [/itex].
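A numerical sketch of this propagation (illustration only: here [itex] F [/itex] is a simple least-squares slope standing in for the actual curve-fitting algorithm, and the per-point standard deviation is assumed known):

[code]
import numpy as np

# Illustrative setup (assumption): F is the least-squares slope for y = p*x,
# playing the role of "the parameter as a function of the data".
x = np.linspace(1.0, 5.0, 9)

def F(data):
    return np.sum(x * data) / np.sum(x * x)

rng = np.random.default_rng(1)
sigma = 0.1                                 # assumed per-point error std. dev.
data = 2.0 * x + rng.normal(0.0, sigma, x.size)
p0 = F(data)

# Numerical partial derivatives dF/dX_i by central differences.
h = 1e-6
partials = np.empty(x.size)
for i in range(x.size):
    up, dn = data.copy(), data.copy()
    up[i] += h
    dn[i] -= h
    partials[i] = (F(up) - F(dn)) / (2.0 * h)

# Independent, mean-zero normal errors propagate as
# Var(delta p) = sum_i (dF/dX_i)^2 * sigma_i^2.
sigma_p = np.sqrt(np.sum(partials**2) * sigma**2)
print(p0, sigma_p)
[/code]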

    The method is suspicious enough that the technical term for it is not "confidence interval", although some software packages are brazen enough to call it that. I think it is technically a "linearized confidence interval" and it may need another adjective (that I don't recall at the moment) to warn people.

    If your theoretical distribution is from a commonly used family of curves, you might be able to look up the formula for the linearized confidence interval that the above method produces.
     
    Last edited: Aug 17, 2012
  4. Aug 18, 2012 #3

    DrDu

    Science Advisor

A least-squares fit can be viewed as a special case of a maximum-likelihood procedure in which you assume that the log-likelihood L is, up to terms not depending on x, equal to [itex] -R^2 [/itex].
That is, the probability density for obtaining your data exp_i given x is
[itex] p(\{exp_i\}) = C \exp(-R^2) [/itex].
Now you turn the handle of the general theorems about maximum likelihood to find that the asymptotic variance is [itex] \sigma^2 = (\partial^2 R^2/\partial x^2)^{-1} [/itex], i.e. one over the second derivative of your parabola.
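Applied to the parabola from the first post, whose second derivative is 2a, this prescription is a one-liner (a sketch that reuses the polyfit coefficient a from the earlier code, and takes the stated density [itex] C\exp(-R^2) [/itex] at face value; a different error scaling in the exponent would rescale the result):

[code]
# Continuing the sketch from post #1, where a, b, c = np.polyfit(zetas, r2_vals, 2):
# the parabola's second derivative is d^2(R^2)/dx^2 = 2*a, so
sigma = np.sqrt(1.0 / (2.0 * a))      # asymptotic std. dev. of zeta
[/code]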
     
  5. Aug 18, 2012 #4

    Stephen Tashi

    Science Advisor

    Another thought: You could simply Monte-Carlo the problem. Simulate sets of data with various simulated "errors" and plot the distribution of the estimated parameter. Even if you do the problem another way, you could use a simulation to check your result.
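A minimal sketch of such a simulation (toy model: a straight-line fit stands in for the real fitting procedure, and the error level sigma is an assumption):

[code]
import numpy as np

# Simulate many data sets with the assumed error level, refit each one,
# and read the spread of the fitted parameter off the resulting sample.
rng = np.random.default_rng(2)
x = np.linspace(1.0, 5.0, 9)
sigma = 0.1                              # assumed measurement error std. dev.
p_true = 2.0

def fit(data):
    return np.sum(x * data) / np.sum(x * x)   # least-squares slope for y = p*x

estimates = np.array([fit(p_true * x + rng.normal(0.0, sigma, x.size))
                      for _ in range(10000)])
print(estimates.mean(), estimates.std())      # the spread plays the role of sigma
[/code]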
     