
The linear in linear least squares regression

  1. Jul 28, 2015 #1
    It is my understanding that you can use linear least squares to fit a variety of different functions (quadratic, cubic, quartic, etc.). The requirement of linearity applies to the coefficients (i.e. B in (y - Bx)^2). It seems to me that I can find a solution such that a coefficient b_i^2 = c_i; in other words, I can just express the squared b_i with a linear c_i. So can't I always find a linear solution?

    My gut feeling is that linearity is required because normality is required for the orthogonality principle to hold? But I am not sure.
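As a concrete sketch of the substitution described above (all numbers here are made up for illustration): the model y = a + b^2 x is linear in the parameter c = b^2, so ordinary linear least squares recovers c, and b can be read off as sqrt(c) afterwards (when the estimate of c is non-negative).

```python
import numpy as np

# Simulate data from y = a + b^2 * x with hypothetical true values.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 30)
a_true, b_true = 0.5, 1.5
y = a_true + b_true**2 * x + rng.normal(0, 0.05, x.size)

# The model is linear in c = b^2, so plain linear least squares works.
X = np.column_stack([np.ones_like(x), x])
(a_hat, c_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
b_hat = np.sqrt(c_hat)  # recover b from the linear parameter c
print(a_hat, b_hat)     # close to 0.5 and 1.5
```

This only works because b enters the model solely through b^2; if b appeared non-linearly in a way that cannot be renamed away (e.g. inside an exponential), no such substitution exists.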

  3. Jul 28, 2015 #2
    I'm not sure what your question is here.

    The least squares method for fitting a set of data to a general polynomial function (linear, quadratic, cubic, etc.) results in a set of linear equations which must be solved to determine the unknown coefficients of the polynomial. The number of equations to be solved is equal to the degree of the polynomial plus 1.
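A quick illustration of that count (the data values below are invented for the example): fitting a quadratic means solving 3 linear equations, the normal equations X^T X b = X^T y, where the columns of X are the powers of x.

```python
import numpy as np

# Hypothetical data: a noisy quadratic.
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0, 0.1, x.size)

# Degree-2 polynomial -> 3 unknown coefficients -> a 3x3 linear system.
X = np.vander(x, 3, increasing=True)  # columns: 1, x, x^2
b, *_ = np.linalg.lstsq(X, y, rcond=None)
print(b)  # close to [1.0, 2.0, -0.5]
```

Even though the fitted curve is a quadratic in x, the system solved for the coefficients is linear, which is what "linear" in linear least squares refers to.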
  4. Jul 29, 2015 #3

    Normality is only required in OLS for accurate testing of error terms. As long as you have finite variance and a non-singular covariance matrix you can get the beta terms, nothing is required to be orthogonal.
  5. Aug 3, 2015 #4
    It has nothing to do with normality; linearity in the parameters just lets you derive a solution using linear algebra. It's entirely possible to fit non-linear functions by least squares, it just requires numerical methods.

    The model
    [tex]y = \alpha + \beta^2x + \epsilon[/tex]
    is linear in the parameters [itex](\alpha, \beta^2)[/itex], so there's no problem. Conversely, the model
    [tex]y = \alpha e^{\beta x} + \epsilon[/tex]
    is clearly non-linear in [itex]\beta[/itex].
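    To make the contrast concrete (with made-up numbers), the second model has no closed-form solution, but it can still be fitted by least squares numerically, e.g. with scipy's `curve_fit`, which iteratively minimizes the sum of squared residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data from the non-linear model y = alpha * exp(beta * x).
rng = np.random.default_rng(1)
x = np.linspace(0, 2, 40)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0, 0.05, x.size)

def model(x, alpha, beta):
    return alpha * np.exp(beta * x)

# Non-linear in beta, so the fit is iterative (needs a starting guess p0).
params, cov = curve_fit(model, x, y, p0=(1.0, 1.0))
print(params)  # close to (1.5, 0.8)
```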

    This is mostly true. Strictly speaking, the Gauss-Markov theorem guarantees that the least-squares estimates have good properties assuming only zero-mean, equal-variance, and uncorrelated errors, so the normality assumption isn't so important if you just want to curve fit, but modelling is easier assuming normal errors. Under the usual normality assumptions, the least-squares estimates are the maximum likelihood estimates, which are better understood than the more general least-squares estimates. It also makes it easier to test hypotheses about the coefficients.
    Last edited: Aug 3, 2015
  6. Mar 16, 2016 #5
    I am not completely sure about my answers, but I will try my best.

    You do not need normally distributed error terms for an OLS estimator to be unbiased, nor is that the reason that you have linearity in the coefficients. According to the central limit theorem your estimates will be normally distributed in the limit anyway. However, you do want normally distributed error terms for hypothesis testing in small samples. Overall, the assumption of normality has to do with the assumptions of the classical linear model in general, and not with the linearity of its estimates (for an elaboration on the OLS assumptions see for example http://economictheoryblog.com/2015/04/01/ols_assumptions).

    I think the answer to your question is far more technical, in the sense that what you get for each coefficient is a scalar, which is constant per se. In that sense it cannot be a function of a non-linear form. However, that is my own understanding of this subject, and I would love to hear another answer to this question if there is one.
