
Difference between OLS and LAD

  1. Feb 11, 2009 #1
    Hi
    I'm wondering what the difference is between the least squares method and the least absolute deviation method.
    Assume we have y = ax + b + s, where s is the deviation (error term).
    Are the steps to calculate a and b different?
    I read that the two methods are almost the same, but I could hardly find a really good explanation of LAD.
    Thank you
     
  2. Mar 23, 2009 #2
    I assume that you are referring specifically to linear regressions. The difference between least squares and least absolute deviation is what is being optimized when the line is fit to the data. Yes, the mechanics of the LS and LAD (also called "L-1") fitting procedures are quite different.

    While regression procedures which optimize different error functions sometimes produce similar results on a given set of data, they can also yield substantially different results. You can see an example of this in my posting: http://matlabdatamining.blogspot.com/2007/10/l-1-linear-regression.html


    -Will Dwinnell
    http://matlabdatamining.blogspot.com/
     
  3. Mar 23, 2009 #3

    statdad

    Homework Helper

    The two methods are quite different in concept. In least squares, the estimates are obtained by minimizing the sum of squared differences between the data and the fitted values (also described as minimizing the sum of the squares of the residuals).

    If we call the estimates [tex] \widehat{\alpha} [/tex] and [tex] \widehat{\beta} [/tex], then for least squares

    [tex]
    S(\widehat{\alpha}, \widehat{\beta}) = \min_{(\alpha,\beta) \in R^2} \sum (y-(\alpha + \beta x))^2
    [/tex]

    while for L1

    [tex]
    S(\widehat{\alpha}, \widehat{\beta}) = \min_{(\alpha, \beta) \in R^2} \sum |y - (\alpha + \beta x)|
    [/tex]
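
    As a rough illustration, here is a minimal Python sketch (assuming numpy and scipy are available; the data, starting values, and variable names are made up for the example) that fits the line by numerically minimizing each of the two objectives above:

    [code]
    # Fit y = alpha + beta*x by minimizing (a) the sum of squared residuals
    # and (b) the sum of absolute residuals, to contrast the two objectives.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)
    y[-1] += 30.0  # one large outlier, so the two fits visibly differ

    def sum_sq(params):   # least squares objective
        alpha, beta = params
        return np.sum((y - (alpha + beta * x)) ** 2)

    def sum_abs(params):  # L1 / LAD objective
        alpha, beta = params
        return np.sum(np.abs(y - (alpha + beta * x)))

    ls_fit = minimize(sum_sq, x0=[0.0, 0.0])
    l1_fit = minimize(sum_abs, x0=[0.0, 0.0], method="Nelder-Mead")

    print("least squares (alpha, beta):", ls_fit.x)
    print("L1 / LAD      (alpha, beta):", l1_fit.x)
    [/code]

    The L1 objective is not differentiable wherever a residual is zero, so a derivative-free method is used here; in practice LAD fits are usually computed via linear programming or iteratively reweighted least squares.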

    Two benefits of least squares (though not the most important ones):
    - the underlying calculations are easier to show with pencil and paper than they are for L1
    - it is possible to write down closed-form formulas for the two least squares estimates - it isn't possible for L1 (see the formulas just after this list)
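
    For reference, the familiar closed-form least squares solution for the simple linear model is

    [tex]
    \widehat{\beta} = \frac{\sum (x - \bar{x})(y - \bar{y})}{\sum (x - \bar{x})^2}, \qquad \widehat{\alpha} = \bar{y} - \widehat{\beta} \, \bar{x}
    [/tex]

    while the L1 estimates have to be computed iteratively.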

    Estimates from both methods have asymptotic normal distributions under fairly general conditions.

    The least squares estimates are the classical estimates when normality of the error distribution is assumed: they have certain optimality properties in that case, and, if you are interested in looking only at certain types of estimates, they are BLUE (Best Linear Unbiased Estimates) of the underlying parameters.

    Least squares is so widely used because people are familiar with it. Its biggest downside is that least squares fits are highly non-robust (sensitive to outliers and to leverage points). L1 fits also suffer from this, but not quite as seriously as least squares: L1 is resistant to outliers in the response, though it is still sensitive to leverage points in x.

    Regression based on ranks, as well as regression based on Huber's M-estimates, is more robust, and with the ongoing combination of increasing computing power and falling cost, these methods are ever-more reasonable alternatives.
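
    As a rough sketch of that kind of comparison (assuming the statsmodels package; the data are made up for the example), a Huber M-estimate fit can be put next to an ordinary least squares fit on data containing one gross outlier:

    [code]
    # Compare ordinary least squares with a Huber M-estimate (robust) fit.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    x = np.linspace(0.0, 10.0, 50)
    y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=x.size)
    y[-1] += 30.0  # a single gross outlier

    X = sm.add_constant(x)  # design matrix: intercept column plus x

    ols_fit = sm.OLS(y, X).fit()
    huber_fit = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()

    print("OLS   (intercept, slope):", ols_fit.params)
    print("Huber (intercept, slope):", huber_fit.params)
    [/code]

    The Huber fit should be pulled much less toward the outlier than the least squares fit.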
     