
Best straight line fit to a set of data

  1. Aug 27, 2015 #1
    Hello,

    I didn't know where to put my question, but I think here is the best section for it.

    http://im60.gulfup.com/apkrpJ.png [Broken]

    The problem isn't that I can't solve it; I actually did solve it, but I don't understand the concept. I don't remember anything from my high school statistics, and I haven't taken college statistics yet.

    To be more specific: what does the square of the deviation mean, and how does taking the sum of these squares give me the result I want?
     
  3. Aug 27, 2015 #2

    Geofleur

    Science Advisor
    Gold Member

    Some of the data points will fall above the straight line, and some will fall below it. The difference ##y_n - y##, where ##y## is the value the line takes at ##x_n##, will be positive for the points above and negative for the points that fall below. If you just add all these differences up, they will cancel each other on average and you will get zero. That's not very useful! So instead, you square the vertical distance between ##y_n## and ##y##, to get a positive number whether the data point falls above or below the line. Adding these numbers up will only give zero if all the data points are exactly on the line. Minimizing the sum of the squared deviations from the line will give you the line that, on average, has the smallest vertical deviations from the data points. It may help to draw a picture and try to visualize this argument.
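    To make that cancellation concrete, here is a small numerical sketch (not from the thread; the numbers are invented for illustration): a candidate line can have signed deviations that sum to exactly zero even though the points are badly scattered, while the sum of squares still reports the misfit.

[code]
# Invented illustration: signed deviations can cancel, squared ones cannot.
y_data = [1.0, 5.0, 3.0, 7.0]   # measured values y_n
y_line = [4.0, 4.0, 4.0, 4.0]   # value of a candidate line at each x_n

deviations = [yn - yl for yn, yl in zip(y_data, y_line)]
print(sum(deviations))                  # 0.0  -- misleadingly "perfect"
print(sum(d * d for d in deviations))   # 20.0 -- reveals the real scatter
[/code]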
     
  4. Aug 27, 2015 #3

    Ray Vickson

    Science Advisor
    Homework Helper



    As stated, the problem has nothing to do with statistics; it is just a well-defined math problem. It is a whole separate issue as to whether the sum of squared deviations has something to do with probability and/or statistics; in some cases it does, and in other cases it does not.


    Anyway, you said that you "actually did solve it", but did not understand what you were doing. Well, first show us your work, so we can tell where you might need some assistance.


    Why the sum of squares? Here are some reasons:

    (1) We (usually) want a "goodness-of-fit" measure that somehow has in it all the errors ##e_i = y_i - (m x_i + b)## for ##i = 1,2, \ldots, n##.

    (2) We do not just want to add up all the errors (algebraically), because the positive ones may cancel out the negative ones, leaving us with a highly erroneous error measure of 0 (or something very small), even when the fit is not very good at all. So, for that reason, we should use a function of the magnitudes ##|e_i|##, rather than the ##e_i## themselves.

    (3) Taking the sum of squares (which does involve ##|e_i|^2 = e_i^2##) is convenient, because it allows us to use calculus methods to arrive at a simple solution involving more-or-less straightforward arithmetical calculations. (A small numerical sketch of this closed-form solution appears just after this list.) Furthermore, the method has been around for more than 200 years, so it is familiar. Finally, IF certain types of statistical assumptions are made about the nature of the ##(x,y)## data points, THEN numerous interesting statistical facts and measurements can be derived from the solution. However, just to be clear: even if we are not doing statistics, the least-squares fit can still be useful.

    (4) Other, sometimes more "robust" intercept-slope estimates can be obtained using alternative measures of error, such as ##S_1 = \sum_{i=1}^n |e_i|## (total absolute error) or ##S_3 = \max (|e_1|, |e_2|, \ldots, |e_n|)## (largest single error), and finding the lines that minimize those measures instead. Such problems are doable nowadays using relatively recently developed tools (Linear Programming, for example). They would not have been known to Gauss or Legendre and probably would not have been solvable by them, either. I believe that the resulting statistical issues in these cases are much less well understood (and harder to deal with) than in the least-squares case. Nevertheless, these types of fits are nowadays pretty widely used and are often preferred to least-squares fits; and sometimes the resulting statistical issues (if any) are handled using Monte Carlo simulation methods, for example.
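    Here is a minimal sketch of the calculus solution mentioned in point (3), assuming nothing beyond the definitions above: setting the partial derivatives of the sum of squared errors ##\sum_i e_i^2## with respect to ##m## and ##b## to zero gives the usual normal equations, which reduce to a few sums. The data values and the helper name least_squares_line are invented for illustration; numpy.polyfit is used only as a cross-check.

[code]
# Sketch of the closed-form least-squares fit from point (3).
# Setting dS/dm = 0 and dS/db = 0 for S = sum((y_i - (m*x_i + b))**2)
# gives the "normal equations", solved here with plain sums.
import numpy as np

def least_squares_line(x, y):
    """Return slope m and intercept b minimizing the sum of squared errors."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(xi * xi for xi in x)
    sxy = sum(xi * yi for xi, yi in zip(x, y))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

# Invented data, roughly following y = 2x + 1 with some scatter.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.2, 6.8, 9.1]

m, b = least_squares_line(x, y)
print(m, b)                 # from the normal equations
print(np.polyfit(x, y, 1))  # cross-check: array([slope, intercept])
[/code]

    The alternative measures in point (4), ##S_1## or ##S_3##, have no such closed form and are usually minimized numerically, e.g. via linear programming.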
     
  5. Aug 27, 2015 #4

    WWGD

    Science Advisor
    Gold Member

    As a technical note, the differences ##y_i - y## between the data and the fitted line are usually called the residuals; we want to minimize the sum of squares of the residuals. There are additional tests you may want to run to get an idea of how well the approximation fits the data: a hypothesis test with ##H_0: a = 0## versus ##H_1: a \neq 0##, where ##a## is the slope. If ##a = 0## is accepted (not rejected), the test tells you there is little, if any, linear dependence between ##y## and ##x##. You may also want to look at the correlations between the fitted quantities; in the multilinear case ##y = b + a_1 x_1 + a_2 x_2 + \ldots## you would check the correlations between pairs of predictors and drop those where the correlation is high. Finally, you also want to consider the coefficients ##r## and ##r^2##, which measure the strength of the linear dependence, i.e., the extent to which the independent variable accounts for the variation in the dependent variable.
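    For concreteness, here is a hedged sketch (data invented) of how those quantities are commonly read off in practice; SciPy's stats.linregress reports the fitted slope and intercept together with ##r## and the p-value for the test of ##H_0: a = 0##.

[code]
# Sketch: slope/intercept plus r, r^2 and the p-value for H0: slope = 0.
# The data are invented; linregress comes from SciPy's stats module.
from scipy import stats

x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.8, 3.1, 5.0, 7.2, 8.9, 11.1]

fit = stats.linregress(x, y)
print(fit.slope, fit.intercept)  # fitted line y = slope*x + intercept
print(fit.rvalue ** 2)           # r^2: fraction of variance explained
print(fit.pvalue)                # two-sided test of H0: slope = 0
[/code]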
     