ahmed markhoos said:
Hello,
I didn't know where to put my question, but I think this is the best section for it.
http://im60.gulfup.com/apkrpJ.png
The problem isn't that I can't solve it; I actually did, but I don't understand the concept! -- I don't remember anything from my high school stats, and I haven't taken college stats yet.
To be more specific: what does the square of the deviation mean, and how does taking the sum of the squares give me the result I want?
As stated, the problem has nothing to do with statistics; it is just a well-defined math problem. Whether the sum of squared deviations has something to do with probability and/or statistics is a separate issue; in some cases it does, and in other cases it does not.

Anyway, you said that you "actually did" solve it, but did not understand what you were doing. Well, first show us your work, so we can tell where you might need some assistance.

Why the sum of squares? Here are some reasons:
(1) We (usually) want a "goodness-of-fit" measure that somehow incorporates all the errors ##e_i = y_i - (m x_i + b)## for ##i = 1,2, \ldots, n##.
(2) We do not want to simply add up all the errors (algebraically), because the positive ones may cancel the negative ones, leaving us with a misleading error measure of 0 (or something very small) even when the fit is not good at all. For that reason, we should use a function of the magnitudes ##|e_i|## rather than of the ##e_i## themselves.
(3) Taking the sum of squares (which does involve ##|e_i|^2 = e_i^2##) is convenient, because it allows us to use calculus methods to arrive at a simple solution involving more-or-less straightforward arithmetical calculations (the first sketch after this list carries that calculation out). Furthermore, the method has been around for more than 200 years, so it is familiar. Finally, IF certain types of statistical assumptions are made about the nature of the ##(x,y)## data points, THEN numerous interesting statistical facts and measurements can be derived from the solution. However, just to be clear: even if we are not doing statistics, the least-squares fit can still be useful.
(4) Other, sometimes more "robust", intercept-slope estimates can be obtained using alternative error measures, such as ##S_1 = \sum_{i=1}^n |e_i|## (total absolute error) or ##S_3 = \max (|e_1|, |e_2|, \ldots, |e_n|)## (largest single error), and finding the lines that minimize those measures instead. Such problems are doable nowadays using relatively recently developed tools (Linear Programming, for example; the second sketch below shows one such fit). They would not have been known to Gauss or Legendre, and probably would not have been solvable by them, either. I believe that the resulting statistical issues in these cases are much less well understood (and harder to deal with) than in the least-squares case. Nevertheless, these types of fits are pretty widely used nowadays and are often preferred to least-squares; sometimes the resulting statistical issues (if any) are handled using Monte-Carlo simulation methods, for example.
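To make points (1)-(3) concrete, here is a minimal sketch in Python (the language and the data points are my own choices, invented purely for illustration). It computes the slope and intercept from the closed-form solution that the calculus in point (3) produces, and then prints the sum of the signed errors that point (2) warned about: for the fitted line it is (essentially) zero even though the individual errors are not.

```python
# Minimal least-squares line fit, standard library only.
# The data points below are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

# Setting the partial derivatives of S = sum (y_i - (m x_i + b))^2
# with respect to m and b equal to zero gives two linear ("normal")
# equations; solving them yields the familiar formulas:
sum_x = sum(xs)
sum_y = sum(ys)
sum_xx = sum(x * x for x in xs)
sum_xy = sum(x * y for x, y in zip(xs, ys))

m = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
b = (sum_y - m * sum_x) / n

errors = [y - (m * x + b) for x, y in zip(xs, ys)]
print(f"m = {m:.4f}, b = {b:.4f}")

# The signed errors cancel almost exactly (for a least-squares line
# with an intercept, their sum is zero up to rounding), which is why
# point (2) rejects the plain algebraic sum as an error measure:
print("sum of signed errors :", sum(errors))
print("sum of squared errors:", sum(e * e for e in errors))
```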
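The ##S_1## fit from point (4) can be posed as a linear program: introduce one extra variable ##t_i \ge |e_i|## per data point and minimize ##\sum_i t_i##. Here is a sketch using the same invented data, assuming SciPy is available:

```python
# L1 ("least absolute deviations") line fit via linear programming.
import numpy as np
from scipy.optimize import linprog

xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(xs)

# Decision variables: [m, b, t_1, ..., t_n]; at the optimum each t_i
# equals |e_i|.  Objective: minimize t_1 + ... + t_n.
c = np.concatenate([[0.0, 0.0], np.ones(n)])

# Enforce t_i >= e_i and t_i >= -e_i as the linear inequalities
#   -(m x_i + b) - t_i <= -y_i   and   (m x_i + b) - t_i <= y_i.
A_ub = np.zeros((2 * n, 2 + n))
b_ub = np.zeros(2 * n)
for i in range(n):
    A_ub[2 * i, :2] = [-xs[i], -1.0]
    A_ub[2 * i, 2 + i] = -1.0
    b_ub[2 * i] = -ys[i]
    A_ub[2 * i + 1, :2] = [xs[i], 1.0]
    A_ub[2 * i + 1, 2 + i] = -1.0
    b_ub[2 * i + 1] = ys[i]

# m and b are free; the t_i are nonnegative.
bounds = [(None, None), (None, None)] + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
m, b = res.x[:2]
print(f"L1 fit: m = {m:.4f}, b = {b:.4f}, total |error| = {res.fun:.4f}")
```

The minimax (##S_3##) fit is the same construction with a single variable ##t## shared by all the constraints.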