
Question about Mean Squared Error: Why Squared?

  1. Aug 29, 2009 #1
    Hello there :smile:

    I have no background in statistics, but have encountered some at my job and I am seeking to better understand the nature of Data Analysis.

    From Wikipedia:

    But as obvious as it may be to some, I cannot for the life of me figure out why we average the squares of the errors.

    And why is this a better measure of accuracy than simply measuring the errors themselves?

    Thanks!!
     
  3. Aug 29, 2009 #2
    The math for working with the squared error is simpler than the math for working with the absolute error. You can do things like take the derivative of the squared error or express it with matrices.
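    For example (standard calculus; c here is a generic constant estimate, not notation from the thread): the value of c that minimizes the total squared error falls out of setting the derivative to zero,

    [tex]\frac{d}{dc}\sum_{i=1}^{n}(x_i - c)^2 = -2\sum_{i=1}^{n}(x_i - c) = 0 \quad\Rightarrow\quad c = \frac{1}{n}\sum_{i=1}^{n}x_i = \overline{x}[/tex]

    so the sample mean drops out immediately. The absolute error [itex]\sum_{i=1}^{n}|x_i - c|[/itex] is not differentiable at the data points, so the same approach breaks down there (its minimizer turns out to be the median instead).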
     
  4. Aug 29, 2009 #3
    I really don't know if this is correct at all, but I've always thought that one reason at least is that by squaring the errors you're working with the magnitudes of the errors without regard to the signs of the raw errors.
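    For instance, errors of +3 and -3 cancel in a plain average but not after squaring:

    [tex]\frac{3 + (-3)}{2} = 0 \qquad \text{whereas} \qquad \frac{3^2 + (-3)^2}{2} = 9[/tex]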

    I'm very weak in statistics, however, so I could well be wrong about that ...
     
  5. Aug 29, 2009 #4
    You can show, for a joint Gaussian distribution, that minimization of mean squared error is equivalent to maximum likelihood. I suppose then you could ask, "Why maximum likelihood?"
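    To spell that out in the simplest (independent) case, writing [itex]\hat{y}_i[/itex] for the fitted values: if the errors [itex]y_i - \hat{y}_i[/itex] are independent Gaussians with common variance [itex]\sigma^2[/itex], the log-likelihood is

    [tex]\log L = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2[/tex]

    so for fixed [itex]\sigma[/itex], maximizing the likelihood is exactly minimizing the sum of squared errors.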
     
  6. Aug 30, 2009 #5
    Bear with me - I may be very rusty here.

    Suppose we define the kth order root mean power deviation of a data set [itex]\underline{X}=\{ x_1,x_2, \dots x_n\}[/itex] as

    [tex]dev_k(\underline{X}) = \sqrt[k]{\frac{\sum_{i=1}^{n}(x_i-\overline{x})^k}{n}}[/tex]

    where we interpret the 1st root to be the radicand itself.

    (Note "dev" is not an official name for anything, I made it up for this example).

    Then [itex]dev_k[/itex] looks like a reasonable measure of dispersion for any k > 0.

    But

    [tex]dev_1(\underline{X})=\frac{\sum_{i=1}^{n}(x_i-\overline{x})}{n}[/tex]

    [tex]=\frac{\sum_{i=1}^{n}(x_i)-\sum_{i=1}^{n}(\overline{x})}{n}[/tex]

    [tex]=\frac{n\overline{x} - n\overline{x}}{n}=0[/tex]

    So that measure is rather useless. Some consider [itex]\sum |x_i - \overline{x}|/n[/itex] (the mean absolute deviation) instead, and this leads to some useful information.
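    A quick numerical check of the above (a minimal Python sketch; the data set is an arbitrary example):

    [code]
    def dev(xs, k):
        # k-th order "root mean power deviation" as defined above (divisor n)
        n = len(xs)
        xbar = sum(xs) / n
        return (sum((x - xbar) ** k for x in xs) / n) ** (1.0 / k)

    xs = [2, 4, 4, 4, 5, 5, 7, 9]   # sample with mean 5
    xbar = sum(xs) / len(xs)

    print(dev(xs, 1))   # 0.0 (up to rounding): the signed deviations cancel
    print(dev(xs, 2))   # 2.0: the population standard deviation
    print(sum(abs(x - xbar) for x in xs) / len(xs))   # 1.5: mean absolute deviation
    [/code]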

    However, the second-order measure (typically called the root mean square deviation) is analogous to the "moment of inertia" of the distribution about the mean and, as mentioned above, is useful in analyzing the minimization of error.
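    In symbols, with a unit mass placed at each data point and distances measured from the mean, the physical analogy is

    [tex]I = \sum_i m_i r_i^2 \qquad \longleftrightarrow \qquad n\cdot dev_2(\underline{X})^2 = \sum_{i=1}^{n}(x_i - \overline{x})^2[/tex]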

    It was decided (I am not sure when, early 20th century?) that this measure would be the "standard" one (hence "standard deviation"), but the other higher order ones are also valid measures.

    This may not fully answer your question, but statisticians have put some thought into which measure is best.

    --Elucidus
     
    Last edited: Aug 30, 2009
  7. Aug 30, 2009 #6
    Good presentation of central moments, but the divisor is usually [itex]n-1[/itex]. This is more important with small sample sizes.
     
  8. Aug 30, 2009 #7

    statdad

    Homework Helper

    The squared error was historically used because that is the natural method to use when you assume the errors behind your data are normally distributed. The fact that using the squares made the following mathematics easier to work with was a bonus.
     
  9. Aug 30, 2009 #8
    The reason for using MSE is that you end up with a quadratic function, similar to [itex]y=x^2[/itex], which has a unique minimum, found by setting its derivative equal to zero.
     
  10. Aug 30, 2009 #9
    statdad is right.

    A normal distribution has only two parameters, its mean and its standard deviation. The sample mean of your data is an unbiased estimator of the mean, and the sample variance (with divisor n - 1) is an unbiased estimator of the variance.

    This is why these two statistics are used: they are unbiased estimators for the defining parameters of the unknown normal distribution.
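    Concretely:

    [tex]E\left[\overline{X}\right] = \mu, \qquad E\left[\frac{1}{n-1}\sum_{i=1}^{n}(X_i - \overline{X})^2\right] = \sigma^2[/tex]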


    In reality, a lot of data is close to normal; stock price returns, for instance, are nearly normal. In these cases one can estimate the distribution directly with the sample mean and standard deviation. If the data are not normal, one can take averages before estimating parameters. Before computers, this is what statisticians did, because they needed to know something about the mathematical form of their sampling distribution. If you average your data to create a new sampling distribution, the new distribution of averages is approximately normal, which reduces the problem to estimating which normal distribution your sample comes from.
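    A small simulation sketch of that averaging step (the exponential distribution is an arbitrary non-normal choice):

    [code]
    import random

    random.seed(0)

    GROUP = 30    # samples per average
    GROUPS = 300  # number of averages

    # Skewed, clearly non-normal raw data: exponential with mean 1.
    raw = [random.expovariate(1.0) for _ in range(GROUP * GROUPS)]

    # Average the raw data in groups of 30.
    means = [sum(raw[i:i + GROUP]) / GROUP for i in range(0, len(raw), GROUP)]

    # By the central limit theorem the averages are approximately normal
    # with mean 1 and standard deviation 1/sqrt(30), about 0.18.
    mu = sum(means) / len(means)
    sd = (sum((m - mu) ** 2 for m in means) / (len(means) - 1)) ** 0.5
    print(mu, sd)
    [/code]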
     
  11. Aug 30, 2009 #10
    Interestingly enough, the use of the divisor (n - 1) is a rather recent development. I've read texts as recent as the 1940s where the standard deviation has a divisor of n. With that divisor, the expected value of [itex]s^2[/itex] is [itex](n-1)\sigma^2/n[/itex], if I'm not mistaken. Since it was an underestimator of the true variance, statisticians and probabilists switched to the current divisor of (n - 1).

    Whether the divisor is n or (n - 1), these expressions are all measures of dispersion; whether they're good measures is another matter.
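    A quick simulation check of the bias mentioned above (a Python sketch; the sample size and true [itex]\sigma[/itex] are arbitrary choices):

    [code]
    import random

    random.seed(0)

    N = 5            # small sample size, where the bias is most visible
    TRIALS = 200000
    SIGMA = 1.0      # true standard deviation

    total = 0.0
    for _ in range(TRIALS):
        xs = [random.gauss(0.0, SIGMA) for _ in range(N)]
        xbar = sum(xs) / N
        total += sum((x - xbar) ** 2 for x in xs) / N   # divisor n, not n - 1

    # Expected value is (n - 1) * sigma^2 / n = 0.8, not 1.0.
    print(total / TRIALS)
    [/code]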

    --Elucidus
     
  12. Aug 30, 2009 #11
    For large n it doesn't really matter.
     