Question about Mean Squared Error: Why Squared?

Saladsamurai
Hello there :smile:

I have no background in statistics, but have encountered some at my job and I am seeking to better understand the nature of Data Analysis.

From Wikipedia:

In statistics, the mean squared error or MSE of an estimator is one of many ways to quantify the amount by which an estimator differs from the true value of the quantity being estimated. As a loss function, MSE is called squared error loss. MSE measures the average of the square of the "error."

But as obvious as it may be to some, I cannot for the life of me figure out why we average the squares of the errors.

And why is this a better measure of accuracy than simply measuring the errors themselves?

Thanks!
 
The math for working with the squared error is simpler than the math for working with the absolute error. You can do things like take the derivative of the squared error or express it with matrices.
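To illustrate that point, here is a minimal Python sketch (the data and names are my own, not from the thread): because the squared error is differentiable everywhere, setting its gradient to zero gives the closed-form normal equations for a least-squares line fit, something the absolute error does not admit.

```python
import numpy as np

# Made-up data: noisy points around the line y = 2x + 1.
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.5, size=x.shape)

# Design matrix: one column for the slope, a column of ones for the intercept.
A = np.column_stack([x, np.ones_like(x)])

# The squared error ||A b - y||^2 is differentiable in b, so setting its
# gradient to zero yields the closed-form normal equations (A^T A) b = A^T y.
b = np.linalg.solve(A.T @ A, A.T @ y)
print(b)  # roughly [2.0, 1.0]: slope and intercept recovered
```

No such closed form exists for the sum of absolute errors, which has to be minimized iteratively.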
 
I really don't know if this is correct at all, but I've always thought that one reason at least is that by squaring the errors you're working with the magnitudes of the errors without regard to the signs of the raw errors.

I'm very weak in statistics, however, so I could well be wrong about that ...
 
You can show that, for a joint Gaussian distribution, minimization of mean squared error is equivalent to maximum likelihood. I suppose then you could ask, "Why maximum likelihood?"
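To sketch why (a standard derivation for the i.i.d. case, not spelled out in the thread): for observations $x_1, \dots, x_n$ with Gaussian errors of mean $\mu$ and fixed variance $\sigma^2$, the log-likelihood is

$$\log L(\mu) = -\frac{n}{2}\log(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2,$$

and since the first term does not involve $\mu$, maximizing $\log L(\mu)$ is exactly minimizing the squared error $\sum_i (x_i-\mu)^2$.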
 
Bear with me - I may be very rusty here.

Suppose we define the $k$th order root mean power deviation of a data set $\underline{X}=\{x_1, x_2, \dots, x_n\}$ as

$$\mathrm{dev}_k(\underline{X}) = \sqrt[k]{\frac{\sum_{i=1}^{n}(x_i-\overline{x})^k}{n}}$$

where we interpret the 1st root to be the radicand itself.

(Note "dev" is not an official name for anything, I made it up for this example).

Then $\mathrm{dev}_k$ is a measure of dispersion for any $k > 0$.

But

$$\mathrm{dev}_1(\underline{X})=\frac{\sum_{i=1}^{n}(x_i-\overline{x})}{n}=\frac{\sum_{i=1}^{n}x_i-\sum_{i=1}^{n}\overline{x}}{n}=\frac{n\overline{x}-n\overline{x}}{n}=0$$

So that measure is rather useless. Some consider $\sum_{i=1}^{n} |x_i - \overline{x}|/n$ instead, and this leads to some useful information.
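A quick numerical check of these measures (a minimal Python sketch; "dev" is the made-up name from this post, and the sample data are my own choice):

```python
import numpy as np

def dev(x, k):
    """k-th order root mean power deviation, as defined in this post."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** k) ** (1.0 / k)

x = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]   # mean is 5.0

print(dev(x, 1))   # 0.0 -- the first-order measure always vanishes
print(dev(x, 2))   # 2.0 -- the (population) standard deviation
print(np.std(x))   # 2.0 -- numpy's std with divisor n agrees

# The absolute-value alternative mentioned above (mean absolute deviation):
print(np.mean(np.abs(np.asarray(x) - np.mean(x))))   # 1.5
```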

However, the second-order measure (typically called the root mean square or some such) is analogous to the "moment of inertia" of the distribution about the mean and, as has been mentioned, is useful in analysing the minimization of error.

It was decided (I am not sure when; early 20th century?) that this measure would be the "standard" one (hence "standard deviation"), but the other, higher-order ones are also valid measures.

This may not fully answer your question, but statisticians have put some thought into which measure is best.

--Elucidus
 
Elucidus said:
Bear with me - I may be very rusty here.

Good presentation of central moments, but the divisor is usually n-1. This is more important with small sample sizes.
 
The squared error was historically used because that is the natural method to use when you assume the errors behind your data are normally distributed. The fact that using the squares made the following mathematics easier to work with was a bonus.
 
One reason for using MSE is that it gives a quadratic curve, similar to $y = x^2$, which has a unique minimum, found by setting its derivative equal to zero.
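To make that concrete (a standard one-variable derivation, not specific to this thread): estimating a single constant $c$ from the data, the squared error is

$$f(c)=\sum_{i=1}^{n}(x_i - c)^2, \qquad f'(c) = -2\sum_{i=1}^{n}(x_i - c),$$

so $f'(c)=0$ exactly when $c = \overline{x}$, and since $f''(c)=2n>0$ that minimum is unique.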
 
statdad is right.

A normal distribution has only two parameters, its mean and its standard deviation. The average of your data is an unbiased estimator of the mean, and the sample variance (with divisor n - 1) is an unbiased estimator of the variance.

This is why these two statistics are used: they are unbiased estimators for the defining parameters of the unknown normal distribution.

In reality, a lot of data is close to normal. For instance, stock price returns are nearly normal. In these cases one can estimate the distribution directly with the sample mean and standard deviation. If the data are not normal, one can take averages before estimating parameters. Before computers, this is what statisticians did, because they needed to know something about the mathematical form of their sampling distribution. If you average your data to create a new sampling distribution, then this new distribution of averages is approximately normal. This reduces the problem of data analysis to estimating which normal distribution your sample data come from.
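To check the unbiasedness claim numerically, here is a minimal Python sketch (the parameter values and sample sizes are my own choices, not from the thread) that draws many small samples from a normal distribution and averages the variance estimates computed with divisor n versus divisor n - 1:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 5.0, 2.0, 10, 100_000

# Draw many small samples of size n from N(mu, sigma^2).
samples = rng.normal(mu, sigma, size=(trials, n))

# Average the variance estimates with divisor n (ddof=0)
# and with divisor n - 1 (ddof=1).
var_div_n  = samples.var(axis=1, ddof=0).mean()
var_div_n1 = samples.var(axis=1, ddof=1).mean()

print(sigma**2)    # 4.0, the true variance
print(var_div_n)   # about 3.6 = (n-1)/n * 4: biased low
print(var_div_n1)  # about 4.0: unbiased
```

The divisor-n estimate comes out low by exactly the factor (n - 1)/n discussed in the posts below.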
 
SW VandeCarr said:
Good presentation of central moments, but the divisor is usually n-1. This is more important with small sample sizes.

Interestingly enough, the use of the divisor (n - 1) is a rather recent development. I've read texts as recent as the 1940s where the standard deviation has a divisor of n. With divisor n, the expected value of $s^2$ is $\frac{n-1}{n}\sigma^2$, if I'm not mistaken. Since it was an underestimator of the true variance, statisticians and probabilists switched to the current divisor of (n - 1).

Whether the divisor is n or (n - 1), these expressions are all measures of dispersion; whether they're good measures is another matter.

--Elucidus
 
Elucidus said:
Interestingly enough, the use of the divisor (n - 1) is a rather recent development. I've read texts as recent as the 1940s where the standard deviation has a divisor of n. With divisor n, the expected value of $s^2$ is $\frac{n-1}{n}\sigma^2$, if I'm not mistaken. Since it was an underestimator of the true variance, statisticians and probabilists switched to the current divisor of (n - 1).

Whether the divisor is n or (n - 1), these expressions are all measures of dispersion; whether they're good measures is another matter.

--Elucidus

For large n it doesn't really matter: the two estimates differ by a factor of n/(n - 1), which is about 11% at n = 10 but only 1% at n = 100.
 
