Mean Squared Error vs Loss

Hi Guys,

I am just starting to read about machine learning and came across ways that the error can be used to learn the target function. The way I understand it,

Error: $e = f(\vec{x}) - y^*$
Loss: $L(\vec{x}) = \frac{( f(\vec{x}) - y^* )^2}{2}$
Empirical Risk: $R(f) = \sum_{i=1}^{m} \frac{( f(\vec{x}_i) - y_i^* )^2}{2m}$

where $y^*$ is the desired output (target value), $\vec{x}$ is the sample vector (example), and $m$ is the number of examples in your sample space.
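To make the three definitions concrete, here is a minimal numeric sketch. The linear model `f` and the toy data are made up purely for illustration; only the error/loss/risk formulas come from the definitions above.

```python
import numpy as np

# Hypothetical model, just to have something to evaluate: f(x) = w . x
w = np.array([2.0, -1.0])

def f(x):
    return w @ x

# Made-up sample of m = 3 examples (x_i, y_i*)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
y_star = np.array([1.5, -0.5, 1.0])

# Per-example error: e_i = f(x_i) - y_i*
errors = np.array([f(x) for x in X]) - y_star

# Per-example loss: L = e^2 / 2
losses = errors ** 2 / 2

# Empirical risk: sum of per-example losses divided by m,
# i.e. the average of the losses
R = losses.mean()

print("errors:", errors)
print("losses:", losses)
print("empirical risk:", R)
```

Note that the empirical risk is just the mean of the per-example losses, so the factor of 2 carries straight through from the loss into the risk.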

I don't understand why the factor of 2 is present in the expression for the loss. The only condition my instructor placed on the loss was that it had to be non-negative, hence the exponent 2. But the division by two only seems to make the loss smaller than it really is.

I also came across the expression for the mean squared error, and it is essentially the loss without the factor of 2. If anyone could shed light on why the factor of 2 is there, I would be grateful.