# Changes to Standard Deviation?

by The Bob
Tags: deviation, standard
 P: 1,116 How many of you know that the standard deviation has changed? It used to be:$$\sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n}}$$ and now it is:$$\sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n - 1}}$$ It is just the variance of the data, square rooted: $$s^2 = \frac{\Sigma(x_i - \overline{x})^2}{n - 1}$$ converted to: $$s = \sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n - 1}}$$ Nothing really important; I just wanted people to know and comment (if necessary) on the fact that it has changed. The Bob (2004 ©)
 Emeritus Sci Advisor PF Gold P: 16,091 Actually, both formulae are used... I forget the reasons for using n instead of n-1, though.
 Sci Advisor P: 6,080 It depends on what you are using for the mean. If you know the mean, then you divide by n. If you estimate the mean from the sample, then you use n-1, because the estimated mean has a statistical error.
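[Editorial note: the two conventions described above can be sketched in Python using only the standard library; the example data here are illustrative, not from the thread.]

```python
import math
import statistics

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations

sd_population = math.sqrt(ss / n)        # divide by n: mean is known exactly
sd_sample = math.sqrt(ss / (n - 1))      # divide by n-1: mean was estimated from the sample

# Python's statistics module exposes both conventions directly:
assert math.isclose(sd_population, statistics.pstdev(data))
assert math.isclose(sd_sample, statistics.stdev(data))

print(sd_population)  # 2.0 for this data set
print(sd_sample)      # ~2.138
```

Note that `statistics.pstdev` (population) divides by n, while `statistics.stdev` (sample) divides by n-1, mirroring the distinction mathman draws.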
P: 1,116
Changes to Standard Deviation?

 Quote by Hurkyl Actually, both formulae are used... I forget the reasons for using n instead of n-1, though.
I do understand that both are still used but I didn't realise why until:
 Quote by mathman It depends on what you are using for the mean. If you know the mean, then you divide by n. If you estimate the mean from the sample, then you use n-1, because the estimated mean has a statistical error.
- Mathman came along and said why.

Cheers guys.

The Bob (2004 ©)
P: n/a
 Quote by mathman It depends on what you are using for the mean. If you know the mean, then you divide by n. If you estimate the mean from the sample, then you use n-1, because the estimated mean has a statistical error.
Why does using n-1 instead correct the error? Is this negligible for large values of n?
 P: 646 This has been discussed before in this forum, I believe, though I cannot locate that thread now. Check here anyway: http://mathworld.wolfram.com/Variance.html -- AI
 P: 75 n-1 is used for samples in order to adjust for the variability of a data set that does not include all possible events. Using n tends to produce an underestimate of the population variance, so we use n-1 in the denominator to provide the appropriate correction for this tendency. To sum up: when working with the whole population, use n as the denominator; otherwise use n-1. Hope that helps!
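[Editorial note: the underestimate described above is easy to demonstrate with a small simulation; the parameters below (samples of size 5 from a normal with variance 4) are arbitrary choices for illustration.]

```python
import random

random.seed(0)
TRUE_VAR = 4.0          # population variance of N(0, 2^2)
n, trials = 5, 100_000

avg_n, avg_n1 = 0.0, 0.0
for _ in range(trials):
    sample = [random.gauss(0.0, 2.0) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)
    avg_n += (ss / n) / trials         # biased: divides by n
    avg_n1 += (ss / (n - 1)) / trials  # unbiased: divides by n-1

print(avg_n)   # ≈ 3.2, i.e. (n-1)/n of the true variance 4.0
print(avg_n1)  # ≈ 4.0
```

The n-denominator average comes out near (n-1)/n times the true variance, exactly the systematic undershoot the correction removes.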
 P: 155 The factor (n-1) is used to make the sample variance an "unbiased estimator" of the population variance. There's no particular reason you need an unbiased estimator, though. For example, if you want to minimize mean squared error, it turns out that it's much better to use (n+1) instead of (n-1) (in case of a normal distribution). See Jaynes, chapter 17.
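[Editorial note: the claim that n+1 minimizes mean squared error for normal data can also be checked numerically; this sketch compares the three denominators on simulated N(0,1) samples, with arbitrary illustrative parameters.]

```python
import random

random.seed(1)
TRUE_VAR = 1.0          # variance of N(0, 1)
n, trials = 5, 200_000

mse = {n - 1: 0.0, n: 0.0, n + 1: 0.0}
for _ in range(trials):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(sample) / n
    ss = sum((x - mean) ** 2 for x in sample)
    for denom in mse:
        mse[denom] += (ss / denom - TRUE_VAR) ** 2 / trials

# Theory for N(0,1), n=5: MSE is 0.5 with n-1, 0.36 with n, 1/3 with n+1
print(mse)
```

The n+1 denominator gives the smallest mean squared error, at the cost of a biased estimate, matching the Jaynes reference above.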
 Emeritus PF Gold P: 8,147 The denominator is not the sample size, but the number of degrees of freedom. Initially the two are equal, but when you do the mean (sum over n) you "fix" or "lose" one degree of freedom. So when you then go to use the mean in the sd calculation, you have only n-1 degrees of freedom left. Degrees of freedom are a thorny thing to teach, and they only become essential to consider in things like ANOVA, so they are frequently skipped in teaching simple statistics.
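[Editorial note: the "lost" degree of freedom can be made concrete: deviations from the sample mean always satisfy one linear constraint (they sum to zero), so only n-1 of them are free. The data below are arbitrary.]

```python
import math
import random

random.seed(2)
data = [random.gauss(10.0, 3.0) for _ in range(6)]
mean = sum(data) / len(data)
residuals = [x - mean for x in data]

# Deviations from the sample mean always sum to zero (one constraint used up):
assert math.isclose(sum(residuals), 0.0, abs_tol=1e-9)

# So any n-1 of the residuals determine the last one:
last = -sum(residuals[:-1])
assert math.isclose(last, residuals[-1])
```

This is why the sum of squared deviations from the sample mean carries only n-1 independent pieces of information.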