Changes to Standard Deviation?? How many of you know that the standard deviation formula has changed? It used to be: [tex]\sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n}}[/tex] And now it is: [tex]\sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n - 1}}[/tex] It is just the square root of the variance of the data: [tex]s^2 = \frac{\Sigma(x_i - \overline{x})^2}{n - 1}[/tex] converted to: [tex]s = \sqrt{\frac{\Sigma(x_i - \overline{x})^2}{n - 1}}[/tex] Not really anything important, just wanted people to know and comment (if necessary) on the fact that it has changed. The Bob (2004 ©)
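If anyone wants to compare the two formulas numerically, here is a quick sketch in Python. NumPy's `std` has a `ddof` parameter that switches between the two denominators; the data values below are just made up for illustration:

```python
import numpy as np

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # made-up example values

# "Old" formula: divide by n (population sd; NumPy's default, ddof=0)
pop_sd = np.std(data, ddof=0)

# "New" formula: divide by n - 1 (sample sd; ddof=1)
samp_sd = np.std(data, ddof=1)

print(pop_sd)   # 2.0 for this data set
print(samp_sd)  # about 2.138
```

For this data the mean is 5 and the sum of squared deviations is 32, so the two versions give sqrt(32/8) = 2 and sqrt(32/7) ≈ 2.138.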
It depends on what you are using for the mean. If you know the true mean, then you divide by n. If you estimate the mean from the sample, then you use n-1, because the estimated mean itself has a statistical error.
I did understand that both are still used, but I didn't realise why until Mathman came along and explained it. Cheers guys. The Bob (2004 ©)
This has been discussed before in this forum, I believe, though I cannot locate that thread now. Check here anyway: http://mathworld.wolfram.com/Variance.html -- AI
n-1 n-1 is used for samples in order to adjust for the fact that the data set does not include all possible events. Using n tends to produce an underestimate of the population variance, so we use n-1 in the denominator to provide the appropriate correction for this tendency. To sum up: when working with a whole population, use n as the denominator; otherwise use n-1. Hope that helps!
The factor (n-1) is used to make the sample variance an "unbiased estimator" of the population variance. There's no particular reason you need an unbiased estimator, though. For example, if you want to minimize mean squared error, it turns out that it's much better to use (n+1) instead of (n-1) (in the case of a normal distribution). See Jaynes, chapter 17.
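A rough simulation of that claim (again just a sketch): take normal data with true variance 1 and compare the mean squared error of the variance estimator for denominators n-1, n, and n+1:

```python
import random

random.seed(1)
n, trials = 5, 20000
denoms = [n - 1, n, n + 1]
mse = {d: 0.0 for d in denoms}
for _ in range(trials):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = sum(xs) / n
    ss = sum((x - xbar) ** 2 for x in xs)
    for d in denoms:
        mse[d] += (ss / d - 1.0) ** 2 / trials  # squared error vs true variance 1

print(mse)  # expect roughly 0.5 for n-1, 0.36 for n, 0.33 for n+1
```

The n+1 denominator gives a biased estimator, but the bias it introduces is more than paid for by the reduction in variance, so its overall mean squared error is the smallest of the three.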
The denominator is not the sample size, but the number of degrees of freedom. Initially the two are equal, but when you compute the mean (a sum over n terms) you "fix", or "lose", one degree of freedom. So when you then use that mean in the sd calculation, you have only n-1 degrees of freedom left. Degrees of freedom are a thorny thing to teach, and they only become essential to consider in things like ANOVA, so they are frequently skipped in teaching simple statistics.
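A tiny illustration of that "lost" degree of freedom: once the sample mean is fixed, the deviations from it must sum to zero, so the last deviation is completely determined by the others (the numbers here are arbitrary):

```python
xs = [3.0, 5.0, 7.0, 9.0]  # arbitrary sample
xbar = sum(xs) / len(xs)
devs = [x - xbar for x in xs]

# The deviations from the sample mean always sum to zero...
print(sum(devs))  # 0.0

# ...so the last deviation is forced by the first n-1 of them:
print(-sum(devs[:-1]), devs[-1])  # both 3.0 here
```

Only n-1 of the deviations are free to vary, which is exactly why the sum of squared deviations carries n-1 degrees of freedom.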