# Just a statistics question

Well, we're trying to get caught up to where we were in school last year, and we are going over standard deviation. I just have a simple question.

Why, when you are examining only a sample, do you divide by n-1 when computing the variance, but when you are examining the whole population, you divide by n?

Let's see if I can get this into LaTeX:

$$\sqrt{\frac{1}{n-1}\sum_{i=1}^n ({x_i}-{\bar{x}})^2}$$ instead of $$\sqrt{\frac{1}{n}\sum_{i=1}^n ({x_i}-{\bar{x}})^2}$$

Sorry if this question insults your intelligence. I just can't see the reason.

mathman
When you compute the average of the squared deviations from the sample mean (i.e. the sample variance), its mathematical expectation equals the theoretical variance only if you divide by n-1, not n.
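To spell out the expectation computation a bit (a sketch, assuming the $x_i$ are independent draws with mean $\mu$ and variance $\sigma^2$): using $E[x_i^2] = \sigma^2 + \mu^2$ and $E[\bar{x}^2] = \frac{\sigma^2}{n} + \mu^2$,

$$E\left[\sum_{i=1}^n (x_i - \bar{x})^2\right] = E\left[\sum_{i=1}^n x_i^2 - n\bar{x}^2\right] = n(\sigma^2 + \mu^2) - n\left(\frac{\sigma^2}{n} + \mu^2\right) = (n-1)\sigma^2$$

so dividing the sum of squares by $n-1$ gives an estimator whose expectation is exactly $\sigma^2$.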

You want to estimate the variance of the whole population. If you have the whole population, you can just calculate it directly. But when you have only a small portion (a sample) of the population, the variance within that sample will, on average, be a bit lower than the variance of the whole population, because the deviations are measured from the sample's own mean rather than the true mean. So, to make your estimate of the population variance more accurate, you divide by one less than the number of data points in your sample, which gives a somewhat higher number. If the sample gets large enough there is hardly any difference (it does not matter very much whether you divide by 1000 or by 999).
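You can see this bias directly with a quick simulation (a sketch; the population, sample size, and trial count are arbitrary choices for illustration): repeatedly draw small samples from a population with known variance, and compare the average of the divide-by-n estimate against the divide-by-(n-1) estimate.

```python
import random

random.seed(0)
SIGMA = 2.0        # population standard deviation, so true variance is 4.0
n = 5              # small sample size, where the bias is most visible
trials = 100_000

biased_total = 0.0    # sum of estimates that divide by n
unbiased_total = 0.0  # sum of estimates that divide by n - 1

for _ in range(trials):
    sample = [random.gauss(0.0, SIGMA) for _ in range(n)]
    m = sum(sample) / n                            # sample mean
    ss = sum((x - m) ** 2 for x in sample)         # sum of squared deviations
    biased_total += ss / n
    unbiased_total += ss / (n - 1)

biased_avg = biased_total / trials
unbiased_avg = unbiased_total / trials
print(biased_avg)    # ~ 3.2, i.e. (n-1)/n times the true variance of 4.0
print(unbiased_avg)  # ~ 4.0, the true population variance
```

With n = 5 the divide-by-n version undershoots by the factor (n-1)/n = 4/5, exactly as the expectation argument predicts; with n = 1000 the two would be nearly indistinguishable.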

Thanks guys. That makes sense.

I liked your post, TenaliRaman. After reading mathman's and gerben's "layman's" explanations, it was pretty easy to read and discover mathematically what was going on.

Thanks.