Question about variance, population and sample

  • Context: Undergrad
  • Thread starter: robert Ihnot
  • Tags: population
SUMMARY

The forum discussion centers on the distinction between sample variance and population variance, specifically addressing why the sample variance is calculated by dividing by \( n-1 \) instead of \( n \). Users highlight that dividing by \( n \) leads to a biased estimator, while dividing by \( n-1 \) provides an unbiased estimate of the population variance. The conversation references the statistical principles outlined in "Principles of Statistics" by MG Bulmer, emphasizing the mathematical reasoning behind these calculations and the implications for statistical analysis.

PREREQUISITES
  • Understanding of basic statistical concepts such as variance and standard deviation.
  • Familiarity with the difference between sample and population statistics.
  • Knowledge of mathematical notation and operations, particularly in statistics.
  • Access to statistical textbooks, such as "Principles of Statistics" by MG Bulmer.
NEXT STEPS
  • Study the derivation of the unbiased estimator for sample variance.
  • Explore the implications of biased versus unbiased estimators in statistical analysis.
  • Learn about the Central Limit Theorem and its relationship to sample variance.
  • Investigate other statistical estimators and their properties in different contexts.
USEFUL FOR

Statisticians, data analysts, and students studying statistics who seek a deeper understanding of variance calculations and their applications in data analysis.

robert Ihnot
In the October edition of the magazine Active Trader, a reader writing in the Chat Room column, "Deviating from deviation?", asks: in last month's explanation of variance, why did you not divide by two in your example?

\( \{(8-9)^2 + (9-9)^2 + (10-9)^2\}/3 = 0.667 \)

The explanation given is nothing more than "That's how it is done," adding "We're not math majors," and it completely ignores the difference between the sample deviation and the population deviation. (There is no explanation of where the above example comes from; probably it is nothing but an equation invented by the writers.)

Elementary statistics books do a very poor job of explaining WHY that difference occurs, saying such things as "It eliminates bias," or even "It makes the theory work out better, and isn't worth going into."

Does anyone have a good explanation of why there is that distinction, and assuming it is a sample deviation, why is it better to divide by 2 than by 3?
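
As a quick check (my own sketch, not from the magazine), here are both candidate calculations for the example data 8, 9, 10 in Python:

    # The magazine's example data: 8, 9, 10, with mean 9.
    data = [8, 9, 10]
    n = len(data)
    mean = sum(data) / n
    ss = sum((x - mean) ** 2 for x in data)  # sum of squared deviations = 2
    print(ss / n)        # divide by n:   0.667 (the magazine's figure)
    print(ss / (n - 1))  # divide by n-1: 1.0 (the unbiased sample variance)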
 
robert Ihnot said:
Does anyone have a good explanation of why there is that distinction, and assuming it is a sample deviation, why is it better to divide by 2 than by 3?
In the estimators section of your statistics text, you should find, either as a problem or as an example, a simple calculation showing that the "divide by n" estimator of the population variance, computed from a sample, is biased, while the "divide by n-1" estimator is unbiased. Are you looking for an intuitive answer?
 
Well, I have made up several examples with dice, but when the number of trials is small, say three throws of a die, this greatly changes the variance.

This is my example: the population is the six faces of a die; the mean of a throw is 3.5 and the variance is 35/12 ≈ 2.92. Now suppose we throw three times and get a perfectly reasonable outcome: 2, 3, 4. The mean is 3; dividing by 2 the variance is 1, whereas dividing by 3 it would have been 2/3. In neither case are we near 2.92. Thanks, bob
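
For reference, a quick Python check of bob's numbers (a sketch added here, not part of the original post):

    # Population: the six faces of a fair die.
    faces = [1, 2, 3, 4, 5, 6]
    mu = sum(faces) / 6                              # 3.5
    pop_var = sum((f - mu) ** 2 for f in faces) / 6  # 35/12 ≈ 2.9167
    # The three throws in the example.
    sample = [2, 3, 4]
    m = sum(sample) / 3                              # 3.0
    ss = sum((x - m) ** 2 for x in sample)           # 2.0
    print(pop_var, ss / 2, ss / 3)                   # 2.9167 1.0 0.6667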
 
robert Ihnot said:
Well, I have made up several examples about dice, but when the number of trials falls, say three throws of the dice, this greatly changes the variance.
No, not "on average." Your example is conditional on a given sample. That's not a good basis to verify the expected value of any variance estimator.
 
This is my example: the population is the six faces of a die; the mean of a throw is 3.5 and the variance is 35/12 ≈ 2.92. Now suppose we throw three times and get a perfectly reasonable outcome: 2, 3, 4. The mean is 3; dividing by 2 the variance is 1, whereas dividing by 3 it would have been 2/3. In neither case are we near 2.92. Thanks, bob

Run this experiment a million times, and look at the average value for the variance that you compute.
 
Hurkyl: Run this experiment a million times, and look at the average value for the variance that you compute.

If it is so run, there will not be much difference between dividing by 1,000,000 or 999,999.
 
robert Ihnot said:
If it is so run, there will not be much difference between dividing by 1,000,000 or 999,999.
Correct, both "1/n" and "1/(n-1)" are unbiased estimators.
 
If it is so run, there will not be much difference between dividing by 1,000,000 or 999,999.

You entirely misunderstand:

You described an experiment where you roll a die three times, and then compute two different estimates for the variance, one where you divide by 3, and one where you divide by 2.

Now, you perform that experiment a million times, and you get a million estimates where you divided by 3, and a million estimates where you divided by 2.

You can then find the average of the divide by 3 estimates, and the average of the divide by 2 estimates. One of them will be (very close to) the actual variance. One will not.
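
Concretely, here is a Python sketch of that experiment (the seed and trial count are arbitrary choices, not from the thread):

    import random

    # Roll a die three times, compute both variance estimates,
    # repeat a million times, and average each kind of estimate.
    random.seed(0)
    trials = 1_000_000
    total_div3 = total_div2 = 0.0
    for _ in range(trials):
        rolls = [random.randint(1, 6) for _ in range(3)]
        m = sum(rolls) / 3
        ss = sum((x - m) ** 2 for x in rolls)
        total_div3 += ss / 3
        total_div2 += ss / 2
    print(total_div3 / trials)  # ≈ 1.944: biased, (2/3) * 35/12
    print(total_div2 / trials)  # ≈ 2.917: close to the true variance 35/12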


Correct, both "1/n" and "1/(n-1)" are unbiased estimators.

They cannot possibly both be unbiased, unless the variance is zero.

If \( s/n \) is, on average, the variance \( v \) (where \( s \) is the sum of squared deviations in the numerator), and so is \( s/(n-1) \), then we have:

\( E(s) = nv \)
\( E(s) = (n-1)v \)

so \( nv = (n-1)v \), and therefore \( v = 0 \).

:-p
 
Hurkyl, you are correct. My bad. What I meant was, although the "1/n" estimator is biased, it is consistent.
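
To illustrate (a sketch added here, using the fair-die variance 35/12 from the example above and the standard fact that the divide-by-n estimator has expectation \( \frac{n-1}{n}\sigma^2 \)): the bias \( -\sigma^2/n \) shrinks to zero as \( n \) grows.

    # Expectation of the divide-by-n estimator for increasing sample size n.
    sigma2 = 35 / 12  # variance of a fair die, ≈ 2.9167
    for n in (3, 10, 100, 10_000):
        print(n, (n - 1) / n * sigma2)  # approaches sigma2 as n grows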
 
Well, I found an answer in this statistics book: "Principles of Statistics," M. G. Bulmer, Dover paperback, 1967, p. 130:

"It may seem surprising that the Expected value of the sample variance is slightly less than the population variance. The reason is that sum of the squared deviations of a set of observations from their mean is always less than the sum the squared deviations from the population mean."
 
Good work; now you can give advice to the needy in these forums. :smile:
 
It may seem surprising that the expected value of the sample variance is slightly less than the population variance.

If we look at \( F(x)=\sum_{i=1}^{n}(a_i-x)^2 \),

then taking the derivative and setting it equal to 0, we arrive at the minimizer of the function: \( nx=\sum_{i=1}^{n}a_i \).

Thus letting \( x \) take on the value of the sample mean minimizes \( F \), the sum of squared deviations, and hence also the variance computed from it.
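
A quick numerical check of this (my own sketch): for the sample 2, 3, 4 above, \( F \) is smaller at the sample mean 3 than at the population mean 3.5.

    # F(x) = sum of squared deviations of the sample from x.
    sample = [2, 3, 4]
    def F(x):
        return sum((a - x) ** 2 for a in sample)
    m = sum(sample) / len(sample)  # sample mean = 3.0
    print(F(m), F(3.5))            # 2.0 < 2.75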
 
It may seem surprising that the expected value of the sample variance is slightly less than the population variance.

On pages 129-130 of Principles of Statistics this problem is gone into, though a few additional details are presented here. He writes:

\( S^2 = \sum(x_i-X)^2 = \sum(x_i-\mu)^2 - N(X-\mu)^2 \)

In the above, \( S^2 \) is as defined (a sum of squares, not yet divided by anything), \( N \) is the sample size, \( \mu \) is the population mean, \( X \) is the sample mean, and each \( x_i \) is a variable that takes on the sample values.

Now the point is to find the expectation \( E \). For the first term we have:

\( E\left(\sum(x_i-\mu)^2\right) = N\sigma^2 \), where \( \sigma \) is the population standard deviation.

For the second term, since \( E(X)=\mu \), we have \( E\left((X-\mu)^2\right) = E(X^2)-(E(X))^2 = V(X) \), and because \( NX = \sum x_i \),

\( V(X) = \frac{V(NX)}{N^2} = \frac{\sum V(x_i)}{N^2} = \frac{N\sigma^2}{N^2} = \frac{\sigma^2}{N} \),

so the second term contributes \( N \cdot \frac{\sigma^2}{N} = \sigma^2 \).

Thus, returning to the original equation, we have:

\( E(S^2) = N\sigma^2 - \sigma^2 = (N-1)\sigma^2 \).

The author adds: "Because of this fact \( S^2 \) is often divided by \( N-1 \) instead of \( N \) in order to obtain an unbiased estimate..."
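
The decomposition at the start of the derivation holds exactly for any sample; here is a quick Python check (a sketch added here) using the die sample from earlier in the thread:

    # Check: sum (x_i - X)^2 = sum (x_i - mu)^2 - N * (X - mu)^2
    sample = [2, 3, 4]
    mu = 3.5                 # population mean of a fair die
    N = len(sample)
    X = sum(sample) / N      # sample mean
    lhs = sum((x - X) ** 2 for x in sample)
    rhs = sum((x - mu) ** 2 for x in sample) - N * (X - mu) ** 2
    print(lhs, rhs)          # both 2.0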
 