Bessel's correction and degrees of freedom

In summary, the variance computed from a sample by dividing the sum of squared residuals by n will not be a good estimate of the population variance: because the residuals are measured from the sample mean they must sum to 0, so only n-1 of them are free to vary, and dividing by n-1 instead (Bessel's correction) removes the resulting bias.
  • #1
chiropter
Wondering about degrees of freedom. So basically let me express my current understanding:
Comparing the statistical errors to the residuals: the former involves taking a subset of observations from a population and comparing them to the population mean, while the latter involves comparing the observations to the sample mean calculated from that same subset of observations. The differences between a sample of observations and the population mean obviously do not have to sum to 0, while the differences between the observations and the sample mean do sum to 0.

Thus, if you are trying to estimate the 'average' of the differences between the population mean and the set of values in the population, using sample residuals (which are derived from a set of sample observations and a sample mean) to do so means the last 'residual' is NOT free to vary over the 'full range' a true statistical error normally would; it just has to be whatever value makes the sum add up to 0.

So, if this is correct so far, it seems pretty straightforward to then claim that the final sample residual won't contribute the full quantity to the sum of (squared) residuals that would be needed to estimate the 'average' of n (squared) residuals, and so instead we redistribute that final quantity over all the other residuals and divide the sum by n-1 instead. It's as if you don't actually have n bits of variability from which you can estimate the errors (variance).

Ok, if that makes sense so far, my question is: why does n-1 take care of it? Why is it as if we have exactly one fewer residual from which to estimate the population variance, where we take the remaining amount of 'variability', distribute it equally among the other residuals, and then divide by the number of things to get the average (my interpretation of dividing by n-1)? Why isn't it n-2 or 0.5n?

It bothers me that something as intuitive as calculating the average, or dividing, or summing can suddenly become so cryptic when we come to sample statistics. Hoping to resolve some of this confusion.
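To make the bias concrete, here is a quick numerical check (my own sketch, not part of the original post; the population parameters and sample size are arbitrary): draw many small samples from a population whose variance is known, and compare the average of the divide-by-n estimates with the average of the divide-by-(n-1) estimates.

```python
import random

random.seed(0)

POP_MEAN, POP_SD = 10.0, 3.0   # arbitrary population parameters; true variance = 9
N, TRIALS = 5, 100_000         # a small sample size makes the bias easy to see

sum_biased = 0.0   # running total of the divide-by-n estimates
sum_bessel = 0.0   # running total of the divide-by-(n-1) estimates

for _ in range(TRIALS):
    sample = [random.gauss(POP_MEAN, POP_SD) for _ in range(N)]
    xbar = sum(sample) / N
    ss = sum((x - xbar) ** 2 for x in sample)   # sum of squared sample residuals
    sum_biased += ss / N
    sum_bessel += ss / (N - 1)

print("true variance:              ", POP_SD ** 2)
print("average of /n estimates:    ", sum_biased / TRIALS)   # tends toward (N-1)/N * 9 = 7.2
print("average of /(n-1) estimates:", sum_bessel / TRIALS)   # tends toward 9
```

On average the divide-by-n estimate comes out low by the factor (n-1)/n, and rescaling by n/(n-1), i.e. dividing by n-1, is exactly what removes that bias.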
 
  • #2
chiropter said:
It bothers me that something as intuitive as calculating the average, or dividing, or summing can suddenly become so cryptic when we come to sample statistics. Hoping to resolve some of this confusion.

I've read that people had the same outlook in the early days of statistics. They valued methods of estimating population properties from samples that used calculations analogous to how the population properties are computed from probability densities. (For example, the sample mean is the sum of the sample values, each weighted by its frequency of occurrence; the population mean is the sum of the population values, each weighted by its probability, or the integral over the values with respect to the probability density.) When a method of estimating a population parameter from a sample was analogous to the definition of how the parameter is computed from a probability density, the method of estimation was called "consistent". (This is not the modern definition of a "consistent" estimator.)

I suppose it's human nature to hope that a strong analogy between two processes will always show us the "best" way to do things. However, consider the following problem:

A random variable X is known to have a probability distribution given by P(X = k - 1/2) = 1/2, P(X = k + 1) = 1/2, where k is some unknown integer. If you have the set of sample values {3 1/2, 5, 3 1/2, 3 1/2}, how should you estimate the mean of X?

The population mean of X, which is 4 1/4, can be deduced from the values that occur in the sample. Using the sample mean as the estimate wouldn't be "best". The process of deducing the population mean from the sample can be described as a computer algorithm that implements various "if...then..." statements. From the modern point of view of a "function", such a computer algorithm is as much a function as something written as a single algebraic expression.
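As a hypothetical sketch of such an algorithm (the function name and structure are mine, not from the thread): every observation pins down k exactly, since a half-integer observation can only be k - 1/2 and an integer observation can only be k + 1, so the population mean k + 1/4 can be read off from a single sample value.

```python
def deduce_population_mean(sample):
    """Estimate E[X] when P(X = k - 1/2) = 1/2 and P(X = k + 1) = 1/2 for an unknown integer k."""
    x = sample[0]
    if x == int(x):            # an integer observation can only be k + 1
        k = int(x) - 1
    else:                      # a half-integer observation can only be k - 1/2
        k = int(x + 0.5)
    return k + 0.25            # the population mean is k + 1/4

print(deduce_population_mean([3.5, 5, 3.5, 3.5]))   # 4.25, versus the sample mean 3.875
```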

You must open your mind to the complexity of the general scenario for estimation. When you consider estimating a population parameter from samples of a random variable X, don't think of X as "the" random variable in the problem. The values in the sample are random variables themselves, even if they are independent realizations of X. When you compute some function S of the random variables in the sample, you obtain another random variable, which need not have the same distribution as X. Such a random variable is called a "statistic". The random variable S has its own population mean, variance, etc., which need not be the same as the corresponding population mean, variance, etc. of X.
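A small illustration of that point (my own example, with an assumed distribution): if X is uniform on (0, 1) and S is the sample mean of 10 independent copies of X, then S is a random variable in its own right, with the same population mean as X but a much smaller variance.

```python
import random
import statistics

random.seed(1)

N, TRIALS = 10, 50_000

# Draws of X itself, with X ~ Uniform(0, 1); the population variance of X is 1/12
draws_of_x = [random.random() for _ in range(TRIALS)]

# Draws of the statistic S = sample mean of N independent copies of X
draws_of_s = [statistics.mean(random.random() for _ in range(N)) for _ in range(TRIALS)]

print("variance of X:", statistics.pvariance(draws_of_x))   # about 1/12  (~0.083)
print("variance of S:", statistics.pvariance(draws_of_s))   # about 1/120 (~0.008)
```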


The two common uses of statistics are 1) to test a hypothesis and 2) to estimate something, often a parameter of the population distribution.

When we use a statistic to estimate something, we naturally call it an "estimator". An estimator is thus a random variable. If we have two different estimators for a population parameter, how do we decide which one is "best"? How do we even define "best" mathematically? I don't know whether your studies in statistics have gotten to that question yet.
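One standard way to make "best" precise is mean squared error: simulate each estimator many times and compare its average squared distance from the true parameter. A hedged sketch, with the population, sample size, and the two competing estimators (sample mean versus sample median of a normal population) chosen purely for illustration:

```python
import random
import statistics

random.seed(2)

TRUE_MEAN, SD = 0.0, 1.0     # assumed normal population
N, TRIALS = 15, 20_000

mse_mean = 0.0
mse_median = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, SD) for _ in range(N)]
    mse_mean += (statistics.mean(sample) - TRUE_MEAN) ** 2
    mse_median += (statistics.median(sample) - TRUE_MEAN) ** 2

print("MSE of the sample mean:  ", mse_mean / TRIALS)    # about SD**2 / N
print("MSE of the sample median:", mse_median / TRIALS)  # noticeably larger for normal data
```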
 
  • #3
So is what I'm saying in the second and third paragraphs off-base?
 
  • #4
I wouldn't say your 2nd and 3rd paragraphs are "wrong". I'd say they are not clear enough to be mathematical arguments. You can find many threads on the forum that give a mathematical derivation of why the customary "unbiased" estimator of the population variance of a normal distribution uses n-1 in the denominator. (If you can't find any by searching on the keywords "unbiased", "estimator", "variance", then let me know and I'll look one up for you.)

"Degrees of freedom" means various things in various contexts. It usually has to do with situations where some given equations have solutions that can be formed by making "arbitrary" choices for the values of a certain number of variables. There are many possible sets of equations, so there isn't a single simple verbal argument that explains the mathematics of all these situations.
 
  • #5
Thanks, Stephen. I have already read and understand that the expectation of the sample variance computed with n in the denominator works out to the population variance multiplied by (n-1)/n, so correcting by the factor n/(n-1) is equivalent to dividing by (n-1) instead of (n) in the formula for calculating the sample variance.
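For reference, a compact version of that computation (standard textbook algebra, not specific to this thread):

```latex
\operatorname{E}\!\left[\tfrac{1}{n}\sum_{i=1}^{n}(X_i-\bar{X})^2\right]
  = \operatorname{E}\!\left[\tfrac{1}{n}\sum_{i=1}^{n}X_i^2\right] - \operatorname{E}\!\left[\bar{X}^2\right]
  = \left(\sigma^2+\mu^2\right) - \left(\tfrac{\sigma^2}{n}+\mu^2\right)
  = \tfrac{n-1}{n}\,\sigma^2
```

Multiplying by n/(n-1), i.e. using n-1 as the divisor, therefore gives an estimator whose expectation is exactly the population variance.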
 

FAQ: Bessel's correction and degrees of freedom

What is Bessel's correction and why is it important in statistical analysis?

Bessel's correction is a mathematical adjustment made to correct for bias in sample variance calculations. It is important because it accounts for the fact that the sample variance computed with divisor n tends to underestimate the true variance of the population.

How is Bessel's correction calculated?

Bessel's correction is applied by dividing the sum of squared differences from the sample mean by n-1 instead of n, where n is the sample size. This removes the bias in the sample variance calculation.
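In symbols, with x̄ denoting the sample mean, the corrected sample variance is

```latex
s^2 = \frac{1}{n-1}\sum_{i=1}^{n}\left(x_i-\bar{x}\right)^2
```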

What is the relationship between Bessel's correction and degrees of freedom?

Bessel's correction is directly related to degrees of freedom: it is the adjustment made to account for the degree of freedom used up when the sample mean is computed from the same data, which leaves only n-1 independent residuals. Degrees of freedom represent the number of independent pieces of information used in a calculation.

When should Bessel's correction be used?

Bessel's correction matters most when calculating the sample variance from a small sample (roughly n < 30), where the bias is proportionally largest. For larger sample sizes the difference between dividing by n and by n-1 is negligible, although n-1 remains the unbiased choice whenever the variance is estimated from a sample.

Why is Bessel's correction not necessary for calculating sample mean?

Bessel's correction is not necessary for calculating the sample mean because the sample mean is already an unbiased estimator: its expected value equals the population mean, so there is no bias to correct for.
