Why Do Degrees of Freedom Differ in Chi-Squared Distributions?

Usjes
Hi,

I am trying to understand the degrees of freedom parameter in the chi-squared distribution, and I have found two references from the same source that appear, to me, to contradict one another. Can anyone explain what is going on?
In https://onlinecourses.science.psu.edu/stat414/node/171 it states that:
Corollary. If $X_1, X_2, \ldots, X_n$ are independent normal random variables with mean 0 and variance 1, that is, $X_i \sim N(0,1)$ for $i = 1, 2, \ldots, n$, then:

$$\sum_{i=1}^{n} X_i^2 \sim \chi^2(n)$$

(I have simplified the formula in the source by setting all $\mu_i$ to 0 and all $\sigma_i$ to 1.)

But https://onlinecourses.science.psu.edu/stat414/node/174 states that:
If $X_1, X_2, \ldots, X_n$ are observations of a random sample of size n from the normal distribution $N(0,1)$, then:

$$\sum_{i=1}^{n} X_i^2 \sim \chi^2(n-1)$$

(Again setting all $\mu_i$ to 0 and all $\sigma_i$ to 1.)

So it seems that a (slightly) different pdf is being given for the same random variable; we have lost one degree of freedom. Can anyone explain this? Does the fact that the individual observations in the second case are described as a 'random sample' somehow impact their independence? If so, how exactly? Isn't each element of the sample a random variable in its own right, whose pdf is that of the population and therefore N(0,1)?

Thanks,

Usjes
 
The sample mean of a random sample, $\bar{X}$, is not the same as the mean (i.e. the "population mean" $\mu$) of the random variable from which the sample is taken.
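One standard identity makes the bookkeeping concrete (this is the usual decomposition, not a quotation from either reference):

$$\sum_{i=1}^{n} X_i^2 = \sum_{i=1}^{n} (X_i - \bar{X})^2 + n\bar{X}^2$$

By Cochran's theorem the two terms on the right are independent, with $\sum_{i=1}^{n} (X_i - \bar{X})^2 \sim \chi^2(n-1)$ and $n\bar{X}^2 \sim \chi^2(1)$. So replacing the known mean 0 with the sample mean $\bar{X}$ costs exactly one degree of freedom.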
 
"Can anyone explain this,..."

The explanation is in the proof shown below your second reference.
 
In the first case the population parameters are somehow known. The second case is more usual, in which the population parameters are unknown and are estimated from a random sample. This inaccuracy is compensated for by loss of a degree of freedom.
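A quick expectation check shows the size of that compensation. Since $E[\bar{X}] = 0$ and $\mathrm{Var}(\bar{X}) = 1/n$, we have $E[\bar{X}^2] = 1/n$, so

$$E\left[\sum_{i=1}^{n} (X_i - \bar{X})^2\right] = E\left[\sum_{i=1}^{n} X_i^2\right] - nE[\bar{X}^2] = n - 1,$$

which matches the mean of a $\chi^2(n-1)$ distribution, while the sum of squares about the known mean has expectation n.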
 
Your second statement is not what the reference says. Your statement uses the known mean, but the reference uses the sample mean. Your sum of squares is the sum of squared differences from the known mean of 0; that has n degrees of freedom. It is only when the sample mean is used instead of the known mean that the degrees of freedom are reduced by 1. Notice that using the sample mean will reduce the sum of squares, so the smaller number of degrees of freedom is needed to compensate for that.
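A minimal simulation sketch illustrates both versions (this assumes NumPy and SciPy are available; the sample size, repetition count, and seed are arbitrary choices, not from the thread):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 5, 100_000

# reps independent samples of size n from N(0, 1)
x = rng.standard_normal((reps, n))

# Sum of squared deviations from the KNOWN mean 0 -> chi-squared(n)
ss_known = (x ** 2).sum(axis=1)

# Sum of squared deviations from the SAMPLE mean -> chi-squared(n - 1);
# for each individual sample it is never larger than ss_known
ss_sample = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

print(ss_known.mean())   # close to n, the mean of chi-squared(n)
print(ss_sample.mean())  # close to n - 1, the mean of chi-squared(n - 1)

# Kolmogorov-Smirnov distances against the two candidate distributions:
# the statistic is small when the hypothesized df is correct.
print(stats.kstest(ss_known, stats.chi2(n).cdf).statistic)
print(stats.kstest(ss_sample, stats.chi2(n - 1).cdf).statistic)
```

Swapping the two df values in the final tests makes the distances much larger, which is exactly the distinction between the two quoted results.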
 