Question on Pearson's Chi-squared test

mnb96
Hello,

I was trying to interpret the formula of Pearson's Chi-squared test:
\chi^2 = \sum_{i=1}^{n} \frac{(O_i - E_i)^2}{E_i}

I thought that if we assume that each O_i is an observation of the random variable X_i, then the above formula essentially considers the sum of squares of n standardized random variables Y_i=\frac{X_i-\mu_i}{\sigma_i}. In fact, if such random variables are Y_i \sim N(0,1), then the random variable S = \sum_{i=1}^n Y_i^2 follows a \chi^2-distribution. Thus, the Chi-squared test would essentially evaluate the tail probability \mathrm{P}\left( S \geq \chi^2 \right) and compare it to some chosen significance level.
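For concreteness, here is a minimal sketch of that computation in Python (NumPy/SciPy); the observed and expected counts are made-up numbers, and the degrees of freedom are taken as (number of cells - 1), the usual choice when the total count is fixed:

```python
import numpy as np
from scipy.stats import chi2

# Hypothetical observed and expected cell counts, just to show the arithmetic
observed = np.array([18, 22, 31, 29])
expected = np.array([25.0, 25.0, 25.0, 25.0])

# Pearson statistic: sum over cells of (O_i - E_i)^2 / E_i
stat = np.sum((observed - expected) ** 2 / expected)

# Tail probability P(S >= stat) under a chi-squared distribution;
# df = (number of cells - 1) is the usual choice when the total count is fixed
df = len(observed) - 1
p_value = chi2.sf(stat, df)

print(stat, p_value)
```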

My question is about the standardization of the random variables X_i.
If my interpretation above is correct, then Pearson's Chi-squared test implicitly assumes that each random variable X_i has variance equal to its expected value, that is: \sigma_i^2 = \mu_i

Why so?
Can anybody explain why we would need to assume that the variance and the expected value are numerically equal? That condition is satisfied only by some distributions, such as the Poisson and the Gamma with \theta=1. Why such a restriction?
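Just to illustrate the property I am asking about (a minimal Python/NumPy simulation with an arbitrary rate, not part of the test itself): for Poisson-distributed counts the sample variance does come out numerically close to the sample mean.

```python
import numpy as np

rng = np.random.default_rng(0)

lam = 12.5                              # arbitrary Poisson rate
counts = rng.poisson(lam, size=100_000)

# For a Poisson variable, Var(X) = E[X] = lambda, so the standardization
# (X - mu)/sigma becomes (X - lambda)/sqrt(lambda)
print(counts.mean(), counts.var())      # both should be close to 12.5
```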
 
mnb96 said:
if we assume that each O_i is an observation of the random variable X_i

The O_i are supposed to be a count of how many observations of a random variable fall within a "cell". How are you defining the i-th cell?
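To make the notion of a cell concrete, here is one possible sketch (Python/NumPy; the distribution and cut points are arbitrary choices) of turning raw observations of a single random variable into the counts O_i:

```python
import numpy as np

rng = np.random.default_rng(1)

# Raw observations of a single random variable (standard normal, purely for illustration)
samples = rng.normal(size=500)

# Interior cut points; the cells (-inf,-1], (-1,0], (0,1], (1,inf) partition the real line
cuts = np.array([-1.0, 0.0, 1.0])

# Which cell each observation falls into, then the count per cell: these counts are the O_i
cell_index = np.searchsorted(cuts, samples)
observed = np.bincount(cell_index, minlength=len(cuts) + 1)

print(observed, observed.sum())         # the O_i sum to the number of observations
```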
 
Stephen Tashi said:
The O_i are supposed to be a count of how many observations of a random variable fall within a "cell".

I see! That is an important observation. It probably means that the random variables X_i are supposed to follow a multinomial distribution.

For instance, if we have only one cell, then X_1 could be the number of successes out of m independent trials of some experiment. Thus, X_1 would follow a binomial distribution, which approaches a Poisson distribution when m is large and the success probability p is small (with mp=\lambda held fixed), and the Poisson distribution has \sigma^2=\mu=\lambda.

If the above reasoning is correct, then Pearson's Chi-squared test should work only when the number of trials is sufficiently large.
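Purely as an illustration of that limit (Python/NumPy, arbitrary parameters): for a binomial count with many trials and a small success probability, the sample variance comes out close to the sample mean, as for a Poisson variable.

```python
import numpy as np

rng = np.random.default_rng(2)

m, p = 10_000, 0.003                      # many trials, small success probability (illustrative)
counts = rng.binomial(m, p, size=200_000)

# Binomial: mean = m*p, variance = m*p*(1-p); for small p these are nearly equal,
# matching the Poisson limit with lambda = m*p
print(counts.mean(), counts.var())        # both close to m*p = 30
```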
 
mnb96 said:
It probably means that the random variables X_i are supposed to follow a multinomial distribution.

I'm not sure what you mean by that statement.

The test can be applied to repeated independent samples of a single random variable. The single random variable can have any distribution. It is only necessary to define the cells so that they partition the range of the random variable.
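A possible end-to-end sketch of that recipe (Python with NumPy/SciPy; the N(0,1) null distribution, the cut points, and the sample size are arbitrary choices): draw repeated samples of a single random variable, partition its range into cells, and compare observed and expected cell counts.

```python
import numpy as np
from scipy.stats import norm, chisquare

rng = np.random.default_rng(3)

# Repeated independent samples of a single random variable (standard normal under H0)
n = 2_000
samples = rng.normal(size=n)

# Interior cut points; the cells (-inf,-1], (-1,0], (0,1], (1,inf) partition the range
cuts = np.array([-1.0, 0.0, 1.0])
observed = np.bincount(np.searchsorted(cuts, samples), minlength=len(cuts) + 1)

# Expected counts under the hypothesised N(0,1): n times the probability of each cell
edges = np.concatenate(([-np.inf], cuts, [np.inf]))
expected = n * np.diff(norm.cdf(edges))

# scipy's chisquare computes the Pearson statistic and its chi-squared tail probability
stat, p_value = chisquare(observed, f_exp=expected)
print(stat, p_value)
```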
 
Hi Stephen, and thanks for your help!

What I meant is that X_i is a random variable that "counts" the number of observations that happened to fall into the i-th cell. For instance, if we consider a continuous random variable Z having some unknown probability density function, and we partition the real line into two cells corresponding to the events Z\geq 10 (=success) and Z< 10 (=failure), then the two events will have probabilities p and (1-p).

We can sample the random variable Z many times, say n times.
Now, X_1 is the random variable that counts the total number of successes, so X_1 follows a binomial distribution, i.e. X_1\sim B(n,p).

I thought that if we extend this reasoning to k cells, then the vector of random variables (X_1,\ldots,X_k) should follow a multinomial distribution, i.e. (X_1,\ldots,X_k) \sim M(n;p_1,\ldots,p_k).
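For concreteness, here is a small simulation consistent with that multinomial picture (Python/NumPy/SciPy, with arbitrary cell probabilities): the Pearson statistic computed from multinomial count vectors has quantiles close to those of a \chi^2-distribution with k-1 degrees of freedom.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)

n = 500
probs = np.array([0.2, 0.3, 0.1, 0.4])  # arbitrary cell probabilities p_1..p_k
k = len(probs)

# Many independent multinomial draws: each row is a count vector (X_1, ..., X_k)
counts = rng.multinomial(n, probs, size=50_000)

# Pearson statistic for each draw, with E_i = n * p_i
expected = n * probs
stats = ((counts - expected) ** 2 / expected).sum(axis=1)

# Empirical quantiles of the statistic vs. chi-squared(k-1) quantiles
qs = [0.5, 0.9, 0.95, 0.99]
print(np.quantile(stats, qs))
print(chi2.ppf(qs, df=k - 1))
```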

Or am I misunderstanding something?
 