# Chi-square test: why does it follow a Chi-square distribution

Hello,

it is well-known that the Chi-square test between an observed distribution O and an expected distribution E can be interpreted as a test based on (twice) the second order Taylor approximation of the Kullback-Leibler divergence, i.e.: $$2\,\mathcal{D}_{KL}(O \| E) \approx \sum_i \frac{(O_i-E_i)^2}{E_i} = \chi^2$$
where $i$ indexes the bins of the histogram (or contingency table). A proof is given here (page 5).
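To make the approximation concrete, here is a small numeric sketch (the bin counts are hypothetical, chosen so that O is close to E, which is where the second-order Taylor expansion is accurate). It compares twice the KL divergence in count form, $2\sum_i O_i \ln(O_i/E_i)$, with the Pearson statistic $\sum_i (O_i-E_i)^2/E_i$:

```python
import math

# Hypothetical observed and expected bin counts (same total),
# deliberately close so the Taylor approximation applies.
E = [50.0, 30.0, 20.0]
O = [53.0, 28.0, 19.0]

# Twice the KL divergence in count form: 2 * sum O_i * ln(O_i / E_i)
two_kl = 2.0 * sum(o * math.log(o / e) for o, e in zip(O, E))

# Pearson chi-square statistic: sum (O_i - E_i)^2 / E_i
chi2 = sum((o - e) ** 2 / e for o, e in zip(O, E))

print(two_kl, chi2)  # the two values agree to about two decimal places
```

The agreement degrades as O moves further from E, since the higher-order Taylor terms are no longer negligible.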

The question is: how do we know that each of the error terms $\frac{(O_i-E_i)^2}{E_i}$ on the right side of the above equation follows a normal distribution N(0,1)? There is probably some assumption to be made...?

Stephen Tashi
> The question is: how do we know that each of the error terms $\frac{(O_i-E_i)^2}{E_i}$ on the right side of the above equation follows a normal distribution N(0,1)?

$\frac{ (O_i - E_i)^2}{E_i}$ is nonnegative, so it doesn't follow a normal distribution.

If $X$ is a binomial random variable representing the number of "successes" in $n$ independent trials with probability of success $p$ on each trial, then the distribution of $Y = \frac{X-np}{\sqrt{np(1-p)}}$ can be approximated by a $N(0,1)$ distribution.
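This normal approximation is easy to check by simulation (the values of $n$ and $p$ below are arbitrary choices for illustration): draw many binomial samples, standardize each one as $Y = (X - np)/\sqrt{np(1-p)}$, and verify that the results have mean near 0 and variance near 1.

```python
import math
import random
import statistics

random.seed(0)
n, p = 200, 0.3                      # hypothetical trial count and success probability
mu = n * p
sigma = math.sqrt(n * p * (1 - p))

# Draw many binomial samples (as sums of Bernoulli trials) and standardize each.
ys = []
for _ in range(20000):
    x = sum(1 for _ in range(n) if random.random() < p)
    ys.append((x - mu) / sigma)

# If the N(0,1) approximation holds, the mean is ~0 and the variance ~1.
print(statistics.mean(ys), statistics.pvariance(ys))
```

Note that $Y$ itself is approximately N(0,1); it is $Y^2$ that enters the chi-square statistic, which is why the terms of the sum are nonnegative.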



> If $X$ is a binomial random variable ...

I see. So there is our assumption!
It seems to me that such an assumption automatically implies that the data in the cells of the contingency table are assumed to follow a multinomial distribution.

So in the end, although the formula for calculating the $\chi^2$ value is just an approximation of the Kullback-Leibler divergence, if we want to perform a decision test we still need the assumption that we are dealing with a multinomial distribution; otherwise the $\chi^2$ value calculated by the formula above does not necessarily follow a chi-square distribution.
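The conclusion above can also be checked empirically (the cell probabilities, sample size, and trial count below are arbitrary illustrative choices): draw many multinomial samples, compute the $\chi^2$ statistic for each, and compare with a chi-square distribution with $k-1$ degrees of freedom, whose mean equals its degrees of freedom.

```python
import random

random.seed(1)
probs = [0.5, 0.3, 0.2]              # hypothetical cell probabilities (k = 3 cells)
n = 500                              # observations per multinomial sample
trials = 5000                        # number of simulated tables
E = [n * p for p in probs]           # expected counts under the null

stats = []
for _ in range(trials):
    # Draw one multinomial sample by inverse-CDF sampling of each observation.
    counts = [0] * len(probs)
    for _ in range(n):
        u = random.random()
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if u < acc:
                counts[i] += 1
                break
    stats.append(sum((o - e) ** 2 / e for o, e in zip(counts, E)))

# Under the multinomial assumption the statistic is approximately
# chi-square with k - 1 = 2 degrees of freedom, whose mean is 2.
print(sum(stats) / trials)
```

If the counts were generated by some other mechanism (e.g. dependent observations), the simulated mean would generally drift away from $k-1$, which is exactly the point made above.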