caffeine
I don't think I *really* understand the Central Limit Theorem.
Suppose we have a set of n independent random variables \{X_i\}, all with the same distribution, the same finite mean \mu, and the same finite variance \sigma^2. Form the sum S_n = \sum_{i=1}^n X_i. I want to know the probability that S_n lies between a and b; in other words, I want P(b > S_n > a).
The central limit theorem uses a standardized sum:
P\left(b > \frac{\sum_{i=1}^n X_i - n\mu}{\sqrt{n}\,\sigma} > a\right) = \frac{1}{\sqrt{2\pi}} \int_a^b e^{-y^2/2} \, dy
What is the relationship between what I want:
P(b > S_n > a)
and what the central limit theorem tells me about:
P\left(b > \frac{\sum_{i=1}^n X_i - n\mu}{\sqrt{n}\,\sigma} > a\right)

These two probabilities involve different quantities, so how can they possibly be equal? If they are equal, why is that? And if they're not equal, how would I get from the standardized probability to the one I actually want?
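My guess is that the bounds a and b have to be transformed along with the sum, since b > S_n > a is the same event as (b - n\mu)/(\sqrt{n}\sigma) > (S_n - n\mu)/(\sqrt{n}\sigma) > (a - n\mu)/(\sqrt{n}\sigma). Here's a numerical sketch of that guess in Python (the choice X_i ~ Uniform(0, 1) and the specific values of n, a, b are just my assumptions for illustration):

```python
import math
import random

# Assumed example: X_i ~ Uniform(0, 1), so mu = 1/2 and sigma^2 = 1/12.
n = 30
mu, sigma = 0.5, math.sqrt(1 / 12)

# I want P(a < S_n < b) for the *unstandardized* sum S_n.
a, b = 14.0, 16.0

# Step 1: rewrite the event in terms of the standardized sum:
# a < S_n < b  <=>  (a - n*mu)/(sqrt(n)*sigma) < Z_n < (b - n*mu)/(sqrt(n)*sigma)
a_std = (a - n * mu) / (math.sqrt(n) * sigma)
b_std = (b - n * mu) / (math.sqrt(n) * sigma)

# Step 2: the CLT says Z_n is approximately standard normal,
# so integrate the standard normal density between the transformed bounds.
def phi(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

clt_estimate = phi(b_std) - phi(a_std)

# Monte Carlo check of the actual probability P(a < S_n < b).
random.seed(0)
trials = 50_000
hits = sum(a < sum(random.random() for _ in range(n)) < b
           for _ in range(trials))
mc_estimate = hits / trials

print(f"CLT approximation: {clt_estimate:.4f}")
print(f"Monte Carlo:       {mc_estimate:.4f}")
```

If this guess is right, the two printed numbers should agree closely (exactly in the limit n → ∞), which would mean the theorem does give me P(b > S_n > a), just with a and b replaced by their standardized versions.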