- Thread starter: Adeimantus

- #1

Thank you.

- #2

tnich

Homework Helper


If you are doing a p-test, then it matters a lot.


- #3


That is a good point. So in that case, how would you decide when the approximation is good enough?

- #4

tnich

Homework Helper


It seems to me that it would be better to use the binomial distribution for the p-test in that case.


- #6

tnich

Homework Helper


If the decision about significance of the result is the same for the approximation and the actual distribution, then the approximation is good enough.
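That criterion is easy to check concretely: compute the exact binomial tail probability and the normal approximation side by side and see whether they cross the significance threshold at the same outcomes. A minimal stdlib-Python sketch (the sample size n = 40, null p = 0.5, and alpha = 0.05 are illustrative choices, not from the thread):

```python
from math import comb, erf, sqrt

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_sf(k, n, p):
    """Normal approximation to P(X >= k), with continuity correction."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return 0.5 * (1 - erf((k - 0.5 - mu) / (sigma * sqrt(2))))

# One-sided test of H0: p = 0.5 after n = 40 trials, at alpha = 0.05.
n, p0, alpha = 40, 0.5, 0.05
for k in range(24, 29):
    exact, approx = binom_sf(k, n, p0), normal_sf(k, n, p0)
    agree = (exact < alpha) == (approx < alpha)
    print(f"k={k}: exact={exact:.4f}  normal={approx:.4f}  same decision: {agree}")
```

With the continuity correction and a symmetric p, the two p-values track each other closely, so the accept/reject decision is the same at every k in this range.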

- #7

FactChecker

Science Advisor

Gold Member


Good point. This implies that it is the accuracy of the cumulative distribution function at important and frequently used confidence values that really matters.

- #8


Okay, but it will never be exactly the same, right? So even there, don't you need to consider how big the range is where the two distributions will lead to opposite conclusions? And then reason, I suppose, that if this range is small, then the approximation is good enough. In other words, you will be led to the same conclusion with the normal distribution as with the binomial most of the time. You just have to specify how often is often enough.
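That disagreement region can be computed directly: list the outcomes k where the exact and approximate tests reach opposite conclusions, and add up their probability under H0. A small sketch with hypothetical parameters; the crude normal approximation without continuity correction is used deliberately, since with the correction and a skewed p the two tests would often agree at every outcome:

```python
from math import comb, erf, sqrt

def binom_sf(k, n, p):
    """Exact P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def normal_sf(k, n, p):
    """Plain normal approximation to P(X >= k), no continuity correction."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return 0.5 * (1 - erf((k - mu) / (sigma * sqrt(2))))

n, p0, alpha = 20, 0.05, 0.05  # deliberately skewed so the tests can disagree
# Outcomes where the exact and approximate tests reach opposite conclusions.
disagree = [k for k in range(n + 1)
            if (binom_sf(k, n, p0) < alpha) != (normal_sf(k, n, p0) < alpha)]
# Probability, under H0, of observing an outcome in the disagreement region.
mass = sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in disagree)
print(disagree, mass)
```

Here the only disagreement is at k = 3 (the normal approximation rejects, the exact test does not), and under H0 that outcome occurs about 6% of the time — exactly the "how often is often enough" quantity.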

Also, this might be a dumb question, but is hypothesis testing the main application of this limit theorem? Is that what Laplace and DeMoivre were using it for, or were they more interested in the values of the density/mass function?

- #9

tnich

Homework Helper


Right. My point is, why would you want to use an approximation when you can easily calculate or look up exact values for the binomial distribution? Exact values are unimpeachable.

I think a normal distribution table would have been quite useful for computing approximate binomial probabilities by hand. Of course, DeMoivre would have realized that at extreme values where the approximation was not good, doing the computation using the actual binomial distribution would be fairly simple.

You could also use the normal approximation to calculate the expected value of a function of a binomial random variable as long as the function was not heavily weighted at the extremes, but the actual binomial distribution would not be any more difficult to use for that.
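To illustrate that last point, both expectations are cheap to compute, so they can be compared directly. A sketch with an arbitrary, hypothetical choice of the function f, approximating E[f(X)] by integrating f against the matching normal density:

```python
from math import comb, exp, pi, sqrt

n, p = 50, 0.4
mu, sigma = n * p, sqrt(n * p * (1 - p))

def f(x):
    """An arbitrary function of the count (clamped so sqrt is defined)."""
    return sqrt(max(x, 0.0))

# Exact: E[f(X)] as a finite sum over the binomial pmf.
exact = sum(f(k) * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1))

# Approximate: E[f(Z)] for Z ~ N(mu, sigma^2), via a plain Riemann sum.
dz = 0.001
approx = sum(f(mu + sigma * t) * exp(-t * t / 2) / sqrt(2 * pi) * dz
             for t in (-6 + i * dz for i in range(int(12 / dz) + 1)))

print(exact, approx)
```

Because sqrt is smooth and puts no special weight on the tails, the two numbers agree to several decimal places — consistent with the remark that the approximation works for such functions, even though the exact sum is no harder to compute.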


- #11

StoneTemplePython

Science Advisor

Gold Member


As is often the case, it depends. It's worth remarking that the sum of two independent binomial distributions isn't necessarily binomial, but the sum of two independent Gaussians is Gaussian. If you are interested in bounding the error of a Gaussian approximation in general, look into Stein's method. There are other reasons as well: in math, people tend to like the exponential function a lot more than binomial coefficients; the Gaussian is entirely characterized by its first two moments (and, among continuous distributions on ##(-\infty, \infty)## with a given finite variance, it has maximum entropy); and so on.

The Gaussian is one of the most important distributions in probability. In general the various central limit theorem proofs are not easy... it just so happens that the theorem for binomial -> normal, is an easy one, so it has pedagogic value.
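The claim about sums is easy to verify numerically: convolve two binomial pmfs with different p and check that the result cannot be binomial, because no Binomial(N, q) matches both its mean and its variance. A small sketch (the parameters are arbitrary):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

n1, p1, n2, p2 = 10, 0.2, 10, 0.8
# pmf of S = X1 + X2 by discrete convolution.
pmf_s = [sum(binom_pmf(i, n1, p1) * binom_pmf(s - i, n2, p2)
             for i in range(max(0, s - n2), min(n1, s) + 1))
         for s in range(n1 + n2 + 1)]

mean = sum(s * q for s, q in enumerate(pmf_s))
var = sum((s - mean) ** 2 * q for s, q in enumerate(pmf_s))
# If S were Binomial(20, q), matching the mean would force q = mean/20 = 0.5,
# which would imply variance 20*q*(1-q) = 5.0 -- but the true variance is 3.2.
print(mean, var)
```

By contrast, the analogous check for Gaussians always passes: N(m1, s1^2) + N(m2, s2^2) is exactly N(m1 + m2, s1^2 + s2^2), which is part of why the Gaussian is so convenient to work with.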

- #12


Excellent points. Also, thank you for suggesting Stein's method. I looked it up on Wikipedia, and that may be exactly what I'm looking for!

