Binomial distribution/confidence question for dummies

AI Thread Summary
The discussion focuses on constructing a 95% confidence interval for an unknown probability p based on k successes in n independent trials. For small n, numerical combinatorics can be used to calculate probabilities, but this becomes cumbersome for larger n. The maximum likelihood estimator for p is k/n, and while normal approximations are common, they may not yield satisfactory results in this case. The Neyman-Pearson lemma is referenced as a justification for using likelihood estimates, emphasizing their statistical power. The conversation also touches on the importance of correctly setting alpha values for two-tailed tests in confidence interval calculations.
CRGreathouse (Science Advisor, Homework Helper)

(The 'dummy' would be me.)

I have an event that happens with unknown probability p. In n independent trials, the event happens k times. How do I construct a (95%) confidence interval for p?

For small n it's easy to figure this out with numerical combinatorics:

Pr(at most k events) = \sum_{i=0}^{k} {n \choose i} p^i (1-p)^{n-i}
Pr(at least k events) = \sum_{i=k}^{n} {n \choose i} p^i (1-p)^{n-i}

and then find the roots of Pr(at most k events) - 0.05 and Pr(at least k events) - 0.05. (Maybe I should use 0.025 instead?)


But for large n (even not all that large!), this is inconvenient. Surely there is some standard statistical method for this? Sticking as close to the roots as possible would be best -- I'd prefer to use as little Central Limit Theorem as I can.
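
(For example, at k = 0 the sums collapse, and the upper endpoint even has a closed form:

\Pr(\text{at most } 0 \text{ events}) = (1-p)^n = 0.05 \implies p = 1 - 0.05^{1/n}.)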
 
Actually, forget it. Computers are fast, and wasting a few billion cycles isn't going to kill me.

Pari/GP code:
Code:
probrange(n, k, confidence=.05) = {
	\\ lower endpoint: the p at which Pr(X >= k | p) = confidence
	[if(k==0, 0, solve(p=0, 1, sum(i=k, n, binomial(n,i)*p^i*(1-p)^(n-i)) - confidence)),
	\\ upper endpoint: the p at which Pr(X <= k | p) = confidence
	if(k==n, 1, solve(p=0, 1, sum(i=0, k, binomial(n,i)*p^i*(1-p)^(n-i)) - confidence))]
};
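
For example (the numerical values below are approximate):

Code:
probrange(10, 3)  \\ roughly [0.087, 0.607]; each endpoint is a one-sided 95% bound

Note that with 0.05 left in each tail this is really a 90% central interval -- which is what the 0.025-vs-0.05 question above is getting at.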
 


CRGreathouse said:
(The 'dummy' would be me.)

I have an event that happens with unknown probability p. In n independent trials, the event happens k times. How do I construct a (95%) confidence interval for p?

For small n it's easy to figure this out with numerical combinatorics:

Pr(at most k events) = \sum_{i=0}^{k} {n \choose i} p^i (1-p)^{n-i}
Pr(at least k events) = \sum_{i=k}^{n} {n \choose i} p^i (1-p)^{n-i}

and then find the roots of Pr(at most k events) - 0.05 and Pr(at least k events) - 0.05. (Maybe I should use 0.025 instead?)


But for large n (even not all that large!), this is inconvenient. Surely there is some standard statistical method for this? Sticking as close to the roots as possible would be best -- I'd prefer to use as little Central Limit Theorem as I can.


If you want the proper statistical justification for using these combinatorial tail sums, look up the Neyman-Pearson lemma. It tells you that, among all tests of a given size, the one based on the likelihood ratio has the largest power. I'm afraid that without the Central Limit Theorem or a Poisson approximation you can't simplify the computation.

Oh, and your values should be 0.025 (so the two tails add up to 0.05 when you do a two-sided test).

Good luck.
 


Focus said:
If you want the proper statistical justification for using these combinatorial tail sums, look up the Neyman-Pearson lemma. It tells you that, among all tests of a given size, the one based on the likelihood ratio has the largest power. I'm afraid that without the Central Limit Theorem or a Poisson approximation you can't simplify the computation.

The maximum likelihood estimator is k/n -- if I have 10 trials and the event happens in 3 of them, the most likely probability is 30%. That's not hard to figure out.
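
(Spelled out: setting the derivative of the log-likelihood to zero,

\frac{d}{dp} \log\left[{n \choose k} p^k (1-p)^{n-k}\right] = \frac{k}{p} - \frac{n-k}{1-p} = 0,

gives \hat{p} = k/n.)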

I'm not actually getting particularly good results with the normal approximation, so I think I'll have to compute numerically.

I will look up that lemma, though; that might help.

Focus said:
Oh, and your values should be 0.025 (so the two tails add up to 0.05 when you do a two-sided test).

You know, this is where my lack of stats knowledge hurts. It's clear that I should use 0.05, not 0.025, when k = 0. But for k > 0, where I have a choice, I'm not sure what the justification is for choosing the left and right errors to be equal. What do you think?
 


CRGreathouse said:
The maximum likelihood estimator is k/n -- if I have 10 trials and the event happens in 3 of them, the most likely probability is 30%. That's not hard to figure out.

I'm not actually getting particularly good results with the normal approximation, so I think I'll have to compute numerically.

I will look up that lemma, though; that might help.

The lemma tells you that a test based on the likelihood ratio is, well, "best".


CRGreathouse said:
You know, this is where my lack of stats knowledge hurts. It's clear that I should use 0.05, not 0.025, when k = 0. But for k > 0, where I have a choice, I'm not sure what the justification is for choosing the left and right errors to be equal. What do you think?

Two-sided confidence intervals conventionally put alpha/2 in each tail. It is analogous to a two-sided hypothesis test.
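
In symbols: if the random endpoints satisfy \Pr(\hat{p}_L > p) \le \alpha/2 and \Pr(\hat{p}_U < p) \le \alpha/2, then by the union bound

\Pr(\hat{p}_L \le p \le \hat{p}_U) \ge 1 - \alpha.

Equal tails is the convention; any split of \alpha between the two tails would give valid coverage, but \alpha/2 each treats the two sides symmetrically.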
 
Here's the revised numerical function I'm using:

Code:
probrange(n, k, conf=.05) = {
	\\ Degenerate tails: one endpoint is exact, so spend all of conf on the other.
	if (k==0, return([0, solve(p=0, 1, (1-p)^n - conf)]));
	if (k==n, return([solve(p=0, 1, p^n - conf), 1]));

	conf = conf/2;  \\ two-sided: conf/2 of error in each tail
	\\ Endpoints solve Pr(X >= k | p_lo) = conf and Pr(X <= k | p_hi) = conf;
	\\ each branch sums the shorter tail for speed.
	if(k+k < n,
		[solve(p=0, 1, 1 - conf - sum(i=0, k-1, binomial(n,i)*p^i*(1-p)^(n-i))),
		solve(p=0, 1, sum(i=0, k, binomial(n,i)*p^i*(1-p)^(n-i)) - conf)],
		[solve(p=0, 1, sum(i=k, n, binomial(n,i)*p^i*(1-p)^(n-i)) - conf),
		solve(p=0, 1, 1 - conf - sum(i=k+1, n, binomial(n,i)*p^i*(1-p)^(n-i)))]
	)
};
addhelp(probrange, "probrange(n,k,conf=.05): Gives a confidence interval for the probability of an event which happens k times in n trials.");
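
For example (values approximate):

Code:
probrange(10, 3)  \\ roughly [0.0667, 0.6525] -- the exact ("Clopper-Pearson") 95% interval

These endpoints are the textbook Clopper-Pearson ones, solving Pr(X >= k | p_lo) = conf/2 and Pr(X <= k | p_hi) = conf/2.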

I'd like to test this against some normal approximations, but I'm having a bit of trouble in Pari -- I don't know how to calculate a normal percentile (z_{1-\alpha/2}). All Pari has built-in is the complementary error function. Is there a better way than numerically solving for the inverse?
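
One route that stays inside GP (a sketch -- zquantile and waldrange are illustrative names, not built-ins): since \Phi(z) = 1 - \text{erfc}(z/\sqrt{2})/2, the percentile z_{1-\alpha/2} is just the root of \text{erfc}(z/\sqrt{2}) - \alpha, which solve handles directly:

Code:
\\ Normal percentile z_{1-a/2}, using Phi(z) = 1 - erfc(z/sqrt(2))/2,
\\ so Phi(z) = 1 - a/2  <=>  erfc(z/sqrt(2)) = a.
zquantile(a) = solve(z=0, 40, erfc(z/sqrt(2)) - a);

\\ Wald (normal-approximation) interval for comparison:
\\ phat +/- z_{1-conf/2} * sqrt(phat*(1-phat)/n), clipped to [0,1].
waldrange(n, k, conf=.05) = {
	my(ph = k/n, z = zquantile(conf), w = z*sqrt(ph*(1-ph)/n));
	[max(ph - w, 0), min(ph + w, 1)]
};

zquantile(0.05) should come out near 1.96.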
 