One sided testing of two Poisson distributions?

Summary
To test whether one Poisson-distributed count is larger than another, the Skellam distribution gives the distribution of the difference between the two counts. With large enough means, the normal distribution serves as an approximation, allowing a z-test if the true variances are known or a t-test if the variances are estimated from the data. Because a Poisson distribution's variance equals its mean, a single observed count determines both, so the test reduces to choosing a significance level alpha and looking up the corresponding z-value in a standard normal table.
Gerenuk
I want to test whether one Poisson-distributed result a is larger than another one b.
I don't know much about statistics, but I did understand the Wiki article about testing normal distributions; however, the tests there require the number of samples.

Basically I measure two Poisson-distributed variables, get two values, and want to know the probability that one is larger than the other.

Can someone give me a quick reference (online or a good book) that covers my problem as closely as possible?
 
You can compute the difference between the two Poisson means and see whether the difference is significantly different from zero under the Skellam distribution.

Or (with a large enough sample) you can assume that the normal distribution will be a reasonable approximation.
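
For instance, a minimal sketch in Python of how such a Skellam check might look (the function name, the pooled-rate estimate under the null, and the example counts are illustrative assumptions, not prescriptions):

Code:
from scipy.stats import skellam

def skellam_one_sided_p(x, y):
    """One-sided p-value for H0: both counts come from the same Poisson rate,
    against H1: the rate behind x is larger."""
    rate = (x + y) / 2.0          # plug-in estimate of the common rate under H0
    d_obs = x - y
    # Under H0, D = X - Y ~ Skellam(rate, rate), so P(D >= d_obs) = sf(d_obs - 1).
    return skellam.sf(d_obs - 1, rate, rate)

print(skellam_one_sided_p(540, 500))   # hypothetical counts of roughly this size

Note that plugging in the pooled rate is itself an approximation: it treats the estimated common rate as if it were known exactly.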
 
EnumaElish said:
You can compute the difference between the two Poisson means and see whether the difference is significantly different from zero under the Skellam distribution.

Or (with a large enough sample) you can assume that the normal distribution will be a reasonable approximation.

I think I just about know what to do with the Skellam distribution, but its pmf has a modified Bessel function in it.
Both of my means are approximately 500.
What would the method with the normal distribution be?
 
If you know the true variances, use the z-test.

If you are using computed variances, then technically you should use a t-test with equal or unequal variances, as the case may be. As the sample size increases, the z-test becomes a good approximation to the t-test (e.g. for n > 40).
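
As a concrete illustration of the normal-approximation route (a sketch only; the function name and example counts are assumptions for illustration): since a Poisson variable's variance equals its mean, each observed count can stand in for both, and the difference X - Y is approximately normal with variance x + y.

Code:
from math import sqrt
from scipy.stats import norm

def poisson_z_one_sided(x, y):
    """Approximate one-sided test of H1: rate behind x > rate behind y,
    using the normal approximation with the counts as variance estimates."""
    z = (x - y) / sqrt(x + y)
    return z, norm.sf(z)          # upper-tail p-value

z, p = poisson_z_one_sided(540, 500)   # hypothetical counts around 500
print(f"z = {z:.2f}, one-sided p = {p:.3f}")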
 
EnumaElish said:
If you know the true variances, use the z-test.

If you are using computed variances, then technically you should use a t-test with equal or unequal variances, as the case may be. As the sample size increases, the z-test becomes a good approximation to the t-test (e.g. for n > 40).

I tried to look through these tests, but I'm not sure which one to use. I only know that I measured a single value x and a single value y. Both are supposed to be Poisson (so I expect x \pm \sqrt{x} and y \pm \sqrt{y}).

In this case I'm not sure how to interpret "computed variance" or "sample size".
I know the mathematics, but not the formalities of statistics :(
 
I found the following (attachment) in
"An improved approximate two-sample Poisson test" (M. D. Huffman)

Just to make sure I got it right and plug in the right values:
I use \alpha=0.05 and p=0.90 as sensible values?
I look up z in a table? (i.e. z_{0.95}=1.65, z_{0.90}=1.28)
I estimate \varrho from initial measurements.
Should I use equal counting times, d=1, for best results?
By equation (4) I will then find how long I have to measure...
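
Equation (4) itself is in the attachment, but the table values can be checked numerically; the snippet below just evaluates the standard normal quantiles and is nothing specific to Huffman's paper:

Code:
from scipy.stats import norm

alpha = 0.05   # significance level
power = 0.90   # desired power p

z_alpha = norm.ppf(1 - alpha)   # z_{0.95} ~ 1.645
z_power = norm.ppf(power)       # z_{0.90} ~ 1.282
print(z_alpha, z_power)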
 

Attachments

  • 99p01327_l.2.jpg (34.3 KB)
Gerenuk said:
I tried to look through these tests, but I'm not sure which one to use. I only know that I measured a single value x and a single value y. Both are supposed to be Poisson (so I expect x \pm \sqrt{x} and y \pm \sqrt{y}).

In this case I'm not sure how to interpret "computed variance" or "sample size".
I know the mathematics, but not the formalities of statistics :(
In a z-test you are assumed to know both means and variances. Poisson is a one-parameter distribution: with parameter k, the mean is k and the variance is also k. So if you know the parameter of each distribution, you know both means and both variances. Put differently, if you know the mean then you know the variance.
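
As a quick illustration with made-up numbers: an observed count of x = 500 gives estimated mean \hat\lambda_x = 500 and standard deviation \sqrt{500} \approx 22.4; with a second count y = 460, the difference x - y = 40 has estimated standard deviation \sqrt{500 + 460} \approx 31, so the approximate one-sided z-statistic is 40/31 \approx 1.3.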
 
