Undergrad Poisson distribution with conditional probability

Summary
The discussion focuses on computing conditional probabilities for a Poisson distribution, specifically P(X > x1 | X > x2) with x1 > x2. It is clarified that P(X > x1 ∩ X > x2) equals P(X > x1), since {X > x1} is a subset of {X > x2}. The participants also discuss independence, noting that events such as {X ≥ 70} and {X ≥ 80} cannot be independent, because one contains the other. The conversation then turns to the memorylessness property, which does not hold for Poisson distributions. The thread closes with a reference to the memorylessness of geometric distributions, which are the only memoryless discrete distributions.
Woolyabyss
Hi guys,
I have a question about computing conditional probabilities of a Poisson distribution.
Say X follows a Poisson distribution, P(X = x) = e^(−λ)λ^x/x!, where X counts the number of events.
My question is how we would compute P(X > x1 | X > x2), or more specifically P(X > x1 ∩ X > x2), with x1 > x2?
I originally thought that P(X > x1 ∩ X > x2) = P(X > x1) but recently read about the memorylessness property of exponential distributions and I'm not sure if it applies to Poisson distributions.
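Since x1 > x2, the intersection {X > x1} ∩ {X > x2} collapses to {X > x1}, so the conditional probability is just a ratio of survival functions. A minimal sketch in pure Python (the function names `poisson_sf` and `cond_prob` are my own, not from any library):

```python
import math

def poisson_sf(k, lam):
    """P(X > k) for X ~ Poisson(lam), computed as 1 - CDF(k)."""
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))
    return 1.0 - cdf

def cond_prob(x1, x2, lam):
    """P(X > x1 | X > x2) with x1 > x2: the intersection is just {X > x1}."""
    return poisson_sf(x1, lam) / poisson_sf(x2, lam)
```

For example, `cond_prob(4, 2, 5.0)` gives the probability of seeing more than 4 events given that more than 2 occurred, for a mean rate of 5.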
 
Woolyabyss said:
I originally thought that P(X > x1 ∩ X > x2) = P(X > x1)

If x1 > x2 then ##\{X: X > x1, X > x2\}## is the same event as ##\{X:X>x1\}## , isn't it?
 
Stephen Tashi said:
If x1 > x2 then ##\{X: X > x1, X > x2\}## is the same event as ##\{X:X>x1\}## , isn't it?
Yes. Also, I know that the events X = x of a Poisson distribution are independent of one another, but surely P(X ≥ 70) and P(X ≥ 80), for example, can't be, because given that at least 70 events happen, the probability that at least 80 events happen would change, no?
 
Woolyabyss said:
but surely P(X ≥ 70) and P(X ≥ 80), for example, can't be, because given that at least 70 events happen, the probability that at least 80 events happen would change, no?

But my remark wasn't about the independence of events. If ##A \subset B ## then ##Pr(A \cap B) = Pr(A)##.

As far as independence goes, in most cases if ##A \subset B## then ##A## and ##B## are not independent events. Exceptions would be cases like ##Pr(A) = Pr(B) = 0 ## or ##Pr(A) = Pr(B) = 1 ##.

To find ##Pr(X > 80 | X > 70)##, what does Bayes' theorem tell you?
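Spelling out that hint: nothing beyond the definition of conditional probability and the inclusion ##\{X > 80\} \subset \{X > 70\}## is needed,

$$\Pr(X > 80 \mid X > 70) = \frac{\Pr(\{X > 80\} \cap \{X > 70\})}{\Pr(X > 70)} = \frac{\Pr(X > 80)}{\Pr(X > 70)}.$$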

In the current Wikipedia article on "Memorylessness" https://en.wikipedia.org/wiki/Memorylessness there is the interesting claim:

The only memoryless discrete probability distributions are the geometric distributions, which feature the number of independent Bernoulli trials needed to get one "success," with a fixed probability p of "success" on each trial. In other words those are the distributions of waiting time in a Bernoulli process.

- surprising (to me), if true.
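That claim is easy to probe numerically. A quick sketch in pure Python (the parameter values are arbitrary) comparing the geometric survival function, which satisfies the memorylessness identity exactly, with the Poisson one, which does not:

```python
import math

def geom_sf(k, p):
    """P(X > k) when X counts Bernoulli(p) trials up to the first success."""
    return (1 - p) ** k          # X > k  <=>  the first k trials all fail

def pois_sf(k, lam):
    """P(X > k) for X ~ Poisson(lam)."""
    return 1 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                   for i in range(k + 1))

p, lam, m, n = 0.3, 4.0, 5, 3

# Geometric: P(X > m+n | X > m) equals P(X > n) -- memoryless.
geom_cond = geom_sf(m + n, p) / geom_sf(m, p)

# Poisson: the analogous ratio does NOT equal P(X > n).
pois_cond = pois_sf(m + n, lam) / pois_sf(m, lam)
```

With these numbers, `geom_cond` matches `geom_sf(n, p)` to floating-point precision, while `pois_cond` differs from `pois_sf(n, lam)` substantially.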
 
