B The probability that two or more of six events occur with known chances

AI Thread Summary
The discussion focuses on calculating the probability that two or more independent events occur, out of a set of six events with known probabilities. The recommended method uses the formula P(2 or more events) = 1 - ∏(1 - p_k) - Σ_j (p_j ∏_{l≠j}(1 - p_l)), which only ever adds probabilities of mutually exclusive outcomes and so avoids the pitfalls of a careless "or" addition rule. The conversation also touches on the role of mutual exclusivity when partitioning outcomes, as in counting heads among coin tosses. It is noted that the equation yields valid probabilities between 0 and 1, provided the events are independent and each probability lies between 0 and 1.
benorin
TL;DR Summary
Let me borrow your cleverness, please?
I have six events with known probabilities ##p_1, \ldots, p_6##. How do I find the probability that two or more of these events occur together? I can't think of a clever way to calculate this without using the problematic "or"-is-addition rule, but using that rule I get that the required probability is

P(2 or more events) ##= 1-\prod_{k=1}^{6}\left( 1-p_{k}\right) - \sum_{j=1}^{6}p_{j} \prod_{\substack{l=1 \\ l\neq j}}^{6}\left( 1-p_{l}\right)##
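For anyone who wants to sanity-check the formula numerically, here is a minimal Python sketch (mine, not part of the thread) that evaluates the closed form and compares it against a brute-force sum over all ##2^6## outcome patterns. The probabilities used are made-up example values, and the events are assumed independent, as confirmed later in the thread.

```python
# Sanity check of P(>= 2 events) = 1 - P(none) - P(exactly one), assuming independence.
from itertools import product
from math import prod

p = [0.1, 0.25, 0.3, 0.05, 0.5, 0.4]  # hypothetical example probabilities
n = len(p)

# Closed form from the post above
p_none = prod(1 - pk for pk in p)
p_exactly_one = sum(p[j] * prod(1 - p[l] for l in range(n) if l != j) for j in range(n))
p_two_or_more = 1 - p_none - p_exactly_one

# Brute-force check: sum the probability of every outcome pattern with >= 2 events
brute = sum(
    prod(p[k] if hit else 1 - p[k] for k, hit in enumerate(pattern))
    for pattern in product([0, 1], repeat=n)
    if sum(pattern) >= 2
)

print(p_two_or_more, brute)  # the two numbers should agree
```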
 
I believe the equation you are showing is the best there is.
 
The stated equation is probably going to be OP's preferred approach.

Alternative ways to get the same answer are discussed at some length here:
https://www.physicsforums.com/threads/probability-of-at-least-two-happening.965141/
- - - -
note: what I've said assumes the events are independent... nothing in the original post explicitly mentioned independence or dependencies though it was implied by the stated formula
 
StoneTemplePython said:
note: what I've said assumes the events are independent... nothing in the original post explicitly mentioned independence or dependencies though it was implied by the stated formula

Correct, the events are independent. Thank you. I just wanted to avoid using the “or” rule because sometimes it leads to bad results (e.g. probabilities greater than 1).
 
benorin said:
Correct, the events are independent. Thank you. I just wanted to avoid using the “or” rule because sometimes it leads to bad results (e.g. probabilities greater than 1).

so this is really the other item of interest for dependencies -- mutual exclusivity. When you toss 6 coins you either have 0, 1, 2, 3, 4, 5, or 6 instances of 'heads'. That's a partition of the sample space -- i.e. mutually exclusive events that cover the entire sample space. Denote by ##A_k## the event of ##k## heads after tossing all 6 coins

with
##A = A_2 \cup A_3 \cup A_4 \cup A_5 \cup A_6##
you want ##P(A)## but you know ##P(A) + P(A^C) = 1##
so your original post correctly calculates
##P\big(A\big)= 1 - P\big(A^C\big) = 1 - P\big(A_0 \cup A_1\big) = 1 - \big\{ P\big(A_0\big) + P\big(A_1\big) \big\}##
where we can safely apply the 'or' rule because the probability of a union of mutually exclusive events is equal to the sum of their probabilities. Again, this is because you cannot have both 0 out of 6 coins be heads and 1 out of 6 coins be heads on a given trial -- they are mutually exclusive events.
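To make the partition argument concrete, here is a small Python sketch (mine, not from the thread) that computes ##P(A_k)## for each ##k## by summing over all ##2^6## outcome patterns, using the same made-up probabilities as before; it confirms that the ##P(A_k)## sum to 1 and that the 'or' rule applied to ##A_2, \dots, A_6## gives the same number as the complement ##1 - P(A_0) - P(A_1)##.

```python
# Illustration of the partition argument for 6 independent events (or biased coins).
from itertools import product
from math import prod

p = [0.1, 0.25, 0.3, 0.05, 0.5, 0.4]  # same hypothetical example values as before
n = len(p)

# P(A_k): probability of exactly k "hits", accumulated over all outcome patterns
P_A = [0.0] * (n + 1)
for pattern in product([0, 1], repeat=n):
    P_A[sum(pattern)] += prod(p[i] if h else 1 - p[i] for i, h in enumerate(pattern))

print(sum(P_A))             # ~1.0: the A_k partition the sample space
print(sum(P_A[2:]))         # P(A) via the 'or' rule on the mutually exclusive A_k
print(1 - P_A[0] - P_A[1])  # same number via the complement, as in the post
```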
 
benorin said:
Correct, the events are independent. Thank you. I just wanted to avoid using the “or” rule because sometimes it leads to bad results (e.g. probabilities greater than 1).
The equation you provided will give you the correct answer, and, as long as each probability is between 0 and 1, the result will also be between 0 and 1.
 
