# Bayes' Theorem

## Main Question or Discussion Point

P(B|A) = P(A|B)·P(B) / P(A)

Later we expand P(A) as P(A|B)·P(B) + P(A|B′)·P(B′), where B′ is the complement of B.

I don't understand how we can expand P(A) like that. Doesn't that assume that A ⊆ B?

marcusl
Gold Member
Think about what it is saying. The probability that A happens is the probability that A happens given that B happens, weighted by the probability of B, plus the probability that A happens given that B doesn't happen, weighted by the probability that B doesn't happen. Together the two cases cover all possibilities.
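To make the two-case expansion concrete, here is a small Python sketch. The numbers (a toy test/condition scenario) are invented purely for illustration:

```python
# Invented numbers for a toy illustration of the two-case expansion.
p_B = 0.01           # P(B): prior probability of event B
p_A_given_B = 0.95   # P(A|B)
p_A_given_Bc = 0.05  # P(A|B'), where B' is the complement of B

# Law of total probability: P(A) = P(A|B)P(B) + P(A|B')P(B')
p_A = p_A_given_B * p_B + p_A_given_Bc * (1 - p_B)

# Bayes' theorem: P(B|A) = P(A|B)P(B) / P(A)
p_B_given_A = p_A_given_B * p_B / p_A

print(round(p_A, 4))          # 0.059
print(round(p_B_given_A, 4))  # 0.161
```

The two weighted terms split P(A) over the cases "B happened" and "B didn't happen", which is exactly the expansion in the question.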

Well, basically what my book says is:
P(A) = P(A|B1)·P(B1) + P(A|B2)·P(B2) + ... + P(A|Bn)·P(Bn)

Doesn't this assume that B1 ∪ B2 ∪ ... ∪ Bn is a superset of A?

D H
Staff Emeritus
Doesn't this assume that B1 ∪ B2 ∪ ... ∪ Bn is a superset of A?
Of course.

Your text should have specified that B1, B2, ···, Bn are a set of mutually disjoint subsets of the universe U of possible outcomes and that B1 ∪ B2 ∪ ··· ∪ Bn = U. The set A must be a subset of this universe of outcomes U; otherwise it doesn't even make sense to talk about P(B1|A).
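The partition condition can be checked on a small finite example. The sketch below (a fair die, with a partition chosen purely for illustration) verifies that P(A) equals the weighted sum over the Bi:

```python
from fractions import Fraction

# Toy sample space: one roll of a fair die, all outcomes equally likely.
U = {1, 2, 3, 4, 5, 6}
A = {2, 3, 5}                 # event A: the roll is prime
B = [{1, 2}, {3, 4}, {5, 6}]  # B1, B2, B3: mutually disjoint, union is U

def P(event):
    return Fraction(len(event), len(U))

def P_cond(a, b):
    # P(a|b) = P(a ∩ b) / P(b)
    return P(a & b) / P(b)

# Law of total probability: P(A) = Σ P(A|Bi)·P(Bi)
total = sum(P_cond(A, Bi) * P(Bi) for Bi in B)
assert total == P(A) == Fraction(1, 2)
```

If the Bi failed to cover all of U (or overlapped), the weighted sum would no longer have to equal P(A), which is why the text needs the partition assumption.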

It didn't. Anyway, thank you; this clears up my confusion.