I'd like some help understanding a proof from http://www.statlect.com/cndprb1.htm. Properties are introduced which a conditional probability ought to have:
1) Must satisfy the properties of probability measures:
a) for any event E, 0 ≤ P(E) ≤ 1;
b) P(Ω) = 1;
c) sigma-additivity: let \{E_1, E_2, \ldots, E_n, \ldots\} be a sequence of events where i ≠ j implies E_i and E_j are mutually exclusive; then P(\bigcup_{n=1}^\infty E_n) = \sum_{n=1}^\infty P(E_n).
2) P(I|I) = 1
3) If E \subseteq I and F \subseteq I, and P(I) > 0, then \frac {P(E|I)}{P(F|I)} = \frac {P(E)}{P(F)}.
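As a quick sanity check (my own computation, not from the linked page), property 3 is at least consistent with the familiar formula when P(E|I) is defined by it: if E \subseteq I and F \subseteq I, then E \cap I = E and F \cap I = F, so
\frac {P(E|I)}{P(F|I)} = \frac {P(E \cap I)/P(I)}{P(F \cap I)/P(I)} = \frac {P(E \cap I)}{P(F \cap I)} = \frac {P(E)}{P(F)}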
Then a proof is given for the proposition: Whenever P(I) is positive, P(E|I) satisfies the four properties above if and only if P(E|I) = \frac {P(E \cap I)}{P(I)}.
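For a concrete instance of the formula (my own example, not from the page): roll a fair six-sided die, let I = \{2, 4, 6\} (an even outcome) and E = \{2\}. Then E \cap I = \{2\}, so
P(E|I) = \frac {P(E \cap I)}{P(I)} = \frac {1/6}{1/2} = \frac {1}{3}
which matches the intuition that 2 is one of three equally likely even outcomes.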
I'm having a hard time following the proof of the "only if" part. That is, if P(E|I) satisfies the four properties above, then P(E|I) = \frac {P(E \cap I)}{P(I)}.
Here's a quote:
Now we prove the 'only if' part. We prove it by contradiction. Suppose there exists another conditional probability \bar{P} that satisfies the four properties. Then there exists an event E such that:
\bar{P}(E|I) ≠ P(E|I)
It cannot be that E \subseteq I, otherwise we would have:
\frac {\bar{P}(E|I)}{\bar{P}(I|I)} = \frac {\bar{P}(E|I)}{1} ≠ \frac {P(E|I)}{1} = \frac {P(E \cap I)}{P(I)} = \frac {P(E)}{P(I)}
which would be a contradiction, since if \bar{P} were a conditional probability, it would satisfy:
\frac {\bar{P}(E|I)}{\bar{P}(I|I)} = \frac {P(E)}{P(I)} (*)
The proof by contradiction seems more like a proof of the uniqueness of a conditional probability.
Anyway, I'm not really seeing statement (*). How is it that \frac {\bar{P}(E|I)}{\bar{P}(I|I)} = \frac {P(E)}{P(I)}?