# Relation of conditional and joint probability

1. Jun 4, 2010

### SW VandeCarr

$$P(A|B)=\frac{P(B|A)P(A)}{P(B)}$$

$$P(B|A)P(A)= \frac{(P(B)\cap P(A))\, P(A)}{P(A)}=P(B)\cap P(A)=P(A)\cap P(B)$$

$$P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}$$

$$0\leq(P(A)\cap P(B))\leq P(A); P(A)\leq P(B)$$

$$0\leq(P(B)\cap P(A))\leq P(B); P(B)\leq P(A)$$

Last edited: Jun 5, 2010
2. Jun 4, 2010

### SW VandeCarr

No question posed; I posted inadvertently while practicing with the LaTeX preview. There's apparently no delete option for the first post.

EDIT: Since it's posted, any questions or comments are welcome. The issue is that although the intersection relation commutes, there seems to be an order-dependent consideration regarding the range of conditional probabilities if one takes the first term to be the one with the smaller marginal probability.

Last edited: Jun 5, 2010
3. Jun 6, 2010

The general formula for

$$P(B \mid A)$$

is

$$P(B \mid A) = \frac{P(B \cap A)}{P(A)}$$

The notation you use,

$$P(A) \cap P(B)$$

is incorrect: $P(A)$ and $P(B)$ are numbers, while $\cap$ denotes a set operation. (I'm assuming this was a typo.)

I'm not sure what you wanted to do here:

$$P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}$$

Assuming both items in the numerator were meant to be $P(A \cap B)$,
the numerator is simply $P(A \cap B)$.
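For what it's worth, the general formula can be checked on a toy example. The die, the events, and the variable names below are my own made-up illustration, not anything from the posts above:

```python
# A minimal numeric sketch of P(B|A) = P(B ∩ A) / P(A), using a fair
# six-sided die: A = "roll is even", B = "roll is at least 4".
from fractions import Fraction

omega = set(range(1, 7))        # sample space of a fair six-sided die
A = {2, 4, 6}                   # roll is even
B = {4, 5, 6}                   # roll is at least 4

def P(event):
    # uniform probability: |event| / |omega|
    return Fraction(len(event), len(omega))

p_A = P(A)                      # 1/2
p_B_and_A = P(B & A)            # P({4, 6}) = 1/3
p_B_given_A = p_B_and_A / p_A   # (1/3) / (1/2) = 2/3
print(p_B_given_A)              # 2/3
```

Note that $P(B \cap A)$ is a probability of the set $B \cap A$; the intersection is taken between the events, never between the numbers $P(A)$ and $P(B)$.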

4. Jun 7, 2010

### SW VandeCarr

I was considering the behavior of a function $F(A,B)=P(A|B)$.

$$0\leq P(A\cap B)\leq P(B) \quad \text{if } P(B)\leq P(A)$$

$$0\leq P(A\cap B)\leq P(A) \quad \text{if } P(A)\leq P(B)$$

The form of $F(A,B)$ depends on which of the marginal probabilities is larger when the marginal probabilities of A and B are unequal. Therefore, I believe it would be useful to be able to incorporate this information into a general notation. Any suggestions?

Last edited: Jun 7, 2010
5. Jun 8, 2010

I'm still not sure what you're getting at in your final post. It is always true that

$$0 \le P(A \cap B) \le P(A), \quad 0 \le P(A \cap B) \le P(B),$$

regardless of which marginal is larger, since $A \cap B$ is always a subset of both $A$ and $B$. You can summarize this by saying

$$P(A \cap B) \le \min\{P(A), P(B)\}$$

Without some assumptions about how the values of $P(A)$ and $P(B)$ compare, I'm not sure much is possible.
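The bound $P(A \cap B) \le \min\{P(A), P(B)\}$ can also be sanity-checked numerically. The sample-space size, the random distribution, and the trial count below are arbitrary choices of mine, just to show the inequality holds no matter which marginal is larger:

```python
# Check P(A ∩ B) <= min(P(A), P(B)) on randomly generated events over a
# small finite sample space with a randomly chosen probability distribution.
import random

random.seed(0)
n = 8  # size of the finite sample space {0, ..., 7}
for _ in range(1000):
    weights = [random.random() for _ in range(n)]
    total = sum(weights)
    p = [w / total for w in weights]              # a random distribution
    A = {i for i in range(n) if random.random() < 0.5}
    B = {i for i in range(n) if random.random() < 0.5}
    pA = sum(p[i] for i in A)
    pB = sum(p[i] for i in B)
    pAB = sum(p[i] for i in A & B)
    # holds regardless of which marginal is larger, since A ∩ B ⊆ A and A ∩ B ⊆ B
    assert pAB <= min(pA, pB) + 1e-12
print("inequality held in all 1000 trials")
```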

6. Jun 14, 2010

### SW VandeCarr

I'm thinking about the use of the Bayes factor (B) in hypothesis testing:

$$B_{10}=\frac{P(D|H_{1})}{P(D|H_{0})}=\frac{P(H_{1}|D)P(D)/P(H_{1})}{P(H_{0}|D)P(D)/P(H_{0})}=\frac{P(H_{1}\cap D)/P(H_{1})}{P(H_{0}\cap D)/P(H_{0})}$$

If $$P(H_{1}\cap D)=P(H_{1})$$ and $$P(H_{0}\cap D)=P(H_{0})$$ then we get

$$B_{10}=\frac{1}{1}=1\;?$$

This is not an invalid value for $B$, but it seems not to take into account the relative sizes of the marginal probabilities $P(H_1)$ and $P(H_0)$.

EDIT: My question goes to the interpretation and theory behind the math, not the math itself, which is clear. If $P(H_1) > P(H_0)$, both being contained entirely within the probability space of $D$, can we say that $H_1$ "explains" the data $D$ better than $H_0$? If not, why not? Perhaps this is frequentist thinking. I've used Bayesian methods for conditional probabilities and MLE, but not hypothesis testing with the Bayes factor.
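The identity in the last line of the displayed derivation can be sketched directly; the function name and the example probabilities below are hypothetical choices of mine, picked only to show that $B_{10}=1$ whenever each hypothesis is contained in $D$, regardless of how the marginals compare:

```python
# Bayes factor via joint and marginal probabilities:
# B_10 = [P(H1 ∩ D)/P(H1)] / [P(H0 ∩ D)/P(H0)] = P(D|H1)/P(D|H0)
from fractions import Fraction

def bayes_factor(p_h1_and_d, p_h1, p_h0_and_d, p_h0):
    return (Fraction(p_h1_and_d) / p_h1) / (Fraction(p_h0_and_d) / p_h0)

# If each hypothesis lies entirely inside D, then P(Hi ∩ D) = P(Hi),
# and B_10 = 1 no matter how large or small P(H1) and P(H0) are:
b = bayes_factor(Fraction(2, 5), Fraction(2, 5),
                 Fraction(1, 10), Fraction(1, 10))
print(b)  # 1
```

The marginals drop out because the Bayes factor is a ratio of likelihoods $P(D\mid H_i)$, and containment in $D$ forces both likelihoods to equal 1.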

Last edited: Jun 14, 2010