Relation of conditional and joint probability

SW VandeCarr
P(A|B)=\frac{P(B|A)P(A)}{P(B)}

P(B|A)P(A)= \frac{(P(B)\cap P(A)) P(A)}{P(A)}=P(B)\cap P(A)=P(A)\cap P(B)

P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}

0\leq(P(A)\cap P(B))\leq P(A) \text{ if } P(A)\leq P(B)

0\leq(P(B)\cap P(A))\leq P(B) \text{ if } P(B)\leq P(A)
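A quick numeric sanity check of the identity P(A|B)=P(B|A)P(A)/P(B) (a minimal sketch; the probability values below are hypothetical, chosen only to make the arithmetic visible):

```python
# Hypothetical joint and marginal probabilities for two events A and B.
p_a_and_b = 0.12   # P(A ∩ B)
p_a = 0.3          # P(A)
p_b = 0.4          # P(B)

# Definition of conditional probability in both directions.
p_a_given_b = p_a_and_b / p_b
p_b_given_a = p_a_and_b / p_a

# Bayes' theorem should recover P(A|B) from P(B|A).
bayes = p_b_given_a * p_a / p_b
assert abs(bayes - p_a_given_b) < 1e-12
print(p_a_given_b)
```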
 
No question posed. I posted inadvertently. I was practicing with the Latex preview. There's apparently no delete option for the first post.

EDIT: Since it's posted, any questions or comments are welcome. The issue is that although the intersection relation commutes, there seems to be an order-dependent consideration regarding the range of the conditional probabilities if one takes the first term to be the one with the smaller marginal probability.
 
The general formula for

P(B \mid A)

is

P(B \mid A) = \frac{P(B \cap A)}{P(A)}

The notation (you use)

P(A) \cap P(B)

is incorrect: the two items being intersected are numbers, and [itex]\cap[/itex] denotes a set operation. (I'm assuming you made a typo here.)

I'm not sure what you wanted to do here:

P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}

Assuming both items in the numerator were meant to be P(A \cap B),
the numerator is simply P(A \cap B).
 
statdad said:
Assuming both items in the numerator were meant to be P(A \cap B),
the numerator is simply P(A \cap B).

Thanks statdad.

I was considering the behavior of a function F(A,B)=P(A|B)

0\leq P(A\cap B)\leq P(B) \text{ if } P(B)\leq P(A)

0\leq P(A\cap B)\leq P(A) \text{ if } P(A)\leq P(B)

The form of F(A,B) is dependent on which of the marginal probabilities is larger when the marginal probabilities of A and B are unequal. Therefore, I believe it would be useful to be able to incorporate this information into a general notation. Any suggestions?
 
I'm still not sure what you're getting at in your final post. It is always true that

0 \le P(A \cap B) \le P(A), \quad 0 \le P(A \cap B) \le P(B),

regardless of which marginal is larger, since A \cap B is always a subset of both A and B. You can summarize this by saying

P(A \cap B) \le \min\{P(A), P(B)\}

Without some assumptions about how the values of P(A) and P(B) compare, I'm not sure much is possible.
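The bound can be checked by brute force on small random probability spaces (a sketch with arbitrary parameters, not anything specific to the thread):

```python
import random

# On random distributions over 6 outcomes, with random events A and B,
# verify P(A ∩ B) <= min(P(A), P(B)) regardless of which marginal is larger.
random.seed(0)
n = 6
for _ in range(1000):
    weights = [random.random() for _ in range(n)]
    total = sum(weights)
    p = [w / total for w in weights]                    # random distribution
    a = {i for i in range(n) if random.random() < 0.5}  # random event A
    b = {i for i in range(n) if random.random() < 0.5}  # random event B
    pa = sum(p[i] for i in a)
    pb = sum(p[i] for i in b)
    pab = sum(p[i] for i in a & b)
    assert pab <= min(pa, pb) + 1e-12   # A ∩ B is a subset of both A and B
print("bound holds")
```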
 
statdad said:
Without some assumptions about how the values of P(A) and P(B) compare, I'm not sure much is possible.

I'm thinking about the use of the Bayes factor (B) in hypothesis testing:

B_{10}=\frac{P(D|H_{1})}{P(D|H_{0})}=\frac{P(H_{1}|D)P(D)/P(H_{1})}{P(H_{0}|D)P(D)/P(H_{0})}=\frac{P(H_{1}\cap D)/P(H_{1})}{P(H_{0}\cap D)/P(H_{0})}

If P(H_{1}\cap D)=P(H_{1}) and P(H_{0}\cap D)=P(H_{0}) then we get

1/1=1 ?

This is not an invalid value for B, but it seems not to take into account the relative sizes of the marginal probabilities P(H_1) and P(H_0).

EDIT: My question goes to the interpretation and theory behind the math, not the math itself, which is clear. If P(H_1) > P(H_0), both being contained entirely within the probability space of D, can we say that H_1 "explains" the data D better than H_0? If not, why not? Perhaps this is frequentist thinking. I've used Bayesian methods for conditional probabilities and MLE, but not for hypothesis testing with the Bayes factor.
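To make the numbers concrete (a toy sketch; the values of P(H_1) and P(H_0) are hypothetical):

```python
# Bayes factor B_10 = P(D|H1) / P(D|H0) when both hypotheses lie
# entirely inside D, so P(H ∩ D) = P(H) for each hypothesis.
p_h1 = 0.4
p_h0 = 0.1
p_h1_and_d = p_h1   # H1 contained in D
p_h0_and_d = p_h0   # H0 contained in D

# Each likelihood ratio term is exactly 1, so B_10 = 1 regardless of
# how P(H1) and P(H0) compare.
b_10 = (p_h1_and_d / p_h1) / (p_h0_and_d / p_h0)
print(b_10)  # prints 1.0
```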
 