Relation of conditional and joint probability

Discussion Overview

The discussion revolves around the relationship between conditional and joint probabilities, exploring the mathematical formulations and implications of these concepts. Participants examine the behavior of conditional probabilities, the notation used, and the interpretation of results in the context of hypothesis testing.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant presents a formula for conditional probability and discusses the intersection of probabilities, raising questions about the order-dependent nature of conditional probabilities when marginal probabilities differ.
  • Another participant corrects the notation used in the initial post, suggesting that the intersection notation is incorrect and clarifying that the numerator should represent the joint probability P(A ∩ B).
  • A participant proposes that the function F(A,B) = P(A|B) is influenced by which marginal probability is larger, suggesting a need for a general notation to reflect this dependency.
  • Another participant asserts that the inequalities regarding P(A ∩ B) hold true regardless of the relative sizes of P(A) and P(B), emphasizing that P(A ∩ B) is always a subset of both A and B.
  • A later reply introduces the concept of the Bayes factor in hypothesis testing, questioning how the relative sizes of marginal probabilities affect interpretations of data explanations by different hypotheses.
  • One participant expresses uncertainty about the implications of the Bayes factor when marginal probabilities are equal, pondering whether one hypothesis can be said to explain the data better than another based on their probabilities.

Areas of Agreement / Disagreement

Participants exhibit a mix of agreement and disagreement, particularly regarding the interpretation of conditional probabilities and the implications of the Bayes factor in hypothesis testing. No consensus is reached on the necessity of incorporating marginal probability comparisons into general notation.

Contextual Notes

Some assumptions about the relationships between P(A) and P(B) remain unresolved, and the discussion highlights the complexity of interpreting conditional probabilities in different contexts.

SW VandeCarr
P(A|B)=\frac{P(B|A)P(A)}{P(B)}

P(B|A)P(A)=\frac{(P(B)\cap P(A))\,P(A)}{P(A)}=P(B)\cap P(A)=P(A)\cap P(B)

P(A|B)=\frac{(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}

0\leq (P(A)\cap P(B))\leq P(A) \text{ if } P(A)\leq P(B)

0\leq (P(B)\cap P(A))\leq P(B) \text{ if } P(B)\leq P(A)
 
Last edited:
No question posed. I posted inadvertently; I was practicing with the LaTeX preview. There's apparently no delete option for the first post.

EDIT: Since it's posted, any questions or comments are welcome. The issue is that although the intersection relation commutes, there seems to be an order-dependent consideration regarding the range of the conditional probabilities if one takes the first term to be the one with the smaller marginal probability.
 
Last edited:
The general formula for P(B \mid A) is

P(B \mid A) = \frac{P(B \cap A)}{P(A)}

The notation you use,

P(A) \cap P(B),

is incorrect: the two exterior items are numbers, and \cap denotes a set operation. (I'm assuming you made a typo here.)

I'm not sure what you wanted to do here:

P(A|B)=\frac{(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}

Assuming both items in the numerator were meant to be P(A \cap B),
the numerator is simply P(A \cap B).
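The definition above can be spot-checked numerically. Here is a minimal sketch using a fair die roll; the specific events A and B are illustrative, not taken from the thread:

```python
from fractions import Fraction

# Sample space: one roll of a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}
A = {2, 4, 6}   # "even"
B = {4, 5, 6}   # "greater than 3"

def p(event):
    """Probability of an event under the uniform measure on omega."""
    return Fraction(len(event & omega), len(omega))

# P(B | A) = P(B ∩ A) / P(A): intersect the *events*, then take probabilities.
p_b_given_a = p(B & A) / p(A)
print(p_b_given_a)   # (2/6) / (3/6) = 2/3
```

Note that the intersection is applied to the sets A and B, not to the numbers P(A) and P(B), which is exactly the notational point above.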
 
statdad said:
Assuming both items in the numerator were meant to be P(A \cap B),
the numerator is simply P(A \cap B).

Thanks statdad.

I was considering the behavior of a function F(A,B)=P(A|B)

0\leq P(A\cap B)\leq P(B) \text{ if } P(B)\leq P(A)

0\leq P(A\cap B)\leq P(A) \text{ if } P(A)\leq P(B)

The form of F(A,B) is dependent on which of the marginal probabilities is larger when the marginal probabilities of A and B are unequal. Therefore, I believe it would be useful to be able to incorporate this information into a general notation. Any suggestions?
 
Last edited:
I'm still not sure what you're getting at in your final post. It is always true that

0 \le P(A \cap B) \le P(A), \quad 0 \le P(A \cap B) \le P(B),

regardless of which marginal is larger, since A \cap B is always a subset of both A and B. You can summarize this by saying

P(A \cap B) \le \min\{P(A), P(B)\}

Without some assumptions about how the values of P(A) and P(B) compare, I'm not sure much is possible.
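The bound P(A \cap B) \le \min\{P(A), P(B)\} can be checked on randomly generated event pairs; this sketch uses a small finite sample space with the uniform measure (the sample-space size and event-construction scheme are illustrative):

```python
import random

random.seed(0)
n = 20                      # size of a finite sample space
omega = set(range(n))

def p(event):
    """Probability of an event under the uniform measure on omega."""
    return len(event) / n

# Spot-check P(A ∩ B) <= min(P(A), P(B)) on many random event pairs;
# the bound holds regardless of which marginal is larger.
for _ in range(1000):
    A = {x for x in omega if random.random() < 0.5}
    B = {x for x in omega if random.random() < 0.5}
    assert p(A & B) <= min(p(A), p(B))

print("bound holds on all sampled pairs")
```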
 
statdad said:
Without some assumptions about how the values of P(A) and P(B) compare, I'm not sure much is possible.

I'm thinking about the use of the Bayes factor (B) in hypothesis testing:

B_{10}=\frac{P(D|H_{1})}{P(D|H_{0})}=\frac{P(H_{1}|D)P(D)/P(H_{1})}{P(H_{0}|D)P(D)/P(H_{0})}=\frac{P(H_{1}\cap D)/P(H_{1})}{P(H_{0}\cap D)/P(H_{0})}

If P(H_{1}\cap D)=P(H_{1}) and P(H_{0}\cap D)=P(H_{0}) then we get

1/1=1 ?

This is not an invalid value for B, but it seems not to take into account the relative sizes of the marginal probabilities P(H_1) and P(H_0).

EDIT: My question goes to the interpretation and theory behind the math, not the math itself, which is clear. If P(H_1) > P(H_0), both being contained entirely within the probability space of D, can we say that H_1 "explains" the data D better than H_0? If not, why not? Perhaps this is frequentist thinking. I've used Bayesian methods for conditional probabilities and MLE, but not hypothesis testing with the Bayes factor.
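The scenario in the post can be made concrete. This is a minimal sketch with made-up marginals (the numbers 3/10 and 1/10 are illustrative assumptions, not from the thread): when each hypothesis lies entirely inside D, both likelihoods equal 1 and the Bayes factor is 1 no matter how the marginals compare.

```python
from fractions import Fraction

def bayes_factor(p_h1_and_d, p_h1, p_h0_and_d, p_h0):
    """B_10 = P(D|H1) / P(D|H0), computed from joints and marginals."""
    return (p_h1_and_d / p_h1) / (p_h0_and_d / p_h0)

# The case raised above: P(Hi ∩ D) = P(Hi) for both hypotheses,
# with deliberately unequal marginals P(H1) != P(H0).
p_h1, p_h0 = Fraction(3, 10), Fraction(1, 10)
b10 = bayes_factor(p_h1, p_h1, p_h0, p_h0)
print(b10)   # 1, independent of the relative sizes of P(H1) and P(H0)
```

This shows numerically why the Bayes factor alone carries no information about the prior probabilities of the hypotheses; comparing P(H_1) and P(H_0) would require bringing the priors (or posterior odds) back in.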
 
Last edited:
