Relation of conditional and joint probability

In summary, the Bayes factor is used to help decide between two hypotheses. As discussed below, when both hypotheses are contained entirely within the probability space of the data, the factor reduces to 1/1 = 1, which does not take into account the relative sizes of the marginal probabilities of the hypotheses.
  • #1
SW VandeCarr
[tex]P(A|B)=\frac{P(B|A)P(A)}{P(B)}[/tex]

[tex] P(B|A)P(A)= \frac{(P(B)\cap (P(A)) P(A)}{P(A)}=P(B)\cap P(A)=P(A)\cap P(B)[/tex]

[tex] P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}[/tex]

[tex]0\leq(P(A)\cap P(B))\leq P(A); P(A)\leq P(B)[/tex]

[tex]0\leq(P(B)\cap P(A))\leq P(B); P(B)\leq P(A)[/tex]
 
Last edited:
  • #2
No question posed. I posted inadvertently. I was practicing with the Latex preview. There's apparently no delete option for the first post.

EDIT: Since it's posted, any questions or comments are welcome. The issue is that although the intersection relation commutes, there seems to be an order-dependent consideration regarding the range of conditional probabilities if one takes the first term as the one with the smaller marginal probability.
 
Last edited:
  • #3
The general formula for

[tex]
P(B \mid A)
[/tex]

is

[tex]
P(B \mid A) = \frac{P(B \cap A)}{P(A)}
[/tex]

The notation (you use)

[tex]
P(A) \cap P(B)
[/tex]

is incorrect: the two exterior items are numbers, and [itex] \cap [/itex] denotes a set operation. (I'm assuming you made a typo here.)

I'm not sure what you wanted to do here:

[tex]
P(A|B)=\frac {(P(A)\cap P(B))\vee (P(B)\cap P(A))}{P(B)}
[/tex]

Assuming both items in the numerator were meant to be [itex] P(A \cap B) [/itex],
the numerator is simply [itex] P(A \cap B) [/itex].
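The definition of conditional probability above can be checked numerically. Here is a minimal sketch using a hypothetical sample space of two fair dice; the particular events A ("first die is even") and B ("the sum is 8") are illustrative choices, not from the thread:

```python
from fractions import Fraction

# Sample space: all 36 equally likely outcomes of two fair dice.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = {w for w in omega if w[0] % 2 == 0}   # first die is even
B = {w for w in omega if sum(w) == 8}     # the two dice sum to 8

# P(E) as a count ratio over the equally likely outcomes
P = lambda E: Fraction(len(E), len(omega))

# P(B | A) = P(B ∩ A) / P(A)
p_B_given_A = P(B & A) / P(A)
print(p_B_given_A)  # → 1/6
```

Here B ∩ A contains {(2,6), (4,4), (6,2)}, so P(B ∩ A) = 3/36 and P(A) = 18/36, giving P(B | A) = 1/6.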
 
  • #4
statdad said:
Assuming both items in the numerator were meant to be [itex] P(A \cap B) [/itex],
the numerator is simply [itex] P(A \cap B) [/itex].

Thanks statdad.

I was considering the behavior of a function F(A,B)=P(A|B)

[tex] 0\leq P(A\cap B)\leq P(B) \quad \text{if } P(B)\leq P(A)[/tex]

[tex] 0\leq P(A\cap B)\leq P(A) \quad \text{if } P(A)\leq P(B)[/tex]

The form of F(A,B) is dependent on which of the marginal probabilities is larger when the marginal probabilities of A and B are unequal. Therefore, I believe it would be useful to be able to incorporate this information into a general notation. Any suggestions?
 
Last edited:
  • #5
I'm still not sure what you're getting at in your final post. It is always true that

[tex]
0 \le P(A \cap B) \le P(A), \quad 0 \le P(A \cap B) \le P(B),
[/tex]

regardless of which marginal is larger, since [itex] A \cap B [/itex] is always a subset of both [itex] A [/itex] and [itex] B [/itex]. You can summarize this by saying

[tex]
P(A \cap B) \le \min\{P(A), P(B)\}
[/tex]

Without some assumptions about how the values of [itex] P(A) [/itex] and [itex] P(B) [/itex] compare, I'm not sure much is possible.
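The bound [itex] P(A \cap B) \le \min\{P(A), P(B)\} [/itex] can be checked by brute force. This is a quick sketch over randomly generated events on a hypothetical 100-point sample space (the space and event-generation scheme are illustrative assumptions):

```python
import random

random.seed(0)
omega = range(100)  # hypothetical finite sample space, equally likely points
for _ in range(1000):
    # each point joins A (resp. B) independently with probability 1/2
    A = {w for w in omega if random.random() < 0.5}
    B = {w for w in omega if random.random() < 0.5}
    pA, pB, pAB = len(A) / 100, len(B) / 100, len(A & B) / 100
    # A ∩ B is a subset of both A and B, so its probability can't exceed either
    assert pAB <= min(pA, pB)
print("bound holds in all trials")
```

The bound never fails, regardless of which marginal happens to be larger in a given trial.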
 
  • #6
statdad said:
Without some assumptions about how the values of [itex] P(A) [/itex] and [itex] P(B) [/itex] compare, I'm not sure much is possible.

I'm thinking about the use of the Bayes factor (B) in hypothesis testing:

[tex]B_{10}=\frac{P(D|H_{1})}{P(D|H_{0})}=\frac{P(H_{1}|D)P(D)/P(H_{1})}{P(H_{0}|D)P(D)/P(H_{0})}=\frac{P(H_{1}\cap D)/P(H_{1})}{P(H_{0}\cap D)/P(H_{0})}[/tex]

If [itex] P(H_{1}\cap D)=P(H_{1})[/itex] and [itex] P(H_{0}\cap D)=P(H_{0})[/itex], then we get [itex] B_{10}=1/1=1 [/itex]?

This is not an invalid value for B, but it seems not to take into account the relative sizes of the marginal probabilities P(H_1) and P(H_0).

EDIT: My question goes to the interpretation and theory behind the math, not the math itself, which is clear. If P(H_1) > P(H_0), both being contained entirely within the probability space of D, can we say that H_1 "explains" the data D better than H_0? If not, why not? Perhaps this is frequentist thinking. I've used Bayesian methods for conditional probabilities and MLE, but not hypothesis testing with the Bayes factor.
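The edge case above can be made concrete with hypothetical numbers. In this sketch the marginals 0.4 and 0.1 are illustrative values chosen to be unequal; both hypotheses are assumed contained in D, so each conditional P(D|H) equals 1:

```python
# Hypothetical marginals with H1 and H0 both contained entirely in D,
# so that P(H1 ∩ D) = P(H1) and P(H0 ∩ D) = P(H0).
p_H1, p_H0 = 0.4, 0.1          # unequal marginals (illustrative values)
p_H1_and_D = p_H1              # containment: H1 ⊆ D
p_H0_and_D = p_H0              # containment: H0 ⊆ D

# B10 = [P(H1 ∩ D)/P(H1)] / [P(H0 ∩ D)/P(H0)]
B10 = (p_H1_and_D / p_H1) / (p_H0_and_D / p_H0)
print(B10)  # → 1.0
```

The factor comes out to 1 no matter how the marginals compare, which is exactly the point of the question: the Bayes factor measures how well each hypothesis predicts D, not how probable each hypothesis is.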
 
Last edited:

1. What is the difference between conditional and joint probability?

Conditional probability is the likelihood of an event occurring given that another event has already occurred, while joint probability is the likelihood of two or more events occurring together.

2. How are conditional and joint probability related?

Conditional and joint probability are related through the formula P(A|B) = P(A∩B)/P(B), where P(A|B) represents the conditional probability of event A given event B, P(A∩B) represents the joint probability of events A and B occurring together, and P(B) represents the probability of event B occurring.

3. Can conditional and joint probability be calculated for more than two events?

Yes, conditional and joint probability can be calculated for any number of events. For example, the conditional probability of event A given events B and C occurring would be represented as P(A|B∩C) = P(A∩B∩C)/P(B∩C).
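The three-event formula can be verified on the same kind of finite sample space. A minimal sketch with two fair dice and three illustrative events (the events are hypothetical choices, not from the text):

```python
from fractions import Fraction

# Two fair dice: A = "sum ≥ 9", B = "first die ≥ 4", C = "second die is even".
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
A = {w for w in omega if sum(w) >= 9}
B = {w for w in omega if w[0] >= 4}
C = {w for w in omega if w[1] % 2 == 0}

P = lambda E: Fraction(len(E), len(omega))

# P(A | B ∩ C) = P(A ∩ B ∩ C) / P(B ∩ C)
p = P(A & B & C) / P(B & C)
print(p)  # → 5/9
```

B ∩ C has 9 outcomes; of those, 5 have a sum of at least 9, giving P(A | B ∩ C) = 5/9.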

4. How can conditional and joint probability be used in real-life situations?

Conditional and joint probability can be used to make predictions or decisions by taking into account the likelihood of events occurring in relation to one another. For example, in the medical field, conditional and joint probability can be used to calculate the likelihood of a patient having a certain disease given their symptoms and medical history.
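The medical example above is a standard application of Bayes' theorem. A sketch with hypothetical numbers (the prevalence, sensitivity, and false-positive rate below are illustrative, not from the text):

```python
# Hypothetical figures: 1% disease prevalence, 95% test sensitivity,
# 5% false-positive rate.  Compute P(disease | positive) via Bayes' theorem.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05

# Total probability of a positive test (law of total probability)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: P(disease | +) = P(+ | disease) P(disease) / P(+)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # → 0.161
```

Even with a fairly accurate test, the low prior prevalence keeps the posterior probability of disease at about 16%, which illustrates why the marginal (prior) probabilities matter.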

5. What are some common misconceptions about conditional and joint probability?

One common misconception is that conditional probability is always smaller than joint probability; in fact P(A|B) = P(A∩B)/P(B) is always at least as large as P(A∩B), since P(B) ≤ 1. Another misconception is that events must be independent for conditional and joint probability to be calculated, when in reality they can also be used for dependent events.
