How Do You Calculate Conditional Probability with Complements?

AI Thread Summary
The discussion focuses on calculating the conditional probability P(A | \overline{B}) using given probabilities and Bayes' theorem. Participants explore various approaches, including the multiplicative and additive rules of probability, while addressing specific values provided in the homework. There is confusion regarding the correct application of Bayes' theorem and the need to derive certain probabilities accurately. Ultimately, the correct calculation of P(A | \overline{B}) is confirmed to be approximately 0.30769, with emphasis on careful consideration of significant figures in the final answer. The conversation highlights the importance of understanding conditional probabilities and their relationships within set theory.
TheSodesa

Homework Statement


$$P(A | \overline{B}) = ?$$

Homework Equations


Multiplicative rule:
\begin{equation}
P(A | B) = \frac{P(A \cap B)}{P(B)}
\end{equation}
Additive rule:
\begin{equation}
P(A \cup B) = P(A) + P(B) - P(A \cap B)
\end{equation}
Difference:
\begin{equation}
A \backslash B = A \cap \overline{B}
\end{equation}
A hint:
\begin{equation}
P(\overline{A} \backslash B) = P(\overline{A} \cap \overline{B})
\end{equation}

The Attempt at a Solution



Using equation (1):
$$P(A | \overline{B}) = \frac{P(A \cap \overline{B})}{P(\overline{B})}$$

This is where I'm stuck. I don't see how either ##(3)## or ##(4)## would help me here, since I don't have an identity that would let me convert the difference into something more workable.

What to do?
 
What's the full question?
 
micromass said:
What's the full question?

Ah damn, sorry! My blood sugar is low and I'm a bit stressed out.

They gave us ##P(A) = 0.4##, ##P(B|A)=0.60## and ##P(B|\overline{A})=0.40## and asked us to calculate a few probabilities:

\begin{align*}
&a) P(A∩B) &= 0.24\\
&b) P(B) &= 0.48\\
&c) P(A∪B) &= 0.64\\
&d) P(A|B) &= 0.50\\
&e) P(A|\overline{B}) &= ?\\
&f) P(\overline{A}∖B) &= ?
\end{align*}

I'm having trouble with e) and f) (possibly just e). I'm somehow supposed to use the identities above to manipulate these expressions into a form I can plug the given or the previously calculated values into.
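For reference, a)–d) can be reproduced from the given values like this (a minimal Python sketch; the variable names are my own):

```python
# Given values from the problem statement.
p_A = 0.4
p_B_given_A = 0.60
p_B_given_notA = 0.40
p_notA = 1 - p_A  # P(A-bar) = 0.6

# a) Multiplicative rule: P(A ∩ B) = P(A) * P(B|A)
p_A_and_B = p_A * p_B_given_A  # 0.24

# b) P(B) = P(A)P(B|A) + P(A-bar)P(B|A-bar)
p_B = p_A * p_B_given_A + p_notA * p_B_given_notA  # 0.48

# c) Additive rule: P(A ∪ B) = P(A) + P(B) - P(A ∩ B)
p_A_or_B = p_A + p_B - p_A_and_B  # 0.64

# d) Conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_A_given_B = p_A_and_B / p_B  # 0.50

print(p_A_and_B, p_B, p_A_or_B, p_A_given_B)  # ≈ 0.24 0.48 0.64 0.5
```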
 
Are you familiar with Bayes' theorem?
 
micromass said:
Are you familiar with Bayes' theorem?

Looking at my course handout, it is mentioned under Kokonaistodennäköisyys ja Bayesin kaava (Total probability and Bayes' theorem), but we haven't covered it in class yet. Just a sec and I'll see if I can understand it.
 
micromass said:
Are you familiar with Bayes' theorem?

Ok, so basically it goes like this:

Let's assume that our sample space ##\Omega## is partitioned into disjoint subsets like so:

$$\Omega = B_1 \cup \cdots \cup B_n$$

Then if we have a subset ##A## of ##\Omega## that intersects some or all of the parts, we can write ##A## like this:

$$A = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n)$$

Then

$$P(A) = \sum_{i=1}^{n} P(A \cap B_i)$$

If ##P(B_i) > 0##, then by the multiplicative identity we get the law of total probability:

$$P(A) = \sum_{i=1}^{n} P(B_i)P(A|B_i)$$

Bayes' theorem can then be derived by applying the multiplicative identity to the numerator and the total probability formula to the denominator:

$$P(B_k|A) = \frac{P(B_k)P(A|B_k)}{\sum_{i=1}^{n} P(B_i)P(A|B_i)}$$
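As a sanity check, here is a minimal Python sketch (my own, with the given values) of the total probability formula and Bayes' theorem, taking ##B_1 = A## and ##B_2 = \overline{A}##:

```python
# Total probability and Bayes with B_1 = A, B_2 = A-bar:
p_A, p_notA = 0.4, 0.6
p_B_given_A, p_B_given_notA = 0.60, 0.40

# P(B) = P(A)P(B|A) + P(A-bar)P(B|A-bar)
p_B = p_A * p_B_given_A + p_notA * p_B_given_notA
print(p_B)  # 0.48, matching part b)

# Bayes: P(A|B) = P(A)P(B|A) / P(B)
print(p_A * p_B_given_A / p_B)  # ≈ 0.5, matching part d)
```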
 
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.
 
micromass said:
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.

I'll see if I can figure out how to apply it. But first dinner.
 
micromass said:
Yes, here the partition is ##A## and ##\overline{A}##.

You can do this without Bayes, but I think Bayes is the most natural approach here.

Just to clarify, are you sure the partition is just ##A## and ##\overline{A}##? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):
(Attached sketch: Sample space.png)

I'm not sure I understand why I should partition the space into ##A## and ##\overline{A}##. Is it because ##A## intersects both ##B## and ##\Omega##?

Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}
Now
$$P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4$$
Then
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
&= 0.4
\end{align*}
I'm told this is still wrong. :frown:
 
  • #10
TheSodesa said:
Just to clarify, are you sure the partition is just ##A## and ##\overline{A}##? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):
(Attachment 105417: Sample space.png)
I'm not sure I understand why I should partition the space into ##A## and ##\overline{A}##. Is it because ##A## intersects both ##B## and ##\Omega##?

Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}
Now
$$P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4$$
Then
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
&= 0.4
\end{align*}
I'm told this is still wrong. :frown:

##A, \bar{A}## form a partition of ##\Omega## because ##A \cap \bar{A} = \emptyset## (they are disjoint) and ##A \cup \bar{A} = \Omega## (together, they make up the whole space).
 
  • #11
TheSodesa said:
Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}

Are you sure about this? I would double check for some typos.
 
  • #12
micromass said:
Are you sure about this? I would double check for some typos.

If ##A## and ##\overline{A}## are the partitions, then their probabilities should be the coefficients in front of the ##P(\overline{B}|A)##s in the denominator in Bayes' theorem, no? And at least according to the handout, ##B_k## and ##A## do switch places like this ##P(B_k|A) \leftrightarrow P(A|B_k)## as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.
 
  • #13
Ray Vickson said:
##A, \bar{A}## form a partition of ##\Omega## because ##A \cap \bar{A} = \emptyset## (they are disjoint) and ##A \cup \bar{A} = \Omega## (together, they make up the whole space).

Ahh, so the partitions have to cover the entire space. Got it.
 
  • #14
TheSodesa said:
If ##A## and ##\overline{A}## are the partitions, then their probabilities should be the coefficients in front of the ##P(\overline{B}|A)##s in the denominator in Bayes' theorem, no? And at least according to the handout, ##B_k## and ##A## do switch places like this ##P(B_k|A) \leftrightarrow P(A|B_k)## as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.

Shouldn't there be a ##P(B|\overline{A})## in the denominator somewhere?
 
  • #15
micromass said:
Shouldn't there be a ##P(B|\overline{A})## in the denominator somewhere?

Wait, let's recap. So our conditional probability:

$$P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}$$

becomes Bayes' formula

$$P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)},$$

when the product identity is applied to the numerator and the total probability formula to ##P(A)## in the denominator. Here the ##B_i## are the parts of the partition. So if we apply this to my situation:

\begin{align*}
P(A|\overline{B})
&= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
\end{align*}

Alright, this looks different. Now I just need to figure out what ##P(\overline{B} | \overline{A})## is.
 
  • #16
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.
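Numerically, that route amounts to the following (a minimal sketch, using ##P(\overline{B}|A) = 0.4## as derived earlier in the thread):

```python
# micromass's non-Bayes route: both conditionals share P(A ∩ B-bar).
p_A = 0.4
p_B = 0.48            # from part b)
p_notB_given_A = 0.4  # P(B-bar|A), as derived earlier in the thread

p_A_and_notB = p_A * p_notB_given_A  # P(A ∩ B-bar) = P(A) * P(B-bar|A) = 0.16
p_notB = 1 - p_B                     # P(B-bar) = 0.52
print(p_A_and_notB / p_notB)         # ≈ 0.30769
```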
 
  • #17
micromass said:
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.

Ok. :biggrin:

I'm pretty sure my last iteration of the formula was finally correct; there's just that pain-in-the-butt term in the denominator.

But if we take the above approach:

$$P(\overline{B}) \times P(A | \overline{B}) = P(A) \times P(\overline{B} | A)$$

We've already shown that ##P(\overline{B} | A) = 0.4## above (in the post with the picture, assuming my understanding of basic set theory holds; nothing to do with Bayes). Then:

$$P(A|\overline{B}) = \frac{P(A) \times P(\overline{B} | A)}{P(\overline{B})} = \frac{0.4 \times 0.4}{0.52} \approx 0.30769$$

Apparently this was still wrong. My derivation of ##P(\overline{B} | A)## was probably wrong.
 
  • #18
TheSodesa said:
Wait, let's recap. So our conditional probability:

$$P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}$$

becomes Bayes' formula

$$P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)},$$

when the product identity is applied to the numerator and the total probability formula to ##P(A)## in the denominator. Here the ##B_i## are the parts of the partition. So if we apply this to my situation:

\begin{align*}
P(A|\overline{B})
&= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
\end{align*}

Alright, this looks different. Now I just need to figure out what ##P(\overline{B} | \overline{A})## is.

Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?
 
  • #19
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

It's ##1##, isn't it?
 
  • #20
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

If ## P(\Omega|\overline{A}) = 1##, then
\begin{align*}
P(\overline{B}|\overline{A})
&= 1 - P(B|\overline{A})\\
&= 1 - 0.4\\
&= 0.6
\end{align*}

Then
\begin{align*}
P(A|\overline{B})
&= \frac{0.4 \times 0.4}{0.4^2 + 0.6^2} \approx 0.30769
\end{align*}

This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of ##P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4## was wrong.
 
  • #21
Ray Vickson said:
Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?

Scratch what I just said!

It was correct. The system was **very** picky about significant figures.
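For reference, the full-precision value (a minimal Python sketch; exactly which rounding the grading system accepts isn't stated in the thread):

```python
from fractions import Fraction

# Exact value: P(A|B-bar) = 0.16 / 0.52 = 4/13, a repeating decimal,
# so the reported answer depends on how many significant figures are kept.
p = Fraction(16, 100) / Fraction(52, 100)
print(p)         # 4/13
print(float(p))  # ≈ 0.3076923076923077
```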
 
  • #22
micromass said:
You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.

Your answer was correct after all. I messed up with the significant figures when I put the answer through the system.
 
  • #23
TheSodesa said:
If ## P(\Omega|\overline{A}) = 1##, then
\begin{align*}
P(\overline{B}|\overline{A})
&= 1 - P(B|\overline{A})\\
&= 1 - 0.4\\
&= 0.6
\end{align*}

Then
\begin{align*}
P(A|\overline{B})
&= \frac{0.4 \times 0.4}{0.4^2 + 0.6^2} \approx 0.30769
\end{align*}

This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of ##P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4## was wrong.

$$P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A|\overline{B}) P(\overline{B})}{P(A)} $$
 
  • #24
Ray Vickson said:
$$P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A|\overline{B}) P(\overline{B})}{P(A)} $$

Thanks for the patience. It was correct.
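To close the loop, a sketch (my addition) checking that both routes agree, and working out part f) from hint (4), which the thread never returned to:

```python
# Route 1 (Bayes):
# P(A|B-bar) = P(A)P(B-bar|A) / (P(A)P(B-bar|A) + P(A-bar)P(B-bar|A-bar))
bayes = (0.4 * 0.4) / (0.4 * 0.4 + 0.6 * 0.6)

# Route 2 (definition): P(A|B-bar) = P(A ∩ B-bar) / P(B-bar)
direct = (0.4 - 0.24) / (1 - 0.48)

print(bayes, direct)  # both ≈ 0.30769

# f) By hint (4) and De Morgan: A-bar \ B = A-bar ∩ B-bar = complement of (A ∪ B),
# so P(A-bar \ B) = 1 - P(A ∪ B) = 1 - 0.64
print(1 - 0.64)  # 0.36
```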
 
