Homework Help: Basic probability with set theory

1. Sep 2, 2016

TheSodesa

1. The problem statement, all variables and given/known data

$$P(A | \overline{B}) = ?$$

2. Relevant equations
Multiplicative rule:

$$P(A | B) = \frac{P(A \cap B)}{P(B)} \tag{1}$$

Additive rule:

$$P(A \cup B) = P(A) + P(B) - P(A \cap B) \tag{2}$$

Difference:

$$A \backslash B = A \cap \overline{B} \tag{3}$$

A hint:

$$P(\overline{A} \backslash B) = P(\overline{A} \cap \overline{B}) \tag{4}$$

3. The attempt at a solution

Using equation (1):
$$P(A | \overline{B}) = \frac{P(A \cap \overline{B})}{P(\overline{B})}$$

This is where I'm stuck. I don't see how $(3)$ or $(4)$ would help me here, since there isn't an identity I could use to convert a difference into something more workable.

What to do?

2. Sep 2, 2016

micromass

What's the full question?

3. Sep 2, 2016

TheSodesa

Ah damn, sorry! My blood sugar is low and I'm a bit stressed out.

They gave us $P(A) = 0.4$, $P(B|A)=0.60$ and $P(B|\overline{A})=0.40$ and asked us to calculate a few probabilities:

\begin{align*}
&a) \quad P(A \cap B) &&= 0.24\\
&b) \quad P(B) &&= 0.48\\
&c) \quad P(A \cup B) &&= 0.64\\
&d) \quad P(A | B) &&= 0.50\\
&e) \quad P(A | \overline{B}) &&= ?\\
&f) \quad P(\overline{A} \backslash B) &&= ?
\end{align*}

I'm having trouble with e) and f) (possibly just e). I'm somehow supposed to use the identities above to manipulate these expressions into a form I can plug the given or the previously calculated values into.
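For reference, parts a)-d) can be sanity-checked numerically. This is a quick Python sketch (the variable names are my own, not from the course):

```python
# Given data: P(A) = 0.4, P(B|A) = 0.60, P(B|not-A) = 0.40
p_A = 0.4
p_B_given_A = 0.60
p_B_given_notA = 0.40

p_A_and_B = p_A * p_B_given_A                         # a) multiplicative rule
p_B = p_A * p_B_given_A + (1 - p_A) * p_B_given_notA  # b) total probability
p_A_or_B = p_A + p_B - p_A_and_B                      # c) additive rule
p_A_given_B = p_A_and_B / p_B                         # d) conditional probability

print(p_A_and_B, p_B, p_A_or_B, p_A_given_B)
```

These reproduce the four values listed above (0.24, 0.48, 0.64, 0.50).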

4. Sep 2, 2016

micromass

Are you familiar with Bayes' theorem?

5. Sep 2, 2016

TheSodesa

Looking at my course handout, it is mentioned under Kokonaistodennäköisyys ja Bayesin kaava (Total probability and Bayes' theorem), but we didn't yet cover it in class. Just a sec and I'll see if I can understand it.

Last edited: Sep 2, 2016
6. Sep 2, 2016

TheSodesa

Ok, so basically it goes like this:

Let's assume that our sample space $\Omega$ is partitioned into pairwise disjoint subsets like so:

$$\Omega = B_1 \cup \cdots \cup B_n$$

Then if we have a subset $A$ of $\Omega$ that intersects some or all of the parts, we can write $A$ like this:

$$A = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n)$$

Then, since the pieces $A \cap B_i$ are disjoint,
$$P(A) = \sum_{i=1}^{n} P(A \cap B_i)$$

If $P(B_i) > 0$ for each $i$, then by the multiplicative identity we have the law of total probability:

$$P(A) = \sum_{i=1}^{n} P(B_i)P(A|B_i)$$

Bayes' theorem can then be derived from the total probability formula together with the multiplicative identity:

$$P(B_k|A) = \frac{P(B_k)P(A|B_k)}{\sum_{i=1}^{n} P(B_i)P(A|B_i)}$$
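As a quick numeric check of this formula against the problem's data, Bayes' theorem with the partition $A$, $\overline{A}$ reproduces answer d). A Python sketch (my own variable names):

```python
# Partition of the sample space: A and its complement.
p_A, p_notA = 0.4, 0.6
p_B_given_A, p_B_given_notA = 0.60, 0.40

# Total probability: P(B) = P(A)P(B|A) + P(not-A)P(B|not-A)
p_B = p_A * p_B_given_A + p_notA * p_B_given_notA

# Bayes: P(A|B) = P(A)P(B|A) / P(B)
p_A_given_B = p_A * p_B_given_A / p_B
print(p_A_given_B)
```

This gives $P(A|B) = 0.24/0.48 = 0.5$, matching d).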

7. Sep 2, 2016

micromass

Yes, here the partition is $A$ and $\overline{A}$.

You can do this without Bayes, but I think Bayes is the most natural approach here.

8. Sep 2, 2016

TheSodesa

I'll see if I can figure out how to apply it. But first dinner.

9. Sep 2, 2016

TheSodesa

Just to clarify, are you sure the partition is just $A$ and $\overline{A}$? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):

I'm not sure I understand why I should partition the space into $A$ and $\overline{A}$. Is it because $A$ intersects both $B$ and $\Omega$?

Then the Bayes theorem would give me the following result:
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
\end{align*}
Now
$$P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4$$
Then
\begin{align*}
P(A|\overline{B})
&= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
&= 0.4
\end{align*}
I'm told this is still wrong.

10. Sep 2, 2016

Ray Vickson

$A, \bar{A}$ form a partition of $\Omega$ because $A \cap \bar{A} = \emptyset$ (they are disjoint) and $A \cup \bar{A} = \Omega$ (together, they make up the whole space).

11. Sep 2, 2016

micromass

Are you sure about this? I would double check for some typos.

12. Sep 2, 2016

TheSodesa

If $A$ and $\overline{A}$ form the partition, then their probabilities should be the coefficients in front of the $P(\overline{B}|A)$ terms in the denominator of Bayes' theorem, no? And at least according to the handout, $B_k$ and $A$ do switch places like this, $P(B_k|A) \leftrightarrow P(A|B_k)$, as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.

13. Sep 2, 2016

TheSodesa

Ahh, so the parts of the partition have to cover the entire space. Got it.

14. Sep 2, 2016

micromass

Shouldn't there be a $P(\overline{B}|\overline{A})$ in the denominator somewhere?

15. Sep 2, 2016

TheSodesa

Wait, let's recap. So our conditional probability:

$$P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}$$

becomes the Bayes' formula

$$P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)}$$

when the product identity and the total probability formula for $P(A)$ are applied to the numerator and the denominator, respectively. Here the $B_i$ are the parts of the partition. So if we apply this to my situation:

\begin{align*}
P(A|\overline{B})
&= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
\end{align*}

Alright, this looks different. Now I just need to figure out what $P(\overline{B} | \overline{A})$ is.

16. Sep 2, 2016

micromass

You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
Notice that $P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}$
Now use also that $P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}$.

17. Sep 2, 2016

TheSodesa

Ok.

I'm pretty sure my last iteration of the formula was finally correct; there's just that pain-in-the-butt term in the denominator.

But if we take the above approach:
$$P(\overline{B}) \times P(A | \overline{B}) = P(A) \times P(\overline{B} | A)$$

We've already shown that $P(\overline{B} | A) = 0.4$ above (in the post with the picture, assuming my understanding of basic set theory holds; nothing to do with Bayes). Then:

$$P(A|\overline{B}) = \frac{P(A) \times P(\overline{B} | A)}{P(\overline{B})} = \frac{0.4 \times 0.4}{0.52} = 0.30769$$

Apparently this was still wrong. My derivation of $P(\overline{B} | A)$ was probably wrong.

18. Sep 2, 2016

Ray Vickson

Since $(B, \overline{B})$ is a partition of $\Omega$ we have $P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})$. Can you figure out what is $P(\Omega|\overline{A})$?

19. Sep 2, 2016

TheSodesa

It's $1$, isn't it?

20. Sep 2, 2016

TheSodesa

If $P(\Omega|\overline{A}) = 1$, then
\begin{align*}
P(\overline{B}|\overline{A})
&= 1 - P(B|\overline{A})\\
&= 1 - 0.4\\
&= 0.6
\end{align*}

Then
\begin{align*}
P(A|\overline{B})
&= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.6} \approx 0.30769
\end{align*}

This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of $P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4$ was wrong.
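For completeness, the full chain of computations can be checked numerically. A Python sketch (variable names are my own):

```python
# Given data
p_A, p_notA = 0.4, 0.6
p_B_given_A, p_B_given_notA = 0.60, 0.40

# Complements within each conditional distribution:
p_notB_given_A = 1 - p_B_given_A        # P(not-B | A)     = 0.4
p_notB_given_notA = 1 - p_B_given_notA  # P(not-B | not-A) = 0.6

# Bayes with the partition {A, not-A}:
numerator = p_A * p_notB_given_A
p_notB = numerator + p_notA * p_notB_given_notA  # also equals 1 - P(B) = 0.52
p_A_given_notB = numerator / p_notB
print(p_A_given_notB)
```

This evaluates to $0.16/0.52 = 4/13 \approx 0.30769$, agreeing with both methods above.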

21. Sep 2, 2016

TheSodesa

Scratch what I just said!

It was correct. The system was **very** picky about significant figures.

22. Sep 2, 2016

TheSodesa

Your answer was correct after all. I messed up with the significant figures when I put the answer through the system.

23. Sep 2, 2016

Ray Vickson

$$P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A|\overline{B}) P(\overline{B})}{P(A)}$$

24. Sep 2, 2016

TheSodesa

Thanks for the patience. It was correct.