
Basic probability with set theory

  1. Sep 2, 2016 #1
    1. The problem statement, all variables and given/known data

    [tex]P(A | \overline{B}) = ?[/tex]

    2. Relevant equations
    Multiplicative rule:
    \begin{equation}
    P(A | B) = \frac{P(A \cap B)}{P(B)}
    \end{equation}
    Additive rule:
    \begin{equation}
    P(A \cup B) = P(A) + P(B) - P(A \cap B)
    \end{equation}
    Difference:
    \begin{equation}
    A \backslash B = A \cap \overline{B}
    \end{equation}
    A hint:
    \begin{equation}
    P(\overline{A} \backslash B) = P(\overline{A} \cap \overline{B})
    \end{equation}

    3. The attempt at a solution

    Using equation (1):
    [tex]P(A | \overline{B}) = \frac{P(A \cap \overline{B})}{P(\overline{B})}[/tex]

    This is where I'm stuck. I don't see how ##(3)## or ##(4)## would help me here, since there isn't an identity I could use to convert the difference into something more workable.

    What to do?
     
  3. Sep 2, 2016 #2

    micromass

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    What's the full question?
     
  4. Sep 2, 2016 #3
    Ah damn, sorry! My blood sugar is low and I'm a bit stressed out.

    They gave us ##P(A) = 0.4##, ##P(B|A)=0.60## and ##P(B|\overline{A})=0.40## and asked us to calculate a few probabilities:

    \begin{align*}
    &a)\ P(A \cap B) &= 0.24\\
    &b)\ P(B) &= 0.48\\
    &c)\ P(A \cup B) &= 0.64\\
    &d)\ P(A|B) &= 0.50\\
    &e)\ P(A|\overline{B}) &= ?\\
    &f)\ P(\overline{A} \backslash B) &= ?
    \end{align*}

    I'm having trouble with e) and f) (possibly just e). I'm somehow supposed to use the identities above to manipulate these expressions into a form I can plug the given or the previously calculated values into.
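    The chain for a)–d) can be checked numerically; here is a minimal Python sketch, assuming only the three given values (variable names are my own):

```python
# Given: P(A), P(B|A), P(B|~A) -- values as stated in the problem
P_A = 0.4
P_B_given_A = 0.60
P_B_given_notA = 0.40

# a) multiplicative rule: P(A n B) = P(A) * P(B|A)
P_A_and_B = P_A * P_B_given_A

# b) total probability over {A, ~A}: P(B) = P(A)P(B|A) + P(~A)P(B|~A)
P_B = P_A * P_B_given_A + (1 - P_A) * P_B_given_notA

# c) additive rule: P(A u B) = P(A) + P(B) - P(A n B)
P_A_or_B = P_A + P_B - P_A_and_B

# d) multiplicative rule rearranged: P(A|B) = P(A n B) / P(B)
P_A_given_B = P_A_and_B / P_B

print(round(P_A_and_B, 2), round(P_B, 2), round(P_A_or_B, 2), round(P_A_given_B, 2))
# 0.24 0.48 0.64 0.5
```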
     
  5. Sep 2, 2016 #4

    micromass


    Are you familiar with Bayes' theorem?
     
  6. Sep 2, 2016 #5
    Looking at my course handout, it is mentioned under Kokonaistodennäköisyys ja Bayesin kaava (Total probability and Bayes' theorem), but we didn't yet cover it in class. Just a sec and I'll see if I can understand it.
     
    Last edited: Sep 2, 2016
  7. Sep 2, 2016 #6
    Ok, so basically it goes like this:

    Let's assume that our sample space ##\Omega## is partitioned into separate subsets like so:

    [tex]\Omega = B_1 \cup \cdots \cup B_n[/tex]

    Then if we have a subset ##A## of ##\Omega## that intersects some or all of the parts, we can write ##A## like this:

    [tex]A = (A \cap B_1) \cup (A \cap B_2) \cup \cdots \cup (A \cap B_n)[/tex]

    Then
    [tex]P(A) = \sum_{i=1}^{n} P(A \cap B_i)[/tex]

    If ##P(B_i) > 0## for each ##i##, the multiplicative rule turns this into the law of total probability:

    [tex]P(A) = \sum_{i=1}^{n} P(B_i)P(A|B_i)[/tex]

    Bayes' theorem can then be derived from the total probability formula together with the multiplicative identity:

    [tex]P(B_k|A) = \frac{P(B_k)P(A|B_k)}{\sum_{i=1}^{n} P(B_i)P(A|B_i)}[/tex]
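    As a concrete sanity check of that formula (my own illustration, using the thread's numbers and the partition ##\{A, \overline{A}\}##), Bayes' theorem reproduces answer d):

```python
P_A = 0.4
P_B_given_A = 0.60       # P(B|A)
P_B_given_notA = 0.40    # P(B|~A)

# Denominator: total probability P(B) over the partition {A, ~A}
P_B = P_A * P_B_given_A + (1 - P_A) * P_B_given_notA

# Bayes: P(A|B) = P(A) P(B|A) / P(B)
P_A_given_B = P_A * P_B_given_A / P_B
print(round(P_A_given_B, 2))  # 0.5
```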
     
  8. Sep 2, 2016 #7

    micromass


    Yes, here the partition is ##A## and ##\overline{A}##.

    You can do this without Bayes, but I think Bayes is the most natural approach here.
     
  9. Sep 2, 2016 #8
    I'll see if I can figure out how to apply it. But first dinner.
     
  10. Sep 2, 2016 #9
    Just to clarify, are you sure the partition is just ##A## and ##\overline{A}##? My understanding of set theory is very limited, but I'd drawn up the situation like this (not to scale, of course):
    [Attachment: Sample space.png]
    I'm not sure I understand why I should partition the space into ##A## and ##\overline{A}##. Is it because ##A## intersects both ##B## and ##\Omega##?

    Then the Bayes theorem would give me the following result:
    \begin{align*}
    P(A|\overline{B})
    &= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}
    \end{align*}
    Now
    [tex]
    P(\overline{B}|A) = \frac{P(\overline{B}\cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} = \frac{P(A)-P(A \cap B)}{P(A)} = \frac{0.4 - 0.24}{0.4} = 0.4
    [/tex]
    Then
    \begin{align*}
    P(A|\overline{B})
    &= \frac{P(A)P(\overline{B}|A)}{P(A)P(\overline{B}|A) + P(\overline{A})P(\overline{B}|A)}\\
    &= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times 0.4}\\
    &= 0.4
    \end{align*}
    I'm told this is still wrong. :frown:
     
  11. Sep 2, 2016 #10

    Ray Vickson

    Science Advisor
    Homework Helper

    ##A, \bar{A}## form a partition of ##\Omega## because ##A \cap \bar{A} = \emptyset## (they are disjoint) and ##A \cup \bar{A} = \Omega## (together, they make up the whole space).
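    The same definition can be illustrated with finite sets (a toy sample space of my own choosing):

```python
# A set and its complement always partition the sample space
Omega = {1, 2, 3, 4, 5, 6}   # toy sample space
A = {1, 2}                   # any event
A_bar = Omega - A            # its complement

assert A & A_bar == set()    # disjoint: their intersection is empty
assert A | A_bar == Omega    # together they cover the whole space
```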
     
  12. Sep 2, 2016 #11

    micromass


    Are you sure about this? I would double check for some typos.
     
  13. Sep 2, 2016 #12
    If ##A## and ##\overline{A}## are the partitions, then their probabilities should be the coefficients in front of the ##P(\overline{B}|A)##s in the denominator in Bayes' theorem, no? And at least according to the handout, ##B_k## and ##A## do switch places like this ##P(B_k|A) \leftrightarrow P(A|B_k)## as we move from one side of the equals sign to the other; unless I've completely misunderstood the formula, that is.
     
  14. Sep 2, 2016 #13
    Ahh, so the partitions have to cover the entire space. Got it.
     
  15. Sep 2, 2016 #14

    micromass


    Shouldn't there be a ##P(\overline{B}|\overline{A})## in the denominator somewhere?
     
  16. Sep 2, 2016 #15
    Wait, let's recap. So our conditional probability:

    [tex]P(B_k|A) = \frac{P(B_k \cap A)}{P(A)}[/tex]

    becomes the Bayes' formula

    [tex]P(B_k|A) = \frac{P(B_k) \times P(A|B_k)}{\sum_{i=1}^{n} P(B_i) \times P(A|B_i)}[/tex],

    when the product identity and the total probability formula for ##P(A)## are applied to the conditional probability above. Here the ##B_i## form the partition. So if we apply this to my situation:

    \begin{align*}
    P(A|\overline{B})
    &= \frac{P(A)\times P(\overline{B}|A)}{P(A) \times P(\overline{B} | A) + P(\overline{A}) \times P(\overline{B} | \overline{A})}\\
    &= \frac{0.4 \times 0.4}{0.4 \times 0.4 + 0.6 \times P(\overline{B} | \overline{A})}
    \end{align*}

    Alright, this looks different. Now I just need to figure out what ##P(\overline{B} | \overline{A})## is.
     
  17. Sep 2, 2016 #16

    micromass


    You know, there's no need for Bayes. So although I think it's most natural, here's a way to do it without:
    Notice that ##P(A|\overline{B}) = \frac{P(A\cap \overline{B})}{P(\overline{B})}##
    Now use also that ##P(\overline{B}|A) = \frac{P(A\cap \overline{B})}{P(A)}##.
     
  18. Sep 2, 2016 #17
    Ok. :biggrin:

    I'm pretty sure my last iteration of the formula was finally correct; there's just that pain-in-the-butt term in the denominator.

    But if we take the above approach:
    [tex]P(\overline{B}) \times P(A | \overline{B}) = P(A) \times P(\overline{B} | A)[/tex]

    We've already shown above that ##P(\overline{B} | A) = 0.4## (in the post with the picture, assuming my understanding of basic set theory holds; nothing to do with Bayes). Then:

    [tex]P(A|\overline{B}) = \frac{P(A) \times P(\overline{B} | A)}{P(\overline{B})} = \frac{0.4 \times 0.4}{0.52} = 0.30769[/tex]

    Apparently this was still wrong. My derivation of ##P(\overline{B} | A)## was probably wrong.
     
  19. Sep 2, 2016 #18

    Ray Vickson


    Since ##(B, \overline{B})## is a partition of ##\Omega## we have ##P(B|\overline{A}) + P(\overline{B}|\overline{A}) = P(\Omega|\overline{A})##. Can you figure out what is ## P(\Omega|\overline{A})##?
     
  20. Sep 2, 2016 #19
    It's ##1##, isn't it?
     
  21. Sep 2, 2016 #20
    If ## P(\Omega|\overline{A}) = 1##, then
    \begin{align*}
    P(\overline{B}|\overline{A})
    &= 1 - P(B|\overline{A})\\
    &= 1 - 0.4\\
    &= 0.6
    \end{align*}

    Then
    \begin{align*}
    P(A|\overline{B})
    &= \frac{0.4 \times 0.4}{0.4^2 + 0.6^2} \approx 0.30769
    \end{align*}

    This is the same answer I got with micromass' other method, but it is wrong. Again, my guess is that my derivation of ##P(\overline{B}|A) = \frac{P(\overline{B} \cap A)}{P(A)} = \frac{P(A \backslash B)}{P(A)} \stackrel{error?}{=} \frac{P(A) - P(A \cap B)}{P(A)} = 0.4## was wrong.
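    For the record, the full chain for e) and f) can be run numerically; this is a sketch assuming the three given values are as stated (the derivation above does check out arithmetically):

```python
P_A = 0.4
P_B_given_A = 0.60
P_B_given_notA = 0.40

P_A_and_B = P_A * P_B_given_A                  # a) = 0.24
P_B = P_A_and_B + (1 - P_A) * P_B_given_notA   # b) = 0.48
P_A_or_B = P_A + P_B - P_A_and_B               # c) = 0.64

# e) P(A|~B) = P(A n ~B) / P(~B), with P(A n ~B) = P(A) - P(A n B)
P_A_given_notB = (P_A - P_A_and_B) / (1 - P_B)

# f) P(~A \ B) = P(~A n ~B) = 1 - P(A u B)
P_notA_diff_B = 1 - P_A_or_B

print(round(P_A_given_notB, 5), round(P_notA_diff_B, 2))
# 0.30769 0.36
```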
     