Example involving conditional probability and transitivity

Summary
The discussion revolves around manipulating conditional probabilities, specifically how to rewrite P(T|A) using the law of total probability. The key point is that P(T|A) can be expressed as a sum of products of conditional probabilities, which follows from the general law of total probability. A participant initially struggles with the algebraic substitution but later realizes it involves P(T|A,F) and P(F|A). There is also a clarification about notation, since one participant uses a shorthand for the notation in the example. Ultimately, the conversation highlights the application of fundamental probability laws to conditional probability problems.
hodor
I'm just going to post a screenshot of the example (from a free online textbook). I'm having a tough time making the leap to the first sum - what allows me to rewrite P(T|A) as a sum of products of those two conditional probabilities?

[Screenshot of the textbook example]


Thanks
 
It doesn't have anything to do with the fact that you start from a conditional probability; it's an application of the more general statement
P(X = true) = P(X = true | Y = true) P(Y = true) + P(X = true | Y = false) P(Y = false)
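A quick way to see that statement concretely (a minimal Python sketch; the joint probabilities and variable names below are made-up illustrative choices, not anything from the textbook):

[CODE=python]
# Sanity check of P(X) = P(X|Y) P(Y) + P(X|not Y) P(not Y)
# using an arbitrary, made-up joint distribution over two binary variables.
joint = {
    (True, True): 0.30,   # P(X=true,  Y=true)
    (True, False): 0.10,  # P(X=true,  Y=false)
    (False, True): 0.20,  # P(X=false, Y=true)
    (False, False): 0.40, # P(X=false, Y=false)
}

p_y = joint[(True, True)] + joint[(False, True)]     # P(Y=true)
p_x_given_y = joint[(True, True)] / p_y              # P(X=true | Y=true)
p_x_given_not_y = joint[(True, False)] / (1 - p_y)   # P(X=true | Y=false)

direct = joint[(True, True)] + joint[(True, False)]  # P(X=true) by marginalizing over Y
via_total_prob = p_x_given_y * p_y + p_x_given_not_y * (1 - p_y)
print(direct, via_total_prob)                        # both print 0.4
[/CODE]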
 
Well, I understand that statement. I don't see why it has nothing to do with the fact that I'm dealing with a conditional probability, since it's P(T = true | A = true). In my mind I'm looking for an algebraic substitution, or some rule I already know, that lets me manipulate this into something resembling P(T|A,F). What I don't know is where the P(T|A,F)*P(F|A) comes from or how I could get there.

For example, why not P(T|F,A)*P(F) and sum over F? Why is it P(F|A)?
 
Ok, I found what I was looking for. It's an application of the law of total probability for conditional probabilities:

[Screenshot of the law of total probability for conditional probabilities: ##P(T \mid A) = \sum_{F} P(T \mid A, F)\, P(F \mid A)##]
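A minimal sketch of the same check for this conditional version, assuming a made-up joint distribution over three binary variables A, F, T (the numbers are illustrative, not taken from the textbook example):

[CODE=python]
# Numeric check of P(T|A) = sum over F of P(T|A,F) * P(F|A),
# using a made-up joint distribution P(A, F, T) over three binary variables.
joint = {
    (True,  True,  True):  0.10,
    (True,  True,  False): 0.05,
    (True,  False, True):  0.20,
    (True,  False, False): 0.15,
    (False, True,  True):  0.05,
    (False, True,  False): 0.10,
    (False, False, True):  0.15,
    (False, False, False): 0.20,
}

def prob(event):
    """Probability of the set of (a, f, t) outcomes satisfying `event`."""
    return sum(p for (a, f, t), p in joint.items() if event(a, f, t))

# Left-hand side: P(T=true | A=true), computed directly from the joint.
lhs = prob(lambda a, f, t: a and t) / prob(lambda a, f, t: a)

# Right-hand side: sum over the two values of F of P(T|A,F) * P(F|A).
rhs = 0.0
for fv in (True, False):
    p_a = prob(lambda a, f, t: a)
    p_af = prob(lambda a, f, t: a and f == fv)
    p_aft = prob(lambda a, f, t: a and f == fv and t)
    rhs += (p_aft / p_af) * (p_af / p_a)

print(lhs, rhs)  # both about 0.6 here; they agree up to floating-point rounding
[/CODE]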
 
hodor said:
something that I know that allows me to manipulate this into something resembling P(T|A,F). What I don't know is where the P(T|A,F)*P(F|A) comes from or how I could get there.

What is the notation "P(T|A,F)" supposed to mean? It isn't consistent with the notation in the image you gave.
 
It was just an attempt to abbreviate what was in the image, since I'm on my phone. That post can be ignored at this point.
 
The standard _A " operator" maps a Null Hypothesis Ho into a decision set { Do not reject:=1 and reject :=0}. In this sense ( HA)_A , makes no sense. Since H0, HA aren't exhaustive, can we find an alternative operator, _A' , so that ( H_A)_A' makes sense? Isn't Pearson Neyman related to this? Hope I'm making sense. Edit: I was motivated by a superficial similarity of the idea with double transposition of matrices M, with ## (M^{T})^{T}=M##, and just wanted to see if it made sense to talk...
