Simple probability question: Suppose P(B|A)=1. Does that mean that P(A|B)=1?


Discussion Overview

The discussion revolves around the implications of conditional probabilities, specifically whether P(A|B) can be inferred from P(B|A) = 1. Participants explore concepts related to probability theory, causation, and the relationships between events A and B.

Discussion Character

  • Exploratory
  • Debate/contested
  • Mathematical reasoning
  • Conceptual clarification

Main Points Raised

  • Some participants question whether P(A|B) = 1 necessarily follows from P(B|A) = 1, suggesting that A could be a subset of B.
  • Others propose that if P(A) = P(B), then it might hold true, but this is not universally accepted.
  • Bayes' Theorem is mentioned, with some arguing that if P(A) = P(B), then P(A|B) could also equal 1.
  • There is a discussion about the nature of causality, with some asserting that A cannot be both the cause and effect of B.
  • Participants express confusion about the definitions of causality and how they relate to conditional probabilities.
  • Some argue that the concept of "possible" outcomes in probability theory is not formally defined, leading to philosophical discussions about the nature of events.
  • There is a suggestion that the definition of causality may vary, and that retrocausality is generally ruled out under standard definitions.
  • One participant emphasizes that conditional probability should be viewed as an updated probability rather than implying causation.
  • Another participant highlights the ambiguity in defining "cause" within the context of probability theory.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether P(A|B) = 1 follows from P(B|A) = 1. Multiple competing views on causality and the implications of conditional probabilities remain unresolved.

Contextual Notes

Discussions include limitations in definitions of causality, the implications of events with zero probability, and the philosophical aspects of probability theory that are not strictly defined within the formal mathematical framework.

entropy1
Suppose P(B|A)=1. Does that mean that P(A|B)=1?
 
Your question boils down to ##P(A\cap B) = P(A) \implies P(A \cap B) = P(B)##. Do you think this is true? Try to think what happens when ##A\subseteq B##, for example.

Intuitively, you know that ##B## will happen (almost surely) when ##A## happens. But does this mean that ##A## will happen when you know that ##B## happens?
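The subset hint above can be made concrete with a quick enumeration. The die-roll events here are an illustration of my own, not from the thread: take A = "the roll is 6" and B = "the roll is even", so ##A \subset B## and ##P(B|A) = 1## while ##P(A|B) < 1##.

```python
from fractions import Fraction

# Hypothetical example (not from the thread): roll a fair die.
# A = "the roll is 6", B = "the roll is even", so A is a proper subset of B.
omega = set(range(1, 7))
A = {6}
B = {2, 4, 6}

def P(event):
    # Uniform probability measure on the sample space.
    return Fraction(len(event), len(omega))

p_B_given_A = P(A & B) / P(A)
p_A_given_B = P(A & B) / P(B)

print(p_B_given_A)  # 1
print(p_A_given_B)  # 1/3
```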
 
Ah, A can still be a proper subset of B. I have to think in terms of sets here. Then it does not hold.

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?
 
entropy1 said:
Ah, A can still be a proper subset of B. I have to think in terms of sets here. Then it does not hold.

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?

Do you know Bayes' Theorem?
 
PeroK said:
Do you know Bayes' Theorem?
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?
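As a sanity check of that Bayes'-rule arithmetic, using the values from the posts above (P(A) = P(B) = 0.5 and P(B|A) = 1):

```python
# Values taken from the discussion: P(A) = P(B) = 0.5, P(B|A) = 1.
p_A, p_B = 0.5, 0.5
p_B_given_A = 1.0

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
p_A_given_B = p_B_given_A * p_A / p_B
print(p_A_given_B)  # 1.0
```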
 
entropy1 said:
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?
It's hard to argue with that.
 
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
 
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
Saying ##A = B## is simpler.
 
entropy1 said:
In other words: A is the cause of B, but B is also the cause of A?
No, this does not follow. A cause precedes an effect.
 
  • #10
PeroK said:
Saying ##A = B## is simpler.
Since these may be sets, there may be points (or events) with zero probability that make the sets different.
 
  • #11
FactChecker said:
Since these may be sets, there may be points (or events) with zero probability that make the sets different.
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.
 
  • #12
PeroK said:
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.
That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.
 
  • #13
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes, ##\mathcal{F}## is the set of all possible events (where any given event is a set of outcomes, i.e. a certain subset of ##\Omega##), and ##P## is a probability measure, i.e. a function ##P : \mathcal{F} \rightarrow [0,1]## which takes a particular event to its corresponding probability.

If ##A## and ##B## are events, then saying ##A = B## is just saying that they are the same set of outcomes?
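A minimal concrete instance of that triple, for a fair coin (a toy sketch of my own, not from the thread), where events are just subsets of ##\Omega## and ##A = B## is ordinary set equality:

```python
from fractions import Fraction

# Kolmogorov triple (Omega, F, P) for a fair coin, as a toy sketch.
omega = frozenset({"H", "T"})                 # outcomes, Omega
F = {frozenset(), frozenset({"H"}),
     frozenset({"T"}), omega}                 # events, F (the power set)

def P(event):
    # Uniform probability measure P : F -> [0, 1].
    return Fraction(len(event), len(omega))

A = frozenset({"H"})
B = frozenset({"H"})
print(P(A), A == B)  # 1/2 True
```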
 
  • #14
FactChecker said:
That is a reasonable idea, but I do not agree that it is normal. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.
Yes, you're right. To be precise, ##A = B## "almost surely" or "almost everywhere": i.e. treating two sets as equivalent if they differ only by a set of probability measure zero.
 
  • #15
etotheipi said:
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes,

The idea of "possible" outcomes is a concept in applying probability theory, not something defined in the formal theory. The formal theory of probability (as a special case of "measure theory") does not define the concept of "possible". It simply says a probability space includes a set ##\Omega## of elements, which are called "outcomes".

In applications of probability theory, we think of "possible" outcomes, some of which "actually" happen. But "possible" and "actual" don't have formal mathematical definitions. Discussing possibility and actuality can lead to complex philosophical discussions.
 
  • #16
You're quite right, I should have just said 'outcomes'. It's perfectly fine to include in ##\Omega## outcomes for which the probability is exactly zero. I was just being a little too colloquial :wink:
 
  • #17
Dale said:
No, this does not follow. A cause precedes an effect.
Is that the definition? Is retrocausality, to give it a name, ruled out?
 
  • #18
entropy1 said:
Is that the definition? Is retrocausality, to give it a name, ruled out?
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.
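The point that conditioning carries no time order can be checked with a toy simulation of the match example. The half-time and winning probabilities below are invented for illustration:

```python
import random
random.seed(0)

# Toy simulation: B = "team X led at half time" happens BEFORE
# A = "team X won", yet P(B|A) is still perfectly well defined.
# The probabilities (0.5 to lead; 0.7 / 0.3 to win) are assumed.
n = 100_000
count_A = count_A_and_B = 0
for _ in range(n):
    led_half_time = random.random() < 0.5       # event B, happens first
    p_win = 0.7 if led_half_time else 0.3       # leading helps (assumed)
    won = random.random() < p_win               # event A, happens later
    if won:
        count_A += 1
        if led_half_time:
            count_A_and_B += 1

p_B_given_A = count_A_and_B / count_A
print(f"P(led at half time | won) = {p_B_given_A:.2f}")  # close to 0.7
```

With these assumed numbers, Bayes' rule predicts P(B|A) = 0.5 · 0.7 / 0.5 = 0.7, which the simulation approximates.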
 
  • #19
entropy1 said:
Is that the definition? Is retrocausality, to give it a name, ruled out?
There are lots of different definitions of causality, but I think that the Wikipedia page does a decent job sorting through them.

https://en.wikipedia.org/wiki/Causality

The one that I think is my "default" understanding of causality is a "sufficient cause":

If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y.

Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.

entropy1 said:
Is that the definition? Is retrocausality, to give it a name, ruled out?
Retrocausality is ruled out under the usual definitions, but it is not hard to change the definitions. It is important to know that it is a different definition so that you don't get confused.
 
  • #20
It is safer to think of conditional probability Prob( A|B ) as the updated value of the probability of A given the knowledge that B is true, not that B caused A. If you know that a person bumped his head on the doorway, then you know that he is probably tall. Bumping his head did not make him tall.
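The head-bumping example can be sketched numerically. The height distribution and bump probabilities below are invented for illustration; the point is that observing a bump updates P(tall) upward, while obviously not causing the height:

```python
import random
random.seed(1)

# Toy version of the head-bumping example. Heights ~ N(175, 10) cm and
# the bump probabilities are assumed, purely for illustration.
n = 100_000
tall = bumped_count = tall_and_bumped = 0
for _ in range(n):
    height = random.gauss(175, 10)                          # cm
    bump = random.random() < (0.9 if height > 190 else 0.02)
    if height > 190:
        tall += 1
    if bump:
        bumped_count += 1
        if height > 190:
            tall_and_bumped += 1

# Conditioning on the bump raises the probability of "tall"
# well above the base rate, without any causal claim.
print(f"P(tall)        = {tall / n:.2f}")
print(f"P(tall | bump) = {tall_and_bumped / bumped_count:.2f}")
```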
 
  • #21
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
Define "cause"! The theory of probability does not define it.
 
  • #22
Demystifier said:
Define "cause"! The theory of probability does not define it.
In this context you might say that if the occurrence of (event) B implies the occurrence of A, that B is a sufficient cause of A?

I am aware that of course the arithmetic may work out, but the question remains what "cause" means. I guess it would just mean whatever the numbers say.

(We are discussing this among other things right now in this thread)

Perhaps there should be the requirement that, to be a cause, the cause should be possible to be made freely, whatever that means. :oldbiggrin:

However, Dale wrote:
Dale said:
Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.
I haven't seen a convincing reason why I may not reverse the causal temporal direction.
PeroK said:
E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.
This is certainly a strong argument to me, and @FactChecker made a similar one. So it seems that calling something a sufficient cause merely because the numbers are compatible with that notion doesn't follow. So yes, perhaps there should be some kind of definition of "cause".
 
  • #23
PeroK said:
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? Is this the point that correlation isn't causation?

Well, that's a good point, but then I do not understand why, nor when something counts as a cause.
 
  • #24
entropy1 said:
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? Is this the point that correlation isn't causation?
No. Correlation is completely different from causation. If a man has a long left leg, he almost certainly also has a long right leg. You wouldn't say that either leg length caused the other leg length.
Well, that's a good point, but then I do not understand why, nor when something counts as a cause.
You will need to look for causation in the logic and science of the application subject. Probability and statistics will not give you that.
 
  • #25
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?

No, there are several possibilities.
1. ##A## could cause ##B## with certainty
2. ##B## could cause ##A## with certainty
3. ##C## could cause ##A## and ##B## with certainty

If (3) is true with ##B## and ##A## not being causes of each other, then manipulating ##B## will not affect ##A##, but manipulating ##C## will affect both ##A## and ##B##.
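Case (3) can be sketched in a few lines. A common cause C is assumed to drive both A and B deterministically, so the observed P(A|B) is 1, yet forcing B from outside leaves A at its base rate:

```python
import random
random.seed(2)

# Sketch of case (3): a common cause C drives both A and B with
# certainty (structure assumed purely for illustration).
n = 100_000
obs_A_given_B = []
for _ in range(n):
    C = random.random() < 0.5
    A, B = C, C                  # C causes both, deterministically
    if B:
        obs_A_given_B.append(A)

# Observationally, P(A|B) = 1.
print(sum(obs_A_given_B) / len(obs_A_given_B))

# Intervention: set B = True regardless of C; A only tracks C,
# so A falls back to the base rate of C.
forced_A = []
for _ in range(n):
    C = random.random() < 0.5
    B = True                     # manipulated from outside
    A = C                        # unaffected by our manipulation
    forced_A.append(A)

print(sum(forced_A) / n)         # close to 0.5
```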
 
  • #26
entropy1 said:
Suppose P(B|A)=1. Does that mean that P(A|B)=1?
I would think that it does not.

I think that this is similar to asking whether
##A \rightarrow B## means that ##B \rightarrow A##,
which it clearly does not.

Regarding the preceding causality discussion, I for my part agree with @atyy (and others), and also say that the causative idea here looks to me like (the fallacy named) post hoc ergo propter hoc. ##-##

(which references the idea that someone has supposed incorrectly that if ##a## follows ##b## then ##b## has definitely caused ##a##, even though ##b## may have merely preceded ##a##, and not caused ##a##).
 
  • #27
entropy1 said:
Is that the definition? Is retrocausality, to give it a name, ruled out?
(joke) If you rule in tachyons, then you're good to go ##\dots##
 
  • #28
sysprog said:
I think that this is similar to asking whether
##A \rightarrow B## means that ##B \rightarrow A##,
which it clearly does not.
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder whether C=True forces A=False (or, equivalently, whether A=True forces C=False).
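That question can be settled by brute-force enumeration of all truth assignments (a small sketch, nothing beyond propositional logic assumed):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# Check: whenever A -> B and C -> NOT(B) both hold,
# does C = True force A = False?
ok = all(
    implies(C, not A)
    for A, B, C in product([False, True], repeat=3)
    if implies(A, B) and implies(C, not B)
)
print(ok)  # True
```

It does hold: if C is true then B is false, and A → B then rules out A being true (this is just contraposition applied twice).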
 
  • #29
entropy1 said:
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.
Agree. That's contraposition, the rule behind modus tollens. But you asked whether P(B|A)=1 meant that P(A|B)=1, and I remarked that it seemed to me similar to supposing that B entails A given that A entails B.

Let's please look at another example − I think that @FactChecker's head-bumping and leg-length examples were not imperspicuous; however this illustration might be even more fun:

Does the probability that ##-##
I have it, given that I stole it
equal the probability that ##-##
I stole it, given that I have it ?

I think that if the Police already know that I stole it before finding out that I have it, then that's evidence obtained, but if I'm merely found in possession of something, that tells nothing about whether I stole it.

Also ##-## I might not still have it even if I stole it, and I might still have it even if I didn't steal it ##\dots##

:wink:
 
  • #30
Or the probability that a center in the NBA is tall vs the probability that a tall person is an NBA center. Or the probability that the president of China speaks Mandarin vs the probability that someone who speaks Mandarin is the president of China.
 
