# Simple probability question: Suppose P(B|A)=1. Does that mean that P(A|B)=1?

In summary, the conversation discusses the relationship between the conditional probabilities P(B|A) and P(A|B). The concept of causality is also brought up, with several definitions being mentioned. The conclusion is that if each of two events has probability 1 given the other, then each is necessary and sufficient for the other, but at most one of them can be considered a sufficient cause, because the definition requires the effect to occur subsequent to the cause.

#### entropy1

Suppose P(B|A)=1. Does that mean that P(A|B)=1?

Your question boils down to ##P(A\cap B) = P(A) \implies P(A \cap B) = P(B)##. Do you think this is true? Try to think what happens when ##A\subseteq B##, for example.

Intuitively, you know that ##B## will happen (almost surely) when ##A## happens. But does this mean that ##A## will happen when you know that ##B## happens?

• etotheipi
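The ##A \subseteq B## case can be made concrete with a small sketch (a hypothetical die example of my own, not from the thread):

```python
from fractions import Fraction

# Toy example: a fair six-sided die.
omega = {1, 2, 3, 4, 5, 6}
A = {6}        # "roll a six"
B = {4, 5, 6}  # "roll at least four" -- note that A is a subset of B

def prob(event):
    # Uniform probability on the sample space.
    return Fraction(len(event), len(omega))

p_B_given_A = prob(B & A) / prob(A)  # = 1, since A ⊆ B
p_A_given_B = prob(A & B) / prob(B)

print(p_B_given_A)  # 1
print(p_A_given_B)  # 1/3
```

So P(B|A) = 1 while P(A|B) = 1/3: the implication only runs one way when ##A## is a proper subset of ##B##.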
Ah, A can still be a subset of B. I have to think in terms of sets here. Then it does not hold.

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?

entropy1 said:
Ah, A can still be a subset of B. I have to think in terms of sets here. Then it does not hold.

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?

Do you know Bayes' Theorem?

PeroK said:
Do you know Bayes' Theorem?
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?

• Dale
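The Bayes' theorem formula above can be turned into a few lines of code (my own sketch; the fractions keep the arithmetic exact):

```python
from fractions import Fraction

def p_A_given_B(p_B_given_A, p_A, p_B):
    """Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)."""
    return p_B_given_A * p_A / p_B

# With P(B|A) = 1 the formula reduces to P(A)/P(B),
# which equals 1 exactly when P(A) = P(B).
print(p_A_given_B(Fraction(1), Fraction(1, 2), Fraction(1, 2)))  # 1
print(p_A_given_B(Fraction(1), Fraction(1, 3), Fraction(1, 2)))  # 2/3
```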
entropy1 said:
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?
It's hard to argue with that.

• Demystifier
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?

entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
Saying ##A = B## is simpler.

entropy1 said:
In other words: A is the cause of B, but B is also the cause of A?
No, this does not follow. A cause precedes an effect.

• sysprog
PeroK said:
Saying ##A = B## is simpler.
Since these may be sets, there may be points (or events) with zero probability that make the sets different.

FactChecker said:
Since these may be sets, there may be points (or events) with zero probability that make the sets different.
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.

• etotheipi and FactChecker
PeroK said:
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.
That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.

• etotheipi
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes, ##\mathcal{F}## is the set of all possible events (where any given event is a set of outcomes, i.e. a certain subset of ##\Omega##), and ##P## is a probability measure, i.e. a function ##P : \mathcal{F} \rightarrow [0,1]## which takes a particular event to its corresponding probability.

If ##A## and ##B## are events, then saying ##A = B## is just saying that they are the same set of outcomes?

• Dale
FactChecker said:
That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.
Yes, you're right. To be precise, ##A = B## "almost surely" or "almost everywhere": i.e. treating two sets as equivalent if they differ only by a set of probability measure zero.

• FactChecker and etotheipi
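The almost-sure equality can be made explicit in one step (a short derivation of my own, assuming ##P(B|A)=1## and ##P(A)=P(B)##):

```latex
P(A \cap B) = P(B|A)\,P(A) = P(A) = P(B)
\;\implies\; P(A \setminus B) = P(A) - P(A \cap B) = 0
\quad\text{and}\quad P(B \setminus A) = P(B) - P(A \cap B) = 0
\;\implies\; P(A \,\triangle\, B) = 0,
\quad\text{i.e. } A = B \text{ almost surely.}
```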
etotheipi said:
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes,

The idea of "possible" outcomes is a concept in applying probability theory, not something defined in the formal theory. The formal theory of probability (as a special case of "measure theory") does not define the concept of "possible". It simply says a probability space includes a set ##\Omega## of elements, which are called "outcomes".

In applications of probability theory, we think of "possible" outcomes, some of which "actually" happen. But "possible" and "actual" don't have formal mathematical definitions. Discussing possibility and actuality can lead to complex philosophical discussions.

• Dale, hutchphd and etotheipi
You're quite right, I should have just said 'outcomes'. It's perfectly fine to include in ##\Omega## outcomes for which the probability is exactly zero. I was just being a little too colloquial.

Dale said:
No, this does not follow. A cause precedes an effect.
Is that the definition? Is retrocausality, to name it such, ruled out?

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.

• sysprog and Dale
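The half-time example can be checked numerically (a toy simulation with hypothetical probabilities of my own choosing):

```python
import random

random.seed(0)

# Toy model with made-up numbers: B = "team X led at half time",
# A = "team X won".  B happens BEFORE A, yet P(B|A) is well defined.
results = []
for _ in range(100_000):
    led = random.random() < 0.5                    # B
    won = random.random() < (0.7 if led else 0.3)  # A, more likely after a lead
    results.append((led, won))

# Condition on the LATER event: restrict to matches the team won.
wins = [r for r in results if r[1]]
p_led_given_won = sum(1 for r in wins if r[0]) / len(wins)
print(round(p_led_given_won, 2))  # ≈ 0.7
```

Conditioning on ##A## (the win) is just a restriction of the sample; no temporal order is implied.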
entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?
There are lots of different definitions of causality, but I think that the Wikipedia page does a decent job sorting through them.

https://en.wikipedia.org/wiki/Causality

The one that I think is my "default" understanding of causality is a "sufficient cause":

If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y.

Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?
Retrocausality is ruled out under the usual definitions, but it is not hard to change the definitions. It is important to know that it is a different definition so that you don't get confused.

• sysprog
It is safer to think of the conditional probability P(A|B) as the updated value of the probability of A given the knowledge that B is true, not as a statement that B caused A. If you know that a person bumped his head on the doorway, then you know that he is probably tall. Bumping his head did not make him tall.

• sysprog, atyy, DrClaude and 1 other person
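The head-bumping example can be put into numbers (the figures below are hypothetical, chosen by me purely for illustration):

```python
from fractions import Fraction

# Hypothetical numbers: 10% of people are tall, and tall people are far
# more likely to bump their heads on doorways.
p_tall = Fraction(1, 10)
p_bump_given_tall = Fraction(9, 10)
p_bump_given_short = Fraction(1, 100)

# Law of total probability, then Bayes: the bump UPDATES our belief.
p_bump = p_bump_given_tall * p_tall + p_bump_given_short * (1 - p_tall)
p_tall_given_bump = p_bump_given_tall * p_tall / p_bump
print(p_tall_given_bump)  # 10/11 -- high, yet the bump did not CAUSE the height
```

The conditional probability is large because the bump is strong evidence of height, not because it produces height.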
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
Define "cause"! The theory of probability does not define it.

• sysprog
Demystifier said:
Define "cause"! The theory of probability does not define it.
In this context you might say that if the occurrence of (event) B implies the occurrence of A, then B is a sufficient cause of A?

I am aware that of course the numbers may work out, but the question remains what "cause" means. I guess it would just mean what the numbers say.

(We are discussing this among other things right now in this thread)

Perhaps there should be the requirement that, to be a cause, it should be possible to bring the cause about freely, whatever that means. However, Dale wrote:
Dale said:
Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.
I haven't seen a convincing reason why I may not reverse the causal temporal direction.
PeroK said:
E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.
This is certainly a strong argument to me, and @FactChecker made a similar one. So it seems I am wrong: just calling something a sufficient cause because the numbers are compatible with that notion doesn't follow. So yes, perhaps there should be some kind of definition of "cause".

PeroK said:
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? Is this a matter of correlation not being causation?

Well, that's a good point, but then I do not understand why, nor when something is a cause.

entropy1 said:
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? Is this a matter of correlation not being causation?
No. Correlation is completely different from causation. If a man has a long left leg, he almost certainly also has a long right leg. You wouldn't say that either leg length caused the other leg length.
entropy1 said:
Well, that's a good point, but then I do not understand why, nor when something is a cause.
You will need to look for causation in the logic and science of the application subject. Probability and statistics will not give you that.

• sysprog, atyy, pbuk and 1 other person
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?

No, there are several possibilities.
1. ##A## could cause ##B## with certainty
2. ##B## could cause ##A## with certainty
3. ##C## could cause ##A## and ##B## with certainty

If (3) is true with ##B## and ##A## not being causes of each other, then manipulating ##B## will not affect ##A##, but manipulating ##C## will affect both ##A## and ##B##.

• Demystifier and sysprog
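Possibility (3) above can be sketched with a toy "intervention" (a hypothetical model of my own, in the spirit of Pearl's do-operator; nothing in the thread prescribes these numbers):

```python
import random

random.seed(1)

def sample(intervene_b=None):
    """One draw from a toy model where C causes both A and B."""
    c = random.random() < 0.5
    a = c                                          # A is determined by C
    b = c if intervene_b is None else intervene_b  # do(B=...) severs C -> B
    return a, b

# Observationally A and B always agree, so P(A|B) = 1 ...
obs = [sample() for _ in range(100_000)]
assert all(a for a, b in obs if b)

# ... but FORCING B tells us nothing about A: P(A | do(B=True)) = P(A).
forced = [sample(intervene_b=True)[0] for _ in range(100_000)]
print(round(sum(forced) / len(forced), 2))  # ≈ 0.5
```

Manipulating ##B## leaves ##A## at its base rate, while manipulating ##C## would move both, which is exactly the distinction drawn above.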
entropy1 said:
Suppose P(B|A)=1. Does that mean that P(A|B)=1?
I would think that it does not.

I think that this is similar to asking whether
##A \rightarrow B## means that ##B \rightarrow A##,
which it clearly does not.

Regarding the preceding causality discussion, I for my part agree with @atyy (and others), and also say that the causative idea here looks to me like (the fallacy named) post hoc ergo propter hoc. ##-##

(which references the idea that someone has supposed incorrectly that if ##a## follows ##b## then ##b## has definitely caused ##a##, even though ##b## may have merely preceded ##a##, and not caused ##a##).

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?
(joke) If you rule in tachyons, then you're good to go ##\dots##

sysprog said:
I think that this is similar to asking whether
##A \rightarrow B## means that ##B \rightarrow A##,
which it clearly does not.
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder whether C=True results in A=NOT True (or, of course, A=True results in C=NOT True).

entropy1 said:
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.
Agree. That's contraposition, which underlies the modus tollens rule. But you asked whether P(B|A)=1 meant that P(A|B)=1, and I remarked that it seemed to me similar to supposing that B entails A given that A entails B.

Let's please look at another example ##-## I think that @FactChecker's head-bumping and leg-length examples were not imperspicuous; however this illustration might be even more fun:

Does the probability that ##-##
I have it, given that I stole it
equal the probability that ##-##
I stole it, given that I have it ?

I think that if the Police already know that I stole it before finding out that I have it, then that's evidence obtained, but if I'm merely found in possession of something, that by itself does not establish that I stole it.

Also ##-## I might not still have it even if I stole it, and I might still have it even if I didn't steal it ##\dots##
• FactChecker and BWV
Or the probability that a center in the NBA is tall vs the probability that a tall person is an NBA center

Or the prob that the president of China speaks Mandarin vs the prob that someone who speaks Mandarin is the president of China

• FactChecker and sysprog
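The base-rate asymmetry in these examples is easy to quantify (the magnitudes below are made up by me purely for illustration):

```python
# Made-up magnitudes for the NBA-center example above.
n_tall = 3_000_000   # hypothetical count of very tall people in a country
n_centers = 100      # NBA centers, essentially all of whom are tall

p_tall_given_center = 1.0                 # ~P(tall | center)
p_center_given_tall = n_centers / n_tall  # P(center | tall)

print(p_tall_given_center)   # 1.0
print(p_center_given_tall)   # tiny, despite P(tall | center) ≈ 1
```

One conditional is near 1 and the other is near 0 because the two reference classes differ enormously in size.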
entropy1 said:
But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder if C=True results in A=NOT True (or, of course, A=True in C=NOT True).
We can do a short proof here:

To be proven: ##C \rightarrow \neg A##

##1: A \rightarrow B## assumption 1
##2: C \rightarrow \neg B## assumption 2
##3: C## assumption 3
##4: \neg B## modus ponens 2,3
##5: \neg A ## modus tollens 1,4
##6: C \rightarrow \neg A## conditional proof, 3–5 (discharging assumption 3)

##-## and there you have it ##\dots##
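The proof can also be double-checked by brute force over all eight truth assignments (a quick sketch of my own):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# In every truth assignment satisfying the premises (A -> B) and
# (C -> not B), the conclusion (C -> not A) also holds.
for a, b, c in product([False, True], repeat=3):
    if implies(a, b) and implies(c, not b):
        assert implies(c, not a)
print("C -> not A holds in every model of the premises")
```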

entropy1 said:
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder whether C=True results in A=NOT True (or, of course, A=True results in C=NOT True).
I always wondered why probability and logic are in the same forum. Now I see why.

sysprog said:
We can do a short proof here:

To be proven: ##C \rightarrow \neg A##

##1: A \rightarrow B## assumption 1
##2: C \rightarrow \neg B## assumption 2
##3: C## assumption 3
##4: \neg B## modus ponens 2,3
##5: \neg A ## modus tollens 1,4
##6: C \rightarrow \neg A## conditional proof, 3–5 (discharging assumption 3)

##-## and there you have it ##\dots##
So does that mean that one of (1) or (2) gets "reversed"? Can we then speak of retrocausality? (reversed causality?)

entropy1 said:
Can we then speak of retrocausality? (reversed causality?)
Please do not ask this question again without a clear and exact definition of retrocausality. Preferably one from the professional literature.

To all other participants: please do not respond to this question without such a definition.

At this point we will go ahead and close this thread. I strongly recommend studying the existing literature on this topic, perhaps including the time-symmetric formulation of quantum mechanics. It is best to use definitions from the literature, as they are more likely to have addressed some of the basic issues mentioned so far.