- #1

Suppose P(B|A)=1. Does that mean that P(A|B)=1?

- Thread starter entropy1

In summary: the conversation discusses the relationship between the conditional probabilities of events taken in the two possible orders. The concept of causality is also brought up, with several definitions mentioned. The conclusion is that if each of two events has probability 1 given the other, then each is necessary and sufficient for the other, but only one of them can qualify as a sufficient cause, because a cause must precede its "subsequent" effect.


- #2

Intuitively, you know that ##B## will happen (almost surely) when ##A## happens. But does this mean that ##A## will happen when you know that ##B## happens?

- #3

Ah, A can still be a proper subset of B. I have to think in terms of sets here. Then it does not hold.

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?
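The subset picture can be checked by enumeration. A minimal sketch, using a fair die as my own illustrative sample space (not from the thread): take A = "rolled a 6" and B = "rolled 5 or 6", so that A is a proper subset of B.

```python
from fractions import Fraction

# Sample space: a fair six-sided die, each outcome with probability 1/6.
omega = range(1, 7)
p = {w: Fraction(1, 6) for w in omega}

A = {6}      # "rolled a 6"
B = {5, 6}   # "rolled 5 or 6"; A is a proper subset of B

def prob(event):
    return sum(p[w] for w in event)

def cond(event, given):
    # P(event | given) = P(event ∩ given) / P(given)
    return prob(event & given) / prob(given)

print(cond(B, A))  # 1
print(cond(A, B))  # 1/2
```

Here P(B|A) = 1 because every outcome of A lies in B, while P(A|B) = 1/2: conditioning in the reverse direction divides by the larger set's probability, so the implication fails.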

- #4

entropy1 said:

If the sets are the same size, it holds.

So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?

Do you know Bayes' Theorem?

- #5

PeroK said:
Do you know Bayes' Theorem?

##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?
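As a quick numeric check of Bayes' theorem in this special case (the half-and-half numbers are my own illustration, not from the thread):

```python
from fractions import Fraction

# Assumed toy numbers: P(A) = P(B) = 1/2, and P(B|A) = 1.
P_A = Fraction(1, 2)
P_B = Fraction(1, 2)
P_B_given_A = Fraction(1)

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B)
P_A_given_B = P_B_given_A * P_A / P_B
print(P_A_given_B)  # 1
```

With P(A) = P(B), the ratio P(A)/P(B) is 1, so P(A|B) inherits the value of P(B|A).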

- #6

entropy1 said:
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?

It's hard to argue with that.

- #7

- #8

Saying ##A = B## is simpler.

- #9

Mentor

entropy1 said:
In other words: A is the cause of B, but B is also the cause of A?

No, this does not follow. A cause precedes an effect.

- #10

Science Advisor

Homework Helper

Gold Member

PeroK said:
Saying ##A = B## is simpler.

Since these may be sets, there may be points (or events) with zero probability that make the sets different.

- #11

FactChecker said:
Since these may be sets, there may be points (or events) with zero probability that make the sets different.

In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.

- #12

Science Advisor

Homework Helper

Gold Member

PeroK said:
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.

That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.


- #13

If ##A## and ##B## are events, then saying ##A = B## is just saying that they are the same set of outcomes?

- #14

FactChecker said:
That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.

Yes, you're right. To be precise, ##A = B## "almost surely" or "almost everywhere": i.e. using equivalence of sets if they differ only by a set of probability measure zero.
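The "almost surely" equivalence can be made concrete even in a discrete space. A sketch with my own toy numbers, where one outcome in the sample space carries probability zero:

```python
from fractions import Fraction

# Outcome 3 is in the sample space but is assigned probability zero.
p = {1: Fraction(1, 2), 2: Fraction(1, 2), 3: Fraction(0)}

A = {1, 2}
B = {1, 2, 3}   # B differs from A only by the zero-probability outcome 3

def prob(event):
    return sum(p[w] for w in event)

def cond(event, given):
    return prob(set(event) & set(given)) / prob(given)

# A != B as sets, yet each has probability 1 given the other:
print(A == B)      # False
print(cond(B, A))  # 1
print(cond(A, B))  # 1
```

So P(B|A) = P(A|B) = 1 forces only A = B "almost surely", not literal set equality.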

- #15

Science Advisor

etotheipi said:
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes...

The idea of "possible" outcomes is a concept in applying probability theory, not something defined in the formal theory. The formal theory of probability (as a special case of "measure theory") does not define the concept of "possible". It simply says a probability space includes a set ##\Omega## of elements, which are called "outcomes".

In applications of probability theory, we think of "possible" outcomes, some of which "actually" happen. But "possible" and "actual" don't have formal mathematical definitions. Discussing possibility and actuality can lead to complex philosophical discussions.

- #16

- #17

Dale said:
No, this does not follow. A cause precedes an effect.

Is that the definition? Is retrocausality, to name it such, ruled out?

- #18

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be the event that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)##, which is the probability that team X led at half time, given that they eventually won the match.
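The half-time example can be put in numbers. A sketch with an invented joint distribution (the 4/10, 1/10, 2/10, 3/10 figures are my own assumptions, not data) showing that conditioning on the later event, winning, is perfectly well defined:

```python
from fractions import Fraction

# Assumed joint distribution over (led_at_half_time, won) for team X.
joint = {
    (True, True):   Fraction(4, 10),
    (True, False):  Fraction(1, 10),
    (False, True):  Fraction(2, 10),
    (False, False): Fraction(3, 10),
}

def prob(pred):
    return sum(p for outcome, p in joint.items() if pred(outcome))

P_won = prob(lambda o: o[1])
P_led_and_won = prob(lambda o: o[0] and o[1])

# P(led at half time | won): conditioning on the temporally *later* event.
print(P_led_and_won / P_won)  # 2/3
```

Nothing in the computation cares which event happened first; conditioning only restricts the sample space.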

- #19

Mentor

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

There are lots of different definitions of causality, but I think that the Wikipedia page does a decent job sorting through them.

https://en.wikipedia.org/wiki/Causality

The one that I think is my "default" understanding of causality is a "sufficient cause":

If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y.

Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.

Retrocausality is ruled out under the usual definitions, but it is not hard to change the definitions. It is important to know that it is a different definition so that you don't get confused.

- #20

Science Advisor

Homework Helper

Gold Member

- #21

Define "cause"! The theory of probability does not define it.

- #22

Demystifier said:
Define "cause"! The theory of probability does not define it.

In this context you might say that...

I am aware that, of course, the numbers may work out, but the question remains what "cause" means. I guess it would just mean what the numbers say.

(We are discussing this among other things right now in this thread)

Perhaps there should be the requirement that, to be a cause, it must be possible to bring the cause about freely, whatever that means.

However, Dale wrote:

Dale said:
Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.

I haven't seen a convincing reason why I may not reverse the causal temporal direction.

PeroK said:
E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.

This is certainly a strong argument to me, and @FactChecker made a similar one. So if I am wrong, just calling something a sufficient cause because the numbers are compatible with that notion doesn't follow. So yes, perhaps there should be some kind of definition of "cause".


- #23

PeroK said:
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)## which is the probability that team X led at half time, given that they eventually won the match.

So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? The point being that correlation isn't causation?

Well, that's a good point, but then I do not understand why, and also not when, something is a cause.


- #24

Science Advisor

Homework Helper

Gold Member

entropy1 said:
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? The point being that correlation isn't causation?

No. Correlation is completely different from causation. If a man has a long left leg, he almost certainly also has a long right leg. You wouldn't say that either leg length caused the other leg length.

entropy1 said:
Well, that's a good point, but then I do not understand why, and also not when, something is a cause.

You will need to look for causation in the logic and science of the application subject. Probability and statistics will not give you that.

- #25

Science Advisor

No, there are several possibilities.

1. ##A## could cause ##B## with certainty

2. ##B## could cause ##A## with certainty

3. ##C## could cause ##A## and ##B## with certainty

If (3) is true with ##B## and ##A## not being causes of each other, then manipulating ##B## will not affect ##A##, but manipulating ##C## will affect both ##A## and ##B##.
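The three cases can be distinguished by intervention rather than observation. A hedged simulation sketch of case (3), with a toy deterministic mechanism of my own invention: observationally ##A## and ##B## always agree, yet forcing ##B## leaves ##A## untouched, while forcing ##C## moves both.

```python
import random

def sample(intervene_C=None, intervene_B=None):
    # Common cause C determines both A and B deterministically (case 3).
    C = (random.random() < 0.5) if intervene_C is None else intervene_C
    A = C
    B = C if intervene_B is None else intervene_B
    return A, B

random.seed(0)

# Observationally, A and B always agree, so P(A|B) = P(B|A) = 1 ...
pairs = [sample() for _ in range(1000)]
print(all(a == b for a, b in pairs))  # True

# ... but intervening to force B = True does not move A:
forced = [sample(intervene_B=True)[0] for _ in range(1000)]
print(sum(forced) / 1000)  # still about 0.5

# whereas intervening on the common cause C moves both A and B:
print(all(sample(intervene_C=True) == (True, True) for _ in range(10)))  # True
```

This is the standard "correlation vs. intervention" distinction: the conditional probabilities alone cannot separate cases (1), (2), and (3).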

- #26

entropy1 said:
Suppose P(B|A)=1. Does that mean that P(A|B)=1?

I would think that it does not.

I think that this is similar to asking whether

##A \rightarrow B## means that ##B \rightarrow A##,

which it clearly does not.

Regarding the preceding causality discussion, I for my part agree with @atyy (and others), and also say that the causative idea here looks to me like the fallacy named post hoc ergo propter hoc

(which references the idea that someone has supposed incorrectly that if ##a## follows ##b## then ##b## has definitely caused ##a##, even though ##b## may have merely preceded ##a##, and not caused ##a##).


- #27

entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

(joke) If you rule in tachyons, then you're good to go ##\dots##

- #28

sysprog said:
I think that this is similar to asking whether

##A \rightarrow B## means that ##B \rightarrow A##,

which it clearly does not.

No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder whether C = True results in A = NOT True (or, of course, whether A = True results in C = NOT True).


- #29

entropy1 said:
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

Agreed; that's contraposition, the rule behind modus tollens. But you asked whether P(B|A)=1 meant that P(A|B)=1, and I remarked that it seemed to me similar to supposing that B entails A given that A entails B.

Let's please look at another example ##-## I think that @FactChecker's head-bumping and leg-length examples were perfectly perspicuous; however this illustration might be even more fun:

Does the probability that ##-##

I have it, given that I stole it

equal the probability that ##-##

I stole it, given that I have it ?

I think that if the Police already know that I stole it before finding out that I have it, then that's evidence obtained, but if I'm merely found in possession of something, that tells nothing about whether I stole it.

Also ##-## I might not still have it even if I stole it, and I might still have it even if I didn't steal it ##\dots##
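The stolen-goods example works out numerically with Bayes' theorem; all the priors below are invented purely for illustration:

```python
from fractions import Fraction

# Assumed numbers: P(stole) = 1/100, P(have | stole) = 9/10
# (I might not still have it even if I stole it), and
# P(have | did not steal) = 1/10 (I might have it innocently).
P_stole = Fraction(1, 100)
P_have_given_stole = Fraction(9, 10)
P_have_given_not = Fraction(1, 10)

# Total probability: P(have)
P_have = (P_have_given_stole * P_stole
          + P_have_given_not * (1 - P_stole))

# Bayes: P(stole | have) is nothing like P(have | stole).
P_stole_given_have = P_have_given_stole * P_stole / P_have
print(P_have_given_stole)  # 9/10
print(P_stole_given_have)  # 1/12
```

Mere possession is weak evidence of theft here because the prior P(stole) is small, exactly the asymmetry the example is after.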


- #30

Or the probability that the president of China speaks Mandarin vs. the probability that someone who speaks Mandarin is the president of China.

- #31

entropy1 said:
But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder if C=True results in A=NOT True (or, of course, A=True in C=NOT True).

We can do a short proof here:

To be proven: ##C \rightarrow \neg A##

##1: A \rightarrow B## assumption 1

##2: C \rightarrow \neg B## assumption 2

##3: C## assumption 3

##4: \neg B## modus ponens 2,3

##5: \neg A ## modus tollens 1,4

##6: C \rightarrow \neg A## hypothetical syllogism (1,2),3,(4),5

##-## and there you have it ##\dots##
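The six-step derivation can also be verified by brute force over all eight truth assignments; a minimal sketch:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q false.
    return (not p) or q

# Check that (A -> B) and (C -> not B) together entail (C -> not A),
# i.e. the entailment holds under every truth assignment.
entailed = all(
    implies(implies(A, B) and implies(C, not B), implies(C, not A))
    for A, B, C in product([False, True], repeat=3)
)
print(entailed)  # True
```

Exhaustive checking works here because propositional validity over three variables needs only ##2^3 = 8## cases.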


- #32

entropy1 said:
No, ##A \rightarrow B## is equivalent to ##NOT(B) \rightarrow NOT(A)##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow NOT(B)##, then I wonder if C=True results in A=NOT True (or, of course, A=True in C=NOT True).

I always wondered why probability and logic are in the same forum. Now I see why.

- #33

sysprog said:
We can do a short proof here:

To be proven: ##C \rightarrow \neg A##

##1: A \rightarrow B## assumption 1

##2: C \rightarrow \neg B## assumption 2

##3: C## assumption 3

##4: \neg B## modus ponens 2,3

##5: \neg A## modus tollens 1,4

##6: C \rightarrow \neg A## hypothetical syllogism (1,2),3,(4),5

##-## and there you have it ##\dots##

So does that mean that one of assumptions (1) or (2) gets "reversed"? Can we then speak of retrocausality (reversed causality)?

- #34

Mentor

entropy1 said:
Can we then speak of retrocausality? (reversed causality?)

Please do not ask this question again without a clear and exact definition of retrocausality, preferably one from the professional literature.

To all other participants: please do not respond to this question without such a definition.

- #35

Mentor

For any future threads on this topic, please start with a professional scientific reference that can serve as the basis of discussion.
