Suppose P(B|A)=1. Does that mean that P(A|B)=1?
entropy1 said:
Ah, A can still be a subset of B. I have to think in terms of sets here. Then it does not hold.
If the sets are the same size, it holds.
So suppose P(A)=0.5 and P(B)=0.5. Then it holds, right?
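A concrete counterexample for the subset case, using a fair die as an assumed setting: take ##A = \{1\}## and ##B = \{1, 2\}##. Then ##A \subset B##, so

$$P(B|A) = 1, \qquad P(A|B) = \frac{P(A)}{P(B)} = \frac{1/6}{2/6} = \frac{1}{2} \neq 1.$$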
PeroK said:
Do you know Bayes' Theorem?

##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?
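Plugging in the numbers proposed above as a quick check: with ##P(B|A)=1## and ##P(A)=P(B)=0.5##,

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)} = \frac{1 \times 0.5}{0.5} = 1.$$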
entropy1 said:
##P(A|B)=\frac{P(B|A)P(A)}{P(B)}##. So if P(A)=P(B) then it holds, right?

It's hard to argue with that.
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), does it then follow that P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?

Saying ##A = B## is simpler.
entropy1 said:
In other words: A is the cause of B, but B is also the cause of A?

No, this does not follow. A cause precedes an effect.
PeroK said:
Saying ##A = B## is simpler.

Since these may be sets, there may be points (or events) with zero probability that make the sets different.
FactChecker said:
Since these may be sets, there may be points (or events) with zero probability that make the sets different.

In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.
PeroK said:
In probability theory, the universal set is the sample space of possible events. You wouldn't normally consider the sets ##A, B## to extend beyond the sample space.

That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.
FactChecker said:
That is a reasonable idea, but I do not agree that it can be assumed. In fact, it is not always completely known what is possible and what is not. Probability theory works in the common, more general case that includes impossible events.

Yes, you're right. To be precise, ##A = B## "almost surely" or "almost everywhere": i.e. treating sets as equivalent if they differ only by a set of probability measure zero.
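An illustration of almost-sure equality, assuming the uniform measure on ##[0,1]## as the setting: take ##A = [0, \tfrac12]## and ##B = [0, \tfrac12)##. The two sets differ only by the single point ##\{\tfrac12\}##, which has measure zero, so

$$P(A \triangle B) = P(\{\tfrac12\}) = 0 \quad\Rightarrow\quad P(A|B) = P(B|A) = 1, \text{ even though } A \neq B.$$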
etotheipi said:
I'm awfully confused... A probability space as defined by Kolmogorov is a triple ##(\Omega, \mathcal{F}, P)##, where ##\Omega## is the set of all possible outcomes, ##\mathcal{F}## is a ##\sigma##-algebra of subsets of ##\Omega## (the events), and ##P## is a measure on ##\mathcal{F}## with ##P(\Omega) = 1##.
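The simplest concrete instance is a fair coin toss:

$$\Omega = \{H, T\}, \qquad \mathcal{F} = \{\emptyset, \{H\}, \{T\}, \Omega\}, \qquad P(\{H\}) = P(\{T\}) = \tfrac{1}{2}.$$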
Dale said:
No, this does not follow. A cause precedes an effect.

Is that the definition? Is retrocausality, to name it such, ruled out?
entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be the event that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)##, which is the probability that team X led at half time, given that they eventually won the match.
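This is easy to check numerically. A minimal Monte Carlo sketch in Python, assuming a made-up match model (the probabilities below are hypothetical illustration values, not from the thread): conditioning on the later event ##A## (winning) still yields a perfectly well-defined probability for the earlier event ##B## (leading at half time).

```python
import random

def simulate_match():
    """One hypothetical match: a half-time lead influences the final result."""
    led_at_half = random.random() < 0.5       # B: team led at half time
    p_win = 0.7 if led_at_half else 0.3       # assumed win probabilities
    won = random.random() < p_win             # A: team won the match
    return led_at_half, won

trials = 100_000
wins = led_and_won = 0
for _ in range(trials):
    led, won = simulate_match()
    if won:                                   # restrict to the cases where A occurs
        wins += 1
        led_and_won += led

# P(B|A): probability of the half-time lead, given the (later) win.
print(led_and_won / wins)  # approx. 0.7 under these assumed numbers
```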
entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

There are lots of different definitions of causality, but I think that the Wikipedia page does a decent job sorting through them.
entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

Retrocausality is ruled out under the usual definitions, but it is not hard to change the definitions. It is important to know that it is a different definition so that you don't get confused.
Define "cause"! The theory of probability does not define it.entropy1 said:So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), is then P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?
Demystifier said:
Define "cause"! The theory of probability does not define it.

In this context you might say that if the occurrence of (event) B implies the occurrence of A, then B is a sufficient cause of A?
Dale said:
Note the word "subsequent" in the definition. If ##P(A|B)=1## and ##P(B|A)=1## then they are both necessary and sufficient for each other. But the word "subsequent" makes it so that only one of them can satisfy the definition of a sufficient cause. That same one will also, in this case, satisfy the definition of a necessary cause. But they cannot cause each other.

I haven't seen a convincing reason why I may not reverse the causal temporal direction.
PeroK said:
E.g. ##A## could be the event that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)##, which is the probability that team X led at half time, given that they eventually won the match.

This is certainly a strong argument to me, and @FactChecker made a similar one. So if I am wrong, it doesn't follow that something is a sufficient cause just because the numbers are compatible with that notion. So yes, perhaps there should be some kind of definition of "cause".
PeroK said:
It's not about retrocausality. Some people misinterpret ##B|A## as meaning that event ##A## happens first and then event ##B## second. It doesn't imply that. It simply means that we are looking at the cases where ##B## occurs restricted to the cases where ##A## occurs. ##B## could come before ##A##. E.g. ##A## could be the event that team X won the match and ##B## could be the event that team X led at half time. You still have ##P(B|A)##, which is the probability that team X led at half time, given that they eventually won the match.

So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? That correlation isn't causation?
entropy1 said:
So you claim that even if P(B|A)=1, that still doesn't mean that A and B have a causal relationship? That correlation isn't causation?

No. Correlation is completely different from causation. If a man has a long left leg, he almost certainly also has a long right leg. You wouldn't say that either leg length caused the other leg length.
You will need to look for causation in the logic and science of the application subject. Probability and statistics will not give you that.

Well, that's a good point, but then I do not understand why something is a cause, nor when.
entropy1 said:
So if A is the cause of B, and B is the effect of A, and P(A)=P(B)=0.5, and P(B|A)=1 ("the probability of B given A is 1"), does it then follow that P(A|B)=1 ("the probability of A given B is 1")? In other words: A is the cause of B, but B is also the cause of A?

entropy1 said:
Suppose P(B|A)=1. Does that mean that P(A|B)=1?

I would think that it does not.
entropy1 said:
Is that the definition? Is retrocausality, to name it such, ruled out?

(joke) If you rule in tachyons, then you're good to go ##\dots##
sysprog said:
I think that this is similar to asking whether ##A \rightarrow B## means that ##B \rightarrow A##, which it clearly does not.

No, ##A \rightarrow B## is equivalent to ##\neg B \rightarrow \neg A##.
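A quick exhaustive check of that equivalence, sketched in Python with material implication encoded as `not p or q`:

```python
# Verify that A -> B is logically equivalent to (not B) -> (not A)
# by enumerating every truth assignment.

def implies(p: bool, q: bool) -> bool:
    return (not p) or q  # material implication

for A in (False, True):
    for B in (False, True):
        assert implies(A, B) == implies(not B, not A)

print("A -> B and (not B) -> (not A) agree on all four assignments")
```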
entropy1 said:
No, ##A \rightarrow B## is equivalent to ##\neg B \rightarrow \neg A##.

Agree. That's the contraposition rule. But you asked whether P(B|A)=1 meant that P(A|B)=1, and I remarked that it seemed to me similar to supposing that B entails A given that A entails B.
entropy1 said:
But actually, if ##A \rightarrow B## AND ##C \rightarrow \neg B##, then I wonder if C=True results in A=False (or, of course, A=True results in C=False).

We can do a short proof here:
entropy1 said:
No, ##A \rightarrow B## is equivalent to ##\neg B \rightarrow \neg A##.

But actually, if ##A \rightarrow B## AND ##C \rightarrow \neg B##, then I wonder if C=True results in A=False (or, of course, A=True results in C=False).

I always wondered why probability and logic are in the same forum. Now I see why.
sysprog said:
We can do a short proof here:
To be proven: ##C \rightarrow \neg A##
##1:\ A \rightarrow B## (assumption 1)
##2:\ C \rightarrow \neg B## (assumption 2)
##3:\ C## (assumption 3)
##4:\ \neg B## (modus ponens, 2, 3)
##5:\ \neg A## (modus tollens, 1, 4)
##6:\ C \rightarrow \neg A## (conditional proof, discharging assumption 3)
##-## and there you have it ##\dots##

So does that mean that one of (1) or (2) gets "reversed"? Can we then speak of retrocausality (reversed causality)?
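The proof's conclusion can also be confirmed by brute force. A small Python sketch, again encoding implication as `not p or q`, checks that ##((A \rightarrow B) \land (C \rightarrow \neg B)) \rightarrow (C \rightarrow \neg A)## holds under all eight truth assignments:

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    return (not p) or q  # material implication

# Enumerate all eight assignments of (A, B, C).
for A, B, C in product((False, True), repeat=3):
    premises = implies(A, B) and implies(C, not B)
    assert implies(premises, implies(C, not A))

print("C -> (not A) follows from the premises in every case")
```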
entropy1 said:
Can we then speak of retrocausality (reversed causality)?

Please do not ask this question again without a clear and exact definition of retrocausality. Preferably one from the professional literature.