Can you calculate the probability of a complex logic expression?

yeet991only
TL;DR Summary
Any logical expression can be simplified to AND, NOT, OR. But these are the same as intersection, negation, and union in probability.
Suppose this expression: ((A ∨ ~B) ∨ (C -> D))
This simplifies to A + ~B + ~C + D, using Boolean algebra.
Now suppose I know that this expression is true: p( ((A ∨ ~B) ∨ (C -> D)) ) = 1,
and I also know that p(A) = 0.3, p(B) = 0.99, p(C) = 0.92.
But the probability of the expression is actually p(A ∪ ~B ∪ ~C ∪ D) = 1.
So can you calculate D from that? (Normally you could, but the question is whether combining logic and probability like this is valid.)
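
One quick way to sanity-check that simplification is to enumerate all 16 truth assignments. Here is a minimal Python sketch (the function names are just illustrative) that does exactly that:

```python
from itertools import product

def implies(p, q):
    # material implication: C -> D is the same as (not C) or D
    return (not p) or q

def original(a, b, c, d):
    return (a or (not b)) or implies(c, d)

def simplified(a, b, c, d):
    return a or (not b) or (not c) or d

# Compare the two forms on all 16 truth assignments.
assert all(original(*v) == simplified(*v)
           for v in product([False, True], repeat=4))
print("equivalent on every assignment")
```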

Also, clearly (C -> D) isn't the kind of arrow you would use in a Bayesian network or similar, because it reduces to p(~C ∨ D), whereas in such graphs the arrow goes with Bayes' formula: p(C | D) = p(D|C) * p(C) / p(D).
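
To make that distinction concrete, here is a small sketch with a made-up joint distribution over C and D (the numbers are invented for illustration), comparing p(~C ∨ D), the probability of the material implication, with p(D | C), the conditional probability a Bayesian-network edge would encode:

```python
# Toy joint distribution over (C, D); the numbers are invented for illustration.
joint = {
    (True,  True):  0.45,  # C and D
    (True,  False): 0.05,  # C and not D
    (False, True):  0.20,  # not C and D
    (False, False): 0.30,  # not C and not D
}

p_C = sum(p for (c, d), p in joint.items() if c)
p_implication = sum(p for (c, d), p in joint.items() if (not c) or d)  # p(~C or D)
p_D_given_C = sum(p for (c, d), p in joint.items() if c and d) / p_C   # p(D | C)

print(f"p(~C or D) = {p_implication:.2f}")  # 0.95
print(f"p(D | C)   = {p_D_given_C:.2f}")    # 0.90
```

The two quantities differ in general, which is the point: the logical arrow and the graphical arrow are not the same object.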
 
yeet991only said:
TL;DR Summary: Any logical expression can be simplified to AND, NOT, OR. But these are the same as intersection, negation, and union in probability.

So can you calculate D from that?
No. From just the information given, we don't know whether the events depend on one another. Without knowing that they are independent or mutually exclusive, or knowing their conditional probabilities, ##p(D)## cannot be calculated from the given information.
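
One way to see this is to build two explicit joint distributions that both respect the given marginals and make the expression certain, yet assign different values to ##p(D)##. The sketch below assumes, purely for the sake of constructing examples, that A, B, C are independent (the problem does not say so):

```python
from itertools import product

pA, pB, pC = 0.3, 0.99, 0.92

def p_abc(a, b, c):
    # A, B, C treated as independent here only to build concrete examples.
    return ((pA if a else 1 - pA) *
            (pB if b else 1 - pB) *
            (pC if c else 1 - pC))

def expression(a, b, c, d):
    return a or (not b) or (not c) or d  # (A or ~B) or (C -> D)

joints = {
    "D always true":
        {(a, b, c, True): p_abc(a, b, c)
         for a, b, c in product([False, True], repeat=3)},
    "D = ~(A or ~B or ~C)":
        {(a, b, c, not (a or not b or not c)): p_abc(a, b, c)
         for a, b, c in product([False, True], repeat=3)},
}

for name, joint in joints.items():
    p_expr = sum(p for k, p in joint.items() if expression(*k))
    p_D = sum(p for (a, b, c, d), p in joint.items() if d)
    print(f"{name}: p(expression) = {p_expr:.3f}, p(D) = {p_D:.5f}")
```

Both joints reproduce p(A) = 0.3, p(B) = 0.99, p(C) = 0.92 and give p(expression) = 1, yet p(D) comes out as 1 in one case and about 0.638 in the other, so p(D) is not pinned down by the stated information.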
 
yeet991only said:
TL;DR Summary: Any logical expression can be simplified to AND, NOT, OR. But these are the same as intersection, negation, and union in probability.
A related area is probabilistic logic programming: https://en.m.wikipedia.org/wiki/Probabilistic_logic_programming
 
yeet991only said:
TL;DR Summary: Any logical expression can be simplified to AND, NOT, OR. But these are the same as intersection, negation, and union in probability.
Any logical expression can also be expressed using NAND only, or NOR only, and, if one allows ##c\to\text{false}##, using implication only, since ##c\to\text{false}## is ##\neg c##.
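
As a quick illustration of that functional completeness (a sketch, not anything from the quoted post), NOT, AND, and OR can each be built from NAND alone and checked exhaustively:

```python
from itertools import product

def nand(p, q):
    return not (p and q)

def not_(p):
    return nand(p, p)

def and_(p, q):
    return nand(nand(p, q), nand(p, q))

def or_(p, q):
    return nand(nand(p, p), nand(q, q))

# Check the NAND-built connectives against Python's built-ins.
for p, q in product([False, True], repeat=2):
    assert not_(p) == (not p)
    assert and_(p, q) == (p and q)
    assert or_(p, q) == (p or q)
print("NAND alone reproduces NOT, AND, OR")
```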

yeet991only said:
Suppose this expression: ((A ∨ ~B) ∨ (C -> D))
That expression can be written as ##((A\vee \neg B) \vee (\neg C \vee D))##, and now one can dispense with the parentheses, yielding ##(A\vee \neg B \vee \neg C \vee D)## -- and also ##((A \vee \neg B \vee \neg C) \vee D)##. I'll use that final form in just a bit.
yeet991only said:
Now suppose I know that this expression is true: p( ((A ∨ ~B) ∨ (C -> D)) ) = 1,
and I also know that p(A) = 0.3, p(B) = 0.99, p(C) = 0.92.
But the probability of the expression is actually p(A ∪ ~B ∪ ~C ∪ D) = 1.
So can you calculate D from that? (Normally you could, but the question is whether combining logic and probability like this is valid.)
One obvious solution is that D is always true, in which case P(D) = 1. There are other solutions. Note that it doesn't matter what D is in the case that ##(A \vee \neg B \vee \neg C)## is true. However, we must have D being true whenever ##(A \vee \neg B \vee \neg C)## is false. Note that D always being true satisfies this requirement, but so does ##D = \neg(A \vee \neg B \vee \neg C) = (\neg A \wedge B \wedge C)##. Given the cited probabilities, that D is true (but not necessarily always true) seems quite likely.
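
To put a rough number on that last remark: if one additionally assumes A, B, C are independent (an assumption the problem does not state), then D must hold whenever ##(A \vee \neg B \vee \neg C)## fails, which gives a lower bound on p(D):

```python
pA, pB, pC = 0.3, 0.99, 0.92

# Assuming independence of A, B, C (not given in the problem):
p_rest_false = (1 - pA) * pB * pC  # p(~A and B and C)
print(f"p(A or ~B or ~C)    = {1 - p_rest_false:.5f}")  # about 0.36244
print(f"lower bound on p(D) = {p_rest_false:.5f}")      # about 0.63756
```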
 