# Conditional Probability, Independence, and Dependence

Clifford Engle Wirt
(Mentor note: link removed as not essential to the question.)

The problem is: what is relevance anyhow?

My questions are these: did I get the math right in the following? Is there a better, more acceptable way to lay out the sample space Ω and the two events F and E? And apart from the math, do people share the intuition I articulate below?

A pile of apples: The sample space Ω comprises a pile of 16 apples (identified by the numerals 1...16), 8 of which are red (indicated by the letter 'r') and 8 of which are yellow (indicated by the letter 'y'). All 8 of the red apples are, unfortunately, wormy (indicated by the letter 'w'). 4 of the yellow apples are also wormy, and 4 are, as yet, not wormy (indicated by the absence of a 'w').

Ω = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8rw, a9yw, a10yw, a11yw, a12yw, a13y, a14y, a15y, a16y }

E is the event 'a red apple is picked up from the pile':

E = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8rw }

F is the event 'a wormy apple is picked up from the pile':

F = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8r,a9yw, a10yw, a11yw, a12yw}

The probability that a wormy apple will be drawn from the pile is |F|/|Ω| = 12/16 = 3/4.

The conditional probability that the apple drawn from the pile will be wormy given that it is red is 1, as can be seen from the following:

P( F | E ) = P( E F ) / P(E)

Now the intersection E F is:

E F = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8rw}

and P( E F ) = |E F|/|Ω| = 8/16 = 1/2.

and P(E) = |E|/|Ω| = 8/16 = 1/2.

So P( E F ) / P(E) = (1/2)/(1/2) = 1. So P( F | E ) = 1.

But two distinct events are independent of one another if and only if

P(E F) = P(E) * P(F)

Here P(E F) = 1/2 while P(E) * P(F) = (1/2) * (3/4) = 3/8, so in this case E and F are not independent events. Should the probability of F (a wormy apple is drawn) given E increase over the probability of F given just a draw from the pile, that would be a sufficient condition for F's having a dependency upon E. This probability does increase, from 3/4 to 1. So F has a dependency upon E.
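The numbers above can be checked by brute enumeration. The following is a minimal sketch; the tuple encoding of apples as (id, color, wormy) is my own labeling, not part of the post:

```python
from fractions import Fraction

# The pile: 8 red wormy, 4 yellow wormy, 4 yellow non-wormy apples.
omega = [(i, 'r', True) for i in range(1, 9)] \
      + [(i, 'y', True) for i in range(9, 13)] \
      + [(i, 'y', False) for i in range(13, 17)]

E = {a for a in omega if a[1] == 'r'}   # event: a red apple is drawn
F = {a for a in omega if a[2]}          # event: a wormy apple is drawn

def P(event):
    """Probability under a uniform draw from the pile."""
    return Fraction(len(event), len(omega))

p_F_given_E = P(E & F) / P(E)
print(P(F))                     # 3/4
print(p_F_given_E)              # 1
print(P(E & F) == P(E) * P(F))  # False -> E and F are dependent
```

This assumes, as Stephen Tashi notes below in the thread, that each apple is equally likely to be drawn.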

This situation mirrors, I submit, two features of the following proposition, where 'this apple' refers to an apple drawn from the pile:

IF this apple is red, THEN it is wormy.

A necessary condition for the truth of this IF THEN proposition is that the conditional probability of the consequent be 1 given the antecedent. Check. Likewise, a necessary and sufficient condition for the relevance of the antecedent to the consequent is (so I claim) that the probability of the consequent increase given the antecedent. Check. (For why it should matter that the antecedent be relevant to the consequent, take a look at the examples provided in the link above.)

What grounds can be given for this (maybe dubious) claim that an increase in probability provides a necessary and sufficient condition for the relevance of the antecedent to the consequent? Let's see what happens when such an increase fails to occur. Again, given that I have absolutely zero natural talent in math, I would be very much interested in knowing if I got the following math right.

Suppose now that all of the apples in the pile, red or yellow, have become wormy. (These things are known to happen.) The sample space now looks like this:

Ω = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8rw, a9yw, a10yw, a11yw, a12yw, a13yw, a14yw, a15yw, a16yw }

E is as before. F is now identical with Ω -- all the apples are now wormy. And now E no longer increases the probability of F. P(F) is 1; P(F | E) is also 1. E and F are now independent events, as can be seen from the following:

P( F | E ) = 1, as before.

P(F) is now 1, since all the apples are now wormy. P(E) is 1/2, as before.

and P( E F ) = |E F|/|Ω| = 8/16 = 1/2 as before.

But again, two distinct events are independent of one another if and only if

P(E F) = P(E) * P(F)

1/2 = 1/2 * 1.

So E and F are now independent events.
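The all-wormy scenario can be checked the same way. Again a sketch, with the same invented (id, color, wormy) encoding:

```python
from fractions import Fraction

# All 16 apples are now wormy; the colors are unchanged.
omega = [(i, 'r', True) for i in range(1, 9)] \
      + [(i, 'y', True) for i in range(9, 17)]

E = {a for a in omega if a[1] == 'r'}
F = {a for a in omega if a[2]}   # F is now all of omega

def P(event):
    return Fraction(len(event), len(omega))

print(P(F))                      # 1
print(P(E & F) / P(E))           # 1, same as before
print(P(E & F) == P(E) * P(F))   # True -> E and F are independent
```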

Now once all the apples in the pile, red or yellow, are wormy, it seems (to me, at least) positively weird to say:

IF this apple is red, THEN it is wormy

because now the possible redness of the apple is no longer relevant to its possible worminess. (The Relevant Logician would say that the proposition is false because of this; the Classical Logician would say the proposition may be a bit weird, but still true.) The possible redness of the apple no longer has anything to do with its possible worminess. -- Or rather, this is my strong intuition. If anyone has a different intuition regarding this, I would be keenly interested to know.

This (at least alleged) lack of relevance of the antecedent to the consequent mirrors the independence of E (the apple is red) and F (the apple is wormy) considered as events. E no longer has 'anything to do' with F because E and F are independent. I take this as an argument in favor of the claim that, as regards IF THEN propositions, a necessary condition for the relevance of the antecedent to the consequent consists in an increase, given the antecedent, in the probability of the consequent (which has to increase to 1) over and above what the probability is given just the original sample space. As for this being also a sufficient condition, I don't think it can be denied that if E increases the probability of F, E is relevant to F.

Again, if I have made a mistake in the math, I would be very interested to know. Also, I would be very interested to know if people do not share my intuitions, or have trouble deciding what to think because the intuitions are way off in left field.

Clifford Engle Wirt said:
F is the event 'a wormy apple is picked up from the pile':

F = { a1rw, a2rw, a3rw, a4rw, a5rw, a6rw, a7rw, a8r,a9yw, a10yw, a11yw, a12yw}
Fix the typo: "a8r" should be "a8rw".

The probability that a wormy apple will be drawn from the pile is |F|/|Ω| = 12/16 = 3/4.
Provided we assume each apple in ##\Omega## has an equal probability of being drawn.

A necessary condition for the truth of this IF THEN proposition is that the conditional probability of the consequent be 1 given the antecedent.
Are you defining the truth value of an if...then statement in that manner? It isn't the standard definition.

Check. Likewise, a necessary and sufficient condition for the relevance of the antecedent to the consequent is (so I claim) that the probability of the consequent increase given the antecedent.
The standard treatment of mathematical logic has no formal definition for "relevance". You could try to develop a system of mathematical logic where "relevance" has a formal definition.

Suppose now that all of the apples in the pile, red or yellow, have become wormy. (These things are known to happen.)

Now once all the apples in the pile, red or yellow, are wormy, it seems (to me, at least) positively weird to say:

IF this apple is red, THEN it is wormy

because now the possible redness of the apple is no longer relevant to its possible worminess.

To repeat, you would need to define "relevant" in some precise manner in order to formulate any mathematical proof.
Also, I would be very interested to know if people do not share my intuitions, or have trouble deciding what to think because the intuitions are way off in left field.

In interpreting common speech, most people share the intuition that an if...then statement implies some sort of causal connection between the if-part and the then-part. However, mathematicians are familiar with the disadvantages of taking that view in mathematical logic.

In the first place, mathematical logic is often applied in contexts where no probabilities are assigned to events. If we consider a theorem in geometry that begins "If ABC is a triangle..." then there is no mention of any probability associated with ABC being a triangle. Nothing is said about picking a figure at random from some set of figures.

In the second place, mathematics makes explicit use of the quantifiers "for each" and "there exists". A statement such as "if the apple is red then it is wormy" implicitly expresses the idea "For each apple A, if A is red then A is wormy." The definition of the falsity of a statement quantified by "For each", is that it is false when and only when "there exists" an example where the statement is false. So if we can find apple Ax that is red and not wormy, the statement "For each apple A, if A is red then A is wormy" is false.

This view of the falsity of statements of the form "for each..., if...then" has the unintuitive consequence that an example where A is not a red apple must be considered to make the statement "If A is red then A is wormy" true. If this convention were not observed, the statement "For each apple A, if A is red then A is wormy" could be proven false by giving an example of a not-red apple that was not wormy.
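The quantifier point above can be sketched in a few lines. The (color, wormy) tuples are my own illustrative encoding; note in particular the last case, where the vacuous-truth convention shows up directly:

```python
# "For each apple A, if A is red then A is wormy" is false exactly when
# there exists a red, non-wormy apple.
def claim_holds(apples):
    return all(wormy for (color, wormy) in apples if color == 'r')

pile = [('r', True), ('r', True), ('y', False)]
print(claim_holds(pile))            # True: no red counterexample

pile.append(('r', False))           # one red, non-wormy apple
print(claim_holds(pile))            # False: the universal statement is falsified

# A pile with no red apples makes the claim vacuously true.
print(claim_holds([('y', False)]))  # True
```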

In common speech, the sentence "If A is red then A is wormy" might be considered "meaningless" when A is not red. However, mathematical logic deals with "statements". These are assertions that take exactly one of the values "True" or "False".

FactChecker
I agree with @Stephen Tashi 's comments. My thoughts are probably just repeating some of his.

You are making this complicated by mixing 3 concepts:
1) probabilities, which can be any real from 0 to 1
2) logic, which is only {T,F} or {0,1}. You are also mixing probabilistic dependence with logical "If -- then" statements.
3) "relevance", which is not formally defined, as far as I know, but I would accept "relevance" as being the same as "probabilistic dependence".

Those mixtures make it difficult to decide if your statements are correct. I think they might be essentially correct.

If you are going to mix those concepts, you should keep in mind these facts:
1) probabilistic dependence implies that knowing A changes the probability of B. That change can be an increase or a decrease. I think that is the same as "relevance".
2) The logic statement "If A then B" is logically equivalent to "not(A) or B". So not(A) implies nothing about B. That is very different from a probabilistic statement that A and B are dependent. The statement "If A then B" certainly implies that A and B are dependent, but the converse is not necessarily true.
3) The concepts of conditional probability and logic have been worked on for a hundred years to get where they are today. So you should be very precise and formal when you try to mix them. Any less formal statement will probably only be partially correct.
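The classical equivalence cited in point 2) can be tabulated directly; a quick sketch:

```python
from itertools import product

# Truth table: "If A then B" (material implication) has the same truth
# value as "not(A) or B" -- it is False only when A is True and B is False.
table = {(A, B): (not A) or B for A, B in product([True, False], repeat=2)}
for (A, B), v in table.items():
    print(A, B, v)
```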

Clifford Engle Wirt
General remark: all of the points you make are very useful to me. They are just what the doctor ordered.

Stephen Tashi said:
In interpreting common speech, most people share the intuition that an if...then statement implies some sort of causal connection between the if-part and the then-part. However, mathematicians are familiar with the disadvantages of taking that view in mathematical logic.

I have found that when people (or at least my tech colleagues) encounter the paradoxes...or maybe just weirdnesses ... of Material Implication, they say 'Well, there has to be a causal connection between the antecedent and the consequent, and that causal connection is what the relevance of the antecedent to the consequent consists in.' But at least in the case of this one IF THEN proposition ("IF this apple is red, THEN it is wormy") the relevance of the antecedent to the consequent (when it is relevant, as it is at least when all the red apples but only some of the yellow apples are wormy) is clearly not causal. The red color of the apple does not cause its wormy condition.

Stephen Tashi said:
However, mathematicians are familiar with the disadvantages of taking that view in mathematical logic.

In the first place, mathematical logic is often applied in contexts where no probabilities are assigned to events. If we consider a theorem in geometry that begins "If ABC is a triangle..." then there is no mention of any probability associated with ABC being a triangle. Nothing is said about picking a figure at random from some set of figures.

This is a very good point. Clearly no demand can be made that in all IF THEN propositions the relation between antecedent and consequent be describable using probability theory. But I was attempting to use ninth-grade probability theory to illuminate the notion of relevance as it pertains to Relevant Implication. That theory won't be of much use for Material Implication or (I suspect) Strict Implication. I should have made this much, much clearer.

So far in my reading I have gained the definite impression that mathematicians and just about everyone else find Classical Logic and Material Implication perfectly fine for math. (There seem to be just occasional murmurings of discontent regarding this.) Mares, in his book Relevant Logic: A Philosophical Interpretation (chapter 10.7), argues that Classical Logic is fine when restricted to math. (I will count actually understanding his argument as a milestone in fully digesting his book.) In Philosophy Of Logics (p. 38), Susan Haack suggests that different formalizations (Material Implication, Strict Implication, Relevant Implication) of 'If' as that term is understood in common sense and ordinary language are appropriate for different domains. Material Implication may be a formalization suitable for math, but in other domains it may not be so suitable. I am not going to win a Nobel Prize for physics or any prize in astronomy should I argue: "IF Cliff lives in Houston, Texas, THEN the Earth has just one moon [this IF THEN proposition is true in Classical Logic because Material Implication is truth-functional]; Cliff lives in Houston, Texas; Therefore, the Earth has just one moon." -- Or, if this is an acceptable argument in physics and astronomy, I demand my Nobel prize!

Stephen Tashi said:
The standard treatment of mathematical logic has no formal definition for "relevance". You could try to develop a system of mathematical logic where "relevance" has a formal definition.

Stephen Tashi said:
To repeat, you would need to define "relevant" in some precise manner in order to formulate any mathematical proof.

Well, there has been a certain amount of mathematical work (obviously not completely digested by me -- I am just learning) done on Relevant Implication. Mares (p. 28) follows Routley and Meyer in postulating a 3-place relation R connecting situations to state the truth condition for relevant implication. A situation is a piece of the world, not necessarily the complete world. I am currently suffering under the delusion that I (partly) understand this truth condition:
'A -> B' is true at a situation s if and only if for all situations x and y if Rsxy and 'A' is true at x, then 'B' is true at y.
To venture an example that I think will make sense of this, 'IF the doorbell is ringing in my apartment, THEN someone or something is pressing the button outside' is true at the situation s comprising the space just outside my apartment, the wiring of the doorbell, and the inside of my apartment if and only if for all inside-my-apartment situations x and all just-outside-my-apartment situations y, there is some relation R connecting s, x and y such that when the doorbell is ringing inside my apartment, someone or something is pressing the button outside. The problem is that the mathematicians seem perfectly happy to leave R uninterpreted. It doesn't seem to matter to them what this relation is. What matters is that there is (maybe) some such relation. So as Mares puts it (p. 38) we end up with a pile of mathematics that doesn't really give a meaning to relevant inferences. Mares attempts to remedy this situation by appealing to the notion of information -- R consists in information links. Now as we all know the notion of information relies pretty heavily on probability...
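The quantifier structure of that truth condition can be played with in a toy finite model. Everything below (the situations, the ternary relation R, the valuation) is invented purely for illustration; it is not Mares's or Routley-Meyer's model, just a sketch of the "for all x, y: if Rsxy and A is true at x, then B is true at y" clause:

```python
situations = ['s', 'x1', 'x2', 'y1', 'y2']
R = {('s', 'x1', 'y1'), ('s', 'x2', 'y2')}   # hypothetical ternary relation

true_at = {
    'A': {'x1', 'x2'},   # situations where the antecedent holds
    'B': {'y1', 'y2'},   # situations where the consequent holds
}

def implies_at(s, A, B):
    """'A -> B' is true at s iff for all x, y: Rsxy and A-at-x imply B-at-y."""
    return all(not ((s, x, y) in R and x in true_at[A]) or y in true_at[B]
               for x in situations for y in situations)

ok = implies_at('s', 'A', 'B')       # True in this model
true_at['B'].discard('y2')           # break the consequent at one output situation
broken = implies_at('s', 'A', 'B')   # now False
print(ok, broken)
```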

To arrive at a more or less precise interpretation of R is to come up with a notion of relevance as this pertains to Relevant Implication, which comprises a certain subset of IF THEN propositions. Maybe we shouldn't assume that there is just one relation that makes the antecedent relevant (when it is) to the consequent. If the Aha Erlebnis I experienced when thinking about red and yellow apples in relation to Relevant Implication has any validity at all (maybe I am just suffering under a delusion), that (real or alleged) "insight" might still apply to just that one proposition (IF this apple is red THEN it is wormy). So in answer to:
Stephen Tashi said:
Are you defining the truth value of an if...then statement in that manner? It's isn't the standard definition.
I will say, 'no, not every IF THEN statement -- just this one IF THEN statement.'

Stephen Tashi said:
In the second place, mathematics makes explicit use of the quantifiers "for each" and "there exists". A statement such as "if the apple is red then it is wormy" implicitly expresses the idea "For each apple A, if A is red then A is wormy." The definition of the falsity of a statement quantified by "For each", is that it is false when and only when "there exists" an example where the statement is false. So if we can find apple Ax that is red and not wormy, the statement "For each apple A, if A is red then A is wormy" is false.
If I understand this point correctly, it poses a serious threat to my (real or alleged) "insight." It is very intuitive to think that 'IF this apple is red THEN it is wormy' is true if and only if 'for each apple A, if A is red then A is wormy' is true. So if the first statement were to be shown to be false because the antecedent is no longer relevant to the consequent, the for-each statement would also have to be false. But the for-each statement could be proven false only if one red non-wormy apple were found. It seems totally bizarre to make the for-each statement false by finding that all the yellow apples are wormy (thereby making the IF red THEN wormy statement false by virtue of the antecedent becoming irrelevant ((allegedly)) to the consequent). So I would have to counter-intuitively deny the iff.

At the moment I am not totally sure how to deal with this point. But this is the sort of challenging point that I was looking for. Thank you very much.

FactChecker's comments are also highly useful to me.

FactChecker said:
You are making this complicated by mixing 3 concepts:
1) probabilities, which can be any real from 0 to 1
2) logic, which is only {T,F} or {0,1}. You are also mixing probabilistic dependence with logical "If -- then" statements.
3) "relevance", which is not formally defined, as far as I know, but I would accept "relevance" as being the same as "probabilistic dependence".

Let me take 3) first. I don't know if this counts as a formal definition, but a semantics stating a truth condition has been provided for Relevant Implication (see above, and Mares, p. 28). This semantics posits a 3-place relation, R. What R is is left uninterpreted, something which leaves me a bit baffled why this is called a 'semantics.' Apparently some writers have wanted to interpret it as an information channel; Mares wants to interpret it as an 'information link' (which strikes me, at least currently, as a bit vague); I am entertaining the notion that it is to be interpreted in terms of probabilistic dependence and is thereby linked to the notion of information in ways I have not yet worked out.

1) and 2): I am trying to analyze a subset of deductive logic (Relevant Implication, vs. Material Implication and Strict Implication) in terms of probabilistic dependence, much as has been done with inductive logic. In inductive logic an inference is valid if the conditional probability of the conclusion, given the premises, is greater than that of the negation of the conclusion (see Graham Priest, Logic: A Very Short Introduction, p. 83), but still falls short of 1. Were the conditional probability of the conclusion 1, it would be a deductively valid argument. (See Susan Haack's suggestion ((Philosophy Of Logics, p. 12)) that there are not two different kinds of arguments, inductive and deductive, but two different standards for judging an argument, the inductive standard being more lenient, the deductive standard being absolutely demanding.) Now a deductively valid argument can be turned into an IF THEN proposition. The argument 'All the apples in this pile are wormy; therefore, the apple I have just drawn from this pile is wormy' can be converted into 'IF all the apples in this pile are wormy, THEN the apple I have just drawn from this pile is wormy.' And that particular IF THEN statement could then count as true only if the conditional probability of the consequent, given the antecedent, were 1. So if it is legit to analyze inductive logic in terms of conditional probability, it should also be legit to analyze a subset of deductive logic, Relevant Implication, in terms of conditional probability.
That the vocabulary used to talk about probabilities ([0, 1]) differs from that used to talk about deductive logic ({T, F}; {deductively valid, deductively invalid}) no more means that we can't analyze (a subset of) deductive logic in terms of conditional probability than the fact that the vocabulary used to talk about heat (hot, cold, tepid, Fahrenheit degrees, Celsius degrees) differs from that used to talk about the motion of molecules (molecule A zipping about at speed s) means that we can't analyze heat in terms of the motion of molecules. What is more, analyzing heat in terms of the motion of molecules differs from mixing the concepts of heat and molecular motion.
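The "conditional probability of the conclusion, given the premises" talk above can be made concrete on a finite sample space. Here is a minimal Python sketch reusing the wormy-apple pile from the start of the thread; the helper names and the uniform distribution are my assumptions:

```python
from fractions import Fraction

# Finite sample space from the apple example: 16 equiprobable apples,
# 8 red (all wormy), 4 yellow and wormy, 4 yellow and clean.
omega = (
    [("red", "wormy")] * 8
    + [("yellow", "wormy")] * 4
    + [("yellow", "clean")] * 4
)

def prob(event):
    """P(event) under the uniform distribution on omega."""
    return Fraction(sum(1 for a in omega if event(a)), len(omega))

def cond_prob(conclusion, premise):
    """P(conclusion | premise) = P(conclusion and premise) / P(premise)."""
    return prob(lambda a: conclusion(a) and premise(a)) / prob(premise)

is_red = lambda a: a[0] == "red"
is_wormy = lambda a: a[1] == "wormy"

p = cond_prob(is_wormy, is_red)
print(p)           # 1: P(wormy | red)
print(p > 1 - p)   # True: meets the lenient inductive standard
print(p == 1)      # True: meets the demanding deductive standard
```

On this reading, the inductive standard asks only that the conditional probability beat that of the negated conclusion, while the deductive standard demands it equal 1.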

FactChecker said:
2) The logic statement "If A then B" is logically equivalent to "not(A) or B"...
That is true in Classical Logic, but is true in Relevant Logic only when A is relevant to B.

FactChecker said:
2) ... So not(A) implies nothing about B. That is very different from a probabilistic statement that A and B are dependent. The statement "If A then B" certainly implies that A and B are dependent, but the converse is not necessarily true.

I am not sure what you are saying here. The converse of IF (IF A THEN B) THEN (B depends upon A) is IF (B depends upon A) THEN (IF A THEN B), which of course is not a true proposition. Should some sort of weird, complicated Rube Goldberg contraption be set up that increases the chances of a coin's turning up heads on the second toss, given heads on the first toss, to, say, 2/3, then B (the coin's turning up heads on the second toss) would have a dependency on A (the coin's turning up heads on the first toss). But of course the proposition "IF the coin turns up heads on the first toss, THEN it will turn up heads on the second toss" will not be true. But what about this proposition: "IF (B depends upon A AND the conditional probability of B given A is 1) THEN the Relevant Implication (IF A THEN B) is true"? And: "IF the Relevant Implication (IF A THEN B) is false, THEN (either B and A are independent OR the conditional probability of B given A is not 1)"? For the moment, I am going to stick my neck out and say these two propositions are true. (Or, to use different metaphors, I will throw them up into the air and see how quickly they get shot down as obviously false. Or, again, I will throw them against the wall like spaghetti and see if they stick.)
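The Rube Goldberg coin can be spelled out with an explicit joint distribution. The numbers below are invented purely to realize the 2/3 figure; the point is that probabilistic dependence holds while the P = 1 test for the implication fails:

```python
from fractions import Fraction

# Hypothetical joint distribution for the Rube Goldberg coin: the first
# toss is fair, the contraption makes P(heads 2nd | heads 1st) = 2/3,
# and the second toss is fair after tails. The numbers are illustrative.
joint = {
    ("H", "H"): Fraction(1, 2) * Fraction(2, 3),
    ("H", "T"): Fraction(1, 2) * Fraction(1, 3),
    ("T", "H"): Fraction(1, 2) * Fraction(1, 2),
    ("T", "T"): Fraction(1, 2) * Fraction(1, 2),
}

p_A = sum(p for (t1, _), p in joint.items() if t1 == "H")   # P(heads on toss 1)
p_B = sum(p for (_, t2), p in joint.items() if t2 == "H")   # P(heads on toss 2)
p_AB = joint[("H", "H")]
p_B_given_A = p_AB / p_A

print(p_B)                  # 7/12
print(p_B_given_A)          # 2/3: B depends on A, since 2/3 != 7/12
print(p_AB == p_A * p_B)    # False: A and B are not independent
print(p_B_given_A == 1)     # False: the implication fails the P = 1 test
```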

FactChecker said:
3) The concepts of conditional probability and logic have been worked on for a hundred years to get where they are today. So you should be very precise and formal when you try to mix them. Any less formal statement will probably only be partially correct.
Would taking the truth condition for Relevant Implication proposed by Routley and Meyer (Mares, p. 28):
'A -> B' is true at a situation s if and only if for all situations x and y, if Rsxy and 'A' is true at x, then 'B' is true at y.
then trying to come up with a satisfactory interpretation of R be sufficiently formal and precise?
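For what it is worth, the quoted Routley-Meyer clause is easy to check mechanically on a toy model. Everything below (the situations, the relation R, and the valuation) is an invented illustration, not anything from Mares:

```python
# Toy check of the quoted Routley-Meyer truth condition. The situations
# (s, x1, y1, x2, y2), the ternary relation R, and the valuation are
# invented for illustration.
R = {("s", "x1", "y1"), ("s", "x2", "y2")}   # ternary accessibility relation

true_at = {          # valuation: atomic sentence -> situations where it holds
    "A": {"x1", "x2"},
    "B": {"y1"},     # B initially fails at y2
}

def implies(s, a, b):
    """'a -> b' is true at s iff for all x, y with R(s, x, y):
    if a is true at x, then b is true at y."""
    return all(
        (x not in true_at[a]) or (y in true_at[b])
        for (r, x, y) in R if r == s
    )

print(implies("s", "A", "B"))   # False: A holds at x2 but B fails at y2
true_at["B"].add("y2")
print(implies("s", "A", "B"))   # True once B holds at every R-accessible output
```

The interpretive question in the thread, of course, is what R *means*; the code only shows that the clause itself is a perfectly definite quantification over R.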

FactChecker said:
1) probabilistic dependence implies that knowing A changes the probability of B. That change can be an increase or a decrease. I think that is the same as "relevance".
That is my intuition and what I am hoping to show.

FactChecker said:
Those mixtures make it difficult to decide if your statements are correct. I think they might be essentially correct.

That my (doubtlessly confused and obscure) statements might essentially be correct increases, I hope, the conditional probability that I am not barking up the wrong tree. But, of course, gloom and doom and total abject despair are the best attitudes to take by default with regard to any attempt to utter a true statement in philosophy.

Clifford Engle Wirt said:
Well, there has been a certain amount of mathematical work (obviously not completely digested by me -- I am just learning) done on Relevant Implication. Mares (p. 28) follows the mathematicians Routley and Meyer in postulating a 3-place relation R connecting situations to state the truth condition for relevant implication.

One may study reasoning by studying only its syntax - e.g. the study of formal languages as strings of symbols and rules for manipulating them. Adding the attempt to study meanings (semantics) for formal languages gives us "model theory".

Taking a casual glance at the Wikipedia article on relevance logic https://en.wikipedia.org/wiki/Relevance_logic indicates that Routley and Meyer are dealing with model theory.

I think the average internet responder to questions on the topic of if-then statements will take the view of introductory logic texts. These emphasize the study of syntax and use a model based on common-language notions of "True" and "False", which can be formalized into rules for assigning values "T" or "F" to some strings of symbols depending on the "T" or "F" values of substrings.

This approach describes standard mathematical reasoning very well. For example, when I read mathematical definitions and proofs about "alternative" logics (e.g. modal logics, fuzzy logic, quantum logic), I find that the definitions and proofs are done using "ordinary" logic. It would indeed be interesting to see a paper about modal logic where the proofs were executed using modal logic.

People (including mathematicians) often think of mathematical objects in a Platonic way. For example, they may imagine that "the natural numbers" have an existence independent of any formal mathematical definition. From this point of view, the mathematical object exists prior to attempts to formalize it. My impression is that you have a Platonic concept of relevance and you are investigating ways to formalize it. If that's the case, you also need the basic intuitions about how formalities work. Formalities involve definitions and definitions involve legalism. If you want to understand intuitively why "if A then B" is defined to be True whenever A is false, then it is more useful to base your intuition on imagining the procedures of a courtroom or legislature than thinking about physical situations.

FactChecker
Clifford Engle Wirt said:
That is true in Classical Logic, but is true in Relevant Logic only when A is relevant to B.
I'm not familiar with "Relevant Logic". It looks as though you are using formal definitions. I do not have the expertise to help in this discussion and will bow out.

FactChecker said:
The statement "If A then B" certainly implies that A and B are dependent, but the converse is not necessarily true.
Clifford Engle Wirt said:
I am not sure what you are saying here. The converse of IF (IF A THEN B) THEN (B depends upon A) is IF (B depends upon A) THEN (IF A THEN B), which of course is not a true proposition.
That is true in some cases but not in other cases. If A=B, then the converse is clearly true.

Clifford Engle Wirt
FactChecker said:
That is true in some cases but not in other cases. If A=B, then the converse is clearly true.

FactChecker said:
I'm not familiar with "Relevant Logic". It looks as though you are using formal definitions. I do not have the expertise to help in this discussion and will bow out.
I found your opposition/skepticism very useful for clarifying things in my own mind (or perhaps for creating the illusion in my own mind that things were clarified); thank you very much.

FactChecker said:
That is true in some cases but not in other cases. If A=B, then the converse is clearly true.
If I understand you correctly, both of the following are true: IF (IF A THEN A) THEN (A depends upon A) and IF(A depends upon A) THEN (IF A THEN A). And A would depend upon itself. Is it kosher to say that A depends upon itself? (Life would be made simpler for me if it is kosher).

Clifford Engle Wirt said:
If I understand you correctly, both of the following are true: IF (IF A THEN A) THEN (A depends upon A) and IF(A depends upon A) THEN (IF A THEN A). And A would depend upon itself. Is it kosher to say that A depends upon itself? (Life would be made simpler for me if it is kosher).
That certainly sounds good to me, but again, I am not familiar with a mathematically formal definition of "A depends on B". I am not an expert in formal logic. If there is one, just as with "relevant", then I am not familiar with that subject and cannot give authoritative answers.
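Under the standard product definition of independence, the self-dependence question does have a definite answer: P(A ∩ A) = P(A), which equals P(A)·P(A) only when P(A) is 0 or 1, so any event with intermediate probability "depends on itself". A quick check:

```python
from fractions import Fraction

# Under the product definition, A is independent of itself iff
# P(A and A) = P(A) * P(A), i.e. iff P(A) = P(A)^2, i.e. P(A) in {0, 1}.
for p in [Fraction(0), Fraction(1, 4), Fraction(1, 2), Fraction(1)]:
    print(p, p == p * p)   # only 0 and 1 are 'independent of themselves'
```

So in the probabilistic sense it is indeed kosher to say a (non-trivial) event depends on itself.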

Clifford Engle Wirt
Stephen Tashi said:
This approach describes standard mathematical reasoning very well. For example, when I read mathematical definitions and proofs about "alternative" logics (e.g. modal logics, fuzzy logic, quantum logic), I find that the definitions and proofs are done using "ordinary" logic. It would indeed be interesting to see a paper about modal logic where the proofs were executed using modal logic.
In Chapter 10 of his book Mares defends both the use of Classical Logic by mathematicians and his own use of Classical Logic as a metalanguage for Relevant Logic. It is not totally impossible that I am developing some faint glimmers by way of understanding what he is saying.

Stephen Tashi said:
People (including mathematicians) often think of mathematical objects in a Platonic way. For example, they may imagine that "the natural numbers" have an existence independent of any formal mathematical definition. From this point of view, the mathematical object exists prior to attempts to formalize it. My impression is that you have a Platonic concept of relevance and you are investigating ways to formalize it.
I probably have to plead guilty to having a Platonic conception of relevance as something existing independently of any attempt to describe or formalize it.

This is slightly off-topic, but Ernest Lepore in his book Meaning and Argument: An Introduction to Logic Through Language takes a similarly Platonic approach to the notion of intuitively valid deductive arguments. Some of these arguments can be captured by Propositional Logic. But not all. So we need Property Predicate Logic. This formalization captures more intuitively valid arguments, but again fails to account for all of them. So we need Relational Predicate Logic. Rinse and repeat. Relational Predicate Logic fails to capture all of the intuitively valid deductive arguments...so we need Relational Predicate Logic with Identity...but again...so we end up with the conclusion that the already-existing valid arguments resist complete formalization. Susan Haack seems to be taking a similar approach (there are already-existing valid arguments out there, and we try to formalize those), but she is also open to the possibility that, even though it fails to square with all of our pre-reflective intuitions, a formalization might be appealing enough on its own terms that we might want to give up some of our pre-reflective intuitions.

Stephen Tashi said:
If you want to understand intuitively why "if A then B" is defined to be True whenever A is false, then it is more useful to base your intuition on imagining the procedures of a courtroom or legislature than thinking about physical situations.
Well, actually, I like to use Dretske's doorbell examples as a way of making intuitive the truth of IF A THEN B whenever A is false. Suppose the wiring of the doorbell plus the outside conditions are such that, given the doorbell's ringing, the probability that someone or something is pressing the button outside is 1. (The wiring of the doorbell is in perfect condition. And the weather is such that chances are zero that a lightning strike might cause a freak impulse causing the doorbell to ring in the absence of the button's getting depressed.) Right now, the doorbell is not ringing, and the button is not getting depressed. Both A and B are false at the moment; nonetheless, IF A THEN B is true because WERE the doorbell to be ringing, the probability would be 1 that the button outside is getting depressed. This gives us the F F part of the truth table. To get the F T part of the truth table, imagine that a defect in the wiring means that, one time out of a thousand, the doorbell fails to ring even when the button is getting pushed. Nonetheless, the conditional probability remains 1 that the button outside is getting depressed given the doorbell's ringing. (Conditions are such that nothing else could cause the doorbell to ring.) Currently the doorbell is not ringing (A is false), but B is true -- someone is fruitlessly pressing the doorbell outside. Voila! We have the F T part of the truth table for implication.

Analyzing implication in terms of conditional probability seems to explain, at least for a subset of IF THEN propositions (those outside of mathematics and those which we don't reject as non-sequiturs because A fails to be relevant to B) our pre-reflective willingness to accept the truth of IF A THEN B whenever A is false.
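The doorbell story can be put into numbers. The joint probabilities below are illustrative assumptions chosen so that P(button | ring) = 1 while the defective wiring gives the F T row positive probability:

```python
from fractions import Fraction

# Doorbell sketch (numbers are illustrative): the wiring guarantees
# P(button pressed | bell rings) = 1, but a defect lets the bell fail
# to ring 1 time in 1000 when the button IS pressed.
# Outcomes are (ring, press) pairs with assumed probabilities.
joint = {
    (True,  True):  Fraction(999, 2000),   # pressed and ringing
    (False, True):  Fraction(1, 2000),     # pressed, bell fails: the F T row
    (False, False): Fraction(1, 2),        # nobody pressing: the F F row
    (True,  False): Fraction(0),           # the bell never rings un-pressed
}

p_ring = sum(p for (ring, _), p in joint.items() if ring)
p_both = joint[(True, True)]

print(p_both / p_ring == 1)   # True: P(press | ring) = 1
# Both false-antecedent rows have positive probability, yet the
# conditional-probability reading of 'IF ring THEN press' stays true.
print(joint[(False, False)] > 0 and joint[(False, True)] > 0)   # True
```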

Regarding the hypothesis "all crows are black", one may view a black crow as a confirming instance. Regarding the logically equivalent contrapositive, "all non-black objects are non-crows", i.e. "every object that is not black is ipso facto not a crow", one may view a red apple, by virtue of its being a non-black object that is not a crow, as also a confirming instance of the hypothesis that all crows are black, though more weakly so.

Due to the disparities of the set sizes involved, viz. the fact that the set of all objects is much larger than the sets of all black objects, all crows, all red objects, all apples, all red apples, and all black crows, the observation of a black crow must be accorded greater probative value, or inductive inferential weight, than the observation of a red apple, even though both are confirming instances inductively.

I think the notion of relevancy in this context is mappable to the comparative set sizes.
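One crude way to cash out "mappable to the comparative set sizes" is a likelihood-ratio comparison against the alternative hypothesis that exactly one crow is non-black. The counts below are invented for illustration:

```python
from fractions import Fraction

# Illustrative counts (assumptions, not data): far more non-black objects
# than crows. Compare the evidential weight of (a) sampling a random crow
# and finding it black vs. (b) sampling a random non-black object and
# finding it a non-crow, against the alternative hypothesis that exactly
# one crow is non-black.
n_crows = 100
n_nonblack = 1_000_000   # under the alternative, this pool contains the odd crow

# Likelihood ratios: P(observation | all crows black) / P(observation | alternative)
lr_black_crow = Fraction(1) / Fraction(n_crows - 1, n_crows)
lr_red_apple = Fraction(1) / Fraction(n_nonblack - 1, n_nonblack)

print(lr_black_crow > lr_red_apple)   # True: the crow observation weighs more
print(float(lr_black_crow), float(lr_red_apple))
```

Both observations confirm the hypothesis (both ratios exceed 1), but the crow observation confirms it far more strongly, precisely because the set of crows is so much smaller than the set of non-black objects.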

## 1. What is conditional probability?

Conditional probability is the likelihood of an event occurring given that another event has already occurred. It is calculated by dividing the probability of the joint event by the probability of the given event: P(A | B) = P(A ∩ B) / P(B).
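As a quick sanity check, here is the joint-over-given calculation applied to the apple pile from the top of the thread:

```python
# P(wormy | red) = P(wormy and red) / P(red), with the counts from the
# apple example: 8 red apples, all of them wormy, out of 16 apples total.
p_red = 8 / 16
p_wormy_and_red = 8 / 16
p_wormy_given_red = p_wormy_and_red / p_red
print(p_wormy_given_red)   # 1.0
```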

## 2. How is conditional probability different from regular probability?

Regular probability looks at the likelihood of an event occurring without taking into account any other events. Conditional probability takes into consideration the occurrence of another event before calculating the likelihood of the given event.

## 3. What is independence in terms of probability?

In probability, independence refers to the relationship between two events where the occurrence of one event does not affect the likelihood of the other event occurring. This means that the probability of both events happening together is equal to the product of their individual probabilities.

## 4. What is dependence in terms of probability?

Dependence in probability refers to the relationship between two events where the occurrence of one event affects the likelihood of the other event occurring. This means that the probability of both events happening together is not equal to the product of their individual probabilities.
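Applying the product test to the thread's apple events E (a red apple is drawn) and F (a wormy apple is drawn):

```python
from fractions import Fraction

# Product-rule check on the apple events, with |Ω| = 16:
# E = 'red apple drawn', F = 'wormy apple drawn'.
p_E = Fraction(8, 16)
p_F = Fraction(12, 16)
p_EF = Fraction(8, 16)     # every red apple is wormy

print(p_EF == p_E * p_F)   # False: 1/2 != 3/8, so E and F are dependent
print(p_EF / p_E)          # 1: P(F | E) rises from 3/4 to 1
```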

## 5. How are conditional probability, independence, and dependence used in real life?

Conditional probability, independence, and dependence are used in various fields, such as finance, biology, and engineering, to make predictions and decisions based on the likelihood of certain events occurring. For example, in medicine, conditional probability is used to determine the accuracy of a diagnostic test, while independence and dependence are used to assess the risk of side effects from a medication.
