# Source of the asymmetries I see in mathematics

1. Nov 16, 2005

### Hurkyl

Staff Emeritus
I've recently been on a drive to look for the source of the asymmetries I see in mathematics, and I ran into the idea of cointuitionism. First, let me remind you about intuitionism (as I know it):

Intuitionism is the school of logic that rejects the law of the excluded middle: it does not require $P \vee \neg P$ to be true.

This leads to some interesting properties. For example, it's possible for $\neg P$ to be false when $P$ is not true. More generally, $\neg P \equiv \neg Q$ does not mean $P \equiv Q$.

Intuitionists still retain the law of contradiction, though: $P \wedge \neg P$ is false.
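To make this concrete, here's a toy sketch (my own encoding, not anything standard in the thread): the three-element Heyting algebra on the values {0, 0.5, 1}, with 1 as "true", shows exactly this behavior.

```python
# Toy model: the three-element Heyting algebra on {0, 0.5, 1} (1 = true).
# Implication is the relative pseudo-complement, and "not a" is a -> 0.
# The numeric encoding is just for illustration.

def h_and(a, b): return min(a, b)
def h_or(a, b):  return max(a, b)
def h_imp(a, b): return 1 if a <= b else b
def h_not(a):    return h_imp(a, 0)

P = 0.5                          # an "undecided" proposition
print(h_or(P, h_not(P)))         # 0.5 -- excluded middle is not true
print(h_and(P, h_not(P)))        # 0   -- non-contradiction still holds
print(h_not(h_not(P)))           # 1   -- not-not-P is true while P is not
```

Note that here $\neg P$ is outright false (0) even though $P$ is not true, which is the property mentioned above.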

It seems that when doing intuitionist logic, the thing to do is to take
"implies" as a fundamental operation, instead of "not" as one might do in Boolean logic. So, you formulate things in terms of "and", "or", and "implies".

Now, this strikes me as being asymmetric! The law of the excluded middle, $P \vee \neg P$, and the law of contradiction, $P \wedge \neg P$ are duals of each other. So, I wondered what we would get if we tried it in the other direction:

So, what I call the "anti-intuitionist" school of logic would reject the law of contradiction -- $P \wedge \neg P$ is not required to be false.

Working through the duality, it turns out that the right fundamental operation to consider, in addition to "and" and "or", is "B does not imply A".

It strikes me that this might be appropriate for the philosophy of science, since the philosophical role of an experiment is not to establish an implication, but to deny one. An experiment is an attempt to falsify various theories -- that is, to expose what the initial conditions do not imply.

Working further through the duality, it suggests another view on proof:

Normally, a proof consists of doing the following:

We start with a collection of statements, which we assume to be true.
Using (assumed true) implications, we infer additional statements which we add to the collection.

I.E. if we assume "A" and "A => B" are true, we can infer "B" is true.

Looking at the dual of this suggests that we might want to consider doing the following:

We start with a collection of statements, which we assume to be false.
Using (assumed false) instances of "A does not imply B" (which I'll write as A%B), we infer additional statements which we add to the collection.

I.E. if we assume "B" and "A % B" are false, we can infer "A" is false.
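Both procedures can be sketched as simple closure computations (a toy illustration of my own; the set names and the % notation for "does not imply" are just the ones from above):

```python
# Toy sketch of the two dual proof procedures described above.
# Forward: grow a set of true statements using modus ponens.
# Dual: grow a set of false statements using assumed-false "A % B" facts.

def close_true(true_set, implications):
    """If A is in the true set and "A -> B" is an assumed-true rule, add B."""
    changed = True
    while changed:
        changed = False
        for a, b in implications:
            if a in true_set and b not in true_set:
                true_set.add(b)
                changed = True
    return true_set

def close_false(false_set, non_implications):
    """Dual rule: if B is in the false set and "A % B" (A does not
    imply B) is assumed false, add A to the false set."""
    changed = True
    while changed:
        changed = False
        for a, b in non_implications:
            if b in false_set and a not in false_set:
                false_set.add(a)
                changed = True
    return false_set

print(sorted(close_true({"A"}, [("A", "B"), ("B", "C")])))  # ['A', 'B', 'C']
print(sorted(close_false({"B"}, [("A", "B")])))             # ['A', 'B']
```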

Again, maybe this is useful to science, since intuitively we can be more sure about what is not true than about what is?

Okay, I'm done rambling! What do you think?

2. Nov 17, 2005

It is not clear to me that "intuitively we can be more sure about what is not true than what is". For example, consider the concept of the "null hypothesis". Suppose I test a hypothesis to see if fish A eats insect B. I set up the statistical test in the "false" form: fish A never eats insect B. I then conduct my experiment and find either that I can reject the null hypothesis or that I cannot.

If fish A is in fact observed to eat insect B, the null hypothesis is rejected, and one could say that the experiment proved that fish A does eat insect B. In that case the null hypothesis conclusively allows me to hold the positive claim as very true indeed.

However, if I find that I cannot reject the null hypothesis, it is clear that I am less sure about what is not true than about what is true, for there could be many reasons why fish A did not eat insect B, and a single experiment would not allow me to hold with much confidence the statement that fish A can "never" eat insect B. Thus in this example, the use of the null hypothesis allows me to be more sure about what is true (fish A eats insect B) than about what is not true (fish A does not eat insect B), which appears to contradict your conclusion... but perhaps none of this relates to your argument at all.

3. Nov 18, 2005

### Hurkyl

Staff Emeritus
Well, part of the problem is that we're firmly grounded in boolean logic! In particular, the double negative law, $\neg \neg P \equiv P$ is firmly entrenched in how we think. (Well, at least how I think)

But it fails in the intuitionistic case, and also the anti-intuitionistic case. The double negative law only works in "one way":

In intuitionistic logic, if we know that $\neg P$ is true, then $P$ must be false. However, the inverse fails: it is possible for $\neg P$ to be false, while $P$ is not true.

In anti-intuitionistic logic, the reverse holds. If $\neg P$ is false, then $P$ must be true. But if $\neg P$ is true, we cannot conclude $P$ is false.
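This dual behavior shows up on the same three-element chain {0, 0.5, 1} if we use the co-Heyting negation instead (again a toy encoding of my own): $\neg a$ becomes the least value whose join with $a$ is true.

```python
# Toy model: co-Heyting negation on the chain {0, 0.5, 1} (1 = true).
# Here not(a) is the least b with max(a, b) = 1, so not(1) = 0 and
# not(a) = 1 for every a < 1. The encoding is just for illustration.

def co_not(a): return 0 if a == 1 else 1

P = 0.5                          # a "half-true" proposition
print(min(P, co_not(P)))         # 0.5 -- P and not-P is not false
print(co_not(co_not(P)))         # 0   -- not-not-P is false while P isn't
print(co_not(1))                 # 0   -- not-P is false only when P is true
```

So non-contradiction fails, yet whenever $\neg P$ is false, $P$ really is true -- the double negative law working in only "one way", as described above.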

To summarize, if your null hypothesis is "not P", then:

If you accept the null hypothesis, then the intuitionist will reject P, but the anti-intuitionist will not.
If you reject the null hypothesis, the intuitionist will not accept P, but the anti-intuitionist will.

It does seem that, in this sense, I have it exactly backwards.

Let me try explaining again what I was trying to say, though:

When we consider a theory, we extract from it an implication $A \rightarrow B$. An experiment is capable of telling us that this implication is false, but not that this implication is true.

Since I was thinking that anti-intuitionistic logic naturally reasons from statements of the form "A does not imply B", I figured it would be good to use. But I forgot that anti-intuitionistic logic reasons from hypotheses like "`A does not imply B' is false". Stupid double negatives.

I'm still interested in figuring out just what the ramifications of anti-intuitionism are. I guess it would help if I knew better about intuitionism.

4. Nov 19, 2005

### WarrenPlatts

Actually, it's just as impossible to prove a hypothesis false as it is to prove it true.
Ordinarily, we construct an experiment that will see if we observe the implication predicted by the hypothesis:

H -> O
O
______
H

However, this commits the fallacy of affirming the consequent, because other possible hypotheses could also explain the observation.

So, it seems like falsification might at least yield certain knowledge:

H -> O
~O
______
~H

This is standard modus tollens -- a logically valid argument form. But in real-world experiments, there are always unspoken auxiliary assumptions built into the hypothesis, the experimental apparatus, the method of observation, and so on. Thus, in reality what happens is this:

(H ^ A1 ^ A2 ^ ... ^ An) -> O
~O
______
~H v ~A1 v ~A2 v ... v ~An

That is, either H is false OR one or more of the auxiliary assumptions required to make the experiment work is false. This result is otherwise known as the Quine-Duhem thesis.
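This schema is easy to verify by brute force over truth assignments (a quick sketch with two auxiliary assumptions; the variable names are mine):

```python
# Brute-force check that the Quine-Duhem form is valid in boolean logic:
# from (H ^ A1 ^ A2) -> O together with ~O, it follows that
# ~H v ~A1 v ~A2. Two auxiliary assumptions stand in for the general case.

from itertools import product

def implies(p, q): return (not p) or q

valid = all(
    implies(implies(h and a1 and a2, o) and (not o),
            (not h) or (not a1) or (not a2))
    for h, a1, a2, o in product([False, True], repeat=4)
)
print(valid)  # True: the schema holds under every truth assignment
```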

5. Nov 19, 2005

I have a question about the use of "imply" and "infer" in your argument as shown in bold above. According to my symbolic logic text, "to infer is to draw conclusions from premises". However, in your argument you seem to use the term "imply" to draw conclusions from premises. So here is my question: can you mix the use of "infer" and "imply" this way in the same argument? Thanks for your help understanding this.

6. Nov 19, 2005

Staff Emeritus
Imply is what someone or something else does to cause you to draw a conclusion. Inferring is your process of drawing that conclusion.

"This evidence implies that Bush is a liar."

"Reading this evidence I infer that Bush is a liar."

7. Nov 19, 2005

### Hurkyl

Staff Emeritus
I've been trying to consistently use "imply" to refer to the binary operation on logic values, and "infer" to mean something that follows by applying the rules of deduction. I can't promise that I've successfully managed to be consistent.

Warren: I've been thinking along different lines:

A
~B
------
~(A -> B)

If the experiment we perform has A coming in, and ~B coming out, then we can conclude ~(A -> B). (At least in boolean logic -- I really should figure out if this is valid in the other logics, but I'm headed out the door in a minute)

8. Nov 19, 2005

### Hurkyl

Staff Emeritus
Of course, you also have

A
~B
------
A -> ~B

But I guess I'm thinking in the quantified sense: an experiment can prove $\neg \forall x: (P(x) \implies Q(x))$, but generally not $\forall x: (P(x) \implies Q(x))$.
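The propositional versions of both inferences are easy to check exhaustively in boolean logic (a quick truth-table sketch):

```python
# Truth-table check, in boolean logic: whenever A is true and B is
# false, "A -> B" comes out false and "A -> ~B" comes out true, so
# both conclusions drawn above are valid.

from itertools import product

def implies(p, q): return (not p) or q

for a, b in product([False, True], repeat=2):
    if a and not b:                  # the experiment's outcome: A in, ~B out
        assert not implies(a, b)     # so "A implies B" is refuted
        assert implies(a, not b)     # and "A implies ~B" holds
print("both inferences are boolean-valid")
```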

9. Nov 21, 2005

### Cincinnatus

Why is it called "intuitionist"? it seems pretty darn unintuitive to me.

10. Nov 21, 2005

### hypnagogue

Staff Emeritus
I think Warren's point is that, while perhaps in an ideal world you might know for sure you only have A coming in, in reality you need to make assumptions about what you really have coming in much of the time. So it would look more like:

A ^ a1 ^ a2 ^ ... ^ an (where the lower case a's represent various assumptions about experimental setup and the like)
~B
------
~([A ^ a1 ^ a2 ^ ... ^ an] -> B)

Which makes sense, if you think about it. e.g., if a high school student seems to have falsified a well-tested scientific principle in lab, most likely what has really happened is that one of his (perhaps implicit) assumptions about the experimental setup was false.