I am a big proponent of intuitionist logic. Intuitionist logic corresponds nicely to computation, and it makes it easy to formalize mathematics in a way that can be checked and verified by a computer.
AcidRainLiTE said:
I am not familiar with formal logic/ZFC
In my opinion, ZFC isn't really that important to working mathematicians. The theory is worked out enough that there is confidence it serves as a legitimate foundation for general mathematics, but most people use very loose language for everyday proofs. It's hard work to go from a proof you might see in a paper to a formal proof in any formal system.
... in formal logic, the escape route you take out of the problem posed by the liar's paradox is simply to set up your definitions so as to say "we will not talk about the liar's paradox in this system."
Paradoxes are often seen as mystical things to beginners.
They aren't.
There is a principle called the Curry-Howard Correspondence. It is a Rosetta Stone that lets you translate concepts in formal logic into concepts in functional programming.
Under the Curry-Howard Correspondence, logical paradoxes correspond to infinite loops.
So when you set up a system that attempts to avoid "paradox", what you're really doing is defining a programming language which makes it impossible to write infinite loops.
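To make the correspondence concrete, here's a tiny Coq sketch (the name "paradox" is just mine for illustration). The definition below is exactly the kind of infinite loop a consistent system has to refuse, because if it were accepted, it would count as a "proof" of False:

```coq
(* If Coq accepted this, it would inhabit False, i.e. prove a contradiction.
   The termination checker rejects it: the recursive call never shrinks its
   argument, so it is just an infinite loop dressed up as a proof. *)
Fail Fixpoint paradox (n : nat) : False := paradox n.
```

(The Fail keyword tells Coq "I expect this command to be rejected.")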
Of course, most programming languages are inconsistent, by this definition. This is usually because it's possible to write unbounded loops (unrestricted "while" statements) or to use general recursion (functions can call themselves without any restrictions).
However, there are specialty languages that avoid these things. BlooP, from Hofstadter's GEB, is one example. More practical examples are Coq, Agda, and Epigram, as well as formal systems such as the Simply Typed Lambda Calculus, System F, and the Calculus of Constructions.
Writing programs in these languages is more difficult. You have to figure out how many times you need to loop before you start executing the loop, and you have to show your recursion will eventually hit a base case. However, because these systems are consistent, it becomes possible to use them as theorem provers, as well as programming languages.
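For contrast, here is a toy Coq function of my own that the termination checker does accept, because the recursion is visibly on a smaller argument:

```coq
(* Accepted: each recursive call is on the strict subterm m of n, so the
   checker can see the recursion must eventually hit the base case 0. *)
Fixpoint sum_to (n : nat) : nat :=
  match n with
  | O => 0
  | S m => n + sum_to m
  end.

Eval compute in sum_to 4.  (* evaluates to 10 *)
```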
AcidRainLiTE said:
why do most mathematicians use (formal) classical logic?
The reason is that it makes a certain "intuitive" sense. All things must be true or false, right? And while, strangely, you can't prove this fact from the rest of your axioms, there is no apparent contradiction in assuming it as an axiom itself.
However, computer scientists don't like it at all. And they have good reasons for it.
Intuitionist logic relies exclusively on "proof by show me an example". Under Curry-Howard, proofs correspond to programs, and propositions correspond to the types of programs (think String, int, boolean, etc. from Java, but on super steroids). If I want to "prove" a proposition, I just need to exhibit a concrete program that has the correct type.
The proof of a statement beginning with "for all" is a (computable) function. The proof of a statement beginning with "there exists" is an ordered pair. A proof of a conjunction "A and B" is an ordered pair of proofs: one of A and one of B. A proof of a disjunction "A or B" is either a proof of A or a proof of B -- BUT you can ask WHICH one you have! The Empty type has no proofs -- if you have a proof of Empty, you have derived a contradiction, and you can use a proof of Empty to prove any statement. A proof of "not A" is a function from A to Empty.
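Just to show there's nothing mystical going on, here is that dictionary written out as a few Coq definitions (the names are mine; conj, or_introl, and False come from the standard library). Every "proof" below is an ordinary functional program:

```coq
(* "A and B" is proved by a pair of proofs. *)
Definition and_intro (A B : Prop) : A -> B -> A /\ B :=
  fun a b => conj a b.

(* "A or B" is proved by a tagged value, so you can ask which side you got. *)
Definition or_left (A B : Prop) : A -> A \/ B :=
  fun a => or_introl a.

(* "not A" is a function from A into the empty proposition False. *)
Definition not_intro (A : Prop) : (A -> False) -> ~ A :=
  fun f => f.

(* From a proof of False you can prove anything: there are no cases to handle. *)
Definition explosion (A : Prop) : False -> A :=
  fun contradiction => match contradiction with end.
```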
So how would I prove the identity function id(x) = x is continuous?
For all x and e, where e > 0, there exists a d such that d > 0 and for all y, whenever |x - y| < d, then |id(x) - id(y)| < e.
The way you'd do it in Coq is define a function that takes three parameters: x, e, and a PROOF that e > 0. Then you would return an ordered pair, with your chosen d as the first element (d = e works here), and as the second element a function taking y and a PROOF that |x - y| < d, and returning a proof that |id(x) - id(y)| < e (which, since id(x) = x, is literally the same statement as |x - y| < e).
That might be a bit strange, and I wouldn't expect you to fully understand it, but I assure you, it maps incredibly cleanly to a computer program. Beginning calculus students get tripped up over this definition all the time, but most of the confusion comes from the fact that "for all x", "where e > 0", and "whenever |x - y| < d" are all just different ways to spell "function" in English.
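For what it's worth, here is roughly what that Coq proof looks like, using the standard Reals library (this is just my sketch of it). Notice that the whole "proof" amounts to handing back e as your d and passing the hypothesis straight through, since |id(x) - id(y)| is literally |x - y|:

```coq
Require Import Reals.
Open Scope R_scope.

Theorem id_continuous :
  forall x e : R, e > 0 ->
    exists d : R, d > 0 /\
      forall y : R, Rabs (x - y) < d -> Rabs (x - y) < e.
Proof.
  intros x e e_pos.      (* the three "parameters": x, e, and a proof that e > 0 *)
  exists e. split.       (* the ordered pair: choose d := e, then prove the rest *)
  - exact e_pos.         (* obligation for the first component: d > 0 *)
  - intros y y_close.    (* a function taking y and a proof that |x - y| < d *)
    exact y_close.       (* ...which is already a proof that |x - y| < e *)
Qed.
```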
Anyway, given all that, you might wonder: what's wrong with the Law of Excluded Middle (LEM)?
The problem is that in intuitionist logic, you always need to prove things by example, and LEM doesn't do that. It asserts "A or not A", but it:
1) Doesn't tell you WHICH one you get: did you get "A", or did you get "not A"?
2) Even if it did, it doesn't actually give you a proof of "A", nor a proof of "not A".
The Law of Excluded Middle can be added to languages like Coq or Agda as an axiom, and while the systems remain consistent, it breaks the computability of these systems. You end up with programs which don't "loop", but they *do* get "stuck". They become useless as programming languages.
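And just to be clear that this isn't hypothetical: Coq's standard library ships LEM as an opt-in axiom. It is consistent to assume, but an axiom is not a program, so proofs built from it stop computing to answers:

```coq
(* The classical axioms live in an optional library module. *)
Require Import Classical.

Check classic.
(* classic : forall P : Prop, P \/ ~ P
   It is an Axiom with no computational content: a proof that uses it
   can no longer answer "which side of the 'or' did I get?" *)
```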
AcidRainLiTE said:
Also, intuitionism has always seemed threatening to me as a mathematician; I always feel that it is lurking over my shoulder ready to tear apart the world of mathematics as I know and love it. You do not seem to be the least bit perturbed by it. Is there a reason for this?
It's true that when you omit LEM, you lose some theorems, but only technically. Most theorems that depend on LEM have slightly weaker variants that are still true. For example, the fundamental theorem of algebra (if I recall correctly) says that every non-constant complex polynomial f has a point x where f(x) = 0. This theorem depends on LEM. Without LEM, though, you can still assert that for every such polynomial and every e > 0, there's a point x such that |f(x)| < e. Basically, you can't find exact roots, but you can prove there are "arbitrarily accurate approximations" of roots.
Fun fact about LEM: it is very closely related to the notion of decidability in computer science. For proofs that rely on "A or not A", if A is a decidable proposition (meaning there is an algorithm that can find either a proof of "A" or a proof of "not A"), then LEM is actually redundant, and can be replaced by the proof that A is decidable. This lovely fact rescues many important theorems you might fear losing, as most of the mundane sets you work with in everyday life are decidable.
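Here's a concrete instance, using the standard library's decision procedure Nat.eq_dec for equality of natural numbers. The "A or not A" fact falls out of running the procedure, no axiom required, and it tells you which side you got:

```coq
Require Import Arith.

(* For any two naturals we get "n = m or n <> m" by running Nat.eq_dec,
   which decides the question and reports which case holds. *)
Definition nat_eq_lem (n m : nat) : n = m \/ n <> m :=
  match Nat.eq_dec n m with
  | left  prf => or_introl prf
  | right prf => or_intror prf
  end.
```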
AcidRainLiTE said:
This bothers me a lot. The purpose, in my mind, of mathematics is not to 'get my work done.' For me, there is a lot more to it than that. Granted, it is fun, and challenging. But there is something deeper to it, and if it weren't for that, I would not care to do it; I might as well clean toilets. To quote poincare:
"The scientist does not study nature because it is useful to do so. He studies it because he delights in it, and he delights in it because it is beautiful."
If math is neither true nor beautiful, screw it.
Philosophy is good for invoking a feeling of awe in your mind, but little else.
If ZFC were found to be inconsistent, that would be interesting. Just as Naive Set Theory was found to have flaws, ZFC very well could too, as far as anyone knows. It would then be the work of logicians to figure out why it breaks down.
Even in the worst case, if all logical systems had some paradox buried deep within them, they would still be interesting. You could ask yourself questions like "What is the shortest proof of a paradox?", "Under what conditions do paradoxes show up?", and "How does the complexity of the system influence the length of the shortest proof of a paradox?"
Ultimately, what we can prove with logic is limited by our computational ability. Even with the most powerful computer on the planet, plus all of mankind's combined mental power, we are unable to figure out many "solved" problems. Which player has a winning strategy in, say, Go? This question is well-formed. It is decidable, meaning we know an algorithm that would tell us the answer. And yet, no one knows the answer.
Even in the purest form of mathematics and logic, you shouldn't worry so much about "truth" or "reality". Practicality is king ;)
AcidRainLiTE said:
Though we chose our axioms so as to attempt to capture the behavior observed in the physical world, I do not think that the physical world contains the mathematics we come up with. Take euclidean geometry for instance. I do not think there are any such things as lines, circles, or spheres in the physical world.
Again, you dwell on aimless philosophy too much.
If you've studied analysis, you know that many "infinities" found in the study of real and complex numbers can easily be reduced to finite, computable concepts. An "infinite sequence of rationals" is just a function from the natural numbers to the rationals. Limits of sequences and series use epsilon-style definitions which can be reduced to computer programs under the Curry-Howard Correspondence. Assuming an intuitionist approach, we only care about "computable functions", which kindly removes many awkward infinities (and everything of interest suddenly becomes countable).
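To make "a sequence is just a function" concrete, here's a throwaway Coq definition (the name seq_recip is mine) of the sequence 1, 1/2, 1/3, ... as an ordinary function from nat to Q:

```coq
Require Import QArith.

(* The "infinite" object 1, 1/2, 1/3, ... is just a total, computable
   function: feed it an index, get back a rational. *)
Definition seq_recip (n : nat) : Q :=
  Qmake 1%Z (Pos.of_nat (S n)).

Eval compute in seq_recip 2.  (* the third term, 1 # 3, i.e. one third *)
```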
Do not confuse the axioms of logic and math with the laws of physics. Physics works at many levels of abstraction: classical physics has its own laws, which are different from those of quantum physics, which are different from those of relativity. And if you apply the wrong set of axioms to your problem, you get the wrong results.
Furthermore, science tends to be inductive while math is deductive. In physics, you see the special cases and infer the general principle. Math outright forbids this: you can show P(x) is true for a quintillion different values of x, and you still haven't proved "for all x, P(x)".
Also, it is good to point out that in a physical universe with different laws (say, a different mass for various particles, or a different coupling constant, or whatever), if life could exist in such a place, it would almost surely find itself working with equivalently powerful logical systems. Intelligent life would eventually create a Turing-complete computer, with all the same buggy software we have.
If you want to believe the physical world justifies our mathematics, you may begin to question where you went wrong when you encounter things like the Banach-Tarski paradox.
A beach ball only serves as an intuition for our understanding of a sphere. It's helpful for giving our monkey brain enough hints to get the job done. Our minds evolved to work with pictures, not with equations.
Banach-Tarski says something that's very true: if you have a sphere and you divide it into finitely many pieces using a certain method that relies on a property you have asserted (without justification), you can apply rotations to the pieces and create two identical copies of the sphere.
Some important things to note about this "paradox". It relies on the Axiom of Choice, which is non-intuitionist -- meaning you can't actually compute the pieces to verify the construction. It is also talking about a bare set of points. There is no talk of mass, unlike our beach ball model, so there is no issue there. Nowhere is it stated that you have any intention of preserving the volume or surface area of any object; in fact, the pieces involved are non-measurable sets, so they don't even have a well-defined volume to preserve. Remember that the area of a face (or the volume of a cell) is not determined by the number of points it contains (the pieces are all infinite, with equal cardinality). In our beach ball model, by contrast, the area (or volume) is determined by the number of "molecules" that make it up.
Just remember, the theorem never said **** about a beach ball. It was talking about a sphere the whole time :)
AcidRainLiTE said:
When you write \forall x\in\mathbb{R}
you would have said absolutely nothing.
Again, this is where intuitionist logic is nice. When you write "for all natural numbers n, blah", you are actually talking about a computable function. While this may not have any "absolute" meaning in the philosopher's sense, it *does* mean you can write a computer program that can be run on any IBM-compatible system, running Windows, Mac OS, or Linux. You can actually DO something with it :) And that's nice.
Just to give you a practical example (for certain values of "practical"), there is a ton of research going on in the realm of software verification. Basically, the goal is to find ways to prove theorems about your programs. Instead of relying on good engineering principles, testing, and (honestly) blind luck to write correct programs, verified languages allow you to encode the properties you want your program to have into the type system. The compiler then simply rejects any program that doesn't come with a guarantee that it conforms to those properties.
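As a toy version of that idea, here's a little Coq example of my own (not from any particular verification project) where a property of the program is stated and machine-checked right next to the code:

```coq
Require Import Arith.

(* The program... *)
Fixpoint double (n : nat) : nat :=
  match n with
  | O => O
  | S m => S (S (double m))
  end.

(* ...and a specification it must satisfy. Coq will not accept the Qed
   until the proof really establishes the property for every n. *)
Theorem double_is_even : forall n, Nat.even (double n) = true.
Proof.
  induction n as [| m IH].
  - reflexivity.
  - simpl. exact IH.
Qed.
```

In a verified language, failing to discharge an obligation like this is a compile error, not a runtime surprise.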
Anyway, intuitionist logic is cool, philosophy is mostly a waste, and you should check out Agda and Coq.
http://www.cis.upenn.edu/~bcpierce/sf/
http://wiki.portal.chalmers.se/agda/pmwiki.php