
Justifying Proof by Contradiction

  1. May 27, 2012 #1
    It seems to me (though I would be *extremely* glad to be proven wrong here) that in mathematics we often blindly assume that the theorems we attempt to prove/disprove are either true or false. Such an assumption is implicit in every proof by contradiction. We eliminate the possibility of the theorem being false (by deriving a contradiction), and thus conclude that the theorem is true since that is the only other option. But this is not a trivial assumption. The theorem could very well be neither true nor false (like the liar paradox, for instance).

    How do we justify this assumption?
  3. May 27, 2012 #2



    The liar's paradox is a completely separate issue: it's not a statement. The liar's paradox is directly analogous to Russell's paradox for set theory; it represents catastrophic failure for naïve foundations, but the foundations we actually use obviate them.

    i.e. just as in ZFC where the argument of Russell's paradox merely proves that [itex]\{ x : x \notin x \}[/itex] is a proper class and not a set, the argument of the liar's paradox merely proves in formal logic that there is no statement P satisfying the expression "P if and only if not P", and some related facts.

    The excluded middle is part of classical logic itself: the expression "P or not P" is a tautology. So is the law of noncontradiction "not (P and not P)". Also important is "P if and only if not not P", which is frequently used in conjunction with proof by contradiction.
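    These three laws can be checked mechanically. Here is a minimal truth-table sweep in Python (an illustration of the check, not anything tied to a particular proof system):

    ```python
    # Truth-table check of the three classical laws named above: a
    # tautology must hold under every assignment of a truth value to P.
    laws = {
        "P or not P": lambda p: p or not p,                 # excluded middle
        "not (P and not P)": lambda p: not (p and not p),   # noncontradiction
        "P iff not not P": lambda p: p == (not (not p)),    # double negation
    }

    for name, law in laws.items():
        assert all(law(p) for p in (True, False))
        print(name, "is a tautology")
    ```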

    Note that nothing I've said above actually cares whether or not True and False are the only truth values. Classical logic, in fact, allows any Boolean algebra to be used for truth values.

    Intuitionistic logic gets rid of the law of excluded middle, so that there can be statements where "P or not P" does not hold. However, proof by contradiction still works, because it depends on the law of noncontradiction. In fact, in intuitionistic logic, "not P" is still a synonym for "P implies a contradiction" (and this is often taken as the definition of "not P"). The law of double negation also fails in intuitionistic logic.

    Note that the classical method to prove "P" by contradiction will only suffice to prove "not not P" in intuitionistic logic.

    Another important logical system is that of computability. For particular inputs, a boolean function might return "true", it might return "false", or it might go into an infinite loop, which effectively counts as a third truth value. The idea behind proof by contradiction again still works here, when properly modified. If you show that "P is true" leads to a contradiction, then that means either "P is false" or "P is an infinite loop".
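    One way to model that third "infinite loop" truth value is with Kleene's strong three-valued tables (the choice of tables here is an assumption for illustration, not something fixed by the post): a looping operand propagates unless the other operand already decides the result.

    ```python
    # Three truth values: the third, LOOP, models a nonterminating computation.
    T, F, LOOP = "T", "F", "LOOP"

    def not3(p):
        return {T: F, F: T, LOOP: LOOP}[p]

    def and3(p, q):
        if p == F or q == F:        # one False decides AND, even if the other loops
            return F
        if p == T and q == T:
            return T
        return LOOP

    def or3(p, q):
        if p == T or q == T:        # one True decides OR, even if the other loops
            return T
        if p == F and q == F:
            return F
        return LOOP

    # Excluded middle fails: "P or not P" loops when P loops.
    print(or3(LOOP, not3(LOOP)))  # LOOP
    ```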
  4. May 27, 2012 #3
    I am not familiar with formal logic/ZFC, but your response is very good and so I would like to interact with it as best as I can at the moment (I will be taking formal mathematical logic next spring).

    I would guess that there is not really a proof that there is no statement satisfying the expression "P if and only if not P", but that instead, in formal logic, the escape route you take out of the problem posed by the liar's paradox is simply to set up your definitions so as to say "we will not talk about the liar's paradox in this system (i.e. we will define the term 'statement' so that it does not include things like the liar's paradox, so as not to run into the type of problems it creates. Furthermore, in formal logic, we will only deal with logic on statements)."

    I guess this is what I was originally getting at. So in intuitionist logic, you could not prove a theorem "P" by assuming "not P" and showing it implies a contradiction since all you would have proven is "not not P". So, my next question is: why do most mathematicians use (formal) classical logic? It would seem that, if you instead accepted intuitionist logic, then you would lose a lot of theorems because you would no longer know that they are true, but rather only that their negations are false ("not not P").

    Also, intuitionism has always seemed threatening to me as a mathematician; I always feel that it is lurking over my shoulder ready to tear apart the world of mathematics as I know and love it. You do not seem to be the least bit perturbed by it. Is there a reason for this?
    Last edited: May 27, 2012
  5. May 27, 2012 #4
    Unlike building a house, in math we build the house first and the foundation later. If the foundation's no good, we get another one.

    Real life examples:

    * Newton developed the calculus in the 1665-1687 timeframe. It's clear from his writings that he fully understood the foundational issues with respect to the meaning of the difference quotient as numerator and denominator went to zero. But he successfully developed and applied calculus anyway. Not till the 1850-1900 period was calculus made fully rigorous. So the foundations followed the math by almost 200 years.

    * Russell and Whitehead attempted to axiomatize all of mathematics starting from elementary logic. Gödel showed that their program was doomed to fail. Yet during that period of time, mainstream mathematics was unaffected and continued as normal.

    * Modern algebraic geometry is based on category theory. In category theory it's often very tricky to know what's a set and what's a proper class. So algebraic geometers work in something called a Grothendieck universe, which has enough sets to make things work out.


    A Grothendieck universe is equivalent to the existence of an inaccessible cardinal. In other words the modern algebraic geometers have already left ZFC behind in their quest to do more math.

    Is this relevant to anything people like us can understand? Most definitely yes. Wiles's proof of Fermat's Last Theorem is done in the setting of modern algebraic geometry, inaccessibles and all. Could the proof be done without an inaccessible? Most specialists believe so.

    See this fascinating discussion and follow some of the linked references.


    * A Fields medal winner named Vladimir Voevodsky is currently working on new foundations (his Univalent Foundations program). He's considering that perhaps our existing foundations are inconsistent, and he's working on an alternative.


    The bottom line is that 1 + 1 = 2. If set theory were found to be inconsistent, 1 + 1 would still be 2, and no mathematician would have to change their work much. The logicians and foundationalists would get busy for another hundred or two hundred years to fix up the foundations, but nobody else would care much.

    That's my understanding of the subject. Few working mathematicians care much about foundations. They just use what they need to get their work done.
    Last edited: May 27, 2012
  6. May 27, 2012 #5
    SteveL27, thanks for all of those links! They were quite interesting, if not disturbing.

    I think this is a bit of an oversimplification. The vast majority of mathematics today is much more abstract and much more distant from the intuitively obvious idea that 1 + 1 = 2. Maybe you can keep 1 + 1 = 2, but maybe you lose most of modern algebraic geometry, higher dimensional topology, measure theory, fractals, etc.

    That is an opinion. The opposite opinion is equally valid; it may not be possible to fix the foundations (or it may only be possible to provide consistent foundations for a small subset of math).

    You could view the history of math in two ways. (1) Someone developed new math, even if the foundations looked shaky at the time, only to have the foundations worked out later on. So, we should continue doing math today and, based on past experience, we can expect the foundations to get worked out some time in the future. (2) Or you could say: Someone developed new math and every time we have attempted to find a solid foundation and have thought we succeeded in doing so, someone came along and found a problem in the foundations (admittedly, I don't know this to be true...I don't know the history of math well enough to know. I am just familiar with the 20th century foundational crisis). Why should we expect it to be any different with our current foundations?

    It seems you have shown that mathematicians will continue their work regardless of foundational issues, but you have not shown that mathematicians are correct in doing so.

    This bothers me a lot. The purpose, in my mind, of mathematics is not to 'get my work done.' For me, there is a lot more to it than that. Granted, it is fun, and challenging. But there is something deeper to it, and if it weren't for that, I would not care to do it; I might as well clean toilets. To quote Poincaré:
    "The scientist does not study nature because it is useful to do so. He studies it because he delights in it, and he delights in it because it is beautiful."
    If math is neither true nor beautiful, screw it.
    Last edited: May 27, 2012
  7. May 27, 2012 #6
    Perhaps you're reading too much into this.

    When Russell came out with Russell's paradox, that the "set of all sets not members of themselves" leads to a contradiction, what happened? Did the entire mathematical world stop work until this problem could be resolved? Of course not. Math had been doing quite well for thousands of years without a rigorous foundation.

    What happened is that Zermelo and others came up with ZF, which made set theory rigorous. So the foundationalists were happy. But nobody else even noticed. Of course the analysts and algebraists and geometers kept up with the news; but they didn't stop doing their work.

    In your opinion, what should have happened when Russell pointed out that naive set theory was inadequate?

    And for that matter, what should have happened when Newton realized he didn't have a good answer to the question, "Exactly what sort of mathematical object is the difference quotient when numerator and denominator are both zero?" Should Newton have abandoned math because he couldn't drill down into the logic?

    What would be your response? It's clear from the history of math right up to the present day that mathematics sometimes leads foundations by decades or centuries. Do you regard that as a reason people should stop doing math?

    My point is that the answer is no. Mathematicians should keep doing math; and the foundationalists should try to keep up as best they can.
  8. May 27, 2012 #7
    All of mathematics is a giant If-Then statement. One assumption you make is that your axioms are consistent.
  9. May 27, 2012 #8



    Assume P if and only if not P.
    * Assume P
    ** Therefore not P
    ** Therefore (P and not P)
    * Since P implies contradiction, conclude not P
    * Therefore P
    * Therefore (P and not P)
    Since (P if and only if not P) implies contradiction, conclude not(P if and only if not P).

    Another proof would be via truth table:
    Code (Text):

    P | not P | P if and only if not P | not(P if and only if not P)
    T |   F   |           F            |              T
    F |   T   |           F            |              T

    I can't really answer, but I can weigh in with my own particular observation on the topic that I feel is important: there is nothing to gain by using intuitionistic logic in your foundations, for three reasons:
    • Classical logic is a form of intuitionistic logic -- classical logic obeys all of the same laws of deduction as intuitionistic logic does, it just has some more
    • Classical logic embeds into intuitionistic logic -- if you work exclusively with negated propositions, all of the classical laws of deduction hold. e.g. not(not(not(P))) is a synonym for not(P).
    • With classical logic as our foundation, we can still develop and work with formal intuitionistic logic
    So, IMO, even if you want to work with intuitionistic logic, it's still better to take classical foundations because they're simpler.

    Because I was interested in topos theory! I was curious and had ideas about the foundations of non-standard analysis, which led me to desire an algebraic version of set theory, which led to topos theory.

    But also, I'm not doing intuitionism. Many intuitionists would reject intuitionistic logic for a variety of reasons; e.g. I've seen some that prefer something more constructivist, and others that reject the law of noncontradiction. Even those that accept intuitionistic logic would still probably reject the classical foundation upon which I base my understanding and usage of it.

    A topos, more or less, turns out to be a good intuitionistic set-theoretic universe. This has useful application to things like differential geometry, because the category of all sheaves on a topological space X is a topos. This gives a very precise explanation of the fact that things behave well when we treat sheaves as if they were sets.

    The main observation to get started with actually using this idea is that, in the internal logic of a topos (of sheaves), the truth values are the open sets of X. So you can immediately see that
    • P and not P is identically false
    • P or not P is (usually) not a tautology
    because they translate into
    • If O is an open set, then O and exterior(O) are disjoint
    • If O is an open set, then O union exterior(O) is usually not X
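    Both bullet points can be checked concretely. Below is a sketch using a three-point topological space in place of R^2 (the space and its open sets are a toy example chosen here for illustration); "not O" is the exterior, i.e. the largest open set disjoint from O.

    ```python
    # Truth values are the open sets of X; negation is the exterior.
    X = frozenset({1, 2, 3})
    opens = [frozenset(s) for s in [set(), {1}, {2}, {1, 2}, {1, 2, 3}]]

    def exterior(O):
        # union of all open sets disjoint from O (open, since opens are
        # closed under unions)
        result = set()
        for U in opens:
            if not (U & O):
                result |= U
        return frozenset(result)

    for O in opens:
        assert not (O & exterior(O))            # "P and not P" is always false
        if O | exterior(O) != X:                # "P or not P" can fail to cover X
            print(sorted(O), "union its exterior misses", sorted(X - (O | exterior(O))))
    ```

    For example, the exterior of {1} is {2}, and their union misses the point 3, so "P or not P" is not "true" (the whole space) for that truth value.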

    In fact, it turns out that as simple a space as R^2 is 'generic' for intuitionistic propositional logic, in the sense that a sentence is an intuitionistic tautology if and only if it is true in the algebra of open sets of R^2.

    Another useful fact is that the Dedekind construction of the real numbers done internally to the topos says that an "internal real number" is the same thing as a continuous function X -> R, so the internal ideas really do correspond to useful external ideas.

    (Interestingly, the Cauchy construction doesn't work as well. IIRC, an "internal Cauchy real number" is the same thing as a locally constant function X -> R)
    Last edited: May 27, 2012
  10. May 29, 2012 #9
    I guess I agree with you. It would be ridiculous for mathematicians to wait around for solid foundations...you may never get anywhere. I guess I am just unsettled by the recognition that we do not ultimately know whether the mathematics we are doing is true; on that question, we basically make a shot in the dark.

    It is probably important to make a distinction between which question we are trying to answer: "What should mathematicians do (in light of any presently unresolved foundational issues or philosophical questions)?" and "How do we know that the math that we as mathematicians do actually corresponds to anything real/true?" I think your answer to the former is a good answer. To the latter, I don't think there is an answer. We can believe it is true/real because we want to, but that does not guarantee anything...I guess that is a limitation inherent in being human :frown:.

    It is just that, often times, mathematics gives me extreme joy, but then doubt comes along and deals a painful blow by suggesting that the math I am doing may not be true. All of the stuff that is out there...philosophers who object to the infinite cardinals, ultrafinitists, intuitionists, etc. all poison my joy by injecting uncertainty. Maybe the philosophers are right and the infinite cardinals cannot exist, maybe intuitionism is correct and many of our theorems are invalid or ultimately meaningless, etc. The only way I can recover from this is to say that I like math as it is, I like the infinite cardinals, I like classical mathematics, etc. But this is a pretty pathetic justification and it leaves me with no certainty as to whether the math I like is true, which is a kill-joy because part of what I like about mathematics stems from the belief that it corresponds to something real (however abstract it may be).
    Last edited: May 29, 2012
  11. May 29, 2012 #10
    Can you explicitly lay out how you translated the statement used in the liar paradox ("this statement is false") into "P if and only if not P"? I am a little confused because you mentioned before that the liar paradox is not a statement. However, "P if and only if not P" is a statement, albeit a false one as shown by your proof.
  12. May 29, 2012 #11
    First of all to say that mathematics does not have a solid foundation is just not true. Our theorems rest on deductions that ultimately rest on empirical facts which we call postulates or axioms. At this point, the basic axioms have been pushed all the way deep into set theory/logic, but all the same they are essentially empirical observations. Ultimately, all of science is empirical and that is what makes it "real".

    Consider Greek mathematics. Euclid's postulates are empirical in nature. These observations were refined and reduced to the maximum extent possible, but with those postulates, they managed to construct a huge body of knowledge. (Ok, so Hilbert and others corrected some minor errors, but otherwise the Greeks did an awesome job developing an extremely useful model for space which most people still use exclusively) The point is that you start with some very fundamental empirical observations and run with them. Anyway, that is my "philosophy" of math.

    I guess you have to ask yourself whether the "Law of Excluded Middle" makes you uncomfortable. Why are mathematicians happy to accept it? Why shouldn't we? It is "obvious" and there is no compelling reason not to use it. Also, rejecting it means we have to throw out many of our favorite theorems. And we prove theorems for a living, why would we want to make that harder?!? So until a logician gives me a very good reason to stop, I will continue to assume that double negatives are positive.
  13. May 29, 2012 #12
    On the contrary, we are not in the dark. We are absolutely certain that at least some of the math we are doing is not true!

    Consider Euclidean geometry. Consider non-Euclidean geometry. Both are logically consistent, hence valid math. But they can't both be true of the physical universe.

    So in this case, at least one important branch of mathematics is definitely false. But still interesting, and still useful! What say you?

    Logical validity is not the same as truth.

    An example I've given in other threads is to consider a work of fiction. The novel Moby Dick is fiction. Yet it's highly useful; it reminds us not to chase our obsessions to our doom. So a work of complete fiction can still be interesting and useful. It's often helpful to think of math the same way. That way you won't fall into the trap of wondering if math is true.
  14. May 29, 2012 #13



    I don't think OP is talking about whether a branch of mathematics is "true" in the sense of physical reality. Rather whether it is "true" in the sense of logical consistency. More along the lines of "If ZFC is proven to be inconsistent, what happens to all the mathematics built on top of it?"
  15. May 30, 2012 #14
    Though we chose our axioms so as to attempt to capture the behavior observed in the physical world, I do not think that the physical world contains the mathematics we come up with. Take euclidean geometry for instance. I do not think there are any such things as lines, circles, or spheres in the physical world.

    This is not just because "every sphere we attempt to create has some imperfection" (i.e. is not perfectly symmetric), but rather because there are no such things as geometric objects in the physical world. Geometric objects have a precisely defined boundary. You can try to make what you think is a 'sphere' in the physical world and you may initially think that, though it is not a perfect sphere, it at least has some definite shape. But I say that it does not have any such thing as shape at all:

    You look closer and find that the original boundaries of your 'shape' are not definite boundaries at all, but instead are composed of discrete atoms. You can try to further nuance your position by saying that your 'shape' is actually composed of a bunch of small shapes (atoms) with definite locations. But who is to say that the same thing that happened to your original sphere will not again happen to the atoms? (i.e. that you will find that the atoms do not have any such thing as a definite boundary).

    So, while our axioms are loosely empirical, I would say that they in fact describe something quite different from our physical world and so I find it difficult to say that our world empirically justifies our mathematics.

    (If you want to believe the physical world justifies our mathematics, you may begin to question where you went wrong when you encounter things like the Banach-Tarski paradox)
  16. May 30, 2012 #15
    By 'true' I definitely do not mean descriptive of physical reality. I am also not sure I want to restrict my definition to logical consistency. The best way I can get at what I mean by 'true' is by explaining what I mean by 'not true.' Mathematics that is 'not true' would be ontologically equivalent to mere symbol manipulation. It would have no existence or reality beyond the markings on the chalk board, which would reduce to mere orientations of chalk atoms, each orientation bearing no more significance than the dust I sweep from my floor. All of its orderings of symbols would have no correspondence to any form of meaning (they would not correspond to any structural order, nor have any rational significance either because there is no such thing as structural order, or simply because mathematics does not correspond to real structural order). When you write [tex] \forall x\in\mathbb{R} [/tex]
    you would have said absolutely nothing.

    I am pretty certain that most mathematicians believe that what they are doing is not 'not true' as I defined above. They at least believe that there is something real about what they are doing, that somehow they are exploring the world of rationality, or pure structure, or at least something that is not ontologically equivalent to null. Even if you posit that you are just exploring the realm of 'logical consistency', you still have to believe that 'logical consistency' is something meaningful/real. This is what I mean by 'true'/'real'.
  17. May 30, 2012 #16
    I am a big proponent of intuitionist logic. Intuitionist logic corresponds nicely to computation, and it makes it easy to formalize mathematics in a way that can be checked and verified by a computer.

    In my opinion, ZFC isn't really that important to working mathematicians. The theory is worked out enough that there is confidence it serves as a legitimate foundation for general mathematics, but most people use very loose language for everyday proofs. It's hard work to go from a proof you might see in a paper to a formal proof in any formal system.

    Paradoxes are often seen as mystical things to beginners.

    They aren't.

    There is a principle called the Curry-Howard Correspondence. It is a Rosetta Stone that allows you to translate concepts in formal logic to concepts in functional programming.

    Under the Curry-Howard Correspondence, logical paradoxes correspond to infinite loops.

    So when you set up a system that attempts to avoid "paradox", what you're really doing is defining a programming language which makes it impossible to write infinite loops.

    Of course, most programming languages are inconsistent, by this definition. This is usually because it's possible to write unbounded loops (unrestricted "while" statements) or to use general recursion (functions can call themselves without any restrictions).

    However, there are specialty languages that avoid these things. BLOOP in Hofstadter's GEB book is one example. More practical examples are Coq, Agda, Epigram, as well as formal systems such as the Simply Typed Lambda Calculus, System F, and the Calculus of Constructions.

    Writing programs in these languages is more difficult. You have to figure out how many times you need to loop before you start executing the loop, and you have to show your recursion will eventually hit a base case. However, because these systems are consistent, it becomes possible to use them as theorem provers, as well as programming languages.
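    The "decide the bound before you loop" discipline can be sketched in Python (which, unlike BLOOP or Coq, does not actually enforce totality, so this only illustrates the style):

    ```python
    # Every loop runs a number of times fixed before it starts, so
    # nontermination is impossible by construction.
    def bounded_loop(n, step, state):
        """Apply `step` exactly n times; the bound n is known up front."""
        for _ in range(n):
            state = step(state)
        return state

    # factorial via a bounded loop: the bound (n iterations) is chosen first,
    # then the loop threads a (counter, accumulator) state through each step
    def factorial(n):
        return bounded_loop(n, lambda s: (s[0] + 1, s[1] * (s[0] + 1)), (0, 1))[1]

    print(factorial(5))  # 120
    ```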

    The reason is that it makes a certain "intuitive" sense. All things must be true or false, right? While, strangely, you can't prove this fact from the other axioms, there is no (apparent) contradiction from assuming it as an axiom itself.

    However, computer scientists don't like it at all. And they have good reasons for it.

    Intuitionist logic relies exclusively on "proof by show me an example". Under Curry-Howard, proofs correspond to programs, and propositions correspond to the type of a program (think String, int, boolean, etc. from Java, but on super steroids). If I want to "prove" a proposition, I just need to exhibit a concrete program that has the correct type.

    The proof of a statement beginning with "for all" is a (computable) function. The proof of a statement beginning with "there exists" is an ordered pair. A proof of a conjunction "A and B" is an ordered pair of proofs: one of A and one of B. A proof of a disjunction "A or B" is either a proof of A or a proof of B -- BUT you can ask WHICH one you have! The Empty type has no proofs -- if you have a proof of Empty, you have derived a contradiction, and you can use a proof of Empty to prove any statement. A proof of "not A" is a function from A to Empty.
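    That dictionary can be sketched as plain data in Python. The `swap_and`/`swap_or` helpers below are hypothetical examples standing in for simple proofs, and since Python is untyped, nothing here is checked the way Coq would check it:

    ```python
    # Proofs as data: a proof of "A and B" is a pair; a proof of "A or B"
    # is tagged, so you can ask WHICH side you have; a proof of "not A"
    # is a function from proofs of A to Empty.
    class Empty:
        def __init__(self):
            raise RuntimeError("Empty has no proofs")

    def absurd(e):
        # from a proof of Empty, prove anything (ex falso quodlibet)
        raise AssertionError("unreachable")

    def swap_and(p):
        """(A and B) implies (B and A): reorder the pair."""
        a, b = p
        return (b, a)

    def swap_or(p):
        """(A or B) implies (B or A): inspect the tag and flip it."""
        tag, proof = p
        return ("right", proof) if tag == "left" else ("left", proof)

    print(swap_or(("left", 42)))  # ('right', 42)
    ```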

    So how would I prove the identity function id(x) = x is continuous?

    For all x and e, where e > 0, there exists a d such that d > 0 and for all y, whenever |x - y| < d, then |id(x) - id(y)| < e.

    The way you'd do it in Coq is define a function that takes three parameters: x, e, and a PROOF that e > 0. Then, you would return an ordered pair, with e as the first element (the witness for d; for the identity function, d = e works) and, as the second, a function taking y and a PROOF that |x - y| < d, which returns a proof that |id(x) - id(y)| < e.

    That might be a bit strange. I wouldn't expect you to fully understand it, but I assure you, it maps incredibly cleanly to a computer program. Beginning students get tripped up over this definition all the time in calculus. But most of the confusion comes from the fact that "for all x" - "where e > 0" - and "whenever |x - y| < d" are all just different ways to spell "function" in English.
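    Here is roughly that shape transcribed into Python rather than Coq (the assertions stand in for the proof obligations, which Python can only check at runtime, so this is an illustration of the structure, not a formal proof):

    ```python
    # Continuity of id(x) = x as a program: given x and e > 0, return the
    # witness d together with a checker for the "whenever |x - y| < d" part.
    def id_continuous_at(x, e):
        assert e > 0                   # stands in for the PROOF that e > 0
        d = e                          # for the identity function, d = e works
        def whenever(y):
            # the "|x - y| < d implies |id(x) - id(y)| < e" part
            if abs(x - y) < d:
                assert abs(x - y) < e  # id(x) - id(y) is just x - y
            return True                # implication holds (vacuously otherwise)
        return d, whenever

    d, check = id_continuous_at(3.0, 0.5)
    print(d, check(3.2))  # 0.5 True
    ```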

    Anyway --- Given that, you might wonder, what's wrong with the Law of Excluded Middle (LEM)?

    The problem is that in intuitionist logic, you always need to prove things by example, and LEM doesn't do that. It asserts "A or not A", but it:

    1) Doesn't tell you WHICH you get: did you get "A", or did you get "not A"?

    2) Even if it did, it doesn't actually give you a proof of "A", nor a proof of "not A".

    The Law of Excluded Middle can be added to languages like Coq or Agda as an axiom, and while the systems remain consistent, it breaks the computability of these systems. You end up with programs which don't "loop", but they *do* get "stuck". They become useless as programming languages.

    It's true that when you omit LEM, you lose some theorems. It's true, but only technically true. Most theorems that depend on LEM have slightly weaker variants that are still true. For example, the fundamental theorem of algebra (if I recall correctly) says that all non-constant complex polynomials have a point x where f(x) = 0. This theorem depends on LEM. Without LEM, though, you can still assert that for all such polynomials, and for all e > 0, there's a point x such that |f(x)| < e. Basically, you can't find exact roots, but you can prove there are "arbitrarily accurate approximations" of roots.

    Fun fact about LEM. LEM is very closely related to the notion of decidability in computer science. For proofs that rely on "A or not A", if A is a decidable proposition (meaning, there is an algorithm that can find a proof of "A" or a proof of "not A"), then LEM is actually redundant, and can be replaced by the proof that A is decidable. This lovely fact rescues many important theorems you might fear losing, as most of the mundane sets you work with in life are decidable.
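    For a concrete instance: primality is decidable, so a decision procedure can compute which side of "A or not A" holds and hand you the evidence (the function name and the tagging convention below are chosen for illustration):

    ```python
    # Decide "n is prime": in the "not A" case, a nontrivial factor is the
    # concrete refutation; in the "A" case, trial division up to sqrt(n)
    # has exhausted all possible factors.
    def decide_prime(n):
        if n < 2:
            return ("not A", None)
        for k in range(2, int(n ** 0.5) + 1):
            if n % k == 0:
                return ("not A", k)    # k divides n: a witness against primality
        return ("A", n)

    print(decide_prime(91))  # ('not A', 7)
    print(decide_prime(97))  # ('A', 97)
    ```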

    Philosophy is good for invoking a feeling of awe in your mind, but little else.

    If ZFC was found to be inconsistent, that would be interesting. Just as Naive Set Theory was found to have flaws, ZFC very well could, as far as anyone knows. It would be the work of logicians to figure out why it breaks down.

    Even in the worst case, if all logical systems had some paradox buried deep within them, they would still be interesting. You could ask yourself questions like "what is the shortest proof of a paradox"? "Under what conditions do paradoxes show up?" And "how does the complexity of the system influence the shortest proofs of paradox".

    Ultimately, what we can prove with logic is limited to our computational ability. Even with the most powerful computers on the planet and all of mankind's combined mental power, we are unable to figure out many problems that are "solved" in principle. Which side has a winning strategy in, say, Go? This question is well-formed. It is decidable, meaning we know an algorithm to tell us the answer. But yet, no one knows the answer.

    Even in the purest form of mathematics and logic, you shouldn't worry so much about "truth" or "reality". Practicality is king ;)

    Again, you dwell on aimless philosophy too much.

    If you've studied analysis, many "infinities" found in the study of real and complex numbers can easily be reduced to finite, computable concepts. An "infinite series of rationals" is just a function from the natural numbers to rationals. Limits of series use epsilon-delta definitions which can be reduced to computer programs under the Curry-Howard Correspondence. Assuming an intuitionist approach, we only care about "computable functions", which kindly removes many awkward infinities (and everything of interest suddenly becomes countable).
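    A small sketch of that reduction: the geometric series 1/2^n represented as a function from naturals to rationals, with a computable epsilon-N witness for its convergence to 2 (the helper names here are chosen for illustration; the tail bound uses the fact that the remainder after the n-th partial sum is exactly 1/2^n):

    ```python
    from fractions import Fraction
    import math

    def term(n):                              # the series as a function N -> Q
        return Fraction(1, 2 ** n)

    def partial_sum(n):                       # sum of terms 0..n
        return sum(term(k) for k in range(n + 1))

    def N_for(eps):
        """Return N with |2 - partial_sum(n)| < eps for all n >= N."""
        # the tail is exactly 1/2^n, so pick N with 2^N > 1/eps
        return max(0, math.ceil(math.log2(1 / eps)) + 1)

    eps = Fraction(1, 1000)
    N = N_for(eps)
    assert abs(2 - partial_sum(N)) < eps
    print(N, partial_sum(N))
    ```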

    Do not confuse the axioms of logic and math with the laws of physics. Physics works at many levels of abstraction. Classical physics has its own laws that are different from quantum physics that are different from relativity. And if you apply the wrong set of axioms to your problem, you get the wrong results.

    Furthermore, science tends to be inductive while math is deductive. In physics, you see the special cases and infer the general principle. Math outright forbids this. You can show P(x) is true for a quintillion different values of x, but you still haven't proved "for all x, P(x)".
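    A classic concrete illustration of why many confirming cases are not a proof: Euler's polynomial n^2 + n + 41 yields primes for n = 0 through 39, which might tempt an inductive generalization, yet it fails at n = 40:

    ```python
    # Trial-division primality test, enough for these small values.
    def is_prime(n):
        return n >= 2 and all(n % k for k in range(2, int(n ** 0.5) + 1))

    # 40 confirming cases in a row...
    assert all(is_prime(n * n + n + 41) for n in range(40))
    # ...and then the conjecture "n^2 + n + 41 is always prime" fails:
    print(is_prime(40 * 40 + 40 + 41))  # False: 1681 = 41 * 41
    ```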

    Also, it is good to point out, in a physical universe with different laws (say, a different mass for various particles, or a different coupling constant, or whatever), if life could exist in such a place, they would almost surely find themselves working with equivalently powerful logical systems. Intelligent life would eventually create a Turing-complete computer, with all the same software bugs we have.

    A beach ball only serves as an intuition for our understanding of a sphere. It's helpful for giving our monkey brain enough hints to get the job done. Our minds are evolved to work with pictures, not with equations.

    Banach-Tarski says something that's very true. If you have a sphere and you divide it properly, using a method that relies on a principle you have asserted (without justification), you can apply rotations to the parts and create two identical copies of the sphere.

    Some important things to note about this "paradox". It relies on the Axiom of Choice, which is non-intuitionist -- meaning you can't actually compute the answer to verify it. It is also talking about a set of points. There is no talk of mass, unlike our beach ball model, so there is no issue there. It's also nowhere stated that you have any intention of maintaining volume or surface area of any object. Remember that the area of a face (or the volume of a cell) is not determined by the number of points it contains (they are both infinite, of course, with equal cardinality). Again, in our beach ball model, the area (or volume) is determined by the number of "molecules" that make it up.

    Just remember, the theorem never said anything about a beach ball. It was talking about a sphere the whole time :)

    Again, this is where intuitionist logic is nice. When you write "for all natural numbers n, blah", you are actually talking about a computable function. While this may not have any "absolute" meaning in the philosopher's sense, it *does* mean you can write a computer program that can be run on any IBM-compatible system, running Windows, Mac OS, or Linux. You can actually DO something with it :) And that's nice.

    Just to give you a practical example (for certain values of practical), there is a ton of research going on into the realm of software verification. Basically, the goal is to find ways to prove theorems about your programs. Instead of relying on good engineering principles, testing, (and honestly, blind luck) to write correct programs, verified languages allow you to encode the properties you desire in the program into the type system. When you write your program, the compiler simply rejects any programs that don't guarantee that they conform to those properties.
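    Python can only approximate "encode the property in the type" with runtime checks rather than compile-time rejection, but the shape of the idea looks like this (a toy `SortedList` whose constructor enforces the invariant; all names here are hypothetical):

    ```python
    import heapq

    # The constructor rejects any value violating the invariant, so every
    # function receiving a SortedList may rely on sortedness without
    # re-checking it.
    class SortedList:
        def __init__(self, items):
            items = list(items)
            if any(a > b for a, b in zip(items, items[1:])):
                raise ValueError("not sorted")  # reject at construction time
            self._items = items

        def merge(self, other):
            # the invariant lets merge assume both inputs are sorted
            return SortedList(heapq.merge(self._items, other._items))

    m = SortedList([1, 3]).merge(SortedList([2, 4]))
    print(m._items)  # [1, 2, 3, 4]
    ```

    In a verified language the "not sorted" case would be a type error caught before the program ever runs, rather than an exception at runtime.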

    Anyway, intuitionist logic is cool, philosophy is mostly a waste, and you should check out Agda and Coq.

