
## Logic in Computation

Since I know this will get lost in the Liar's paradox thread, I wanted to separate it out. In that thread, Hurkyl writes:

"
 Of course, there are other variations on logic than classical logic. I mentioned computability theory: that the alternatives are {true, false, infinite loop} is rather important to the theory. e.g. it gives a way out to the specific construction of the Liar's paradox I mentioned earlier: to refresh: Q(R) := not R(R) P := Q(Q) The naive implementation of the predicate Q and the sentence P clearly results in P evaluating to "infinite loop".
and in post #47 he writes:

> **Quote by Hurkyl:** Third, I think it would be interesting to point out that in the logic of computation, there is no problem with there being a sentence P satisfying "P = P is not true". Here's an implementation in Python:
>
> ```python
> def P(): return not P()
> ```
>
> However, your computer probably cannot do this computation: an equivalent implementation that will not overflow your stack is:
>
> ```python
> def P():
>     while True:
>         pass
> ```
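To see concretely what "infinite loop" means here, the construction Q(R) := not R(R), P := Q(Q) from the first quote can be run directly. In CPython the unbounded recursion surfaces as a RecursionError (a stack overflow) rather than a silent loop; the try/except wrapper below is my addition for illustration, not part of the quoted posts.

```python
# Direct rendering of the quoted construction:
#   Q(R) := not R(R)
#   P    := Q(Q)

def Q(R):
    # Q applied to a predicate R negates R applied to itself.
    return not R(R)

def P():
    # The "liar" sentence: evaluating it recurses without end.
    return Q(Q)

try:
    P()
    result = "returned a value"
except RecursionError:
    # CPython reports the non-terminating evaluation as a stack overflow.
    result = "infinite loop (stack overflow)"

print(result)
```

So the naive evaluation of P lands in the third alternative, {infinite loop}, exactly as the quote claims.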

I would like to know where he is going with this line of thought. A related concept I've seen before is the concept of bottom:

"In type theory, a theory within mathematical logic, the bottom type is the type that has no values. It is also called the zero or empty type, and is sometimes denoted with falsum (⊥).
A function whose return type is bottom cannot return any value. In the Curry–Howard correspondence, the bottom type corresponds to falsity.
"
http://en.wikipedia.org/wiki/Bottom_type

So while bottom acts like a third truth value, it is really a type rather than a value: bottom is a subtype of every type, including the Booleans, so a bottom-typed expression can appear anywhere a truth value is expected. However, such an expression never actually produces True or False; normally an expression only ends up with type bottom when the computation raises an error or fails to terminate.
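Python's type hints give a concrete (if informal) sketch of this: `typing.NoReturn` plays the role of the bottom type, and static checkers such as mypy treat it as compatible with every other type. The function names below are my own illustrations.

```python
from typing import NoReturn

def loop_forever() -> NoReturn:
    # One way to inhabit "bottom": never return at all.
    while True:
        pass

def always_raises() -> NoReturn:
    # The other way: always raise instead of producing a value.
    raise RuntimeError("no value to return")

def liar() -> bool:
    # Type-checks, because NoReturn is a subtype of bool --
    # but this never actually yields True or False.
    return loop_forever()
```

This matches the point above: a bottom-typed expression fits wherever a Boolean is expected, yet it supplies no truth value, which is exactly the "infinite loop" alternative from Hurkyl's post.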

To summarize the intent of this thread: how do the laws of logic relate to computation, and what deviations from the laws of logic (for instance, the law of the excluded middle) might there be?

P.S. Also in the Liar's paradox thread the question of the Halting problem came up:
http://en.wikipedia.org/wiki/Halting_problem

One thing that might be of interest to you is a number-theoretic approach to computation. Gregory Chaitin developed a way to take code and convert it into a number theory problem. I see this kind of thinking as a critical way to systematically and analytically assess whether something is computable: if you have a number theory expression to solve for a particular collection of variables, and you can show that no solution exists, then you have indirectly (or directly) shown that it cannot be computed. We know that trying to compute 2x = 1 (mod 2) for any integer x is impossible, and such a statement shows that x is completely uncomputable.

This kind of formalism provides a nice way of showing uncomputability properties for a general algorithm, if it can be turned into a number theory expression (which could be a collection of constraints like you have in Chinese Remainder Theorem calculations, or a single constraint like a one-line congruence relation). If each computation were done in this manner and a standard mechanism were used to find the solutions, then we would know that the program will not halt if there is no solution to the Diophantine system, which is of course very useful.

Not only does the above approach standardize computation, it tells us about halting if we are able to show that the system has no solution (trying to use a computer to solve the Diophantine system with a search algorithm that only terminates when it finds a solution is the approach I would use). Most of this post just takes Gregory Chaitin's work and uses the idea of putting standardized algorithms in Diophantine form, where a standardized search algorithm that only returns when it finds a solution will halt if and only if a solution exists. Some food for thought.
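The halts-iff-a-solution-exists idea can be sketched with a toy brute-force search. The congruences below are illustrative examples of my own choosing, not Chaitin's actual construction: the search returns for 3x ≡ 6 (mod 9), which has a solution, but would run forever on 2x ≡ 1 (mod 2), since 2x is always even.

```python
def search_congruence(a, b, m):
    """Return the least x >= 0 with a*x == b (mod m).

    Never returns if no solution exists -- so this search
    halts if and only if the congruence is solvable.
    """
    x = 0
    while True:
        if (a * x) % m == b % m:
            return x
        x += 1

print(search_congruence(3, 6, 9))   # halts: finds x = 2
# search_congruence(2, 1, 2)        # would never halt: no solution exists
```

Deciding in advance whether such a search halts is exactly the question of whether the Diophantine system has a solution, which is why showing "no solution" settles the halting question for this standardized form.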

> **Quote by chiro:** This kind of formalism provides a nice way of showing uncomputability properties for a general algorithm, if it can be turned into a number theory expression (which could be a collection of constraints like you have in Chinese Remainder Theorem calculations, or a single constraint like a one-line congruence relation).
I bet this approach has high complexity, both because the Chinese Remainder Theorem applies to linear congruences and because computers can deal with quite large numbers.


> **Quote by John Creighto:** I bet this approach has high complexity, both because the Chinese Remainder Theorem applies to linear congruences and because computers can deal with quite large numbers.
It does, but it's beneficial to have it in this kind of form, because once the field develops to the point where one can check whether solutions exist, this becomes very useful from a computer science point of view for finding out whether algorithms ever halt.

Number theory in its current form is very young, but I imagine this kind of specification (or a similar one) will probably become the way people look at computation: not only does it standardize computation, it offers the opportunity, given further research, to find out whether an algorithm will terminate.