|Jul2-12, 05:44 PM||#1|
Logic in Computation
Since I know this will get lost in the Liar's paradox thread, I wanted to separate it out into its own thread.
In post 18 of the Liar's paradox thread Hurkyl writes:
I would like to know where he is going with this line of thought. A related concept I've seen before is the concept of bottom:
"In type theory, a theory within mathematical logic, the bottom type is the type that has no values. It is also called the zero or empty type, and is sometimes denoted with falsum (⊥).
A function whose return type is bottom cannot return any value. In the Curry–Howard correspondence, the bottom type corresponds to falsity."
So while bottom acts like a third truth value, it is actually a subtype of every type, including the Booleans. That means a value of type bottom could in principle be used wherever True or False is expected. However, bottom has no values, so a computation only "produces" bottom when there is an error (or non-termination) in the computation.
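To make the bottom type concrete, here is a minimal sketch in Python, where `typing.NoReturn` plays the role of bottom: a function annotated with it can never return a value, yet type checkers accept it in any position, exactly because bottom is a subtype of every type. (The function names here are illustrative, not from any particular library.)

```python
from typing import NoReturn

# NoReturn plays the role of the bottom type: a function annotated with
# it never produces a value -- it must raise or loop forever.
def crash(msg: str) -> NoReturn:
    raise RuntimeError(msg)

# Because bottom is a subtype of every type, a type checker accepts a
# call to crash() even where an int is expected.
def safe_head(xs: list) -> int:
    if xs:
        return xs[0]
    return crash("empty list")  # type-checks: NoReturn is compatible with int
```

So `safe_head([3, 1])` evaluates to 3, while `safe_head([])` never returns a value at all: the only way to "inhabit" bottom is for the computation to fail.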
To summarize the intent of this thread: How do the laws of logic relate to computation, and what deviations might there be from the laws of logic (for instance, the law of the excluded middle)?
P.S. In the Liar's paradox thread the question of the Halting problem also came up.
|Jul5-12, 12:21 AM||#2|
One thing that might be of interest to you is a number theory approach to computation.
Gregory Chaitin developed a way to take code and convert it to a number theoretic problem.
I see this kind of thinking as a critical way to systematically and analytically assess whether something is computable: if you can express a problem as a number-theoretic system to be solved for a particular collection of variables, and you can show that no solution exists, then you have shown (directly or indirectly) that it cannot be computed.
We know that solving 2x ≡ 1 (mod 2) is impossible for any integer x, since 2x is always even, and such a statement shows that x is completely uncomputable.
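A quick brute-force check over a finite range illustrates why the congruence above has no solution: 2x mod 2 is 0 for every integer x, so it can never equal 1.

```python
# 2x ≡ 1 (mod 2) asks for an even number that is odd.
# Searching a finite range of integers finds no solution.
solutions = [x for x in range(-1000, 1001) if (2 * x) % 2 == 1]
print(solutions)  # prints [] -- 2x mod 2 is 0 for every integer x
```

No finite search proves unsolvability on its own, of course; here the parity argument does the real work, and the search just confirms it.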
This kind of formalism provides a nice way of showing uncomputability properties for a general algorithm, if it can be turned into a number-theoretic expression (this could be a collection of constraints, like you have in Chinese Remainder Theorem calculations, or a single constraint like a one-line congruence relation).
If each computation were done in this manner and a standard mechanism were used to find the solutions, then we would know that the program will not halt whenever the Diophantine system has no solution, which is of course very useful.
Not only does the above approach standardize computation, it tells us about halting whenever we can show that the system has no solution (trying to use a computer to solve the Diophantine system with a search algorithm that only terminates when it finds a solution is the approach I would use).
Most of this post just takes Gregory Chaitin's work and applies the idea of putting standardized algorithms in Diophantine form: a standardized search algorithm that only returns when it finds a solution will halt if and only if a solution exists.
Some food for thought.
|Jul8-12, 10:32 PM||#3|
|Jul9-12, 01:22 AM||#4|
Logic in Computation
Number theory in its current form is still very young, but I imagine that this kind of specification (or something similar) will probably become the way people look at computation: not only does it standardize computation, it offers the opportunity, given further research, to determine whether an algorithm will terminate.