
Computing a Paradox

  1. May 5, 2015 #1
    Simple question: Can a computer be taught to recognize a paradox?

    This assumes the computer has no cognizant reasoning or "self awareness".

    It is a plot device used in a lot of science fiction, that the intrepid "flawed" human hero defeats the computer or robot with a simple paradox making it smoke and spark and such nonsense because it can't escape a paradox.

    Can a computer be taught or programmed to recognize and simply brush aside such a thing? Or does that actually require reasoning skills?
  3. May 6, 2015 #2

    Filip Larsen

    Gold Member

    Presenting logically contradictory data to a computer in the hope of making it go up in smoke is mostly a storytelling device that I would guess stems from the "golden years of AI" [1], when future intelligent machines were thought to be based on logical reasoning. There is no fundamental problem with software handling any kind of data, including logically contradictory data, but it of course requires that the software is designed to do so.

    At best you can compare it to present-day vulnerabilities in software that allow specially designed data to perform a denial-of-service attack [2] or similar exploits. However, those vulnerabilities are mostly due to subtle mistakes and flaws in the software, not to any fundamental inability to handle logically inconsistent data.

    [1] http://en.wikipedia.org/wiki/History_of_artificial_intelligence
    [2] http://en.wikipedia.org/wiki/Denial-of-service_attack
  4. May 6, 2015 #3
    I've actually thought about this one a bit. The key thing about paradoxes is that they are inconsistent with whatever assumptions you throw at them. Take some statement that's not a paradox: 2+2=4. Assume it is true, and no contradiction results; everything works. Assume it's false, and contradictions result. Now take a paradox, for instance the simple statement "this statement is a lie". If you assume the statement is true, it follows that it's false. If you assume it's false, it follows (via the excluded middle) that it's true. Logical contradictions under either assumption. You can also imagine statements which are consistent under either assumption, like geometry with or without the parallel postulate: logical tautologies rather than contradictions.

    But the point is, you can see all truth values as unary logical functions acting on their assumptions: True outputs True regardless of your assumption, and False outputs False regardless of your assumption. A paradox is equivalent to the NOT function, reversing your assumption either way, while a tautology is equivalent to the identity function, confirming your assumption either way in a consistent system. So by testing assumptions this way, a computer should be able to identify paradoxes.
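As a toy illustration of that idea (a sketch, not a theorem prover): model each statement as a function from an assumed truth value to the truth value that follows, then classify it by which assumptions survive as fixed points.

```python
def classify(statement):
    """Classify a statement given as a function: assumed truth -> derived truth.

    An assumption is consistent if the derived value matches it (a fixed point).
    """
    consistent = [v for v in (True, False) if statement(v) == v]
    if len(consistent) == 2:
        return "tautology-like (consistent either way)"
    if len(consistent) == 1:
        return "ordinary (one consistent value)"
    return "paradox (no consistent assignment)"

liar = lambda assumed: not assumed       # "this statement is a lie" acts like NOT
truth_teller = lambda assumed: assumed   # "this statement is true" acts like identity
plain_fact = lambda assumed: True        # "2 + 2 = 4" derives True under any assumption

print(classify(liar))          # paradox (no consistent assignment)
print(classify(truth_teller))  # tautology-like (consistent either way)
print(classify(plain_fact))    # ordinary (one consistent value)
```

Rather than smoking and sparking, the program simply reports that no consistent assignment exists and moves on.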
  5. May 6, 2015 #4


    Homework Helper

    You could think of a process for dealing with a paradox as a means to test possible results and eliminate the ones that don't work; if all possible results are eliminated, conclude that the problem statement is self-conflicting. So in the case of "this statement is a lie", the process eliminates both "true" and "false", and then determines that the statement is self-conflicting. The potential weakness here is failing to program the process to check for inconsistencies like self-conflicting statements.
  6. May 6, 2015 #5
    Yeah, good point, I guess that's why the sci-fi works: no matter what a computer can theoretically do, you can never rule out programmer error. But it's an interesting question, and I don't know the answer rigorously. Is there some way to hijack the system we're talking about and inject some set of statements that would mess it up? I don't know.
  7. May 6, 2015 #6


    Gold Member

    The idea of a computer smoking and sparking because it encounters a programming problem would be a fatally hackneyed element in your story writing, equivalent to rockets dangling from strings or hearing explosions in space.

    Be original.

    Does the computer have any control over its environment? What if it had to activate the sprinkler system to save itself from a fire, knowing it would get shorted out? (Even this is hackneyed, as well as flawed: no systems designer would set up a system like this.)

    Asimov was adept at creating logical conflicts that paralyzed his robots. For him, with his three laws, all he had to do was set up a circumstance where the robot thought a person would get hurt if it didn't act, and acting meant sacrificing itself.
  8. May 6, 2015 #7
    Thanks for the replies. The idea about testing inputs for flawed logic makes sense. Bad data can be filtered and ignored. I don't suppose a computer/robot could be programmed to recognize a paradox as such, since that seems to require abstract thinking and the ability to consider something illogical, but it seems sensible that it could just ignore what it doesn't understand and move on.

    Dave - I am not writing any scifi stories, but if I ever did I would never dream of adding the sparking and smoking computer thing. I just used that as an example of (admittedly old) sci fi elements that demonstrate the whole paradox vs computer thing. I've only seen smoke from a computer once, and it was due to a power supply failure.

    My curiosity on the topic is mostly in the question of where the boundaries are between what programming can do, and what it cannot.
  9. May 6, 2015 #8
    First Controller

    In response to your concern that our autonomous exploration spacecraft could be disabled by a logical paradox, I must assure you that the Autonomous Control System (ACS) of all Frob Empire Scout Ships consists of multiple flight control computers, none of which are identical, and that these computers vote on which actions to take. An enemy or anomalous event would therefore have to disable multiple computers of different designs and software.

    In addition, there exists a fallback mode that is activated when the system suffers unrecoverable failure of the ACS. Each ACS computer periodically sends a keep-alive signal to the Recovery Mode Controller. In the event of an ACS computer being lost or hung, it will no longer be able to transmit this signal. The Recovery Mode Controller is therefore able to detect when the ACS is no longer viable, and will attempt to reset and reboot ACS computers as required. If the ACS cannot be restored because fewer than 3 computers are available, it will automatically take action, using unalterable stored procedures, to return the vessel to the nearest starbase, or self-destruct if such an action proves impossible.
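The voting-plus-watchdog scheme in this (fictional) memo is a real fault-tolerance pattern. As a toy sketch (all names here are made up), a hung computer whose keep-alive lapsed is represented by `None` and excluded from the vote:

```python
from collections import Counter

def vote(actions):
    """Majority vote among redundant flight computers.

    A computer that hangs contributes None (its keep-alive lapsed) and is
    excluded; with fewer than 3 viable computers, fall back to recovery mode.
    """
    alive = [a for a in actions if a is not None]
    if len(alive) < 3:
        return "RECOVERY_MODE"
    winner, count = Counter(alive).most_common(1)[0]
    # Require a strict majority of the surviving computers to agree.
    return winner if count > len(alive) // 2 else "RECOVERY_MODE"

print(vote(["cruise", "cruise", "evade", "cruise"]))  # cruise
print(vote(["cruise", None, None]))                   # RECOVERY_MODE
```

A single computer fed a "paradox" can at worst hang or vote wrongly; the ensemble outvotes it or falls back.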
  10. May 8, 2015 #9
    Yeah, the main thing I see is it can recognize the damage a paradox does to a set of statements and discard it, which is pretty similar to what our brains do. Consider Curry's paradox, in the context of a conversation:

    "If what I'm saying is true, Obama is Elvis."
    "You're full of it, Obama is not Elvis."
    "I'm not saying he is, I'm just saying if what I'm saying is true, Obama is Elvis"
    "But what you're saying is not true"
    "I'm not even saying that what I'm saying is true, I'm saying IF what I'm saying is true, than Obama is Elvis. Can you admit that simple statement is true?"
    "Okay, big IF there, but sure."
    "So you admit that statement is true, and therefore that Obama is Elvis!"

    Would that argument ever convince you? No, because it contradicts a load of other things you know to be true about the world; it's not consistent with other facts. A computer system can always do the same: see when the logical consequences of a statement break a bunch of other things, and decide therefore to discard the statement.
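A minimal sketch of that discard-on-inconsistency idea, with a made-up knowledge base of propositional facts (the proposition names and the `accept` helper are hypothetical, purely for illustration):

```python
def accept(kb, claim, consequences):
    """Accept `claim` only if none of its consequences contradict the KB.

    `kb` maps proposition -> believed truth value; `consequences` lists the
    (proposition, value) pairs that accepting the claim would force.
    """
    for prop, value in consequences:
        if prop in kb and kb[prop] != value:
            return False  # a consequence clashes with a known fact: discard the claim
    kb.update(dict(consequences))
    kb[claim] = True
    return True

kb = {"Obama is Elvis": False}
# A Curry-style claim: accepting it would entail "Obama is Elvis" being True.
ok = accept(kb, "If this sentence is true, Obama is Elvis",
            [("Obama is Elvis", True)])
print(ok)  # False: the claim is quietly discarded, nothing smokes or sparks
```

This is essentially belief revision in miniature: the system privileges its existing facts over a new statement whose consequences break them.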
  11. May 9, 2015 #10


    Science Advisor

    ... but mathematicians can! A paradox (self-reference in the style of the liar) is the basis for Gödel's incompleteness theorem.
  12. May 9, 2015 #11
    If your mind can escape a paradox, a computer can too: a computer can simulate a physical system, so it can simulate a brain, and thereby escape the paradox.
  13. May 10, 2015 #12
    That sort of creeps into the central point of why I am asking my question. If a computer can be built that can simulate a physical brain, does it then cease to be limited by its programming and is now sentient? How far can programming actually go? Can it really simulate thinking? Can a machine be programmed to consider illogical things? Can it be programmed to "think outside of the box"? Is a simulation of the human mind simply a matter of processing power, or is there a hard limit to what a machine could ever be programmed to do?
  14. May 10, 2015 #13


    Science Advisor

    A collection of good questions. My answer is: It depends on the system architect/chief programmer.
  15. May 10, 2015 #14
    We are very, very, very far from doing something intelligent with a computer.
    Trust me: computers, programming tools and processing power are not ready.
    I propose we wait a century and talk about it again.
  16. May 10, 2015 #15
    According to Wikipedia, a fly has 10^5 neurons and 10^7 synapses. AI is not my field but computing is, and I know we can build that, and have built larger simulations. The fact that it doesn't work is more a lack of understanding than a lack of computing power.

    On the original topic, we have formal logic systems that can correctly identify a paradox. If an AI is a computer program, even if it can't program itself, then there's nothing stopping it using a second computer to run existing software. More likely it'll just be given a bigger computer with the option to use part of it to run whatever existing programs it needs.
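One such formal check, feasible for small formulas, is a brute-force truth table: a formula is a tautology if every assignment satisfies it, a contradiction if none does. A sketch:

```python
from itertools import product

def classify_formula(formula, variables):
    """Brute-force truth-table classification of a propositional formula.

    `formula` is a function taking one bool per variable; we collect its
    value over all 2^n assignments.
    """
    results = {formula(*values)
               for values in product([False, True], repeat=len(variables))}
    if results == {True}:
        return "tautology"
    if results == {False}:
        return "contradiction"
    return "contingent"

print(classify_formula(lambda p: p or not p, ["p"]))       # tautology
print(classify_formula(lambda p: p and not p, ["p"]))      # contradiction
print(classify_formula(lambda p, q: p and q, ["p", "q"]))  # contingent
```

Real theorem provers use far smarter methods (resolution, SAT solving), but even this naive check never hangs on a contradiction; it just labels it.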
  17. May 10, 2015 #16
    That's a really good point. Another open question is: if someone did find the actual complete simulation of brain activity, would it have a computational reduction that could dramatically speed it up, just a little too complex to have been arrived at by evolution? It's totally possible, and that means incredible AI could exist on current hardware.
  18. May 10, 2015 #17
    Computer intelligence could arise by evolution: a program compiles its own source code, you introduce a mutation factor, and with very good luck it evolves toward intelligence. This process has nothing comparable to the number of neurons; the probability, and the bootstrapping of the optimisation, need extreme computing power, not compatible with our computers. We are not ready for emergent life on a device.
  19. May 10, 2015 #18


    Gold Member

    http://www.artificialbrains.com/
    BlueBrain Project:
    As of November 2011 the largest simulations were of mesocircuits containing around 1 million neurons and 1 billion synapses. A full-scale human brain simulation of 86 billion neurons is targeted for 2023.

    Might be a while before it is real time.

    http://www.gizmag.com/openworm-nematode-roundworm-simulation-artificial-life/30296/
    Last edited by a moderator: May 7, 2017
  20. May 10, 2015 #19


    Gold Member

    It's pretty difficult to define the line between non-sentience and sentience, but there are practical ways to answer your question.

    Any program that is complex enough is able to make decisions such as "I choose not to follow this illogical line of thought." Or more simply: "I choose not to obey you."
  21. May 11, 2015 #20


    Gold Member

    As we delve into the philosophical... choosing not to obey can be a simple reasoning process, like "if it is Monday, ignore input saying to turn off".

    Saying "I choose", accent on "I", now that is significant.

    I think a significant milestone would be awareness of being aware.

    Going back to the OP, I initially thought "hey, that's simple: just keep track of logical conclusions, and if you come back to a previous one, take a different path or conclude it is circular or contradictory." But how to program that in a general sense is difficult to visualize.
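That "keep track of conclusions and stop when you revisit one" idea is ordinary cycle detection. A toy sketch (the `step` function here is a stand-in for whatever inference rule is in play; no claim this generalizes to real reasoning):

```python
def derive(start, step, limit=1000):
    """Follow a chain of conclusions, stopping if a conclusion repeats.

    `step` maps one conclusion to the next; revisiting a conclusion signals
    a circular (possibly paradoxical) argument rather than progress.
    """
    seen = set()
    current = start
    for _ in range(limit):
        if current in seen:
            return ("circular", current)
        seen.add(current)
        current = step(current)
    return ("ok", current)

# The liar sentence flips between two conclusions forever:
liar_step = {"it is true": "it is false", "it is false": "it is true"}
print(derive("it is true", liar_step.get))  # ('circular', 'it is true')
```

The hard part, as noted, is representing "conclusions" generally enough that two logically equivalent ones count as the same visit.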

    This page http://www.philosophybasics.com/branch_logic.html has a section on paradoxes.

    " It can be argued that there are four classes of paradoxes:

    • Veridical Paradoxes: which produce a result that appears absurd but can be demonstrated to be nevertheless true.
    • Falsidical Paradoxes: which produce a result that not only appears false but actually is false.
    • Antinomies: which are neither veridical nor falsidical, but produce a self-contradictory result by properly applying accepted ways of reasoning.
    • Dialetheias: which produce a result which is both true and false at the same time and in the same sense.
    Paradoxes often result from self-reference (where a sentence or formula refers to itself directly), infinity (an argument which generates an infinite regress, or infinite series of supporting references), circular definitions (in which a proposition to be proved is assumed implicitly or explicitly in one of the premises), vagueness (where there is no clear fact of the matter whether a concept applies or not), false or misleading statements (assertions that are either willfully or unknowingly untrue or misleading), and half-truths (deceptive statements that include some element of truth)."
  22. May 11, 2015 #21
    Is it though? If a computer comes to a decision through a reviewable deterministic process, that can't be called a choice. But if the process leading to the decision is unknown, then it can be called a choice. If we logically conclude that we must have the salad, not the doughnut, to stay within our diet goals, we say "I have no choice but to take the salad". If there are no such goals but a random whim makes us pick the salad, we say we chose it. Is it so different for a computer? I think as you start bringing in things that are mysterious in their nature, like the collapse of a qubit through observation, to drive computational processes, the language of choice becomes inevitable.

    Quantum computers will choose outcomes, because no observable deterministic process dictates their actions, and there is no other language to describe that.
  23. May 11, 2015 #22


    Gold Member

    This is a bit off-topic.

    I think maybe you misunderstand quantum computing. Have you ever programmed one? They are perfectly deterministic.

    BTW, "the collapse of a qubit through observation" is pretty old school (try Copenhagen)
  24. May 12, 2015 #23
    Copenhagen hasn't yet been invalidated.
  25. May 12, 2015 #24


    Gold Member

    I never suggested Copenhagen might be invalid, just that, conceptually, "collapse", as a term, is a bit old school compared to decoherence.

    I wouldn't be so bold as to say that ANY interpretation is invalid. Just in-favor, out-of-favor, etc. Subjective opinions.
  26. May 12, 2015 #25
    Huh? From the article on quantum computing:
    "Quantum algorithms are often non-deterministic, in that they provide the correct solution only with a certain known probability."
    Qubits can be prepared in states where they collapse one way or the other with high probability, but they can also be prepared in states where they collapse with any probability you like. Quantum computing is fundamentally probabilistic, and deeply tied to randomness. One of my favorite reads on it:
    Quantum computing includes gates whose outputs are completely uncorrelated with their inputs: pure randomness. Chaining two of these gates makes a classical NOT gate.
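The gate described is usually called the square root of NOT. A quick numeric check, using plain Python complex arithmetic rather than any quantum library, that chaining two of them yields a classical NOT:

```python
# The "square root of NOT" gate as a 2x2 complex matrix.
SQRT_NOT = [[(1 + 1j) / 2, (1 - 1j) / 2],
            [(1 - 1j) / 2, (1 + 1j) / 2]]

def matmul(a, b):
    """Multiply two 2x2 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Applied once to |0>, the amplitudes are (1+i)/2 and (1-i)/2, each with
# squared magnitude 1/2: a measured output uncorrelated with the input.
# Applied twice, the randomness cancels into a deterministic bit flip:
X = matmul(SQRT_NOT, SQRT_NOT)
print(X)  # [[0j, (1+0j)], [(1+0j), 0j]], i.e. the classical NOT (Pauli-X) gate
```

So the gate-level evolution is perfectly deterministic; the randomness only enters at measurement, which is why both posts above have a point.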