
Searle's Chinese Room revisited

  1. Nov 10, 2013 #1
    This concerns Searle's notorious "Chinese Room" argument (summary: a computer cannot be conscious because it operates on syntax alone).
    http://web.archive.org/web/20071210043312/http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html
    Although I find Searle's arguments flawed (i.e., wrong), I noticed that Wikipedia says Searle's argument applies only to digital computers. Does this mean only deterministic computers? After all, Searle's analogy is this: a man in a closed room receives input in the form of Chinese characters and, following a set of instructions, produces output on cards bearing Chinese characters, without actually understanding Chinese. If we extend the analogy by also giving the man some dice and/or a coin, couldn't we extend his argument to a quantum computer or some other non-deterministic Turing machine? (Don't tell me why Searle's argument is incorrect; I know that. I am just interested in the domain of his argument.) Thanks for any insight.
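    The deterministic-versus-randomized distinction being asked about can be made concrete with a toy sketch (the rule tables and function names below are invented for illustration; they are not from Searle's paper). In both versions the operator manipulates symbols without understanding them; the only difference is whether the rule book admits one rule per input or several chosen by a die roll:

    ```python
    import random

    # Toy "Chinese Room": the operator follows a rule book mapping
    # input symbols to output symbols, understanding neither.
    RULE_BOOK = {"你好": "你好吗", "谢谢": "不客气"}

    def deterministic_room(symbol):
        # One fixed rule per input: a deterministic machine.
        return RULE_BOOK[symbol]

    # Hand the operator a die: several candidate rules per input,
    # one picked at random -- a randomized (non-deterministic) machine.
    RANDOM_RULE_BOOK = {"你好": ["你好吗", "幸会"], "谢谢": ["不客气", "别客气"]}

    def randomized_room(symbol):
        return random.choice(RANDOM_RULE_BOOK[symbol])
    ```

    The randomized room is still pure symbol manipulation, which is why the question of whether Searle's argument covers it is a question about the argument's intended domain rather than about its soundness.
    
    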
     
  3. Nov 10, 2013 #2

    Office_Shredder

    Staff Emeritus
    Science Advisor
    Gold Member

    I think the point is that he is claiming the set of rules under which digital computers operate is not sufficient to generate true understanding. He can't argue that no machine whatsoever can generate understanding, because nobody knows what kinds of rules an arbitrary future machine might operate under. I could be incorrect about this, though.
     
  4. Nov 10, 2013 #3
    Thanks, Office_Shredder, but Searle is saying that a program per se cannot give rise to consciousness, no matter what the program is. For example, he wrote
    "computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modelled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases." Consciousness and Language, p. 16
    However, I am not interested in debating whether he is right or wrong. (I agree with you that he is wrong, but that is beside the point.) My question is merely whether his argument applies only to deterministic computers, or also to non-deterministic ones. (Compare "The moon is made of green cheese; ergo a rock is made of green cheese": the domain of "rock" there is only rocks on the moon.)
     
  5. Nov 10, 2013 #4

    phyzguy

    Science Advisor

    Along these lines, why can't you extend Searle's argument to neurons in your brain? Since clearly no single neuron in your brain understands Chinese, you can use Searle's argument to argue that your brain can't understand Chinese either, which is clearly false. The argument ignores the possibility of emergent properties. Or is this not the kind of thing you wanted to discuss?
     
  6. Nov 10, 2013 #5
    Thanks, phyzguy, but you are also missing my point. Again, I agree that there are a number of counter-arguments to Searle's arguments, and I know them well, and agree with a lot of them, but again I am not interested in discussing these. I am only interested in the argument's domain: deterministic computers only, or also non-deterministic ones?
     
  7. Nov 10, 2013 #6

    Evo


    Staff: Mentor

    Sorry, but philosophy is not allowed here.
     