Searle's Chinese Room revisited

  • Thread starter nomadreid
In summary, the conversation revolves around Searle's "Chinese Room" argument, which claims that a computer cannot be conscious because it operates purely on syntax. Many regard the argument as flawed, but the question raised here is whether it applies only to deterministic computers or can also be extended to non-deterministic ones. The main focus of the conversation is not to debate the validity of Searle's argument, but to pin down its domain, and, along the way, whether the same reasoning could be applied to other entities such as individual neurons in the brain.
  • #1
nomadreid
This concerns Searle's notorious "Chinese Room" argument (summary: a computer cannot be conscious because it operates on syntax alone).
http://web.archive.org/web/20071210043312/http://members.aol.com/NeoNoetics/MindsBrainsPrograms.html
Although I find Searle's argument flawed (i.e., wrong), I noticed that the Wikipedia article says the argument applies only to digital computers. Does this mean only deterministic computers? After all, if we extend Searle's analogy (a man in a closed room receives Chinese characters as input and, following a set of instructions, hands out cards with Chinese characters as output, without actually understanding Chinese) by giving the man a stack of cards, some dice, and/or a coin, we could also extend the argument to a quantum computer or some other non-deterministic machine, no? (Don't tell me why Searle's argument is incorrect; I know that. I am only interested in the domain of his argument.) Thanks for any insight.
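To make what I mean concrete, here is a toy sketch of my own (nothing from Searle's paper; the rules and phrases are invented purely for illustration): the room as a plain lookup table, and the "dice and coin" version as the same table with a random choice among several scripted replies.

```python
import random

# Toy "rulebook": purely syntactic input -> output pairs; the phrases are
# invented for illustration and carry no meaning to the program.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}

def deterministic_room(message: str) -> str:
    """The classic room: every input has exactly one scripted reply."""
    return RULEBOOK.get(message, "对不起，我不明白。")

# The "dice and coin" extension: the same kind of rulebook, but with several
# scripted replies per input and a random choice among them.
RANDOMIZED_RULEBOOK = {
    "你好吗？": ["我很好，谢谢。", "还不错。"],
    "今天天气怎么样？": ["今天天气很好。", "好像要下雨。"],
}

def randomized_room(message: str) -> str:
    """A non-deterministic room: the coin/dice decide which scripted reply goes out."""
    replies = RANDOMIZED_RULEBOOK.get(message, ["对不起，我不明白。"])
    return random.choice(replies)

print(deterministic_room("你好吗？"))  # always the same reply
print(randomized_room("你好吗？"))     # one of the scripted replies, chosen at random
```

Neither version attaches any meaning to the symbols it shuffles; the second is just the first with a randomized choice added. That is exactly the case I am asking about: is Searle's "digital computer" meant to cover it?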
 
  • #2
I think the point is that he is claiming the set of rules under which digital computers operate is not sufficient to generate true understanding. He can't argue that no machine whatsoever can generate understanding, because nobody knows what kinds of rules an arbitrary future machine might operate under. I could be wrong about this, though.
 
  • #3
Thanks, Office_Shredder, but Searle is saying that a program per se cannot produce consciousness, no matter what the program is. For example, he wrote:
"computational models of consciousness are not sufficient by themselves for consciousness. The computational model for consciousness stands to consciousness in the same way the computational model of anything stands to the domain being modeled. Nobody supposes that the computational model of rainstorms in London will leave us all wet. But they make the mistake of supposing that the computational model of consciousness is somehow conscious. It is the same mistake in both cases." Consciousness and Language, p. 16
However, I am not interested in debating whether he is right or wrong. (I agree with you that he is wrong, but that is beside the point.) My question is merely whether his argument applies only to deterministic computers, or also to non-deterministic ones. ("The moon is made of green cheese; ergo a rock is made of green cheese.": the domain of "rock" here is only rocks on the moon.)
 
  • #4
Along these lines, why can't you extend Searle's argument to neurons in your brain? Since clearly no single neuron in your brain understands Chinese, you can use Searle's argument to argue that your brain can't understand Chinese either, which is clearly false. The argument ignores the possibility of emergent properties. Or is this not the kind of thing you wanted to discuss?
 
  • #5
Thanks, phyzguy, but you are also missing my point. Again, I agree that there are a number of counter-arguments to Searle's arguments, and I know them well, and agree with a lot of them, but again I am not interested in discussing these. I am only interested in the argument's domain: deterministic computers only, or also non-deterministic ones?
 
  • #6
Sorry, but philosophy is not allowed here.
 

Related FAQ

1. What is the Chinese Room thought experiment and who proposed it?

The Chinese Room is a thought experiment proposed by the philosopher John Searle in 1980 to challenge "strong AI", the claim that a suitably programmed computer literally understands language rather than merely simulating understanding.

2. How does the Chinese Room thought experiment work?

In the thought experiment, a person who does not know Chinese is placed in a room with a rulebook, written in a language the person does understand, for manipulating Chinese characters. Chinese questions are passed in, the person follows the rules to assemble appropriate Chinese replies and passes them back out, all without ever understanding what the symbols mean. To observers outside, the room appears to understand Chinese. This is meant to show that a program can simulate understanding by symbol manipulation alone, without actually understanding.
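As a rough illustration of "simulating understanding by symbol manipulation alone" (a toy sketch; the rules and phrases are invented, not taken from Searle's paper), the room can be pictured as a program whose rules match only the shape of the incoming symbols:

```python
# The rulebook: each rule matches the *shape* of the incoming symbols (here, how
# the slip ends) and names the symbols to hand back. Nothing anywhere stores
# what a symbol means; the examples are invented for illustration.
RULES = [
    ("吗？", "是的。"),      # slips ending with these marks get this reply
    ("什么？", "我不知道。"),
]

def operate_room(slip: str) -> str:
    """Apply the first matching rule; no step ever consults meaning."""
    for ending, reply in RULES:
        if slip.endswith(ending):
            return reply
    return "？"  # no rule matched: hand back a shrug

print(operate_room("这是什么？"))  # -> 我不知道。
```

To someone outside, the replies can look perfectly sensible, yet the program only checks which marks a slip ends with.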

3. What are the main criticisms of the Chinese Room thought experiment?

One major criticism, the "systems reply", holds that although the person in the room does not understand Chinese, the system as a whole (person, rulebook, and room together) might; relatedly, the argument is said to ignore emergent properties, much as no single neuron in a brain understands Chinese even though the brain does. Another line of criticism is that the thought experiment does not address whether a program could develop genuine understanding through learning and interaction with the world.

4. How has Searle responded to these criticisms?

Searle has clarified that the argument targets "strong AI", the claim that running the right program is by itself sufficient for understanding; it is not meant to show that no machine could ever understand language. He argues that consciousness and understanding cannot be reduced to purely computational processes.

5. Has the Chinese Room thought experiment been resolved?

No, the debate surrounding the Chinese Room thought experiment and the nature of artificial intelligence continues. Some argue that recent advancements in artificial intelligence, such as natural language processing and deep learning, have challenged Searle's original argument. Others maintain that true understanding and consciousness cannot be achieved through purely computational processes.
