Searle's Chinese Room Argument Against AI

In summary, there has been an ongoing debate on the likelihood of achieving A.I. in the general philosophy forum. The main opponent of A.I. in this argument relies on The Chinese Room Argument of John Searle, which claims that computers can only understand syntax but not semantics. However, this argument fails to consider that semantic understanding is developed through experience, and even a human mind must learn a language before understanding its semantics. Additionally, the argument does not account for the possibility of a sophisticated machine being able to develop semantic understanding through syntax. Overall, the argument against A.I. based on the Chinese Room is not solid and does not fully consider the capabilities of artificial intelligence.
  • #1
TheStatutoryApe
There has been an ongoing debate on the likelihood of achieving A.I. in the general philosophy forum. The major opponent of A.I. in that argument has been relying heavily on the Chinese Room Argument of John Searle. I won't get into the rest of the person's argument since it deals with the "soul" and "free will", which are better left to the philosophy forums.

At any rate, as I understand it, the basis for the Chinese Room Argument is that a computer can only "understand" syntax, which is not a meaningful understanding. Searle seems to contend that semantics (language meaning, as opposed to syntax: language structure or patterns) is necessary for meaningful understanding. Here is where I have found what I believe to be the problem with his argument.

To suppose that semantic understanding is a baseline for cognition would seem to me to propose a platonic essence to language, accessible only to a cognitive mind. This would be a dualistic argument, yet Searle apparently states that his argument does not come from a dualistic perspective.

The way I see it, semantic understanding is developed by the human mind through experience. Obviously, even by the logic of Searle's thought experiment, not even a human mind can understand a language unless it has learned that language. For the purpose of language (at the very least), and hence semantics, we would then have to consider the human mind a blank sheet at the outset. In that case the human mind has nothing at its disposal with which to decipher information but syntax, correct?

If I'm right in my assumptions, then this is a major flaw in Searle's argument. If the development of semantic understanding is really grounded in syntax, then the notion that computers are limited to syntactic understanding would be false, so long as an A.I. can develop a semantic understanding through syntax.

Is my criticism of this argument solid? Am I mistaken in believing that the root of semantic understanding is syntactic?
 
  • #2
I think I somewhat agree with your reasoning. Searle is basically claiming that being able to answer questions is different from “understanding” them. I do not understand why that distinction would argue that machines could not do the “understanding” part. I view the things happening inside the “Chinese Room” as an analogy of what happens inside the brain when it is answering a question. This may, however, not be sufficient to get those “feelings of understanding”. The brain does more things, like associating the incoming words with previous experiences, and those things may contribute to said feelings, but a sophisticated machine might do those things too.

The “Chinese Room” story may be useful as an argument against the “Turing Test”, but I do not see how it argues against the possibility of artificial intelligence.
 
  • #3
TheStatutoryApe said:
There has been an ongoing debate on the likelihood of achieving A.I. in the general philosophy forum. The major opponent of A.I. in that argument has been relying heavily on the Chinese Room Argument of John Searle. I won't get into the rest of the person's argument since it deals with the "soul" and "free will", which are better left to the philosophy forums.
Yes, I've been following that thread loosely (haven't had time lately to go in-depth) and have been meaning to start a thread here about Searle's Chinese Room. I'm glad you've done it for me. :smile: For the record, I believe Searle locates the distinction between AI and human intelligence for which he argues not in the soul, but in some (unelaborated) property of biological systems.

TheStatutoryApe said:
To suppose that semantic understanding is a baseline for cognition would seem to me to propose a platonic essence to language, accessible only to a cognitive mind. This would be a dualistic argument, yet Searle apparently states that his argument does not come from a dualistic perspective.
Could you rephrase this? I don't quite know what you mean by "baseline for cognition," and though I haven't read Searle in a while, I don't know that he claims AI cannot carry out cognition of some sort.

TheStatutoryApe said:
The way I see it, semantic understanding is developed by the human mind through experience. Obviously, even by the logic of Searle's thought experiment, not even a human mind can understand a language unless it has learned that language. For the purpose of language (at the very least), and hence semantics, we would then have to consider the human mind a blank sheet at the outset. In that case the human mind has nothing at its disposal with which to decipher information but syntax, correct?
In what sense do you mean the human mind is a blank sheet at the outset? There is of course a lot of structure and function built into the human brain even at birth.

In any case, I'm not sure this objection works for a couple of reasons.

First, it's not obvious that even a newborn lacks some in-built semantic understanding; i.e., it's not clear that the sensory information a newborn processes isn't meaningful, in some sense, for the infant. In fact, a good argument could probably be made that the infant does indeed experience the world in some kind of meaningful way.

Second, it seems to me that experience in and interaction with the world could be imported into the Chinese Room (CR) scenario without affecting the core argument. All we have to do is suppose that some of the symbols sent into the occupant of the CR correspond to external signals (in the same sort of way that neural signals coming from the eyes correspond to visual information, from the ears aural information, and so on) and that some of the symbols the CR occupant sends out correspond to something like motor commands. Then we have the occupant shuffling symbols in the service of some agent that is interacting with and experiencing the external world, but of course the occupant is none the wiser to the meaning of the symbols and the symbol shuffling. In a similar way, we could suppose that some of the symbol shuffling the occupant does corresponds to some learning algorithm by which the artificial agent he controls actually learns from worldly experience. So although intuitively we seem to come to understand things by means of experience and learning, we can import those phenomena into the CR without denting the main objection.
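To make this a bit more concrete, here is a minimal sketch (all the token names and rules are invented for illustration, not anything Searle specifies) of the occupant applying table-lookup rules to opaque tokens, some of which happen to come from sensors and some of which happen to drive motors; the rule-following itself never refers to what any token means.

```python
# A minimal sketch of the "extended" Chinese Room: the occupant applies
# lookup rules to opaque tokens. Some tokens happen to originate from
# sensors and some happen to drive motors, but the rules are applied
# without any reference to what the tokens stand for.

RULE_BOOK = {
    # (current internal state, incoming token) -> (next state, outgoing token)
    ("idle", "token_17"): ("tracking", "token_42"),
    ("tracking", "token_03"): ("idle", "token_99"),
}

def occupant_step(state, incoming_token):
    """Pure symbol shuffling: a table lookup, nothing else."""
    return RULE_BOOK.get((state, incoming_token), (state, "token_00"))

def run_room(sensor_stream):
    state = "idle"
    motor_commands = []
    for token in sensor_stream:          # tokens that encode, say, camera signals
        state, out = occupant_step(state, token)
        motor_commands.append(out)       # tokens that happen to drive actuators
    return motor_commands

# The occupant (occupant_step) has no access to what "token_17" stands for;
# any "experience" or "learning" lives in how the table was built, not in him.
print(run_room(["token_17", "token_03"]))
```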

-----

The Chinese Room argument is frustrating, sometimes maddening. The argument is simple and compelling (almost deceptively so) but really touches on deep and complex issues on further analysis. Many (such as you and I) have a sense that there is something wrong with the argument, but nonetheless it is hard to refute. If nothing else, the argument is useful in drawing attention to certain concepts that are still rather poorly understood or at least poorly defined or conceived, even if our everyday use of these concepts seems unproblematic.

I think it's important to ask what exactly we mean when we say we "understand" something. What is it to understand? As a rough first approximation, I would say that understanding is a kind of functional coherence existing among some set of perceptual, conceptual, and behavioral processes within some cognitive agent, such that this mesh of processes enables the agent to navigate through the physical and conceptual environment successfully, even in novel situations. Essentially, to understand X might just be to have a good cognitive model of X, a model that corresponds well to salient features of the actual X in question.

On this interpretation, I don't think we can say that the CR occupant understands the symbols being sent into him, or what the meaning is of the symbol shuffling he does, or what is the meaning of the symbols he sends out. However, I do think it is plausible to say that the CR as a whole understands its internal symbol shuffling processes in some sense, to the extent that those symbols and symbol manipulation correspond to an accurate and fecund model of whatever environment the CR finds itself in.

So one trick here might be to question precisely what system it is that we expect to be properly attributed understanding. The CR occupant does not seem well disposed to be the system doing the understanding for a couple of reasons. To me, the main reason seems to be that all the occupant does is enforce symbol shuffling-- if we were to draw an analogy from this scenario over to what happens in the human mind/brain, the CR occupant would be kind of like the laws of physics, ensuring that all the symbols (neural signals) get shuffled (processed/propagated) in the correct manner. But of course we don't say that it is the laws of physics or some such that understands, we say it is us who understand. Who are "we"? We are the entire system in question that takes in, processes, and sends out signals. Thus "we" seem to correspond better to the CR as a whole than to the CR occupant.

But how plausible is it that something like a CR, taken as a whole, could be said to understand something? It seems entirely plausible from an external view-- after all, we do the same sort of things with animals. We observe the behavior of the system in question in response to various kinds of stimuli in various kinds of environments, and if this behavior indicates that the animal has a good cognitive model of the situation it is in, we say the animal understands its situation. So external, third person attribution of understanding seems unproblematic here. But one still might object that it's just not plausible that the CR as a whole really experiences itself as an understanding agent, and this gets into tricky questions about consciousness and first person perspectives as opposed to third person perspectives. I won't get much further into that other than to say that I think understanding is primarily a functional concept (rather than a qualitative/experiential one), and so concerns such as these should be regarded perhaps as an interesting sidebar but not primarily constitutive of the main issue at hand.
 
  • #4
gerben said:
I think I somewhat agree with your reasoning. Searle is basically claiming that being able to answer questions is different from “understanding” them. I do not understand why that distinction would argue that machines could not do the “understanding” part. I view the things happening inside the “Chinese Room” as an analogy of what happens inside the brain when it is answering a question. This may, however, not be sufficient to get those “feelings of understanding”. The brain does more things, like associating the incoming words with previous experiences, and those things may contribute to said feelings, but a sophisticated machine might do those things too.

The “Chinese Room” story may be useful as an argument against the “Turing Test”, but I do not see how it argues against the possibility of artificial intelligence.
When I first read the argument, as related by the person I was discussing this with in GP, I thought that he might have misapplied an argument specifically geared toward the "Turing Test". I was surprised to see that I was wrong. It would appear, though, that Searle backpedaled to some degree later on when he said that his argument was directed solely at the idea that Strong A.I. is a matter for software only. He emphasized this, it seems, in 1990, ten years after the original argument was formulated. If I'm not mistaken, neural nets and the like became more popular around that time as well.

The other facet of his argument he refers to as intrinsic intentionality. This idea seems a bit vague, something akin to "free will" I'd assume. This part of the argument would collapse with the rest, though, since it seems Searle believes that intentionality equates to meaningful output, which is the product of true comprehension in the semantic sense.
 
  • #5
As a reply to hypnagogue’s post:
I think the Chinese room argument mainly shows that one can think of a system that can do (at least) one of the things that we can do, while it obviously cannot do other things that we can. So why would it be strange that there are major differences between that system and us? (that system only emulates a small part of our abilities)
 
  • #6
Hypnagogue said:
In what sense do you mean the human mind is a blank sheet at the outset? There is of course a lot of structure and function built into the human brain even at birth.
I restricted my thoughts on this to the understanding of linguistics, since it is obvious that no person can understand a language if they have never learned it. I could not say that there is no understanding whatsoever in an infant, but all of the structure that you (and I as well) would attribute to an infant is easily described as part of the operating system. The homunculus in the CR has an operating system as well but apparently lacks understanding. The OS of the homunculus and that of the infant would both rate at about the same level of cognition as far as I can tell. Semantic understanding would be a higher rank of complexity in the system's processes.

Hypnagogue said:
In fact, a good argument could probably be made that the infant does indeed experience the world in some kind of meaningful way.
I agree. I believe that the capacity for learning from syntactic experience creates this meaningfulness.
My point isn't to downgrade the infant's capacity but to elevate the potential of the computer from being hopelessly stuck in the syntactic to having the ability to move past it, just as I believe a human does.

I have more to say but I have to get going in a minute.
Thank you for the responses.
 
  • #7
TheStatutoryApe said:
When I first read the argument, as related by the person I was discussing this with in GP, I thought that he might have misapplied an argument specifically geared toward the "Turing Test". I was surprised to see that I was wrong. It would appear, though, that Searle backpedaled to some degree later on when he said that his argument was directed solely at the idea that Strong A.I. is a matter for software only. He emphasized this, it seems, in 1990, ten years after the original argument was formulated. If I'm not mistaken, neural nets and the like became more popular around that time as well.
Apologies for my fuzzy memory, but I believe Searle contends that neural net models also fall prey to his CR argument. Artificial neural networks are just so much more symbol shuffling, after all. He has also addressed questions of physical structure by contending that a complex system of pipes selectively letting water through (akin to a complex system of neurons selectively propagating signals) also would not be up to the task of defeating his argument. Again, I think Searle identifies some property of biological systems as the crucial difference, though I think that identification is dubious.
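For what it's worth, here is a toy illustration of the "neural nets are just more symbol shuffling" point (the weights and wiring below are made up for the example): a forward pass is just repeated multiply-add-and-threshold arithmetic, which a rule follower could carry out without knowing what the numbers encode.

```python
# A tiny sketch of an artificial "neural network" as arithmetic rule-following.
# The weights are arbitrary; the point is only that the computation is
# mechanical number shuffling with no built-in reference to meaning.

def neuron(inputs, weights, bias):
    # weighted sum followed by a simple threshold nonlinearity
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0

def tiny_network(inputs):
    # two hidden "neurons" feeding one output "neuron"
    h1 = neuron(inputs, weights=[0.6, -0.4], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=-0.2)
    return neuron([h1, h2], weights=[1.0, 1.0], bias=-1.5)

print(tiny_network([1.0, 0.0]))   # the arithmetic carries no meaning by itself
```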

TheStatutoryApe said:
The other facet of his argument he refers to as intrinsic intentionality. This idea seems a bit vague, something akin to "freewill" I'd assume. This part of the argument would collapse with the rest though since it seems Searle believes that intentionality equates to meaningful output which is the product of true comprehension in the semantic sense.
In philosophy, "intentionality" basically means "about-ness." So if something has intentional content, it has a kind of 'pointer' to something other than itself, it has a sort of representational property. For instance, my visual experience of these letters has a certain intentional content insofar as the words and sentences they compose point to/represent/are about certain concepts or ideas. Also, my visual experience of (say) a fire truck has intentional content because (in normal conditions) it represents or is about an actual fire truck in the external world. Similarly, beliefs have intentional content because a belief is a kind of attitude towards certain ideas or propositions. And so on.

So I don't think Searle's intrinsic intentionality has anything to do with free will at all. Rather, I think he conceives of it as the solution to the CR conundrum. For the CR occupant, the symbols he manipulates have no intentional content (or at least, their intentional content for him is minimal or trivial); they don't stand for anything for him, but rather they're just meaningless squiggles. If something were to have "intrinsic intentionality," then in virtue of some properties of that something itself, it would sort of automatically have intentional content. That is, it would not have to attain intentional content from something other than itself (e.g. a system of functional relationships in a cognitive system, like the CR), but rather it would have its intentional content inherently built into itself, somehow. So that's Searle's way out of his CR conundrum, although it does seem rather poorly conceived and ad hoc.
 
  • #8
gerben said:
As a reply to hypnagogue’s post:
I think the Chinese room argument mainly shows that one can think of a system that can do (at least) one of the things that we can do, while it obviously cannot do other things that we can. So why would it be strange that there are major differences between that system and us? (that system only emulates a small part of our abilities)
What can we do that the CR can't?

The main disanalogy, I think, comes from timing considerations-- obviously the CR (at least as originally conceived) is going to be much slower at computing things than the human brain. But that doesn't seem to be a really salient factor with respect to what the CR argument tries to get across. In principle, given the proper algorithms and inputs and sufficient time, the CR can compute whatever a brain can compute-- can it not? (I think Roger Penrose has some sort of argument against the notion that a Turing computer can compute whatever a brain can, but I'm not familiar with the details, or how well received it is in general.)
 
  • #9
hypnagogue said:
What can we do that the CR can't?
Many more things; we can clearly do much more than just answer questions.
We can walk, talk, smell, hear, eat, etc.
When we are asked a question we do more than just produce an answer; for example, while reading the question we may be making plans to smack the person who is asking such an indecent question.

You could think of another CR that would pronounce the question, one that would output definitions of the words that make up the question or one that would ring an alarm bell when the question is improper. These rooms would simulate other abilities of ours. I think a brain is more like a large town of such rooms in which the outputs of the different rooms are input to one or more other rooms. Some specialized rooms will finally send something out of the town.
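Roughly, the picture I have in mind could be sketched like this (the room names and mappings below are just invented placeholders): each room is a simple input-to-output rule follower, and the town wires one room's output into another room's input, with only the final room's output leaving the town.

```python
# A toy sketch of the "town of rooms": three rule-following rooms chained
# together, each one blindly mapping input tokens to output tokens.

def hearing_room(sound):
    # pretends to transcribe a sound token into a word token
    return {"sound_nihao": "word_hello"}.get(sound, "word_unknown")

def language_room(word):
    # pretends to map a word token onto a reply token
    return {"word_hello": "reply_hello_back"}.get(word, "reply_shrug")

def motor_room(reply):
    # pretends to turn a reply token into a spoken-output token
    return {"reply_hello_back": "speak_nihao"}.get(reply, "speak_silence")

def town(sound):
    # the only thing leaving the town is the final room's output
    return motor_room(language_room(hearing_room(sound)))

print(town("sound_nihao"))  # -> "speak_nihao"
```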

hypnagogue said:
In principle, given the proper algorithms and inputs and sufficient time, the CR can compute whatever a brain can compute-- can it not?
Yes, you can think of a CR that can emulate certain abilities of a human, but I do not see why that idea argues against the possibility of a system that would emulate us more fully.
 
  • #10
Hypnagogue said:
Apologies for my fuzzy memory, but I believe Searle contends that neural net models also fall prey to his CR argument. Artificial neural networks are just so much more symbol shuffling, after all. He has also addressed questions of physical structure by contending that a complex system of pipes selectively letting water through (akin to a complex system of neurons selectively propagating signals) also would not be up to the task of defeating his argument. Again, I think Searle identifies some property of biological systems as the crucial difference, though I think that identification is dubious.
I can't say that I have read a whole lot about Searle, just a few articles on some websites regarding his Chinese Room. As far as I remember, he states that his argument is directed specifically at the idea that strong AI is solely a software problem, which would leave open the potential for a hardware solution. He says "Brains make minds", so what you remember is likely true, but I thought that he may have conceded that brain-like hardware could yield results. He also makes it part of his argument that Strong AI is supposed to be, specifically, a recreation of the human condition: when confronted with the idea of entities that are cognizant in a different manner than humans, and asked whether this would count as cognition to him, he has refined his argument to such a narrow parameter that it is more or less useless. Why would he have even decided that the point of Strong AI is to recreate the human condition? Not just to be a thinking entity, but to be human.

Hypnagogue said:
So I don't think Searle's intrinsic intentionality has anything to do with free will at all. Rather, I think he conceives of it as the solution to the CR conundrum. For the CR occupant, the symbols he manipulates have no intentional content (or at least, their intentional content for him is minimal or trivial); they don't stand for anything for him, but rather they're just meaningless squiggles. If something were to have "intrinsic intentionality," then in virtue of some properties of that something itself, it would sort of automatically have intentional content. That is, it would not have to attain intentional content from something other than itself (e.g. a system of functional relationships in a cognitive system, like the CR), but rather it would have its intentional content inherently built into itself, somehow. So that's Searle's way out of his CR conundrum, although it does seem rather poorly conceived and ad hoc.
This is more or less what I meant by the "platonic essence" earlier: that there is something somehow intrinsic in the information being processed that the CR homunculus doesn't have access to, or that there is something somehow intrinsic in a human that reveals a deeper layer of the information presenting itself. This seems very dualistic even though he states that his argument isn't dualistic.
I equate the intentionality to "free will". The intentionality has to come from a thinking entity. This entity must will an intentionality into a communication. Even this is a very weak notion, and the idea of willing something is rather weak as well. All it means is that the information being communicated came from a conscious thinking entity; it says nothing for the validity or usefulness of the information. Conversely, I could just happen to derive by chance some valid and useful information from tea leaves, but this doesn't lend the tea leaves any intrinsic intentionality. The Chinese Room itself shows that there is no useful quality to intentionality, since you cannot even be sure that any given communication came from a conscious thinking entity that actually gave intentionality to the information.

The baseline for cognition I mentioned earlier...
Hmm... cognition, as I understand it, would be what Searle claims computers are incapable of: knowing, reasoning, awareness, and perception, as dictionary.com defines it. The baseline he uses in his argument is semantic understanding. An entity is not cognizant, in his argument, unless it can produce a semantic understanding. He states further that computers have only syntactic understanding, and that this is not enough to be considered meaningful understanding.
This is the idea that I am really attacking. I cannot prove that semantic understanding is not intrinsic to the human mind from birth, but neither can it be proved that it is. Searle treats this semantic understanding as irreducible, hence my calling it a baseline, but semantics are reducible. It appears irreducible in his argument because he is using human linguistics, which is easily taken for granted by humans who use it daily as a relatively simple procedure, without thinking of just how complex a system of communicating and understanding information it is.
As I stated earlier, the understanding of language requires a prerequisite level of experience and learning. I see this learning process as syntactic in nature. It probably seems difficult to conclude this based on personal experience, but I think I can demonstrate it to some degree.
Consider mathematics. I think we can all agree that math is a language. Math is actually one of the simplest languages, since it is entirely self-referential (except when used to describe something else, but that isn't required to learn and understand the language). Another way of saying it is that math is an entirely syntactic language that requires an understanding of nothing but the language itself. One plus one is always two. But one doesn't have to be described as "one" for this to make sense, nor two as "two". We can just as easily say A plus A equals B, and the language loses none of its rationality. It doesn't matter what you use to represent the numbers; the meaning is always the same. The numerals 1 and 2 are just arbitrary symbols that have no meaning except when placed into the context of the language they belong to, and that context, or syntax, alone is all that is required to give them meaning.
Would anyone argue that computers don't "understand" math? By the CR scenario a computer should be able to understand math, since it is completely syntactic. So if we agree here, then we agree that a computer can understand a language.
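To make the point concrete, here is a rough sketch (the symbols and rules are just ones I made up for illustration) of addition defined purely as symbol rewriting; nothing in it depends on what the symbols "mean", and renaming them changes nothing.

```python
# A small sketch of arithmetic as pure syntax: counting and addition built
# out of two arbitrary symbols via rewrite rules. The rules never consult
# any "meaning" of the symbols.

def make_arithmetic(zero, succ):
    """Build counting and addition out of two arbitrary symbols."""
    def number(n):                      # n-fold application of succ to zero
        term = zero
        for _ in range(n):
            term = (succ, term)
        return term

    def add(a, b):                      # (succ x) + b  ->  succ (x + b)
        return b if a == zero else (succ, add(a[1], b))

    return number, add

# The same rules work whether we call the symbols "0"/"S" or "A"/"B":
num1, add1 = make_arithmetic("0", "S")
num2, add2 = make_arithmetic("A", "B")
print(add1(num1(1), num1(1)) == num1(2))   # True: 1 + 1 = 2
print(add2(num2(1), num2(1)) == num2(2))   # True under the renamed symbols too
```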

I have to get going again... I'll try to finish this up in about an hour.
 
  • #11
The language of mathematics proves that we can take information and derive meaning and understanding based solely on learning through syntax/context/the structure of the information.
If language requires learning and experience to understand, then that learning and experience is based on our senses. We still need to understand the information coming to us through our senses, though. Is this a semantic sort of understanding?
At its base, the information our brains receive is very simple in nature and is translated rather quickly without any meaningful thought, much less, at least, than what goes into the translation of spoken language. I would also argue that this information is very syntactic in nature. It's all based on patterns and contexts of the information. All of this is condensed down into a homogeneous piece of information such as a sound, a smell, a picture. This takes no meaningful thought whatsoever, so I have no idea why it would present a difficulty for the CR homunculus. Recognizing what this single condensed piece of information is, on the other hand, is more of a problem. Not so much, though, if the homunculus has a prior memory of what it is currently experiencing or something similar to it. Now, based on the syntax of its experiences, it can draw conclusions about the current experience. To sum up, I don't see why this semantic understanding is anything more than a collection of syntactic understandings.
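Here is a rough sketch of what I mean by "semantics as stacked syntax" (the percepts and associations below are invented placeholders): raw sense data is pattern-matched down to a single condensed token, and that token is then matched against stored past experience.

```python
# A toy two-stage sketch: (1) condense raw input into a single percept token
# by pattern matching, (2) match that token against remembered experience.

MEMORY = {
    # condensed percept -> what past experience associates with it
    "percept_smoke_smell": "association_fire_nearby",
    "percept_loud_bang": "association_something_fell",
}

def condense(raw_signal):
    """Stage 1: pattern-match raw input down to a single percept token."""
    if "smoke" in raw_signal:
        return "percept_smoke_smell"
    if "bang" in raw_signal:
        return "percept_loud_bang"
    return "percept_unrecognised"

def interpret(raw_signal):
    """Stage 2: match the percept against prior experience."""
    percept = condense(raw_signal)
    return MEMORY.get(percept, "association_none")

print(interpret("olfactory: smoke, acrid"))   # -> "association_fire_nearby"
```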


Searle misleads us with the simplicity of his argument. He leads us to identify with the man in the box when in reality there is no man in the box. The man in the box simply represents the process of "shuffling" the information. He creates a physical tangible focal point for us, in the form of a man, when in reality there isn't one and now that we identify with him we recognize his supposed plight.
We are presented with a problem of exaggerated enormity. How do we understand Chinese when locked in this room with no reference except this rule book? We are made to think of this language, which is viewed as so very complex by English speakers. Never mind how complex learning English must be. Don't ask how difficult it may have been to learn English in the first place, or the manner in which we did so. Never mind, either, the fact that the man in the room does understand the language that his rule book was written in.
Looking at this situation, it seems like coming to a meaningful understanding of complex information based on syntactic rules is entirely futile. But we have done it since the day we were born and continue to do it today. We just aren't aware that it is happening, and the Chinese Room allows us to stay frustratingly ignorant of it by misdirecting our attention to some vaguely defined property that supposedly makes us special.
 
  • #12
TheStatutoryApe said:
Searle misleads us with the simplicity of his argument. He leads us to identify with the man in the box when in reality there is no man in the box. The man in the box simply represents the process of "shuffling" the information. He creates a physical tangible focal point for us, in the form of a man, when in reality there isn't one and now that we identify with him we recognize his supposed plight.
I agree with you 100%. The main reason that Searle's argument has been able to survive this long (and is still debated) is that the focus of his argument is on the “man in the box” rather than on the computer or the rule book. It seems that most people have the Leibniz-type intuition that there must be a “homunculus” somewhere that does all the “knowing”; take away that homunculus and what else is there that can “know” or “understand”? The reasoning is false, and I think you have explained very well why it is false.

In fact, in the version of Searle's argument with the rule book, the rule book would itself necessarily “understand Chinese” if it were complex enough to provide rational Chinese answers to a sufficiently wide range of rational Chinese questions, homunculus or no homunculus. But that would have to be one hell of a complex rule book!

MF
 
  • #13
(I think Roger Penrose has some sort of argument against the notion that a Turing computer can compute whatever a brain can, but I'm not familiar with the details, or how well received it is in general.)

I think his argument is based around the tessellation of shapes. Designing an algorithm that can decide whether an arbitrary shape tessellates or not is hard. However, humans are very good at deducing whether or not a shape will tessellate. So, if the brain is a computer, it's doing something that a Turing machine cannot, which is a problem. (This is all from my recollection of someone else describing Penrose's argument; I haven't read his book myself.)

The main problem that I have with Searle's argument (it may have already been touched upon in this thread) is that he only focuses on the man in the room, and not on the system itself. It's obvious that the man in the room doesn't understand Chinese, but I see no reason why the whole system, the man, the book, the room etc. cannot be said to understand Chinese.
 
  • #14
Dominic Mulligan said:
I think his argument is based around the tessellation of shapes. Designing an algorithm that can decide whether an arbitrary shape tessellates or not is hard. However, humans are very good at deducing whether or not a shape will tessellate. So, if the brain is a computer, it's doing something that a Turing machine cannot, which is a problem. (This is all from my recollection of someone else describing Penrose's argument; I haven't read his book myself.)
How does one leap from the statement "designing an algorithm that can decide whether an arbitrary shape tessellates or not is hard" to the conclusion "a Turing machine cannot (do it)"?

Dominic Mulligan said:
The main problem that I have with Searle's argument (it may have already been touched upon in this thread) is that he only focuses on the man in the room, and not on the system itself. It's obvious that the man in the room doesn't understand Chinese, but I see no reason why the whole system, the man, the book, the room etc. cannot be said to understand Chinese.
Agreed

MF
 
  • #15
TheStatutoryApe said:
It's all based on patterns and contexts of the information. All of this is condensed down into a homogeneous piece of information such as a sound, a smell, a picture. This takes no meaningful thought whatsoever, so I have no idea why it would present a difficulty for the CR homunculus. Recognizing what this single condensed piece of information is, on the other hand, is more of a problem. Not so much, though, if the homunculus has a prior memory of what it is currently experiencing or something similar to it. Now, based on the syntax of its experiences, it can draw conclusions about the current experience. To sum up, I don't see why this semantic understanding is anything more than a collection of syntactic understandings.
I broadly agree. Hence symbol manipulation (which is what the CR is doing, and is what human brains do), along with a sufficient database of information and knowledge, gives rise to both syntactic and semantic understanding. The difference between them (syntax and semantics) is largely a matter of degree and complexity in the information processing and symbol manipulation. Hence Searle's belief that symbol manipulation can give rise only to "syntactic understanding", and not to "semantic understanding", is incorrect.

TheStatutoryApe said:
Searle misleads us with the simplicity of his argument. He leads us to identify with the man in the box when in reality there is no man in the box. The man in the box simply represents the process of "shuffling" the information. He creates a physical tangible focal point for us, in the form of a man, when in reality there isn't one and now that we identify with him we recognize his supposed plight.
Agreed. It's amazing how many people fall for it though.

MF
 
  • #16
How does one leap from the statement "designing an algorithm that can decide whether an arbitrary shape tessellates or not is hard" to the conclusion "a Turing machine cannot (do it)"?

I have no idea. As I said, I haven't read the book and I'm only passing on what I can remember of a secondhand account. If you want to know the details, I suppose you'll have to pay a visit to the library :biggrin:
 
  • #17
Dominic Mulligan said:
I have no idea. As I said, I haven't read the book and I'm only passing on what I can remember of a secondhand account. If you want to know the details, I suppose you'll have to pay a visit to the library :biggrin:
s'ok.

I have Penrose's three major books. Needless to say, I don't agree with his opinions.

MF
 

1. What is Searle's Chinese Room Argument Against AI?

Searle's Chinese Room Argument is a thought experiment created by philosopher John Searle to challenge the idea of artificial intelligence (AI). It questions whether a computer program can truly understand and have consciousness, or if it is simply simulating intelligence.

2. How does the Chinese Room Argument work?

The Chinese Room Argument involves a person (the "computer") who does not understand Chinese, receiving Chinese characters and using a book of instructions to manipulate the characters and produce a response. This response may appear to be an intelligent conversation, but the person inside the room does not actually understand the meaning of the characters or the conversation.
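A bare-bones sketch of these mechanics follows (the phrases below are ordinary Chinese greetings used purely as opaque strings, and the rule book is, of course, vastly simpler than what the thought experiment imagines): the occupant matches an incoming string against the book and copies out the paired reply, without ever knowing what either string says.

```python
# A minimal sketch of the Chinese Room's mechanics: pure lookup of an
# incoming string in a rule book. The occupant only compares shapes; the
# English glosses in the comments are invisible to him.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # "How are you?" -> "I'm fine, thanks"
    "你叫什么名字": "我叫小明",      # "What is your name?" -> "My name is Xiao Ming"
}

def chinese_room(incoming):
    # Match the squiggles, copy out the paired squiggles; no interpretation.
    return RULE_BOOK.get(incoming, "对不起")   # default reply: "Sorry"

print(chinese_room("你好吗"))
```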

3. What is the main criticism of the Chinese Room Argument?

One of the main criticisms is that it relies on the assumption that understanding language is solely based on syntactic manipulation, rather than a combination of syntax and semantics. Some argue that if a computer program is able to understand the meaning behind the symbols it manipulates, then it can be considered to have true intelligence.

4. How does the Chinese Room Argument relate to the Turing Test?

The Chinese Room Argument is often used as a counterargument to the Turing Test, which is a test designed to determine if a computer program can exhibit intelligent behavior indistinguishable from a human. The Chinese Room Argument suggests that passing the Turing Test does not necessarily mean that a computer program truly understands or has consciousness.

5. What is the significance of the Chinese Room Argument in the field of AI?

The Chinese Room Argument has sparked much debate and discussion about the nature of intelligence and consciousness, and has challenged the idea of creating true artificial intelligence. It has also prompted researchers to explore the concept of "strong AI," which refers to the idea of creating a computer program that is truly conscious and self-aware.
