Can Artificial Intelligence ever reach Human Intelligence?

AI Thread Summary
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes

    Votes: 51 56.7%
  • No

    Votes: 39 43.3%

  • Total voters
    90
  • #101
neurocomp2003 said:
tishammer: heh, I think you may need to define what you mean by a rule. Is it a physics-based rule, where interactions, collisions, and forces are dominant, or a math/CS-based rule, where logic is more prevalent?

In the case of machines and understanding, it's more like a metaphysical principle, like ex nihilo nihil fit.
 
  • #102
Tisthammerw said:
Actually, I'm primarily using it to argue against strong AI. But I suppose it might be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Your analogy has holes in it. Regardless of whether the man can understand Chinese, machines CAN understand us. They may not be able to empathize, but they understand the structure of things to the same degree as we do. You have to define for me exactly what it doesn't understand - exactly what it is that cannot be taught - because by my definition, you can teach a machine anything that you can teach a human. Give me one example of something you can't teach a machine. The Chinese room springs from the notion that if something isn't inherently human by design, it cannot "understand" humanistic behavior - I think this is false. There is a purpose behind each human emotion; it doesn't follow logic, but a computer can be taught to disregard logic when faced with an emotional situation.

Probably the physical brain (at least, that's where it seems to interact).

The brain is composed of synapses, dendrites, and action potentials - so I think we can agree that nowhere in the brain has any scan ever revealed a particular region of the brain that is the "soul". That is spirituality. I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.



I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?

What does literal understanding encompass? Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities, it is beyond anyone to truly understand us unless they are us? If so, that's very presumptuous and not realistic. If you put a CPU into a human body--to reverse the notion--it will be able to fully comprehend what it is to be human. It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off, maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul? If an advanced AI computer analyzes this, would it not understand? All humanistic understanding requires is a frame of reference to be understood.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to the input/output processing like we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?

Well, if you can't teach him Chinese, you will just have to take him to China :wink:

I suppose so, given that this is a brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

That's the way current AI operates. In the future this may not always be the case. I've been reading some of Jeff Hawkins's papers - interesting stuff. If you change the way a computer processes the information, it may be capable of learning the same way we do, through association. The Chinese room is a dilemma. I'm not suggesting that the Chinese room is wrong exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese (the human way). The funny part is that this debate is a step backwards in evolution. We can teach a machine to understand why humans behave the way we do, but why would we want to teach them to "BE" human? Humans make mistakes. Humans do illogical things that don't make sense. Humans get tired, humans forget, humans get angry and jealous. Machines do none of those things. The purpose of machines is to assist us, not to take our place.

That being said, I believe that if we change the way machines process input, progress can be made. As far as how we get from point A to point B, that I can't answer.
 
  • #103
Zantra said:
Your analogy has holes in it.

Please tell me what they are.


Regardless of whether the man can understand Chinese, machines CAN understand us.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):


Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

The Chinese room thought experiment shows that a computer program clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have to make it have understanding?

Feel free to answer that question (I haven't received much of an answer yet).


You have to define for me exactly what it doesn't understand

Assuming "it" means a computer, a computer cannot understand anything. It may be able to simulate conversations etc. via a complex set of rules manipulating input, but it cannot literally understand the language anymore than the person in the Chinese room understands Chinese.


- exactly what it is that cannot be taught

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact, he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really nothing fundamentally different from the Chinese room (the man using a complex set of instructions), and not at all an answer to the question of what else, besides a complex set of instructions acting on input, a computer has that would give it literal understanding.
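To make the point concrete, a minimal Python sketch of such a "learning" rulebook might look like the following (the patterns and replies are invented for illustration, not taken from any actual program): it stores whatever name it is told and echoes it back, yet every step is nothing but a formal rule applied to input.

```python
import re

# A toy version of the "learning" rulebook described above: a set of formal
# pattern rules plus a scratch pad. It can appear to learn a name while doing
# nothing but rule lookups on the input text.
memory = {}

def respond(line: str) -> str:
    """Apply formal rules to the input; no meaning is grasped anywhere."""
    m = re.match(r"My name is (\w+)", line, re.IGNORECASE)
    if m:
        memory["name"] = m.group(1)      # copy the token onto the scratch pad
        return f"Hello {memory['name']}."
    if re.search(r"learned my name", line, re.IGNORECASE):
        return "Yes." if "name" in memory else "Not yet."
    if re.search(r"what is it", line, re.IGNORECASE):
        return memory.get("name", "I don't know.")
    if re.search(r"how are you", line, re.IGNORECASE):
        return "Just fine. What is your name?"
    return "Please go on."

for msg in ["How are you doing?", "My name is Bob.",
            "You've learned my name?", "What is it?"]:
    print("Human:", msg)
    print("Room: ", respond(msg))
```

The scratch pad makes the room look as though it has learned something, but it is still only rules acting on input.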


The brain is composed of synapses, dendrites, and action potentials - so I think we can agree that nowhere in the brain has any scan ever revealed a particular region of the brain that is the "soul".

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.


I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.

I wouldn't say that.


What does literal understanding encompass?

I use no special definition. Understanding means "to grasp the meaning of."


Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities, it is beyond anyone to truly understand us unless they are us?

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?


It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off, maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul?

As for a person's literal mind, I don't even know if that's possible. But if you're only talking about memories--raw data--then I would say the person's soul is not transferred.


If an advanced AI computer analyzes this, would it not understand?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.


If you change the way a computer processes the information, it may be capable of learning the same way we do, through association.

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.


I'm not suggesting that the Chinese room is wrong exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese (the human way).

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to the input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #104
Tist, you seem to make quite a few assumptions without much other reason than "There must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human? What does the soul do that is different from this?
Your soul argument is just a homunculus: there is a small, insubstantial being inside of us that does all of the "understanding" or real information processing.
Let's hit that first. What is "understanding" if not a manner of processing information? You say that "understanding" is required for meaningful output. Then please elucidate for us the process that is understanding. How is understanding different from the processing of information via complex rules?
To this I am sure that you will once again invoke your Chinese room argument, but your Chinese room does not allow the potential AI any of the freedoms of a human being. The man in the Chinese room is just another homunculus scenario, except that you have made the assumption that this one is apparently incapable of this magical metaphysical property that you refer to as "understanding" (a magic ball of yarn in the human mind?).
You ask what else you would add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why? Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has? Your soul homunculus is in the same position as the homunculus in the Chinese room. The Chinese room homunculus is that function which receives input and formulates a response based on complex rules received from the outside world. Your soul homunculus is in its own box, receiving information from the outside world in some sort of language that the brain uses to express what it sees and hears, yet you say that its decision-making process is somehow fundamentally different. Does the soul homunculus not have a set of rulebooks? Does it somehow already supernaturally know how to understand brainspeak? What is the fundamental difference between the situations of the Chinese room homunculus and the soul homunculus?
 
  • #105
Tisthammerw said:
Please tell me what they are.

Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required. Furthermore, let us assume we not only teach the man how to read Chinese, but what the purpose of language is, how it allows us to communicate, etc. The man is capable of learning Chinese - that's the assumption. Your assumption is that the rules are static and can't be changed.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data and outputting a response. You're trying to lure me into a "soul debate".

The Chinese room thought experiment shows that a computer program clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have to make it have understanding?

Essentially a CPU emulates the human brain in terms of processing information. If AI can learn the "why" behind answers to questions, that to me satisfies the requirement. The better question would be: what is the computer lacking that makes it incapable of understanding to your satisfaction?

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact, he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really nothing fundamentally different from the Chinese room (the man using a complex set of instructions), and not at all an answer to the question of what else, besides a complex set of instructions acting on input, a computer has that would give it literal understanding.

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, i.e.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you, Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul". In fact, if we take away the name tags, we could easily assume this is a conversation between two people.

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.

And I'm saying that if a computer has enough experience of the human condition, whether it has a soul or not doesn't matter - it still understands enough.

I use no special definition. Understanding means "to grasp the meaning of."

OK, then by that definition, a computer is fully capable of understanding. Give me any example of something that a computer can't "understand" and I will tell you how a computer can be taught it, whether by example, experience or just plain old programming. I'm talking about a computer that learns on its own without being prompted. A computer that sees something it doesn't understand, and takes it upon itself to deduce the answers using its available resources. That's true AI.

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining? What is this thing that humans have that machines cannot possess? Are you talking about a soul? What is a soul exactly? How about curiosity? If we design a machine that is innately curious, doesn't that make him strikingly human in nature?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.

Then what do humans use to process data? How do we interact with the world around us? We process sensory input (i.e. data), we process the information in our brains (CPU), then we react to that processed data accordingly (output). What did I miss about the human process?

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.

And yet you still refuse to teach the guy how to read Chinese. Man, he must be frustrated. Throw the guy a bone :wink:

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to the input/output processing like we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?

You have to change your way of thinking. Sentience can be had in a machine. Can I tell you how to go out and build one? Can I tell you how something like this will be accomplished? No. But this isn't science fiction, it is science future. It's hard to see how to launch rockets into space when we've just begun to fly - we're still at Kitty Hawk. But in time it will come.
 
  • #106
TheStatutoryApe said:
Tist, you seem to make quite a few assumptions without much other reason than "There must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human?

I'll try again.

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (via using a complex set of instructions acting on input), he does not understand Chinese at all.
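As a deliberately tiny sketch of what that rulebook amounts to in program form (the entries below are invented for illustration; a real rulebook would be astronomically larger), consider:

```python
# Purely formal "if you see X, write down Y" rules; the program never has
# access to what any of the characters mean.
rulebook = {
    "你好吗？": "我很好，谢谢。",       # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(slip_of_paper: str) -> str:
    # The "man in the room": match the characters, copy out the listed reply.
    return rulebook.get(slip_of_paper, "请再说一遍。")  # default: "Please say that again."

print(chinese_room("你好吗？"))
```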

The Chinese room shows that having a complex system of rules acting on input is not sufficient for literal understanding to exist. We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

(Remember, variants of the Chinese room include the system of rules being complex, rewritable etc. and yet the man still doesn’t understand a word of Chinese.)


What does the soul do that is different from this?

I believe that literal understanding (in addition to free will) requires something fundamentally different--to the extent that the physical world cannot do it. The soul is and provides the incorporeal basis of oneself.


Let's hit that first. What is "understanding" if not a manner of processing information?

Grasping the meaning of the information. It is clear from the Chinese room that merely processing it does not do the job.


To this I am sure that you will once again invoke your Chinese room argument, but your Chinese room does not allow the potential AI any of the freedoms of a human being.

By all means, please tell me what else a potential AI has other than a complex set of instructions to have literal understanding.


You ask what else you would add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Note: the text below goes off topic into the realm of the soul

Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has?

Actually, my point is that the soul is the figurative "magical ball of yarn." Physical processes seem completely incapable of producing real understanding; something fundamentally different is required.


Does it somehow already supernaturally know how to understand brainspeak?

This is one of the reasons why I believe God is the best explanation for the existence of the soul; the incorporeal would have to successfully interact with a highly complex form of matter (the brain). The precise metaphysics may be beyond our ability to discern, but I believe that this is how it came to be.


What is the fundamental difference between the situations of the Chinese room homunculus and the soul homunculus?

The soul provides that “something else” that mere computers don't have.
 
  • #107
Zantra said:
Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required.

I've already responded to this. While your idea may sound good on paper, watch what happens when we try to instantiate this analogy into a real computer.

You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Your assumption is that the rules are static and can't be changed.

Not at all. Variants of the Chinese room include learning algorithms and the creation of different procedures (the man has extra paper to write down more information etc.) as I illustrated before (when the Chinese room "learns" a person's name).


That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data and outputting a response.

...

Essentially a CPU emulates the human brain in terms of processing information.

And that is still question begging based on what we've learned from the Chinese room, and it still doesn't answer my question of "what else" a computer has besides using a complex set of rules acting on input in order to literally understand.


It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact, he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really nothing fundamentally different from the Chinese room (the man using a complex set of instructions), and not at all an answer to the question of what else, besides a complex set of instructions acting on input, a computer has that would give it literal understanding.

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, i.e.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you, Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul".

And so does the room. Nonetheless, the person in the room doesn't know the man's name is Bob, isn't necessarily feeling lonely, doesn't even understand Bob's words at all etc. We still just have a complex set of rules operating on input, which I've shown is insufficient for literal understanding to exist.


I use no special definition. Understanding means "to grasp the meaning of."

OK, then by that definition, a computer is fully capable of understanding.

The Chinese room thought experiment would seem to disprove that statement--unless you can show me what else a computer has besides a complex set of rules etc. that would make it literally understand.


The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining?

I have repeatedly pointed out that computers manipulating input via a set of instructions is not sufficient to produce understanding. My question: "what else do you have?" That's for you to answer, not me. I claim there is nothing you can add to the computer to make it literally understand.


Note: going off topic to the soul realm

What is this thing that humans have that machines cannot possess?

A soul.


Are you talking about a soul?

Yes.


What is a soul exactly?

The incorporeal basis of oneself.


How about curiosity? If we design a machine that is innately curious, doesn't that make him strikingly human in nature?

I believe we can make a machine "strikingly human in nature" in the sense that the machine can mimic human behavior--just as the Chinese room can mimic a person fluent in Chinese. But that does not imply the existence of literal understanding.


You have to change your way of thinking. Sentience can be had in a machine.

Rather question begging in light of the Chinese room, especially when you can't answer my question: what else could a computer possibly add for it to possess literal understanding?

Apparently nothing.
 
  • #108
What complex rule? Learning algorithms don't use logic rules in the sense of language.
 
  • #109
neurocomp2003 said:
What complex rule? Learning algorithms don't use logic rules in the sense of language.

Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be, and in fact are, “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.
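For instance, here is an illustrative Python sketch of how even addition reduces to Boolean rules on bits, mirroring what an adder circuit or an ADD instruction ultimately does (a generic example, not any particular machine's implementation):

```python
def full_adder(a: int, b: int, carry: int):
    """One bit of addition expressed purely as Boolean rules (XOR/AND/OR)."""
    s = a ^ b ^ carry
    carry_out = (a & b) | (carry & (a ^ b))
    return s, carry_out

def add8(x: int, y: int) -> int:
    """Add two 8-bit values the way hardware does: bit by bit, rule by rule."""
    result, carry = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result & 0xFF

assert add8(0b00101101, 0b01100110) == (0b00101101 + 0b01100110) & 0xFF
print(add8(45, 102))  # 147 -- produced by nothing but Boolean rules on bits
```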
 
  • #110
Your entire argument still revolves around "There must be something more."
You still limit the homunculus in the Chinese room even though the soul of which you speak is the same thing, except that you have given it a magic ball of yarn. Just a magic ball of yarn, with no explanation as to what the ball of yarn does, what it's made of, or how it works. So if I told you that all I have to do is give a computer "AI" and it would be sentient, would you believe me? Wouldn't you ask me what this "AI" does and how it does it? If I simply told you that it's the fundamental element of computer sentience that gives it "free will" and "understanding", would you be satisfied?
This is as much information as you have given us regarding this soul. You simply say that it must exist for there to be "free will" and "understanding"; hence, since humans have "free will" and "understanding", this soul obviously exists! This argument is completely useless and a classic example of bad logic.

Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul? He argued for what he calls intrinsic intentionality, which seems to be just as vague a notion as the soul you argue for. You would most likely call it "free will", but Searle doesn't postulate that a soul is necessary for free will.

But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent? It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of it pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again to see if there were possibly any better moves to be made. After playing with different options and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get to where I was before I had backtracked. The computer, though, decided it was going to do something completely different from what it had done the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to make differing responses to the same set of circumstances? And this is just a cheap program that isn't very good.
 
  • #111
Ah, so your definition of a rule is any rule... fair enough. I was under the impression your definition of a rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of interactions/collisions of physical objects that creates it... or are you saying that there exists this mysticism known as a soul that exists outside of any physical object (known/unknown to man) that exists in our universe?
 
  • #112
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be, and in fact are, “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.
How is this any different from the human body and brain? The signals that our brain receives aren't in English, nor are the outputs that it gives. As I've been trying to show you, just put a little man inside the brain (you can even call it a soul if you'd like) and you will have the exact same situation that you have been giving us regarding the Chinese room.

---edit---

I wouldn't be surprised if those who try to negate the idea of free will, and of a human being more than the sum of its parts, would use a version of the Chinese room argument to make their case.
 
  • #113
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules, not semantic rules, based on its situation. After reading this article it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rule.
Semantics are in reality rooted in the syntax which Searle contends is the only thing that computers "understand". The computer's capacity for only being able to "understand" syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument. At its base, all cognition comes from syntax.

HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.
 
  • #114
TheStatutoryApe said:
Your entire argument still revolves around "There must be something more."

Yes, and the Chinese room thought experiment (see post #106) would seem to illustrate that point rather nicely. You still haven't found a way to overcome that problem.


You still limit the homunculus in the Chinese room even though the soul of which you speak is the same thing, except that you have given it a magic ball of yarn.

Not quite. The soul is the incorporeal basis for the self, consciousness, understanding, and sentience. Using our yarn metaphor, the soul is the “magic ball of yarn.”


Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul?

Yes, I do. Searle was a physicalist. But that doesn't alter my points. It still seems that a computer lacks the means to possess literal understanding, and it still seems that the Chinese room thought experiment is sound.


But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent?

In this case, it did so using a complex set of rules (iterative deepening search algorithms with alpha-beta pruning, etc.) acting on input. I myself have made an AI that could beat many players at a game called Nim. Nonetheless, it still doesn't overcome the point the Chinese room makes: a complex set of rules operating on input is insufficient for literal understanding. So what else do you have?
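As an aside, a Nim player of that sort can be nothing more than a few fixed rules. Here is an illustrative Python sketch using the standard nim-sum strategy for normal play (a generic example, not necessarily the program mentioned above), which plays perfectly while "understanding" nothing:

```python
import functools
import operator

def nim_move(piles):
    """Pick a winning move in normal-play Nim using the XOR ('nim-sum') rule."""
    nim_sum = functools.reduce(operator.xor, piles)
    if nim_sum == 0:
        # No winning move exists; stall by taking one object from the largest pile.
        i = max(range(len(piles)), key=lambda k: piles[k])
        return i, piles[i] - 1
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:      # reducing this pile makes the overall nim-sum zero
            return i, target

print(nim_move([3, 4, 5]))  # (0, 1): reduce the first pile from 3 to 1
```

Every move comes out of a fixed arithmetic rule applied to the board state; there is nothing the program grasps about "winning" or "losing".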


It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of it pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again to see if there were possibly any better moves to be made. After playing with different options and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get to where I was before I had backtracked. The computer, though, decided it was going to do something completely different from what it had done the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to make differing responses to the same set of circumstances?

Like many programs, it uses a complex set of instructions acting on input. Don't forget that the Chinese room can emulate these very same features (e.g. making different responses to the same question, etc.) given the appropriate set of rules.
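To see how "different responses to the same position" needs no understanding, here is a hypothetical sketch: a rule-based move chooser that scores candidates and breaks ties at random, so identical input can produce different output (the scoring function and candidate moves below are placeholders, not a real Go engine).

```python
import random

def choose_move(position, candidates, score):
    """Pick among the best-scoring candidate moves, breaking ties randomly.
    Identical positions can therefore produce different (equally good) moves
    without the program grasping anything about the game."""
    best = max(score(position, m) for m in candidates)
    return random.choice([m for m in candidates if score(position, m) == best])

# Hypothetical example: three candidate moves, two of them tied for best.
toy_score = lambda pos, move: {"A": 2, "B": 5, "C": 5}[move]
print(choose_move("same position", ["A", "B", "C"], toy_score))  # "B" or "C"
```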
 
  • #115
neurocomp2003 said:
Ah, so your definition of a rule is any rule... fair enough. I was under the impression your definition of a rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of interactions/collisions of physical objects that creates it

I believe the answer is no.


...or are you saying that there exists this mysticism known as a soul that exists outside of any physical object (known/unknown to man) that exists in our universe?

I can only speculate as to the precise metaphysics behind it, but it seems clear to me that the mere organization of matter is insufficient for producing consciousness and free will. Therefore, such things having an incorporeal basis is the only logical alternative.
 
  • #116
TheStatutoryApe said:
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be, and in fact are, “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.

How is this any different from the human body and brain?

If you recall, I believe there is an incorporeal basis for consciousness and understanding for human beings. Otherwise I think you're right; there really is no fundamental difference. If the Chinese room thought experiment is sound, it would seem to rule out the possibility of physicalism.

One could make this argument:

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Physicalism is true
  3. Therefore such strong AI is possible (from 1 and 2)

But premise 2 is a tad question begging, and the Chinese room seems to refute the conclusion. Therefore I could argue (if premise 1 were true):

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Such strong AI is not possible (Chinese room)
  3. Therefore physicalism is not true (from 1 and 2)

So the first premise doesn't really establish anything for strong AI unless perhaps one can do away with the Chinese room, and I haven't seen a refutation of it yet.
 
  • #117
TheStatutoryApe said:
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules, not semantic rules, based on its situation. After reading this article it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rule.

It is true that we humans can pick up semantic rules based on experience. It is also evident that we humans can "learn by association." Nonetheless, this type of learning presupposes consciousness etc. and it is evident from the Chinese room that a complex set of rules acting on input is insufficient for literal understanding to exist. Even when a computer "learns by association" through audio-visual input devices, literal understanding does not take place.

Note that we already discussed something similar: a computer learning by what it sees and hears. Even when based on sensory experience, it didn't work, remember? You said:

You ask what else you would add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I replied:

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.

Obviously, something else is required besides a complex set of rules acting on input.


Semantics are in reality rooted in the syntax which Searle contends is the only thing that computers "understand". The computer's capacity for only being able to "understand" syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument. At its base, all cognition comes from syntax.

If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that; we can literally understand their meaning. This is something a mere complex set of rules etc. can't do, as I've illustrated with the Chinese room thought experiment.


HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.

Start feeling bad again. The Chinese room still shows that a set of rules--however complex and layered--acting on input is insufficient for literal understanding to exist. Adding additional layers of rules still isn't going to do the job (we could add additional rules to the rulebook, as we did before in this thread with the variations of the Chinese room, but the man still doesn't understand Chinese). Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
 
  • #118
So understanding lies outside the physicality of our universe... but is contained within our brain/body? So the firing of billions of neurons, feeding from vision/audition to memory/speech, will not form understanding?
 
  • #119
neurocomp2003 said:
So understanding lies outside the physicality of our universe... but is contained within our brain/body?

If you want my theory, I believe the soul is parallel to the physical realm, acting within the brain.


So the firing of billions of neurons, feeding from vision/audition to memory/speech, will not form understanding?

By itself, no (confer the Chinese room) since it seems that mere physical processes can't do the job.
 
  • #120
Tisthammerw said:
If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that; we can literally understand their meaning. This is something a mere complex set of rules etc. can't do, as I've illustrated with the Chinese room thought experiment.
But the question is why and how do we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn, through syntax. Through syntax we develop a semantic understanding. We do not know innately what things mean. There is no realm of Platonic ideals that we tap from birth. We LEARN TO UNDERSTAND MEANING. How do you not get that? Your necessity for a magic ball of yarn is not a valid or logical argument, since I might as well call your soul a magic ball of yarn and it holds about as much meaning. Tell me what the soul does, not just that it is the incorporeal manifestation of self, because that's entirely meaningless as well. It doesn't tell me what it does. "Freewill" and "understanding" - these things don't tell me what it does or how it does it either. You're going to have to do a hell of a lot better than that.
 
  • #121
Tisthammerw said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
My point is that nothing else is required. Just the right hardware and the right program. I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information. "Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.
I contend that a human mind starts out with nothing but its OS and syntactic experience as a base from which it develops its "meaningful understanding", and that a computer has the capacity for the same.
 
  • #122
Pengwuino said:
I'm pretty sure my cell phone has more intelligence than some of the people I have met...

...and I am sure that whoever created the concept of the cp for it to be realized is much more intelligent than any model of cp that exists... without human intelligence the cp couldn't possibly exist.
 
  • #123
TheStatutoryApe said:
But the question is why and how do we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn, through syntax.

Partially. The Chinese room shows that a complex set of instructions is insufficient for understanding. Real understanding may include the existence of rules, but a set of rules is not sufficient for understanding.


We LEARN TO UNDERSTAND MEANING. How do you not get that?

I understand that we humans can learn to understand meaning. My point is that something other than a set of instructions is required (see above), and the Chinese room thought experiment proves it. Note the existence of learning algorithms on computers. If the learning algorithms are nothing more than another set of instructions, the computer will fail to understand (note the variant of the Chinese room that had learning algorithms; learning the person's name and so forth).


Your necessity for a magic ball of yarn is not a valid or logical argument

My argument is that something else besides a complex set of instructions is required, and my argument is logical since I have the Chinese room thought experiment to prove it. Here we have an instance of a complex set of instructions acting on input to produce valid output, yet no understanding is taking place. Thus, a set of instructions is not enough for understanding.


Tell me what the soul does

This is going off topic again, but here goes: the soul interacts with the corporeal world to produce effects via agent causation (confer the agency metaphysical theory of free will) as well as receiving input from the outside world.


TheStatutoryApe said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.

My point is that nothing else is required.

The Chinese room thought experiment disproves that statement. Here we have an instance of a complex set of rules acting on input (questions) to produce valid output (answers) and yet no real understanding is taking place.


Just the right hardware and the right program.

Suppose we have the "right" program. Suppose we replace the hardware with Bob. Bob uses a complex set of rules identical to the program. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run the program, get valid output etc., and yet no real understanding is taking place. So even having the “right” rules and the “right” program is not enough. So what else do you have?

You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information.

The magical ball of yarn was just a metaphor, as in when I asked the question "What else do you have besides a complex set of rules manipulating input? A magical ball of yarn?"

That last question may have been somewhat rhetorical (though the first one was not).


"Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.

That's not entirely true. One thing to back up the existence of “true understanding” is everyday experience: we grasp the meaning of words all the time. We have reason to believe a soul is necessary for free will (see the article I linked on that).
 
  • #124
The bottom line is that you have nothing to counter with. "something more" is not a valid argument. Define what you're referring to, or the argument is done. I know you can't. And the reason you don't know specifically is because that "something more" doesn't exist, except in our minds. If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human. What you're saying to me is that even if you were fooled into believing it was a human initially, if it was then revealed that it was actually a machine, you would deem it not enough of a human to be human. You would think this because you "perceive" something that isn't there: a magical component that only human beings possess, which cannot be duplicated. However, you can't name this thing, because it's in your mind. It does not exist. You are referring to, in essence, a "soul", which is an ideal. Ideals can be programmed. Nothing exists in us which cannot be duplicated.

As I've already stated, in my version of the Chinese room, the man is taught Chinese, and so he understands the information he is processing. You refuse to accept that analogy, but it still stands. I'm satisfied this discussion is resolved. Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is. You keep referring to "understanding", but we've already defined understanding. For instance, mathematics. I think we can generally agree that there is no room for interpretation there: you understand math, or you don't. You are right, or you are wrong. There are no subtle undertones, no underlying philosophy. Yet you claim computers cannot understand it the way you do. I didn't realize we as humans possessed some mathematical reasoning which is beyond that of a machine.

So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
 
  • #125
Zantra/Ape: out of curiosity, are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantics thing), and that it is too naive an argument to compete with the complexity of the adult brain? Or rather, I should say, the computational complexity of the brain.
 
  • #126
Zantra said:
The bottom line is that you have nothing to counter with. "something more" is not a valid argument.

You're right that "something more" is not a valid argument. But the Chinese room thought experiment is a valid argument in that it demonstrates the need for something more.

Recapping it again:

The Chinese Room thought experiment

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (using a complex set of instructions acting on input), he does not understand Chinese at all.

Here we have an instance of a complex set of rules acting on input (questions) yielding valid output (answers) without real understanding. (Do you disagree?) Thus, a complex set of rules is not enough for literal understanding to exist.
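
To make the rulebook concrete, here is a minimal sketch in Python (the table entries are invented placeholders, not anything from Searle): it maps question-symbols to answer-symbols by lookup alone, and nowhere does it represent what any symbol means.

Code:
# A minimal, invented rulebook: pure symbol-to-symbol lookup, no meanings anywhere.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我没有名字。",    # "What is your name?" -> "I have no name."
}

def chinese_room(question):
    # Answer by table lookup alone; the fallback means "Sorry, I don't understand."
    return RULEBOOK.get(question, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # prints a sensible answer without any understanding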


If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human.

The man in the Chinese room would be indistinguishable from a person who understands Chinese, yet he does not understand the language.


As I've already stated, in my version of the chinese room, the man is taught chinese, and so he understands the information he is processing.

Except that I'm not claiming a person can't understand Chinese; I'm claiming that a machine can't. Your argument "a person can be taught Chinese, therefore a computer can too" is not a valid argument. You need to provide some justification, and you haven't done that at all.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

TheStatutoryApe claimed just having “the right hardware and the right program” would be enough. Clearly having the “right” program doesn't work. He mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is.

That's ironic. It is you who must tell me what this "something more" a computer has for it to literally understand. The Chinese room proves that a complex set of rules acting on input isn't enough. So what else do you have?


So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to hear a valid answer (most times it seems I don't get an answer at all).
 
  • #127
tishammerw: but you see i think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more...however your something more still remains a mysticism to us and i think that was zantra's point...

as for the chinese searle room problem.
I will be arguing that the chinese searle room also argues that humans have no extra "understanding" as you suggest...that the understanding is a mere byproduct

let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher and is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?
 
  • #128
Tisthammer said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It's sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern structure and context, with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought or understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture," that is, the sensory information, is easily done by processes the man's brain is already doing that do not require meaningful thoughts or output from him as a human. So I don't see the problem with allowing the man sensory input from outside. All that the man in the box has access to is the syntax of the information being presented. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and be capable of applying it.

Tisthammer said:
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
The use of spoken human language in this thought experiment is cheating. The man in the box obviously hasn't enough information to process by which to gain an understanding. If, as I stated earlier, you used math instead, which is entirely self-referential and syntactic, then the man would have all the information he needed to understand the mathematical language right there in front of him.

[quote="Tisthammerw]What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to here a valid answer (most times it seems I don't get an answer at all).[/quote]
I contend that all of the information processing that a human does is at its base syntactic and that we learn from syntactic information in order to build a semantic understanding. I say that if a computer is capable of learning from syntactic information, which is the only kind of rule the man in the Chinese room is allowed to understand, then the computer can eventually build a semantic understanding in the same manner a human does. NO "SOMETHING MORE" NEEDED.

The Chinese room is far too simple and very misleading. It forces the man in the box to abide by its rules without establishing that those rules are even valid.
 
  • #129
neurocomp2003 said:
zantra/ape: outta curiosity are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantic thing) and that it is too naive an argument to compete with the complexity of the adult brain? or rather i should say the computational complexity of the brain.
Yes, that's more or less my point. The argument is far too simple and skips over orders of magnitude in the complexity of a real working system as if they don't exist.
 
  • #130
neurocomp2003 said:
tishammerw: but you see i think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more

But if these adaptive learning algorithms are simply another complex set of instructions, this will get us nowhere. Note that I also used a variant of the Chinese room that had learning algorithms that adapted to the circumstances, and still no understanding took place.


as for the chinese searle room problem.
I will be arguing that the chinese searle room also argues that humans have no extra "understanding" as you suggest

Please do.


let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher and is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?

This really doesn't prove that a complex set of rules (as for a program) is sufficient for understanding. Note that I'm not claiming a person can't learn another language. We humans can. My point is that this learning requires something other than a set of rules. Rules may be part of the learning process, but a set of instructions is not sufficient for understanding, as the Chinese room indicates (we have a set of instructions, but no understanding).
 
  • #131
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

People are capable of understanding; no one is disputing that. However, my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding. Searle for instance argued that our brains have unique causal powers that go beyond the execution of program-like instructions. You may doubt the existence of such causation, but notice the thought experiment I gave. This is a counterexample proving that merely having the "right" program is not enough for literal understanding to take place. Would you claim, for instance, that this man executing the program understands binary when he really doesn't?


Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

Your reply:

The use of spoken human language in this thought experiment is cheating.

I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.


I contend that all of the information processing that a human does is at its base syntactic and that we learn from syntactic information in order to build a semantic understanding.

Syntax rules like the kind a program runs may be necessary, but as the Chinese room experiment shows, they are not sufficient--unless you wish to claim that the man in the room understands Chinese. As I said, rules may be part of the process, but they are not sufficient. My thought experiments prove this: they are examples of complex sets of instructions executing without real understanding taking place.

You could claim that the instructions given to the man in the Chinese room are not of the right sort, and that if the “right” program were run on a computer literal understanding would take place. But if so, please answer my questions regarding the robot and program X (see below).


I say that if a computer is capable of learning from syntactic information, which is the only kind of rule the man in the Chinese room is allowed to understand, then the computer can eventually build a semantic understanding in the same manner a human does. NO "SOMETHING MORE" NEEDED.

But if this learning procedure is done solely by a complex set of instructions, merely executing the "right" program (learning algorithms and all) is not sufficient for understanding. By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.


The Chinese room is far too simple and very misleading. It forces the man in the box to abide by its rules without establishing that those rules are even valid.

The rules are indeed valid: they give correct and meaningful answers to all questions received. In other words, the man has passed the Turing test.

And it isn't clear why the thought experiment is too “simple.” The man is using a complex set of instructions to do his work after all.
 
  • #132
I think the major premise of the Searle argument has been bypassed. He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics, they have used more general systems than syntactic ones to do it, such as neural nets. So the criticism is like some old argument against Galilean dynamics; whatever you could say for it in terms of the knowledge of the time, by now it's just a quaint historical curiosity.
 
  • #133
selfAdjoint said:
I think the major premise of the Searle argument has been bypassed.

How so?


He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics, they have used more general systems than syntactic ones to do it, such as neural nets.

The concept of neural networks in computer science is still just another complex set of instructions acting on input (albeit formal instructions of a different flavor than the days of yore); so it still doesn't really answer the question of "what else do you have?" Nor does it really address my counterexample of running the "right" program (the robot and program X; see post #131).
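
To be clear about why I say that, here is a minimal sketch (the weights are invented, purely illustrative) of what a neural network's forward pass amounts to: arithmetic applied to input numbers, i.e. another formal procedure that the man in the room could carry out by hand from a rulebook.

Code:
import math

# Invented weights for a tiny two-layer network; every step is plain arithmetic
# that could be carried out by hand from a rulebook, with no meanings attached.
W1 = [[0.2, -0.5], [0.7, 0.1]]   # hidden-layer weights (made up)
W2 = [0.6, -0.3]                 # output-layer weights (made up)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

print(forward([1.0, 0.0]))  # numbers in, a number out; formal operations throughout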

But perhaps you're thinking of something else: are you proposing the following:
Creating a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands questions in Chinese and gives answers to them. Surely then we would have to say that the computer understands...?
 
  • #134
Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
Your reply:
TheStatutoryApe said:
The use of spoken human language in this thought experiment is cheating.
I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.
For one you have misquoted me, the first quote there was from someone else, and while that doesn't make much difference the fact that you don't seem to be paying attention and the fact that you conveniently don't quote any of my answers to the questions you claim I am not answering constitutes a problem with having any real discussion with you. If you don't agree with my answers that's quite alright but please give me a response telling me the issues that you have with them and it would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the chinese room and I do not agree that a complex set of instructions isn't enough.

Learn to make a substantial argument rather than lean on someone else's as if it were a universal fact.

I gave you answers to your questions. If you want to find them and make a real argument against them I will indulge you further in this but not until then.
Thank you for what discussion we have had so far. I was not aware of the chinese room argument until you brought it up and I read up on it.
 
  • #135
TheStatutoryApe said:
For one you have misquoted me, the first quote there was from someone else

I apologize that I got the quote mixed up. Nonetheless the second quote was yours.


and while that doesn't make much difference the fact that you don't seem to be paying attention and the fact that you conveniently don't quote any of my answers to the questions you claim I am not answering constitutes a problem with having any real discussion with you.

Please tell me where you answered the following questions, found at the end of the quote below:

Tisthammerw said:
By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.

Where did you answer these questions?

Note what happened below:

TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example proves: we have the “right” program and still no understanding).

But notice that you cut out the part of the thought experiment where I asked the questions. See post #128 for yourself if you don’t believe me. You completely ignored the questions I asked.

I will however answer one of your questions I failed to answer earlier.

So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books?

Part of it is that he can't learn binary code the same way he can learn English. Suppose for instance you use this rule:

If you see 11101110111101111
replace with 11011011011101100

And you applied this rule many times. How could you know what the sequence 11101110111101111 means merely by executing the instruction over and over again? How would you know, for instance, that you're answering “What is 2+2?” or “What is the capital of Minnesota?” It doesn’t logically follow that Bob would necessarily know the meaning of the binary code merely by following the rulebook, any more than the man in the Chinese room would necessarily know Chinese. And ex hypothesi he doesn't know what the binary code means when he follows the rulebook. Are you saying such a thing is logically impossible? If need be, we could add that he has a mental impairment that renders him incapable of learning the meaning of binary code even though he can do fantastic calculations (a similar thing is true in real life for some autistic savants and certain semantics of the English language). So we still have a clear counterexample here (see below for more on this) of running the “right” program without literal understanding.
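
To put the same point in code, this is effectively all Bob is doing (the bit strings are the ones from the rule above; the lookup framing is just my illustration): blind pattern substitution, with nothing anywhere recording what the patterns denote.

Code:
# Bob's rulebook as blind pattern substitution; nothing records whether the bits
# encode "What is 2+2?" or "What is the capital of Minnesota?"
RULES = {
    "11101110111101111": "11011011011101100",
}

def apply_rule(bits):
    return RULES.get(bits, bits)  # pass through anything the rulebook doesn't cover

print(apply_rule("11101110111101111"))  # -> 11011011011101100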


If you don't agree with my answers that's quite alright but please give me a response telling me the issues that you have with them and it would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the chinese room and I do not agree that a complex set of instructions isn't enough.

The reason I use the Chinese room (and variants thereof) is because this is a clear instance of a complex set of instructions giving valid answers to input without literal understanding. I used what is known as a counterexample. A counterexample is an example that disproves a proposition or theory. In this case, the proposition that having a complex set of instructions is enough for literal understanding to exist. Note the counterexample of the robot and program X: we had the “right” set of instructions and it obviously wasn't enough. Do you dispute this? Do you claim that this man executing the program understands binary when he really doesn't?

You can point to the fact that humans can learn languages all you want, claim they are using syntactic rules, etc., but that still doesn't change the existence of the counterexample. Question-begging and ignoratio elenchi are not the same thing as producing valid answers.


I gave you answers to your questions.

Really? Please tell me where you answered the questions I quoted.
 
  • #136
Neural networking directly addresses these issues.
 
  • #137
tishammerw: i don't think your argument against learning algorithms is conclusive...when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, rather parallel processing. And with that you are not discussing the simple flow of 3-4 neurons, as in spiking neurons, but a numerous system of billions of interactions, whether it be nnets or GAs or RL.

On another thing...we have provided you with our statement that learning algorithms (with their complexity) with a sensorimotor hookup would suffice for understanding. However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument...what is your "what else" that will support your argument? Heh, we shouldn't have to come up with your side of the argument.
 
  • #138
pallidin said:
Neural networking directly addresses these issues.

Addresses what issues? And how exactly does it do so?
 
  • #139
neurocomp2003 said:
tishammerw: i don't think your argument against learning algorithms is conclusive...when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, rather parallel processing.

Even parallel processing can do if-then logic. And we can say that the man in the Chinese room is a multi-tasker when he follows the instructions of the rulebook; still no literal understanding.


And with that you are not discussing the simple flow of 3-4 neurons, as in spiking neurons, but a numerous system of billions of interactions, whether it be nnets or GAs or RL.

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
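
For concreteness, here is a toy sketch (connectivity and threshold invented by me) of what simulating a sequence of neuron firings comes down to: a mechanical threshold-update rule, exactly the kind of step the man at the water pipes performs without understanding anything.

Code:
# Toy "neuron firing" simulation: invented connectivity and threshold. Each update
# is a mechanical rule (sum the incoming weights, compare to a threshold), the same
# kind of step the man operating the water pipes carries out.
WEIGHTS = {          # WEIGHTS[j][i] = strength of the connection from neuron i to neuron j
    0: {1: 0.8, 2: 0.4},
    1: {0: 0.9},
    2: {0: 0.5, 1: 0.5},
}
THRESHOLD = 0.6

def step(firing):
    nxt = set()
    for j, inputs in WEIGHTS.items():
        if sum(w for i, w in inputs.items() if i in firing) >= THRESHOLD:
            nxt.add(j)
    return nxt

state = {1}                      # some initial pattern of firing neurons
for _ in range(3):
    state = step(state)
    print(sorted(state))         # the pattern evolves by rule alone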


On another thing...we have provided you with our statement that learning algorithms (with their complexity) with a sensorimotor hookup would suffice for understanding.

And I have provided you with a counterexample, remember? Learning algorithms, sensors, etc. and still no understanding.


However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument

My counterexample proved that not even the existence of learning algorithms in a computer program is sufficient for literal understanding. The man in the Chinese room used the learning algorithms of the rulebook (and we can make them very complex if need be) and still there was no literal understanding. Given this, I think it's fair for me to ask "what else"? As for what I personally believe, I have already given you my answer. But this belief is not necessarily relevant to the matter at hand: I provided a counterexample--care to address it?
 
  • #140
tishammerw: what counterexample? that Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of? I'm still lost with your counterexample...or is it that if something else can imitate the human and clearly not understand...and then doesn't this imply that humans may not "understand" at all? what makes us so special? why do you believe that humans "understand"? and where is this proof...wouldn't Searle's argument also argue against human understanding?

It is fair for you to ask "what else" but you must also answer the question...because to us, all that is needed are learning algorithms that emulate the brain, nothing more. If we were to state this "what else," then we would go against our beliefs. So is it fair for you to ask us to state this "what else" that YOU believe in? NO! And thus you must provide us with this explanation.
 
  • #141
neurocomp2003 said:
tishammerw: what counterexample?

I have several, but I'll list two that seem to be the most relevant. Remember it was said earlier:

However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument

One of the counterexamples is the instance of a complex set of instructions including learning algorithms without literal understanding taking place. From post #103 (with a typo correction):

the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn’t understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really not anything fundamentally different from the Chinese room (the man using a complex set of instructions) and not at all an answer to the question "what else do you have?" besides a complex set of instructions acting on input for the computer to have literal understanding.

So even a program with learning algorithms is not sufficient for literal understanding to exist.
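
To show the sense in which "learning" happens in that conversation, here is a minimal sketch of my own (not anyone's actual program): the room stores the name as an uninterpreted token and echoes it back on demand, and the whole exchange falls out of pattern matching.

Code:
import re

# The room "learns" a name by storing it as an uninterpreted token and echoing it
# back when a matching pattern appears. Nothing here knows what a "name" is.
memory = {}

def room(utterance):
    m = re.match(r"My name is (\w+)\.?$", utterance)
    if m:
        memory["name"] = m.group(1)          # stored as a bare string, nothing more
        return "Hello " + memory["name"] + "."
    if utterance == "You've learned my name?":
        return "Yes."
    if utterance == "What is it?":
        return memory.get("name", "I do not know.")
    return "Just fine. What is your name?"

for line in ["How are you doing?", "My name is Bob.",
             "You've learned my name?", "What is it?"]:
    print(room(line))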

It was said earlier:

we have provided you with our statement that learning algorithms (with their complexity) with a sensorimotor hookup would suffice for understanding.

The other counterexample can be found in post #126, where I talk about the robot and program X. This is an instance in which the "right" program (you can have it possessing complex learning algorithms, etc.) is run and yet there is still no literal understanding.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

TheStatutoryApe claimed just having “the right hardware and the right program” would be enough. Clearly having the “right” program doesn't work. He mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

So here we have an instance of the "right" program--learning algorithms and all--being run in a robot with sensors, and still no literal understanding. There is no real understanding even when this program is being run.

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?


that Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of?

Searle argued that our brains have unique causal powers that go beyond the simple (or even complex) manipulation of input.


I'm still lost with your counterexample...or is it that if something else can imitate the human and clearly not understand...and then doesn't this imply that humans may not "understand" at all? what makes us so special?

Because we humans have that "something else."


why do you believe that humans "understand"?

Well, I'm an example of this. I am a human, and I am capable of literal understanding whenever I read, listen to people, etc.


wouldn't Searle's argument also argue against human understanding?

No, because we humans have that "something else."


It is fair for you to ask "what else" but you must also answer the question

Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
 
  • #142
tishammerw: cool i see your argument now...taking aside what we know from physics...do you believe the soul is made out of some substance in our universe? does it exist in some form of physicality(not necessarily what we understand of physics today) or do you believe it exists from nothing?

also do you really think you can "understand" what can be written...or can you see it as a complex emergent behaviour that gives you this feel for having a higher cognitive path than robots? The instinct to associate one word form with some complex pattern of inputs?

as for your supporting arguments to Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame...i could do the same thing with children. And they nod their heads in agreement though they will literally not understand...though as time goes forth they will grasp that concept. You do not think that a computer can do the same and grasp this concept over time? Do children not imitate their adult surroundings? I think you have neglected the true concept of learning by imitation and learning by interaction with the adults around you.
 
  • #143
neurocomp2003 said:
tishammerw: cool i see yor argument now...taking aside what we know from physics...do you believethe soul is made out of some substance in our universe?

No.


does it exist in some form of physicality(not necessarily what we understand of physics today) or do you believe it exists from nothing?

I believe the soul is incorporeal. Beyond that there is only speculation (as far as I know).


also do you really think you can "understand" what can be written...or can you see it as a complex emergent behaviour that gives you this feel for having a higher cognitive path than robots?

The ability to understand likely relies on a number of factors (including learning “algorithms”). So the answer is “yes” if you're asking me if the mechanics is complex, "no" if you're asking me if it magically “emerges” through some set of physical parts.


as for your supporting arguments to Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame

I'm not sure what you're asking here. If you're asking me why I believe that computers (at least with their current architecture: a complex system of rules acting on input, etc.) cannot literally understand, given my finite time in the universe, my answer would be "logic and reason"--the variants of the Chinese room thought experiment being my evidence.


...i could do the same thing with children. And they nod their heads in agreement though they will literally not understand...though as time goes forth they will grasp that concept. You do not think that a computer can do the same and grasp this concept over time?

No, because it lacks that "something else" humans have. Think back to the robot and program X counterexample. Even if program X (with its diverse and complex set of learning algorithms) is run for a hundred years, Bob still won’t understand what's going on. The passage of time is irrelevant because it still doesn't change the logic of the circumstances.
 
  • #144
Tisthammerw said:
Where did you answer these questions?
Note what happened below:
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.
I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example proves: we have the “right” program and still no understanding).
Note what happened above.
You conveniently did not quote my full answer...
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It's sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern structure and context, with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought or understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture," that is, the sensory information, is easily done by processes the man's brain is already doing that do not require meaningful thoughts or output from him as a human. So I don't see the problem with allowing the man sensory input from outside. All that the man in the box has access to is the syntax of the information being presented. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and be capable of applying it.
Do you see that you have not addressed all of this?
I understand that it's a bit of a hodgepodge, so let me condense it down to my point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.
Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example for the argument to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning, but I am not bothering with going that far yet. Based on the rules of the C.R., and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing," if not understanding, what it is seeing.
Can we agree on this?
As I already stated, there remains the matter of understanding what is seen, but if we could, let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output. Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking of it "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.
Perhaps if we can move through this point by point it will make it easier to communicate. We'll start with whether or not the CR homunculus can "see", not understand but just see, and formulate the CR environment so that it is in "learning mode" instead of being forced to respond to input.

One other thing though...

Tisthammerw" said:
Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
Perhaps to help you understand a bit more where I am coming from in this: I do consider the idea of there being a sort of "something more," but not in the same manner that you do. Instead of "soul" I simply call it a "mind". The difference is that I do not believe that this is a dualistic thing. A more appropriate name for it might be "infospace", a sort of holographic matrix of information that has no tangible substance to it. My perception of it is not dualistic because I believe that it is wholly dependent upon a physical medium, whether that be a brain or a machine. I believe that the processes of computers exist in "infospace". I see the difference between the "mind-space" and the purely computational "infospace" of a computer as nothing but a matter of structure and complexity.
I'm sure you don't agree with this idea, at least not completely, but hopefully it will help you understand better the way I perceive the AI problem and the comparison of human to machine.
 
  • #145
TheStatutoryApe said:
Note what happened above.
You conveniently did not quote my full answer...

Initially, I (wrongfully) dismissed it as not adding any real substance to the text I quoted.

Do you see that you have not addressed all of this?

In post #135 I addressed the question you asked, and responded (I think) to the gist of the text earlier.


I understand that it's a bit of a hodgepodge, so let me condense it down to my point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.

That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.


Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example for the argument to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning, but I am not bothering with going that far yet. Based on the rules of the C.R., and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing," if not understanding, what it is seeing.
Can we agree on this?


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.


In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.


As I already stated, there remains the matter of understanding what is seen, but if we could, let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output.

Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.


Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking of it "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.

Again, while we can teach the homunculus a new language this doesn't have any bearing on the purpose of the counterexample: this (the robot and program X experiment) is a clear instance in which the “right” program is being run and yet there is still no literal understanding. And you still haven't answered the questions I asked regarding this thought experiment.

You can modify the thought experiment all you want, teach the homunculus a new language etc. but it still doesn't change the fact that I've provided a counterexample. "The right program with the right hardware" doesn't seem to work. Why? Because I provided a clear instance in which the "right program" was run on the robot and still there was no literal understanding. To recap:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?
 
  • #146
tishammerw- so the soul isn't made of anything detectable or not yet detectable but still persists as a single classifiable object. That is what you're saying?

you stated "understanding" is not magically emergent yet you say the soul is not physically detectable."incorporeal" Are these statements not contradictory?

as for the finite time...i didn't mean your time but the time of the counterexample...in which, btw, you referred to Bob as the robot though in your example Bob was the human. Such a finite example of a robot's life...
But isn't human "understanding" built through many years of learning? And thus you would need to take a grander example (many pages rather than 5 lines) in order to give me an idea of what you are talking about with the pseudo-understanding, because if i captured that instance of two humans rather than human-robot then i could say that both could be robots.

as for the learning that i describe in my last post...i wasn't talking about learning algorithms or programming techniques but the concept of learning from a sociological/psychological standpoint...
 
  • #147
neurocomp2003 said:
tishammerw- so the soul isn't made of anything detectable or not yet detectable but still persists as a single classifiable object. That is what you're saying?

Er, sort of. It is indirectly detectable; we can rationally infer its existence. The soul exists, but the precise metaphysical properties may be beyond our current understanding.


you stated "understanding" is not magically emergent yet you say the soul is not physically detectable."incorporeal" Are these statements not contradictory?

I don't see why they would be contradictory.


as for the finite time...i didn't mean your time but the time of the counterexample...in which, btw, you referred to Bob as the robot though in your example Bob was the human.

No, I was referring to Bob the human. Any other implication was unintentional. And in any case it is as I said; even if the counterexample were run for a hundred years Bob wouldn't understand anything.


but isn't human "understanding" built through many years of learning.
And thus you would need to take a grandeur example(many pages rather then 5 lines) inorder to give me an idea of what you are talking about with the pseudo-understanding because if i captured that instance of two humans rather than human-robot then i could say that both could be robots.

Huh?


as for the learning that i describe in my last post...i wasn't talking about learning algorithms or programming techniques but the concept of learning from a sociological/psychological standpoint...

Well, yes we humans can learn. But learning algorithms for computers seem insufficient for the job of literal understanding.
 
  • #148
Tisthammerw said:
That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole. The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it. Even a human's capacity for understanding is based on its whole system acting as a single entity. If a human had never experienced eyesight, this would leave a large gap in its ability to understand human language. If you stripped a human down to nothing but a brain, it would be in the same exact situation that you insist a computer is in, because it is now incapable of developing meaningful understanding of the outside world. Any sensory system that you give a computer should be treated exactly as the ones for a human, as part of the whole rather than just another source of meaningless script, because those tools are part of the system's corpus as a whole, just like a human's.

Tisthammerw said:
The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.
It is true that if you lay down a bunch of binary in front of a human, they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact, it translates syntactic sensory information at a furious pace continuously, and that information is more complex than binary. The problem is that the CR asks the human to translate it with a portion of his brain ill-suited to the task. You might as well ask your Pac-Man machine to perform calculus or your Texas Instruments calculator to play Pac-Man. If you intend to ask the man in the CR to interpret syntactic sensory data as fast and efficiently as possible, you may as well let him use the portions of his brain that are suited to the task and give him a video feed. This would only be fair, and the information he would be receiving would still be syntactic in nature.

I either missed it or didn't understand it, but we do agree that translation of sensory data is a purely syntactic process, right? Not the recognition, but just the actual "seeing" part?

How about some experimental evidence that may back this up? In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their vision adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances, without the need of any meaningful thought on the part of the subjects.

How about one that involves AI? A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time," where they scurried about the room, and "feeding time," when they were low on power, where they sought out a power source and recharged. They were capable of figuring out the layout of the room they were in so as to avoid running into objects while they "played," and of finding and remembering where the power sources were for when it was time to "feed." The room could get changed around and the robots would adapt.
Even with regard to just this last bit, would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?
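To give a rough sense of the behaviour I'm describing, here is a minimal sketch in Python. It is purely illustrative and hypothetical (the names and numbers are mine, not the actual robots' code): the robot remembers any power source its sensors report, "plays" by wandering while its battery is healthy, and heads for the nearest remembered charger when it runs low.

```python
import random

class PlayFeedRobot:
    """Illustrative sketch of a robot with "play time" and "feeding time"."""

    def __init__(self):
        self.battery = 100.0
        self.position = (0, 0)
        self.known_chargers = []   # power-source locations discovered so far

    def sense(self, surroundings):
        """Remember any power source the sensors report."""
        for obj in surroundings:
            if obj["type"] == "power_source" and obj["pos"] not in self.known_chargers:
                self.known_chargers.append(obj["pos"])

    def nearest_charger(self):
        x, y = self.position
        return min(self.known_chargers,
                   key=lambda p: abs(p[0] - x) + abs(p[1] - y))

    def step(self, surroundings):
        """One decision cycle: update memory, then choose play or feed."""
        self.sense(surroundings)
        self.battery -= 1.0
        if self.battery < 20.0 and self.known_chargers:
            return ("seek", self.nearest_charger())                      # "feeding time"
        return ("wander", random.choice(["left", "right", "forward"]))   # "play time"

robot = PlayFeedRobot()
print(robot.step([{"type": "power_source", "pos": (3, 4)}]))  # -> ("wander", ...)
```

Nothing in this loop tells the robot what any particular room looks like; its memory of chargers is built up from whatever the sensors happen to report, which is why rearranging the room doesn't break it.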

Tisthammerw said:
Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.
A computer only produces output when its program suggests that it should, AI or not. It isn't necessary, so I see no need to continue forcing the AI to produce output whenever it receives any kind of input here in the CR.

I'll have to finish later; I need to get going.
 
  • #149
TheStatutoryApe said:
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole.

Ah, the old systems reply. The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possess real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese? That doesn’t strike me as plausible. Second, Searle’s response was to suppose the person internalizes the entire system: the man memorizes the rulebook, the stacks of paper, and so forth. Even though the man can then conduct a conversation in Chinese, he still doesn’t understand the language. So the systems reply doesn't seem to work at all.


The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it.

Well, in the Chinese room he is the processing power of the whole system.


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.

It is true that if you lay down a bunch of binary in front of a human, they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact, it translates syntactic sensory information at a furious pace continuously, and that information is more complex than binary.

That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.

So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules, etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
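To be clear about what is actually being simulated in the brain simulation reply, here is a minimal sketch in Python of what "a sequence of neuron firings" might look like at its most basic. It is purely illustrative, a toy leaky integrate-and-fire network of my own invention, not anyone's actual brain model:

```python
import numpy as np

def simulate_firings(weights, drive, steps=100, threshold=1.0, decay=0.9):
    """Toy leaky integrate-and-fire network: each neuron's potential decays,
    accumulates weighted input from neurons that fired on the previous step
    plus an external drive, and the neuron fires when it crosses a threshold."""
    n = weights.shape[0]
    potential = np.zeros(n)
    fired = np.zeros(n)
    history = []
    for t in range(steps):
        potential = decay * potential + weights @ fired + drive[t]
        fired = (potential > threshold).astype(float)
        potential = np.where(fired > 0, 0.0, potential)  # reset neurons that fired
        history.append(fired.copy())
    return history  # the "sequence of neuron firings"

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(5, 5))          # random "synaptic" weights
drive = rng.uniform(0.0, 0.6, size=(100, 5))   # external input at each step
spikes = simulate_firings(w, drive)
```

However faithfully such a simulation reproduced the firing pattern of a Chinese speaker's brain, running it is still just formal symbol manipulation, which is exactly the point of the water-pipe version of the room.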


I either missed it or didn't understand it, but we do agree that translation of sensory data is a purely syntactic process, right?

No (see above and below for more info).


Not the recognition, but just the actual "seeing" part?

The "seeing" of objects I do not believe to be purely syntactic (though I do believe it involves some syntactic processes within the brain).


How about some experimental evidence that may back this up? In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their vision adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances, without the need of any meaningful thought on the part of the subjects.

Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive. One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.


How about one that involves AI? A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time," where they scurried about the room, and "feeding time," when they were low on power, where they sought out a power source and recharged. They were capable of figuring out the layout of the room they were in so as to avoid running into objects while they "played," and of finding and remembering where the power sources were for when it was time to "feed." The room could get changed around and the robots would adapt.
Even with regard to just this last bit, would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?

I believe computers can process syntactic data, conduct learning algorithms, and do successful tasks, much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese), but neither entails literal perception of sight (in the first case) nor of meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
 
  • #150
Tisthammerw said:
Ah, the old systems reply. The systems reply goes something like this:
This does not address my objection whatsoever. I am not saying that the whole system understands Chinese. I'm not saying that combining the man with the book and pen and paper will make him understand Chinese. The situation would be a bit more accurate with regard to paralleling a computer, though.
The objection I had was in regard to the manner in which you are separating the computer from the sensory input. My entire last post was in regard to sensory input. I told you in the post before that this is what I wanted to discuss before we move on. Pay attention and stop distracting from the issues I am presenting.
If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you for you to decipher, you wouldn't be able to do it. Your eyes work because they are part of the system as a whole. You're telling me that the "eyes" of the computer are separate from it and just deliver input for the processor to formulate output for. In your argument its "eyes" are a separate entity processing data and sending information on to the man in the room. Are there little men in the eyes processing information just like the man in the CR? Refusing to allow the AI to have eyes is just a stubborn way to preserve the CR argument.
This is nowhere near an accurate picture. This is one of the reasons I object to you stating that the computer must produce output based on the sensory input. You're distracting from the issue of the computer absorbing and learning by saying that it is incapable of anything other than reacting, when this isn't even accurate. Computers can "think" and simply absorb information and process it without giving immediate reactionary output. As a matter of fact, most computers "think" before they act nowadays. Computers can cogitate on information and analyze its value; I'll go into this more later.
Are you really just unaware of what computers are capable of nowadays?
With the way this conversation is going, I'm inclined to think that you are a Chinese man in an English room formulating output based on rules for arguing the Chinese Room Argument. Please come up with your own arguments instead of pulling out stock arguments that don't even address my points.

Tisthammerw said:
Well, in the Chinese room he is the processing power of the whole system.
He should be representative of the system as a whole, including the sensory apparatus. If you were separated from your sensory organs and made to interpret sensory information from an outside source, you would be stuck in the same situation the man in the CR is. You are not a homunculus residing inside your head, nor is the computer a homunculus residing inside its shell.

Tisthammerw said:
That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:
No. Your hypothetical system does not allow the man in the room to use the portions of his brain suited to processing the sort of information you are sending him. Can you read the script on a page by smelling it? How easily do you think you could tell the difference between a piece by Beethoven and one by Mozart with your fingertips? How about if I asked you to read a book using only the right side of your brain? Are any of these a fair challenge? The only one you might be able to pull off is the one with your fingertips, but either way you are still not hearing the music, are you?
It has nothing to do with not having the "right program." The human brain does have the right program, but you are refusing to allow the man in the room to use it, just as you are refusing to allow the computer to have "eyes" of its own, instead outsourcing the job to another little man in another little room somewhere who only speaks Chinese.

Tisthammerw said:
One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules, etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
Even here, yet again, you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain. Yet again you force us to divorce the man in the room from the entirety of the system by creating some crude mock-up of a neural net rather than allowing him to utilize the one already in his head. Why create the mock-up when he has the real thing with him already? Creating these intermediaries only hinders the man. You continually set him up to fail by not allowing him to reach his goal in the most well-suited and efficient manner at his disposal. If anyone were to actually design computers the way you (or Searle) design the rooms that are supposed to parallel them, they'd be fired.

Tisthammerw said:
Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive.
Here you seem to misunderstand the CR argument. The property of the information that the man in the CR is able to understand is the syntax: the structure, the context, the patterns. This isn't just the manner in which it arrives; it is the manner in which he works with it and perceives it. He lacks only the semantic property. Visual information is nothing but syntactic. There is no further information there except the structure, context, and pattern of the information. You do not have to "understand" what you are looking at in order to "see" it. The man in the box does not understand the Chinese characters he is looking at, but he can still perceive them. He lacks only the ability to "see" the semantic property, that is all.

Tisthammerw said:
One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.
You do understand why the brain surgeon cannot perceive colour, right? It's a lack of the proper hardware, or rather wetware in this case. The most common cause of colour blindness is that the eyes lack the proper cone cells (the cones are the colour receptors, if I recall correctly). If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain. If the software is not present, then theoretically she could undergo some sort of operation to add it, a software upgrade for her neural processor. Funnily enough, your own example is perfect in demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception! So why is it that the proper software and hardware are necessary for a human to do these special processes that you attribute to it, but the right software and hardware are not enough to help a computer? Does the human have a magic ball of yarn? What? LOL!
And I already know what you are going to say. You'll say that the human does have a magic ball of yarn, which you have dubbed a "soul." Yet you cannot tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "Free Will" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality." So what are these things? What do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything. Of course you will say that the CR proves that there must be "Something More." So what if I were to just take a cue from you and say that all we need to do is find a magic ball of yarn called "AI" and imbue a computer with it? I can't tell you what it does except to say that it gives the computer "Intrinsic Intentionality" and/or "Free Will." Will you accept this answer to your question? If you won't, then you cannot expect me to accept your magic ball of yarn either, so both arguments are then useless and invalid for the purpose of our discussion, since they yield no results.

Tisthammerw said:
I believe computers can process syntactic data, conduct learning algorithms, and do successful tasks, much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese), but neither entails literal perception of sight (in the first case) nor of meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
Obviously it doesn't understand things the way we do, but what about understanding things the way a hamster does? You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall, there is no instruction that says "when you come to a wall, turn right." It can turn either right or left, and it makes a decision to do one or the other (a rough sketch of what I mean follows below). Of course this is a rather simplistic example, so let's bring it up a notch.
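Here is that sketch, a hypothetical illustration in Python (the scoring rule and numbers are mine, not any particular robot's program): the program weighs each option against its current sensor readings and past experience, rather than containing a fixed "turn right at walls" rule.

```python
def choose_turn(left_clearance, right_clearance, visits_left, visits_right):
    """Score each option from sensor data and experience, then pick the better one;
    nothing here says "always turn right at a wall"."""
    score_left = left_clearance - 0.5 * visits_left      # prefer open, less-visited space
    score_right = right_clearance - 0.5 * visits_right
    return "left" if score_left >= score_right else "right"

# More open space to the right, but the robot has already been there three times:
print(choose_turn(left_clearance=1.2, right_clearance=2.0,
                  visits_left=0, visits_right=3))   # -> "left"
```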
Earlier in this discussion Deep Blue was brought up. You responded in a very similar manner to this, but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasparov. The problem, though, is that you are wrong. You cannot create such rules for making a computer play chess and have the computer be successful, at least not against anyone who plays chess well, and especially not against a world champion such as Kasparov. You cannot simply program it with rules such as "when this is the board position, move the king's knight to king's bishop three." If you made this sort of program and expected it to respond properly in any given situation, you would have to map out the entire game tree. Computers can do this far faster than we can, and even at current maximum processing speeds it would take them hundreds of thousands of years. By that time we would be long dead, unable to have written out the answers for every single possible board position. So we need shortcuts. I could go on and on about how we might accomplish this, but how about I tell you the way I understand it is actually done instead.
The computer is taught how to play chess. It is told the board setup and how the pieces move, capture each other, and so on. Then it is taught strategy, such as controlling the center, using pieces in tandem with one another, discovered check, and so forth. So far the computer has not been given any set of instructions on how to respond to any given situation, such as the setup in the CR. It is only being taught how to play the game, more or less in the same fashion that a human learns how to play, except much faster. The computer is then asked, based on the rules of the game and the goals presented to it, to evaluate possible moves and pick the one that is most advantageous. This is pretty much what a human does when playing chess. So, since the computer is evaluating options and making decisions, would you still say that it cannot understand what it is doing and is only replacing one line of code with another line of code as its manual tells it to?
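To make "evaluate the possible moves and pick the most advantageous one" concrete, here is a toy sketch in Python. It is my own illustration with made-up positions, not Deep Blue's actual program (which combined deep search over many plies with a hand-tuned evaluation function), but it shows the basic idea: nothing maps a specific board to a stored answer; the choice falls out of scoring the position each legal move leads to.

```python
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Crude evaluation: material balance plus a small bonus for center control."""
    material = (sum(PIECE_VALUES[p] for p in position["own_pieces"])
                - sum(PIECE_VALUES[p] for p in position["opponent_pieces"]))
    return material + 0.1 * position["center_squares_controlled"]

def choose_move(legal_moves):
    """legal_moves maps each candidate move to the position it would produce."""
    return max(legal_moves, key=lambda move: evaluate(legal_moves[move]))

# Two hypothetical candidate moves with equal material but different center control:
back_rank = ["Q", "R", "R", "B", "B", "N", "N"]
moves = {
    "Nf3": {"own_pieces": back_rank + ["P"] * 8,
            "opponent_pieces": back_rank + ["P"] * 8,
            "center_squares_controlled": 3},
    "a3":  {"own_pieces": back_rank + ["P"] * 8,
            "opponent_pieces": back_rank + ["P"] * 8,
            "center_squares_controlled": 1},
}
print(choose_move(moves))   # -> "Nf3", the move leading to the better-scoring position
```

The point of the sketch is only that the program contains no table of canned responses; a real engine adds deep look-ahead and a far richer evaluation, but the decision still emerges from weighing options rather than from a rule written for that exact board.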
 
