Can Artificial Intelligence ever reach Human Intelligence?

Summary:
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #91
tisthammer: are you saying that there are physical laws that apply only to carbon-based systems, laws that silicon-based systems can never satisfy?

Are you familiar with the term "self-similar fractals on multiple scales"? Also, at the end of post #70 (which you told me to look at), I am unsure what that has to do with adaptive techniques.

Also, this concept of "understanding": do you believe it lies outside the brain? If so, do you believe that the "soul" lies outside the brain? And thus, if one removes the brain, does the soul/understanding continue to function?

And remember, we are not talking about a desktop PC (though we could be, if we were using wireless signals to transmit to a robotic body). We are talking about a robot with the sensory information that a human child would have.

Lastly, you speak of the concept of a soul... if it travels from body to body, why does a child not speak straight out of the womb? Do you believe it remembers from a past existence?
If so, in what physical realm (not necessarily ours) does this soul exist?
If not, then what does a soul represent? If its transference to another body does not bring with it knowledge, languages, emotions, or artistic talents, what exactly is the purpose of a "soul" as you would define it?
And if it does not travel from body to body, does it exist only while a child exists, even though you still believe it has no physical presence in our known physics?

IMO, we must discuss your idea of a soul here (I believe you said it was for a different thread) because it is relevant to the discussion at hand. First, we must clarify certain terminology (if you have posted your definitions above, please refer me to them): awareness, consciousness, understanding, soul. The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just "computer") can never have the spiritual existence you speak of, and thus will only ever mimic a human, no matter how real it might seem. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental network structures that lie within the brain? The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?
 
  • #92
neurocomp2003 said:
tisthammer: are you saying that there are physical laws that apply only to carbon-based systems, laws that silicon-based systems can never satisfy?

No, but I am saying there are principles operating in reality that seem to prevent a computer from understanding (e.g. one that the Chinese room illustrates). Computer programs just don't seem capable of doing the job.


Are you familiar with the term "self-similar fractals on multiple scales"?

I can guess what it means (I know what fractals are) but I'm unfamiliar with the phrase.


Also, this concept of "understanding": do you believe it lies outside the brain?

Short answer, yes. I believe that understanding cannot exist solely in the physical brain, because physical processes themselves seem insufficient to create understanding. If so, an incorporeal (i.e. soul) component is required.


If so, do you believe that the "soul" lies outside the brain?

The metaphysics are unknown, but if I had to guess I'd say it lies "within" the brain.


And thus, if one removes the brain, does the soul/understanding continue to function?

Picture a man in a building. He has windows to the outside world, and a telephone as well. Suppose someone comes along and paints the windows black, cuts the telephone lines, etc. The man is cut off, but he is still there; and once the building is gone, he can get up and leave. I think the same sort of thing is true for brain damage: the person can't receive the "inputs" from the physical brain and/or communicate the "outputs." If the physical brain is completely destroyed, understanding (which requires inputs) might still be possible but would seem to require another mechanism besides the physical brain. This may be possible, and thus so is an afterlife.


And remember, we are not talking about a desktop PC (though we could be, if we were using wireless signals to transmit to a robotic body). We are talking about a robot with the sensory information that a human child would have.

That may be true, but the same principles apply: manipulating input through a system of complex rules to produce "valid" output. This doesn't and can't produce understanding, as the Chinese room demonstrates. Visual data is still represented as 1s and 0s, rules of manipulation are still being applied, etc.


Lastly, you speak of the concept of a soul... if it travels from body to body, why does a child not speak straight out of the womb? Do you believe it remembers from a past existence?

I don't think I believe it travels from "body to body," and I do not believe in reincarnation.

Why does the baby not speak straight out of the womb? Well, it hasn't learned how to yet.


And if it does not travel from body to body, does it exist only while a child exists, even though you still believe it has no physical presence in our known physics?

I don't know when the soul is created; perhaps it is only when the brain is sufficiently developed to provide inputs. BTW, here's my metaphysical model:

Inputs (sensory perceptions, memories, etc.) -> Soul -> Outputs (actions, etc.)

The brain has to be advanced enough to provide adequate input. In a way, the physical body and brain provide the "hardware" for the soul to do its work (storing memories, providing inputs, a means to do calculations, etc.).


IMO, we must discuss your idea of a soul here (I believe you said it was for a different thread) because it is relevant to the discussion at hand.

As you wish. Feel free to start a thread in the metaphysics section of this forum. I'll be happy to answer any questions.


First, we must clarify certain terminology (if you have posted your definitions above, please refer me to them): awareness, consciousness, understanding, soul.

The soul is the incorporeal basis of oneself; in my metaphysical theory it is the "receiver" of the inputs and the ultimate "initiator" of outputs. Awareness, consciousness, and understanding carry their "ordinary" meanings as I use them (i.e. if you don't know what they mean, feel free to consult your dictionary, as I attach no "special" meaning to them).


The terms soul and understanding seem to be a big part of your argument that AI (let's use the proper term now, rather than just "computer") can never have the spiritual existence you speak of, and thus will only ever mimic a human, no matter how real it might seem. Which leads me to also ask: isn't it possible that human consciousness is only a byproduct of the fundamental network structures that lie within the brain?

No, because there would be no "receiver" to interpret the various chemical reactions and electrical activity occurring in the brain. (Otherwise, it would sort of be like the Chinese room.)


The constant flow of information to your language zones allows them to produce the words in your head, making you "believe" that you are actually thinking, and this continues for all time?

The words (and various other inputs) may come from the physical brain, but a soul would still be necessary if real understanding is to take place.
 
  • #93
That is correct: the speech software does make choices that have been taught to it by a teacher! The programmer's only job was to write general learning software, not to teach it how to behave. Unless my job has been a fantasy for the last 15 years.
 
  • #94
If you believe the soul exists and is not definable within the chemical processes going on in the brain, then the answer is no; but if you believe the brain is the sum of its parts, then the answer is yes.
 
  • #95
tisthammer: heh, I think you may need to define what you mean by a rule. Is it a physics-based rule, where interactions, collisions, and forces dominate, or a math/CS-based rule, where logic is more prevalent?
 
  • #96
hypnagogue said:
What exactly do you mean by this? In fact, computation in the human brain is essentially digital-- either a neuron undergoes an action potential or it does not. In principle, there is nothing about the way the human brain computes that could not be replicated in an artificial system.

They have step-by-step processes; we have parallel thinking.
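The all-or-nothing firing described in the quote above is the basis of the classic McCulloch-Pitts neuron model, and it is easy to state in code. A minimal sketch, with invented weights and threshold used purely for illustration:

```python
# Minimal McCulloch-Pitts style neuron: it either "fires" (1) or it
# doesn't (0), mirroring the all-or-nothing action potential.
def neuron(inputs, weights, threshold):
    # Weighted sum of incoming signals (a crude stand-in for dendritic input)
    activation = sum(x * w for x, w in zip(inputs, weights))
    # All-or-nothing output: fire only if activation reaches the threshold
    return 1 if activation >= threshold else 0

# Illustration values only, not biological data
print(neuron([1, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # -> 1 (fires)
print(neuron([0, 0, 1], [0.6, 0.9, 0.5], threshold=1.0))  # -> 0 (silent)
```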
 
  • #97
Wrong... there is pseudo-parallelism (multithreading, parallel computing); granted, it may be slower than real time, but it still exists. And I believe certain companies are in the midst of developing parallel computers... look at your sound card/video card/CPU: they run on separate hardware.
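A minimal sketch of the pseudo-parallelism described here, using Python's standard threading module. The worker computation is an arbitrary stand-in, and in CPython the global interpreter lock interleaves CPU-bound threads rather than running them truly simultaneously, which is exactly the "pseudo" being described:

```python
import threading

def worker(name, results):
    # Arbitrary stand-in computation; each thread fills in its own slot.
    results[name] = sum(range(100_000))

results = {}
threads = [threading.Thread(target=worker, args=(f"t{i}", results)) for i in range(2)]
for t in threads:
    t.start()   # both threads now run "side by side"
for t in threads:
    t.join()    # wait for both to finish
print(results)  # {'t0': 4999950000, 't1': 4999950000}
```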
 
  • #98
The problem with the Chinese room example is that you're trying to argue in favor of a "soul". Could you please show me, on a model of the human body, where the soul is? Which organ is it attached to?

It's hard to replicate or emulate something that doesn't exist. We as humans are influenced by emotions. Emotions can be programmed. That is the "soul" that keeps being tossed about. To go with it, we could suppose that it is our "soul" that allows us to feel empathy, pity, joy, sadness, etc. That's the "soul" you refer to, and it's possible to duplicate emotions. We use all of our senses to "understand" the world we live in. We learn from birth how the world works: when it is appropriate to be happy, sad, angry, etc. I believe that, given sufficient technology, if a machine were "born" with the same senses as human beings, it could so closely replicate human behavior, intuitiveness, and intelligence as to be indistinguishable from the real thing.

An argument was made that a computer couldn't emulate human behavior because a machine can't exceed its programming. Well, a computer can have a "soul" if we program it with one. I agree that we still do not fully understand our own thought processes and how emotions affect our decisions, but that doesn't mean we won't someday. And if we can understand it, we can duplicate it. Someone else said computers can't "understand" human behavior. I have to repeat hypnagogue:

What does it mean to "understand"?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand". If we tell a sufficiently advanced computer something like "when someone dies, they are missed- this is sad", eventually it would understand. Teaching through example is fundamental to human understanding.

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the man in the Chinese room with the alphabet and the ruleset, he may never learn Chinese. But if you translate one sentence into English for him, and give him sufficient time, eventually he will read Chinese fluently. Does the fact that he needed help to read the Chinese change anything? In some things a machine is lacking (i.e. it has to be taught emotions instead of being born with them), but in some instances it is more advanced (it doesn't get tired, doesn't forget, etc.). A machine will never actually "be" a human being, because one is created naturally, the other artificially. However, this does not mean that a computer can't "understand" what it is to be human.

Let's narrow it down. If you could attach a functioning human brain to a humanistic robot, with all 5 senses, and allow that brain to operate all those senses, does this "machine" have a soul? Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight the same way humans do, wouldn't it basically have the same "understanding" as a human?

I believe the problem is that most people have trouble with the concept of a machine that can duplicate the human experience. It may be sci-fi today, but in 100 or 200 years it may be child's play. People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the 5 senses. The AI of today is so childlike. When AI has advanced to the point where, if you give it a translation of English to Latin, it can not only understand Latin but every other language in the world, and then create its own linguistics, that will be a machine capable of understanding. And I think that type of intelligence scares people. Because then, we are the children.

EDIT: I believe the original question has been answered: machines can exceed humans in intelligence. Why? Because you can always build a better computer, while we still haven't been able to improve the human brain. Not only that, but last I checked you couldn't connect multiple brains to process information simultaneously.

Therefore, the prominent questions remain: "can machines feel? can machines have a soul?"

EDIT 2: I've been thinking about this gap of emotional understanding. We can program a computer to show mercy, but will it understand why it shows mercy? The answer is a complex one. We have to show it, through example, why showing mercy is compassion. We have to teach it why there are benefits to itself in doing such things. Things that to us are beyond simplistic have to be taught to machines. However, a machine would not kill except in self-defense. Emotions are simultaneously our strengths and our weaknesses. But they can be taught.
 
  • #99
You should perhaps read up on Jeff Hawkins' theory of intelligence, and also read his book "On Intelligence".
I plan on designing something along those lines.
 
  • #100
Zantra said:
The problem with the Chinese room example is that you're trying to argue in favor of a "soul".

Actually, I'm primarily using it to argue against strong AI. But I suppose it might be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Could you please show me, on a model of the human body, where the soul is? Which organ is it attached to?

Probably the physical brain (at least, that's where it seems to interact).


Well, a computer can have a "soul" if we program it with one.

I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?

If we teach them what we want them to know, and give them the same faculties as we possess, they will inevitably "understand".

But given the story of the Chinese room, that doesn't seem possible in principle.

I think there is a chasm here, but it's a spiritual one, not a technological one. If you leave the man in the Chinese room with the alphabet and the ruleset, he may never learn Chinese. But if you translate one sentence into English for him, and give him sufficient time, eventually he will read Chinese fluently.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?


Let's narrow it down. If you could attach a functioning human brain to a humanistic robot, with all 5 senses, and allow that brain to operate all those senses, does this "machine" have a soul?

The human brain does.

Let's say it was the brain of a human child, without any experience as a human being. How would that creation develop? Would it learn the same way it would in a human body? If a machine could process touch, taste, smell, and sight the same way humans do, wouldn't it basically have the same "understanding" as a human?

I suppose so, given that this is a brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

People conceptualize a machine as incapable of human understanding because, regardless of the CPU, it does not have the 5 senses.

It's difficult to see why that would make a difference. We already have cameras and microphones that can be plugged into a computer, for instance. Machines can convert the sounds and images to electrical signals, 1s and 0s, and process them according to written instructions, etc., but we still have the same problem that the Chinese room points out.
 
  • #101
neurocomp2003 said:
tisthammer: heh, I think you may need to define what you mean by a rule. Is it a physics-based rule, where interactions, collisions, and forces dominate, or a math/CS-based rule, where logic is more prevalent?

In the case of machines and understanding, it's more like a metaphysical principle, like ex nihilo nihil fit ("nothing comes from nothing").
 
  • #102
Tisthammerw said:
Actually, I'm primarily using it to argue against strong AI. But I suppose it might be used to argue in favor of the soul. Still, there doesn't seem to be anything wrong with the Chinese room argument: the person obviously does not understand Chinese.

Your analogy has holes in it. Regardless of whether the man can understand Chinese, machines CAN understand us. It may not be able to empathize, but it understands the structure of things to the same degree we do. You have to define for me exactly what it doesn't understand- exactly what it is that cannot be taught- because by my definition, you can teach a machine anything that you can teach a human. Give me one example of something you can't teach a machine. The Chinese room springs from the notion that if something isn't inherently human by design, it cannot "understand" humanistic behavior- I think this is false. There is purpose behind each human emotion- it doesn't follow logic, but a computer can be taught to disregard logic when faced with an emotional situation.

Probably the physical brain (at least, that's where it seems to interact).

The brain is composed of synapses, dendrites, and action potentials- so I think we can agree that nowhere in the brain has any scan ever revealed a particular region that is the "soul". That is spirituality. I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.



I doubt it. Given what we've learned from the Chinese room, how could machines possibly understand? You'd need something other than complex rules manipulating input for literal understanding to exist, as the Chinese room story demonstrates. What could the designer possibly add to make a computer understand? A magical ball of yarn?

What does literal understanding encompass? Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities, it is beyond anyone to truly understand us unless they are us? If so, that's very presumptive and not realistic. If you put a CPU into a human body--to reverse the notion--it will be able to fully comprehend what it is to be human. It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off- maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul? If an advanced AI computer analyzes this, would it not understand? All humanistic understanding requires is a frame of reference.

Great, but that doesn't help the strong AI thesis. It's easy to say understanding is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?

Well, if you can't teach him Chinese, you will just have to take him to China :wink:

I suppose so, given that this is a brain of a human child. But this still doesn't solve the problem of the Chinese room, nor does it imply that a computer can be sentient. Computer programs use a complex set of instructions acting on input to produce output. As the Chinese room illustrates, that is not sufficient for understanding. So what else do you have?

That's the way current AI operates. In the future this may not always be the case. I've been reading some of Jeff Hawkins' papers- interesting stuff. If you change the way a computer processes information, it may be capable of learning the same way we do, through association. The Chinese room is a dilemma. I'm not suggesting that the Chinese room is wrong exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese the way a human would. The funny part is that this debate is a step backwards in evolution. We can teach a machine to understand why humans behave the way we do, but why would we want to teach them to "BE" human? Humans make mistakes. Humans do illogical things that don't make sense. Humans get tired, humans forget, humans get angry and jealous. Machines do none of those things. The purpose of machines is to assist us, not to take our place.

That being said, I believe that if we change the way machines process input, progress can be made. As far as how we get from point A to point B, that I can't answer.
 
  • #103
Zantra said:
Your analogy has holes in it.

Please tell me what they are.


Regardless of whether the man can understand Chinese, machines CAN understand us.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):


Will a computer program ever be able to simulate intelligence (e.g. a human conversation)? If technological progress continues, I am certain that it will. But at their heart, computer programs are nothing more than giant rulebooks manipulating bits of data (1s and 0s). So while a program may be a successful natural language processor, it will never understand language any more than the man in the Chinese room understands Chinese.

The Chinese room thought experiment shows that a computer program clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have to make it have understanding?

Feel free to answer that question (I haven't received much of an answer yet).


You have to define for me exactly what it doesn't understand

Assuming "it" means a computer, a computer cannot understand anything. It may be able to simulate conversations etc. via a complex set of rules manipulating input, but it cannot literally understand the language anymore than the person in the Chinese room understands Chinese.


- exactly what it is that cannot be taught

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, and thus nothing fundamentally different from the Chinese room (the man using a complex set of instructions); they are no answer to the question of what else a computer has, besides a complex set of instructions acting on input, that would give it literal understanding.
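To make this concrete, here is a minimal sketch (with invented patterns) of the kind of rulebook the room could be running in the conversation above: it "learns" the name by copying a symbol into storage and echoing it back, yet nothing in it grasps what a name is:

```python
import re

memory = {}  # the room's scratch paper: stored symbols, not grasped meanings

def room(line):
    # The "rulebook": if the input matches pattern X, write down Y.
    m = re.fullmatch(r"My name is (\w+)\.?", line)
    if m:
        memory["name"] = m.group(1)  # a symbol copied onto scratch paper
        return f"Hello {memory['name']}."
    if line == "How are you doing?":
        return "Just fine. What is your name?"
    if line == "You've learned my name?":
        return "Yes."
    if line == "What is it?":
        return memory.get("name", "I don't know.")
    return "..."

for line in ["How are you doing?", "My name is Bob.",
             "You've learned my name?", "What is it?"]:
    print("Human:", line)
    print("Room:", room(line))
```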


The brain is composed of synapses, dendrites, and action potentials- so I think we can agree that nowhere in the brain has any scan ever revealed a particular region that is the "soul".

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.


I'm taking this from a totally scientific POV, which means that you can no more prove you have a soul than you can prove the machine doesn't have one.

I wouldn't say that.


What does literal understanding encompass?

I use no special definition. Understanding means "to grasp the meaning of."


Are you simply stating that to be us is to know us? That regardless of any superior intellectual capabilities, it is beyond anyone to truly understand us unless they are us?

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?


It's said that the sum total of a person's memories can fit onto about 15 petabytes of drive space. That's about 10 years off- maybe. When you can transfer someone's entire mind to a computer, does that memory lose its soul?

As for a person's literal mind, I don't even know if that's possible. But if you're only talking about memories--raw data--then I would say the person's soul is not transferred.


If an advanced AI computer analyzes this, would it not understand?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.


If you change the way a computer processes information, it may be capable of learning the same way we do, through association.

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.


I'm not suggesting that the Chinese room is wrong exactly. I'm just saying we need to change the rules of the room so that he can learn Chinese the way a human would.

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?
 
  • #104
Tist, you seem to make quite a few assumptions without much more reason than "There must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human? What does the soul do that is different than this?
Your soul argument is just a homunculus: a small insubstantial being inside of us that does all of the "understanding," or real information processing.
Let's hit that first. What is "understanding" if not a manner of processing information? You say that "understanding" is required for meaningful output. Then please elucidate for us the process that is understanding. How is understanding different from processing information via complex rules?
To this I am sure you will once again invoke your Chinese room argument, but your Chinese room does not allow the potential AI any of the freedoms of a human being. The man in the Chinese room is just another homunculus scenario, except that you have made the assumption that this one is apparently incapable of the magical metaphysical property you refer to as "understanding" (a magic ball of yarn in the human mind?).
You ask what else you could add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside, so that the language it is receiving has a context. Also, give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why? Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has? Your soul homunculus is in the same position as the homunculus in the Chinese room. The Chinese room homunculus is the function that receives input and formulates a response based on complex rules received from the outside world. Your soul homunculus is in its own box, receiving information from the outside world in some sort of language the brain uses to express what it sees and hears, yet you say its decision-making process is somehow fundamentally different. Does the soul homunculus not have a set of rulebooks? Does it somehow already supernaturally know how to understand brain-speak? What is the fundamental difference between the situations of the Chinese room homunculus and the soul homunculus?
 
  • #105
Tisthammerw said:
Please tell me what they are.

Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required. Furthermore, let us assume we not only teach the man how to read Chinese, but also what the purpose of language is, how it allows us to communicate, etc. The man is capable of learning Chinese- that's the assumption. Your assumption is that the rules are static and can't be changed.

That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data, and outputting a response. You're trying to lure me into a "soul debate".

The Chinese room thought experiment shows that a computer program clearly requires something other than a complex set of rules manipulating input. But what else could a computer possibly have to make it have understanding?

Essentially, a CPU emulates the human brain in terms of processing information. If AI can learn the "why" behind answers to questions, that to me satisfies the requirement. The better question would be: what is the computer lacking that makes it incapable of understanding to your satisfaction?

It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, and thus nothing fundamentally different from the Chinese room (the man using a complex set of instructions); they are no answer to the question of what else a computer has, besides a complex set of instructions acting on input, that would give it literal understanding.

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, i.e.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul". In fact, if we take away the name tags, we could easily assume this is a conversation between 2 people.

Well, duh. The soul is a nonphysical entity. I'm saying that is where the soul interacts with the physical world.

And I'm saying that if a computer has enough experience of the human condition, whether it has a soul or not doesn't matter- it still understands enough.

I use no special definition. Understanding means "to grasp the meaning of."

Ok, then by that definition a computer is fully capable of understanding. Give me any example of something that a computer can't "understand" and I will tell you how a computer can be taught it, whether by example, experience, or just plain old programming. I'm talking about a computer that learns on its own without being prompted. A computer that sees something it doesn't understand, and takes it upon itself to deduce the answers using its available resources. That's true AI.

The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining? What is this thing that humans have that machines cannot possess? Are you talking about a soul? What is a soul exactly? How about curiosity? If we design a machine that is innately curious, doesn't that make it strikingly human in nature?

If all this computer program does is use a set of instructions to mechanically manipulate the 1s and 0s, then my answer would be "no more understanding than the man in the Chinese room understands Chinese." Again, you're going to need something other than rules manipulating data here.

Then what do humans use to process data? How do we interact with the world around us? We take in sensory input (i.e. data), process the information in our brains (CPU), then react to that processed data accordingly (output). What did I miss about the human process?

But if you're still using the basic principle of rules acting on input etc. that won't get you anywhere. A variant of the Chinese room also changes the way input is processed. But the man still doesn't understand Chinese.

But you still refuse to teach the guy how to read Chinese. Man, he must be frustrated. Throw the guy a bone :wink:

Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again, what else do you have?

You have to change your way of thinking. Sentience can be had in a machine. Can I tell you how to go out and build one? Can I tell you how something like this will be accomplished? No. But this isn't science fiction, it is science future. It's hard to see how to launch rockets into space when we've just begun to fly- we're still at Kitty Hawk. But in time it will come.
 
  • #106
TheStatutoryApe said:
Tist, you seem to make quite a few assumptions without much more reason than "There must be something more."
Why exactly must it be that there is something more?
Why is a complex, mutable, and rewritable system of rules not enough to process information like a human?

I'll try again.

The Chinese Room

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (by using a complex set of instructions acting on input), he does not understand Chinese at all.

The Chinese room shows that having a complex system of rules acting on input is not sufficient for literal understanding to exist. We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

(Remember, variants of the Chinese room include the system of rules being complex, rewritable, etc., and yet the man still doesn't understand a word of Chinese.)
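Written as code, the entire room reduces to a syntactic lookup: symbols in, symbols out, with nothing anywhere that represents what the symbols mean. A minimal sketch, with a two-entry rulebook standing in for the enormous real one:

```python
# "If you see X, write down Y" -- the whole room is a symbol-to-symbol map.
rulebook = {
    "你好吗？": "我很好。",            # "How are you?" -> "I am fine."
    "你叫什么名字？": "我没有名字。",  # "What is your name?" -> "I have no name."
}

def chinese_room(symbols):
    # Pure syntax: the lookup never touches the meanings in the comments above.
    return rulebook.get(symbols, "请再说一遍。")  # "Please say that again."
```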


What does the soul do that is different than this?

I believe that literal understanding (in addition to free will) requires something fundamentally different--to the extent that the physical world cannot do it. The soul is and provides the incorporeal basis of oneself.


Let's hit that first. What is "understanding" if not a manner of processing information?

Grasping the meaning of the information. It is clear from the Chinese room that merely processing it does not do the job.


To this I am sure you will once again invoke your Chinese room argument, but your Chinese room does not allow the potential AI any of the freedoms of a human being.

By all means, please tell me what else a potential AI has other than a complex set of instructions to have literal understanding.


You ask what else you could add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside, so that the language it is receiving has a context. Also, give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy in real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, and then a complex set of rules that manipulates that input and produces output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Note: the text below goes off topic into the realm of the soul

Simply because the AI homunculus doesn't possess the same magical ball of yarn that your soul homunculus has?

Actually, my point is that the soul is the figurative "magical ball of yarn." Physical processes seem completely incapable of producing real understanding; something fundamentally different is required.


Does it somehow already supernaturally know how to understand brain-speak?

This is one of the reasons why I believe God is the best explanation for the existence of the soul: the incorporeal would have to successfully interact with a highly complex form of matter (the brain). The precise metaphysics may be beyond our ability to discern, but I believe that this is how it came to be.


What is the fundamental difference between the situations of the Chinese room homunculus and the soul homunculus?

The soul provides that “something else” that mere computers don't have.
 
  • #107
Zantra said:
Ape seems to have beaten me to the punch. If we show the man in the room how to translate Chinese, that says to me that he is able to understand the language he is working with. No further burden of proof is required.

I've already responded to this. While your idea may sound good on paper, watch what happens when we try to instantiate this analogy in a real computer.

You have cameras and microphones, transducers that turn the signals into 1s and 0s, and then a complex set of rules that manipulates that input and produces output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.


Your assumption is that the rules are static and can't be changed.

Not at all. Variants of the Chinese room include learning algorithms and the creation of different procedures (the man has extra paper to write down more information, etc.), as I illustrated before (when the Chinese room "learns" a person's name).


That seems rather question begging in light of the Chinese room thought experiment. As I said in post #56 (p. 4 of this thread):

And in response to that I say that the human mind is nothing more than an organic mirror of its CPU counterpart: processing input, interpreting the data, and outputting a response.

...

Essentially, a CPU emulates the human brain in terms of processing information.

And that is still question begging given what we've learned from the Chinese room, and it still doesn't answer my question of what else a computer has, besides a complex set of rules acting on input, that would let it literally understand.


It can be metaphorically taught; the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn't understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, and thus nothing fundamentally different from the Chinese room (the man using a complex set of instructions); they are no answer to the question of what else a computer has, besides a complex set of instructions acting on input, that would give it literal understanding.

The room understands that his name is Bob. What more needs to be known about Bob? That's an example of a current AI program. I can probably find something like that online. But what if the conversation went a little differently, i.e.:

Human: How are you today?
Room: I'm lonely. What is your name?
Human: My name is Bob. Why are you lonely?
Room: Nice to meet you Bob. You are the first person I have met in 2 years.
Human: I can understand why you are lonely. Would you like to play a game with me?
Room: I would like that very much.

The computer appears to have more of a "soul".

And so does the room. Nonetheless, the person in the room doesn't know the man's name is Bob, isn't necessarily feeling lonely, doesn't even understand Bob's words at all, etc. We still just have a complex set of rules operating on input, which I've shown is insufficient for literal understanding to exist.


I use no special definition. Understanding means "to grasp the meaning of."

Ok, then by that definition a computer is fully capable of understanding.

The Chinese room thought experiment would seem to disprove that statement--unless you can show me what else a computer has besides a complex set of rules etc. that would make it literally understand.


The answer to both questions is no. Now how about answering my question? We'd need computers to have something else besides a set of instructions (however complex) manipulating input to overcome the point the Chinese room makes. It's difficult to conceive how that could even be theoretically possible. What could we possibly add to the computer to make it literally understand? A magic ball of yarn? A complex arrangement of bricks? What?

You keep alluding to your own magic ball of yarn. What is this magical property that you keep hinting at but never defining?

I have repeatedly pointed out that computers manipulating input via a set of instructions is not sufficient to produce understanding. My question: "what else do you have?" That's for you to answer, not me. I claim there is nothing you can add to the computer to make it literally understand.


Note: going off topic to the soul realm

What is this thing that humans have that machines cannot possess?

A soul.


Are you talking about a soul?

Yes.


What is a soul exactly?

The incorporeal basis of oneself.


How about curiosity? If we design a machine that is innately curious, doesn't that make it strikingly human in nature?

I believe we can make a machine "strikingly human in nature" in the sense that the machine can mimic human behavior--just as the Chinese room can mimic a person fluent in Chinese. But that does not imply the existence of literal understanding.


You have to change your way of thinking. Sentience can be had in a machine.

Rather question begging in light of the Chinese room, especially when you can't answer my question: what else could a computer possibly add for it to possess literal understanding?

Apparently nothing.
 
  • #108
What complex rule? Learning algorithms don't use logic rules in the sense of language.
 
  • #109
neurocomp2003 said:
What complex rule? Learning algorithms don't use logic rules in the sense of language.

Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be, and in fact are, "translated" into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value to that, etc., all according to rules like Boolean logic.
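As an illustration, here is a toy sketch (an invented three-instruction set, not any real machine language) of how the whole tower of languages bottoms out in rule-following of roughly this shape: a machine mechanically moving and combining values in registers:

```python
def run(program):
    # A toy register machine: execution is nothing but rule-following
    # over values stored in named registers.
    registers = {}
    for op, *args in program:
        if op == "LOAD":      # LOAD reg, value
            registers[args[0]] = args[1]
        elif op == "ADD":     # ADD dest, src: dest = dest + src
            registers[args[0]] += registers[args[1]]
        elif op == "PRINT":   # PRINT reg
            print(registers[args[0]])

# The high-level statement "x = 2 + 3; print(x)" hand-compiled:
run([("LOAD", "r1", 2), ("LOAD", "r2", 3), ("ADD", "r1", "r2"), ("PRINT", "r1")])
# prints 5
```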
 
  • #110
Your entire argument still revolves around, "There must be something more."
You still limit the homunculus in the Chinese room, even though the soul of which you speak is the same thing except that you have given it a magic ball of yarn. Just a magic ball of yarn, with no explanation as to what it does, what it's made of, or how it works. So if I told you that all I have to do is give a computer "AI" for it to be sentient, would you believe me? Wouldn't you ask me what this "AI" does and how it does it? If I simply told you that it's the fundamental element of computer sentience that gives it "free will" and "understanding," would you be satisfied?
This is as much information as you have given us regarding this soul. You simply say that it must exist for there to be "free will" and "understanding"; hence, since humans have "free will" and "understanding," this soul obviously exists! This argument is completely useless and a classic example of bad logic.

Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul? He argued for what he calls intrinsic intentionality, which seems to be just as vague a notion as the soul you argue for. You would most likely call it "free will," but Searle doesn't postulate that a soul is necessary for free will.

But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent? It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of it pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again to see if there were possibly any better moves to be made. After playing with different options, and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get back to where I was before I had backtracked. The computer, though, decided to do something completely different than it had the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to give differing responses to the same set of circumstances? And this is just a cheap program that isn't very good.
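For what it's worth, varied behavior of this kind does not by itself require understanding: a fixed-rule player that breaks ties among equally scored moves at random will answer the same position differently on different runs. A minimal sketch, where the scoring function is a placeholder supplied by the caller:

```python
import random

def choose_move(position, legal_moves, score):
    # Score every legal move with the program's fixed rules...
    best = max(score(position, m) for m in legal_moves)
    # ...then pick at random among the equally best ones, so the same
    # position can yield different replies on different runs.
    return random.choice([m for m in legal_moves if score(position, m) == best])
```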
 
  • #111
Ah, so your definition of a rule is any rule... fair enough. I was under the impression your definition of a rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of the interactions/collisions of physical objects that create it... or are you saying that there exists this mysticism known as a soul, which exists outside of any physical object (known or unknown to man) in our universe?
 
  • #112
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be, and in fact are, "translated" into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value to that, etc., all according to rules like Boolean logic.
How is this any different from the human body and brain? The signals that our brain receives aren't in English, nor are the outputs that it gives. As I've been trying to show you: just put a little man inside the brain- you can even call it a soul if you'd like- and you will have the exact same situation that you have been giving us regarding the Chinese room.

---edit---

I wouldn't be surprised if those who try to negate the idea of free will, and of a human being more than the sum of its parts, would use a version of the Chinese room argument to make their case.
 
  • #113
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules, not semantic rules, based on its situation. After reading this article, it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rule.
Semantics are in reality rooted in syntax, which Searle contends is the only thing that computers "understand". The computer's capacity to "understand" only syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument: at its base, all cognition comes from syntax.

HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.
 
  • #114
TheStatutoryApe said:
Your entire argument still revolves around, "There must be something more."

Yes, and the Chinese room thought experiment (see post #106) would seem to illustrate that point rather nicely. You still haven't found a way to overcome that problem.


You still limit the homunculus in the Chinese room, even though the soul of which you speak is the same thing except that you have given it a magic ball of yarn.

Not quite. The soul is the incorporeal basis for the self, consciousness, understanding, and sentience. Using our yarn metaphor, the soul is the “magic ball of yarn.”


Do you realize that Searle, who came up with the Chinese room, didn't argue for a soul?

Yes, I do. Searle was a physicalist. But that doesn't alter my points. It still seems that a computer lacks the means to possess literal understanding, and it still seems that the Chinese room thought experiment is sound.


But what about current AI computers that outperform what most people ever thought they would be able to do? Deep Blue beat Kasparov (the world champion chess player). How does a machine do that without being genuinely intelligent?

In this case, it did so using a complex set of rules (iterative-deepening search algorithms with alpha-beta pruning, etc.) acting on input. I myself have made an AI that could beat many players at a game called Nim. Nonetheless, this still doesn't overcome the point the Chinese room makes: a complex set of rules operating on input is insufficient for literal understanding. So what else do you have?
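The poster's actual Nim program isn't shown, but Nim illustrates the point neatly: perfect play falls out of one fixed rule, namely reduce the XOR of the heap sizes (the "nim-sum") to zero. A sketch of that standard strategy:

```python
from functools import reduce
from operator import xor

def nim_move(heaps):
    """Return (heap_index, new_size) for a winning move under normal play,
    or None if every move loses against perfect play."""
    nim_sum = reduce(xor, heaps)   # XOR of all heap sizes
    if nim_sum == 0:
        return None                # losing position: no winning move exists
    for i, h in enumerate(heaps):
        target = h ^ nim_sum       # heap size that zeroes the nim-sum
        if target < h:
            return i, target
    return None

print(nim_move([3, 4, 5]))  # (0, 1): reduce the first heap to 1 stone
```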


It would have to make decisions and produce meaningful output, wouldn't it?
I have a cheap computer program that plays Go. That's a complex (even more so than chess) Japanese/Chinese strategy game. One day I was playing the computer and found that I had gotten the better of it pretty well in a certain part of the board. I decided to backtrack the moves and play that bit over again to see if there were possibly any better moves to be made. After playing with different options, and being satisfied that I had made the most advantageous moves in that situation, I tried playing the original sequence to get back to where I was before I had backtracked. The computer, though, decided to do something completely different than it had the first time. If the computer has no "understanding" whatsoever of what is going on, then how does it make decisions to give differing responses to the same set of circumstances?

Like many programs, it uses a complex set of instructions acting on input. Don't forget that the Chinese room can emulate these very same features (e.g. making different responses to the same question, etc.) given the appropriate set of rules.
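Varying responses to identical input needs nothing more than fixed rules plus a little internal state. A minimal sketch (hypothetical names throughout, not the actual Go program):

```python
import random

# Fixed rules plus internal state can answer the *same* position differently.

RESPONSES = {
    "position_A": ["move_1", "move_2", "move_3"],   # rule: legal replies per position
}

class RuleEngine:
    def __init__(self, seed=None):
        self.rng = random.Random(seed)              # internal state, still rule-governed

    def reply(self, position):
        options = RESPONSES.get(position, ["pass"])
        return self.rng.choice(options)             # same input, possibly different output

engine = RuleEngine()
print(engine.reply("position_A"))                   # may differ between runs
print(engine.reply("position_A"))
```

Nothing here comprehends Go; the variation comes from the engine's state, not from any understanding of the board.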
 
  • #115
neurocomp2003 said:
ah, so your definition of a rule is any rule... fair enough. I was under the impression your definition of a rule was something grander than bit logic. Anyway, going back to consciousness... isn't it just the product of interactions/collisions of the physical objects that create it

I believe the answer is no.


...or are you saying that there exists this mysticism known as a soul, existing outside of any physical object (known or unknown to man) in our universe?

I can only speculate as to the precise metaphysics behind it, but it seems clear to me that the mere organization of matter is insufficient for producing consciousness and free will. Therefore, such things having an incorporeal basis is the only logical alternative.
 
  • #116
TheStatutoryApe said:
Tisthammerw said:
Computer algorithms (learning and otherwise) do use logic rules in the sense of programming languages. And if you recall, a computer program is a set of instructions telling the computer what to do. Among its most basic levels are assembly and machine languages (all high-level languages can be and in fact are “translated” into assembly or machine code), which have instructions like storing a value into a data register (a physical component of the computer), adding this value and that, etc., all according to rules like Boolean logic.
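(As a concrete aside on the quoted paragraph: CPython's built-in dis module shows the instruction-level form that a high-level line is reduced to. A minimal sketch; the commented output is only indicative, since exact opcodes vary by Python version:)

```python
import dis

# Even a one-line high-level function reduces to load/add/return-style instructions.

def add(a, b):
    return a + b

dis.dis(add)
# Typical output (exact opcodes vary by Python version):
#   LOAD_FAST    a
#   LOAD_FAST    b
#   BINARY_OP    0 (+)      <- "adding this value and that"
#   RETURN_VALUE
```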

How is this any different than the human body and brain?

If you recall, I believe there is an incorporeal basis for consciousness and understanding for human beings. Otherwise I think you're right; there really is no fundamental difference. If the Chinese room thought experiment is sound, it would seem to rule out the possibility of physicalism.

One could make this argument:

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Physicalism is true
  3. Therefore such strong AI is possible (from 1 and 2)

But premise 2 is a tad question-begging, and the Chinese room seems to refute the conclusion. Therefore I could argue (if premise 1 were true):

  1. If physicalism is true, then strong AI is possible via complex sets of rules acting on input
  2. Such strong AI is not possible (Chinese room)
  3. Therefore physicalism is not true (from 1 and 2)

So the first premise doesn't really establish anything for strong AI unless perhaps one can do away with the Chinese room, and I haven't seen a refutation of it yet.
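For what it's worth, the second argument above is a plain modus tollens. A minimal sketch in Lean (the propositional encoding of P and S is my own, not anything from Searle):

```lean
-- P encodes "physicalism is true"; S encodes "strong AI via a complex
-- set of rules acting on input is possible".
-- From (P → S) and ¬S, conclude ¬P.
example (P S : Prop) (h1 : P → S) (h2 : ¬S) : ¬P :=
  fun hp => h2 (h1 hp)
```

The logic is valid either way; the whole dispute is over which premises are true.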
 
  • #117
TheStatutoryApe said:
Here is a short, simple essay discussing the way humans think. One of Searle's arguments is that the homunculus in the Chinese room can only learn syntactic rules, not semantic rules, based on its situation. After reading this article it occurred to me that Searle is, perhaps unintentionally, proposing an underlying essence to the Chinese words and characters by bringing in the element of semantic rules as a baseline for comprehension. If you go a layer or two deeper on the matter of semantic rules, though, you'll quickly realize that even the semantic rules are based on a form of syntactic rule. That is to say, the syntax of experiential information creates the semantic rules.

It is true that we humans can pick up semantic rules based on experience. It is also evident that we humans can "learn by association." Nonetheless, this type of learning presupposes consciousness, etc., and it is evident from the Chinese room that a complex set of rules acting on input is insufficient for literal understanding to exist. Even when a computer "learns by association" through audio-visual input devices, literal understanding does not take place.

Note that we already discussed something similar: a computer learning by what it sees and hears. Even when based on sensory experience, it didn't work, remember? You said:

You ask what else you would add to an AI to allow it to "understand". I, and others, offered giving your Chinese room homunculus a view of the world outside so that the language it is receiving has a context. Also give it the ability to learn and form an experience to draw from. You seem to have rejected this as simply more input that the homunculus won't "understand". But why?

I replied:

I never said the homunculus wouldn't understand, only that a computer won't. Why? (I've explained this already, but I see no harm in explaining it again.) Well, try instantiating this analogy to real computers. You have cameras and microphones, transducers that turn the signals into 1s and 0s, then use a complex set of rules to manipulate that input and produce output...

And we have the exact same problem as last time. It's the same scenario (set of rules operating on input) with a slightly different flavor. All you've done here is change the source of the input. A different person may ask different Chinese questions, but the man in the room still won't understand the language.

Obviously, something else is required besides a complex set of rules acting on input.
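The quoted pipeline (cameras and microphones, transducers, rules) can itself be sketched in a few lines. A minimal illustration (all names are hypothetical, chosen only to mirror the description above):

```python
# Sensor reading -> bits -> rulebook lookup -> output.
# Every step is symbol manipulation; nothing in the chain comprehends anything.

def transduce(signal):
    """Stand-in for a camera/microphone ADC: turn a reading into a bit string."""
    return format(int(signal) & 0xFF, "08b")

RULEBOOK = {
    "00000011": "output_A",      # rule: this bit pattern -> this response
    "00000101": "output_B",
}

def respond(signal):
    bits = transduce(signal)
    return RULEBOOK.get(bits, "default_output")    # pure lookup, no comprehension

print(respond(3))   # output_A
print(respond(9))   # default_output (9 -> "00001001", not in the rulebook)
```

Changing the source of the input (keyboard, camera, microphone) changes nothing about the character of the processing, which is the point being argued here.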


Semantics are in reality rooted in the syntax which Searle contends is the only thing that computers "understand". The computer's capacity for only being able to "understand" syntax is the very basis of his argument. THAT is the gaping hole in Searle's Chinese room argument. At its base, all cognition comes from syntax.

If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that: we can literally understand their meaning. This is something a mere complex set of rules can't do, as I've illustrated with the Chinese room thought experiment.


HA! I feel so much better now that I was finally able to pinpoint what it is that made the argument seem so illogical to me.

Start feeling bad again. The Chinese room still shows that a set of rules--however complex and layered--acting on input is insufficient for literal understanding to exist. Adding additional layers of rules still isn't going to do the job (we could add additional rules to the rulebook, as we did before in this thread with the variations of the Chinese room, but the man still doesn't understand Chinese). Thus, something else is required. A human may have that “something else”, but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
 
  • #118
so understanding lies outside the physicality of our universe... but is contained within our brain/body? so the firing of a billion neurons feeding from vision/audition to memory/speech will not form understanding?
 
  • #119
neurocomp2003 said:
so understanding lies outside the physicality of our universe... but is contained within our brain/body?

If you want my theory, I believe the soul is parallel to the physical realm, acting within the brain.


so the firing of a billion neurons feeding from vision/audition to memory/speech will not form understanding?

By itself, no (cf. the Chinese room), since it seems that mere physical processes can't do the job.
 
  • #120
Tisthammerw said:
If you're claiming that all knowledge is ultimately based on a complex set of rules acting on input, I wouldn't say that--unless you wish to claim that the man in the Chinese room understands Chinese. It's true that we humans learn the rules of syntax for words, but it's more than that; we can literally understand their meaning. This is something a mere complex set of rules etc. can't do, as I've illustrated with the Chinese room thought experiment.
But the question is why and how we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn: through syntax. Through syntax we develop a semantic understanding. We do not know innately what things mean. There is no realm of Platonic ideals that we tap from birth. We LEARN TO UNDERSTAND MEANING. How do you not get that?

Your necessity for a magic ball of yarn is not a valid or logical argument, since I might as well call your soul a magic ball of yarn and it holds about as much meaning. Tell me what the soul does, not just that it is the incorporeal manifestation of self, because that's entirely meaningless as well. It doesn't tell me what it does. "Free will" and "understanding" don't tell me what it does or how it does it either. You're going to have to do a hell of a lot better than that.
 
