Can Artificial Intelligence ever reach Human Intelligence?

Summary
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes
    Votes: 51 (56.7%)
  • No
    Votes: 39 (43.3%)
  • Total voters: 90
  • #121
Tisthammerw said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.
My point is that nothing else is required. Just the right hardware and the right program. I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information. "Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.
I contend that a human mind starts out with nothing but its OS and syntactic experience as a base from which it develops its "meaningful understanding", and that a computer has the capacity for the same.
 
  • #122
Pengwuino said:
I'm pretty sure my cell phone has more intelligence than some of the people I have met...

...and I am sure that whoever created the concept of the cell phone for it to be realized is much more intelligent than any model of cell phone that exists... without human intelligence the cell phone can't possibly exist.
 
  • #123
TheStatutoryApe said:
But the question is why and how do we understand. The Chinese room shows that both machines and humans will be unable to understand a language without an experiential syntax to draw from. This is how humans learn: through syntax.

Partially. The Chinese room shows that a complex set of instructions is insufficient for understanding. Real understanding may include the existence of rules, but a set of rules is not sufficient for understanding.


We LEARN TO UNDERSTAND MEANING. How do you not get that?

I understand that we humans can learn to understand meaning. My point is that something other than a set of instructions is required (see above), and the Chinese room thought experiment proves it. Note the existence of learning algorithms on computers. If the learning algorithms are nothing more than another set of instructions, the computer will fail to understand (note the variant of the Chinese room that had learning algorithms; learning the person's name and so forth).


Your necessity for a magic ball of yarn is not a valid or logical argument

My argument is that something else besides a complex set of instructions is required, and my argument is logical since I have the Chinese room thought experiment to prove it. Here we have an instance of a complex set of instructions acting on input to produce valid output, yet no understanding is taking place. Thus, a set of instructions is not enough for understanding.


Tell me what the soul does

This is going off topic again, but here goes: the soul interacts with the corporeal world to produce effects via agent-causation (confer the agency metaphysical theory of free will) as well as receiving input from the outside world.


TheStatutoryApe said:
Thus, something else is required. A human may have that “something else” but it isn't clear that a computer does. And certainly you have done nothing to show what else a computer could possibly have to make it possess literal understanding, despite my repeated requests.

My point is that nothing else is required.

The Chinese room thought experiment disproves that statement. Here we have an instance of a complex set of rules acting on input (questions) to produce valid output (answers) and yet no real understanding is taking place.


Just the right hardware and the right program.

Suppose we have the "right" program. Suppose we replace the hardware with Bob. Bob uses a complex set of rules identical to the program. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run the program, get valid output etc., and yet no real understanding is taking place. So even having the “right” rules and the “right” program is not enough. So what else do you have?

You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


I denounce your need for a magic ball of yarn until you can give me some concrete property that belongs to it that helps process information.

The magical ball of yarn was just a metaphor, as in when I asked the question "What else do you have besides a complex set of rules manipulating input? A magical ball of yarn?"

That last question may have been somewhat rhetorical (though the first one was not).


"Freewill" and "true understanding" are just more vague philosophical notions without anything to back them up or even any reason to believe that a soul is necessary for them.

That's not entirely true. One thing to back up the existence of “true understanding” is everyday experience: we grasp the meaning of words all the time. We have reason to believe a soul is necessary for free will (click here to see this article on that).
 
  • #124
The bottom line is that you have nothing to counter with. "Something more" is not a valid argument. Define what you're referring to, or the argument is done. I know you can't. And the reason you don't know specifically is because that "something more" doesn't exist, except in our minds. If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human. What you're saying to me is that even if you were fooled into believing it was a human initially, if it was then revealed that it was actually a machine, you would deem it not enough of a human to be human. You would think this because you "perceive" something that isn't there: a magical component that only human beings possess which cannot be duplicated. However, you can't name this thing, because it's in your mind. It does not exist. You are referring to, in essence, a "soul", which is an ideal. Ideals can be programmed. Nothing exists in us which cannot be duplicated.

As I've already stated, in my version of the Chinese room, the man is taught Chinese, and so he understands the information he is processing. You refuse to accept that analogy, but it still stands. I'm satisfied this discussion is resolved. Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is. You keep referring to "understanding" but we've already defined understanding. For instance, mathematics. I think we can generally agree that there is no room for interpretation there: you understand math, or you don't. You are right, or you are wrong. There are no subtle undertones, no underlying philosophy. Yet you claim computers cannot understand it the way you do. I didn't realize we as humans possessed some mathematical reasoning which is beyond that of a machine.

So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
 
  • #125
zantra/ape: outta curiosity, are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantic thing) and that it is too naive an argument to compete with the complexity of the adult brain? or rather i should say the computational complexity of the brain.
 
  • #126
Zantra said:
The bottom line is that you have nothing to counter with. "something more" is not a valid argument.

You're right that "something more" is not a valid argument. But the Chinese room thought experiment is a valid argument in that it demonstrates the need for something more.

Recapping it again:

The Chinese Room thought experiment

Suppose we have a man who speaks only English in a room. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook containing a complex set of instructions (formal syntactic rules, e.g. "if you see X write down Y") of what to write down in response to a set of Chinese characters. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, the slips of paper are actually questions and he is writing back answers.

The Chinese room can simulate a conversation in Chinese; a person can slip questions written in Chinese under the door of the room and get back answers. Nonetheless, although the person can respond to questions with valid output (via using a complex set of instructions acting on input), he does not understand Chinese at all.

Here we have an instance of a complex set of rules acting on input (questions) yielding valid output (answers) without real understanding. (Do you disagree?) Thus, a complex set of rules is not enough for literal understanding to exist.
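To make the structure concrete, here is a minimal sketch in Python (the question/answer pairs are invented placeholders) of what the rulebook amounts to: a purely syntactic mapping from input symbols to output symbols. Nothing in it consults, or even has access to, what the symbols mean.

[code]
# A toy "Chinese room" rulebook: purely syntactic pattern -> response rules.
# The question/answer pairs are invented placeholders; the lookup never
# depends on what the characters mean, only on which characters appear.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",       # a made-up question/answer pair
    "你叫什么名字？": "我没有名字。",   # another made-up pair
}

def chinese_room(symbols):
    """Return whatever the rulebook dictates for the input symbols."""
    # Default response for unrecognized input; still just symbol shuffling.
    return RULEBOOK.get(symbols, "请再说一遍。")

print(chinese_room("你好吗？"))
[/code]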


If there were a human-like robot with AI advanced enough to imitate human speech and behavior, it would be indistinguishable from a true human.

The man in the Chinese room would be indistinguishable from a person who understands Chinese, yet he does not understand the language.


As I've already stated, in my version of the Chinese room, the man is taught Chinese, and so he understands the information he is processing.

Except that I'm not claiming a person can't understand Chinese; I'm claiming that a machine can't. Your argument "a person can be taught Chinese, therefore a computer can too" is not a valid argument. You need to provide some justification, and you haven't done that at all.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

TheStatutoryApe claimed just having “the right hardware and the right program” would be enough. Clearly having the “right” program doesn't work. He mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?


Everything else at this point is refusal to accept the truth, unless you can tell me exactly what this "something more" is.

That's ironic. It is you who must tell me what this "something more" a computer has for it to literally understand. The Chinese room proves that a complex set of rules acting on input isn't enough. So what else do you have?


So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to hear a valid answer (most times it seems I don't get an answer at all).
 
  • #127
tishammerw: but you see, i think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more... however your something more still remains a mysticism to us and i think that was Zantra's point...

as for the Chinese room (Searle's room) problem:
I will be arguing that the Chinese room also argues that humans have no extra "understanding" as you suggest... that the understanding is a mere byproduct.

Let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher who is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?
 
  • #128
Tisthammer said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It's sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern structure and context, with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought or understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture", that is, the sensory information, is easily done by processes the man's brain is already carrying out that do not require meaningful thoughts or output of him as a human. So I don't see the problem with allowing the man sensory input from outside. All that the man in the box has access to is the syntax of the information being presented. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and be capable of applying it.

Tisthammer said:
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
The use of spoken human language in this thought experiment is cheating. The man in the box obviously hasn't enough information to process by which to gain an understanding. If, as I stated earlier, you used math instead, which is entirely self-referential and syntactic, then the man would have all the information he needed to understand the mathematical language right there in front of him.

Tisthammerw said:
What about your burden of proof? You haven’t justified your claim of “if a human can learn Chinese, so can a computer,” for instance. Let's see you prove your theory: show me something else (other than a complex set of instructions acting on input) a computer has that enables it to literally understand. I've made this request repeatedly, and have yet to hear a valid answer (most times it seems I don't get an answer at all).
I contend that all of the information processing that a human does is at its base syntactic, and that we learn from syntactic information in order to build a semantic understanding. I say that if a computer is capable of learning from syntactic information, which are the only rules in the Chinese room that the man is allowed to understand, then the computer can eventually build a semantic understanding in the same manner which a human does. NO "SOMETHING MORE" NEEDED.

The Chinese room is far too simple and very misleading. It forces the man in the box to abide by its rules without establishing that its rules are even valid.
 
  • #129
neurocomp2003 said:
zantra/ape: outta curiosity, are you suggesting that Searle's argument is only capable of rendering the view of child/toddler learning/development (the whole syntax/semantic thing) and that it is too naive an argument to compete with the complexity of the adult brain? or rather i should say the computational complexity of the brain.
Yes, that's more or less my point. The argument is far too simple and jumps orders of magnitude in complexity of a real working system as if they don't exist.
 
  • #130
neurocomp2003 said:
tishammerw: but you see, i think our proof is in the advancement of ADAPTIVE learning techniques. That is our something more

But if these adaptive learning algorithms are simply another complex set of instructions, this will get us nowhere. Note that I also used a variant of the Chinese room that had learning algorithms that adapted to the circumstances, and still no understanding took place.


as for the Chinese room (Searle's room) problem:
I will be arguing that the Chinese room also argues that humans have no extra "understanding" as you suggest

Please do.


Let's say there are 3 people. Two are conversing over the phone in Chinese. One only understands Chinese; the other (a Westerner) is learning Chinese. The third person is an English-to-Chinese teacher who is only allowed to converse with the Westerner for 5 minutes and cannot converse with the Chinese person. How much comprehension of Chinese do you think the Westerner can get within 5 minutes?

This really doesn't prove that a complex set of rules (as for a program) is sufficient for understanding. Note that I'm not claiming a person can't learn another language. We humans can. My point is that this learning requires something other than a set of rules. Rules may be part of the learning process, but a set of instructions is not sufficient for understanding, as the Chinese room indicates (we have a set of instructions, but no understanding).
 
  • #131
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

People are capable of understanding; no one is disputing that. However, my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding. Searle for instance argued that our brains have unique causal powers that go beyond the execution of program-like instructions. You may doubt the existence of such causation, but notice the thought experiment I gave. This is a counterexample proving that merely having the "right" program is not enough for literal understanding to take place. Would you claim, for instance, that this man executing the program understands binary when he really doesn't?


Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.

I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).

Your reply:

The use of spoken human language in this thought experiment is cheating.

I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.


I contend that all of the information processing that a human does is at its base syntactic, and that we learn from syntactic information in order to build a semantic understanding.

Syntax rules like the kind a program runs may be necessary, but as the Chinese room experiment shows, they are not sufficient--unless you wish to claim that the man in the room understands Chinese. As I said, rules may be part of the process, but they are not sufficient. My thought experiments prove this: they are examples of complex sets of instructions executing without real understanding taking place.

You could claim that the instructions given to the man in the Chinese room are not of the right sort, and that if the “right” program were run on a computer literal understanding would take place. But if so, please answer my questions regarding the robot and program X (see below).


I say that if a computer is capable of learning from syntactic information, which are the only rules in the Chinese room that the man is allowed to understand, then the computer can eventually build a semantic understanding in the same manner which a human does. NO "SOMETHING MORE" NEEDED.

But if this learning procedure is done solely by a complex set of instructions, merely executing the "right" program (learning algorithms and all) is not sufficient for understanding. By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.


The Chinese room is far too simple and very misleading. It forces the man in the box to abide by its rules without establishing that its rules are even valid.

The rules are indeed valid: they give correct and meaningful answers to all questions received. In other words, the man has passed the Turing test.

And it isn't clear why the thought experiment is too “simple.” The man is using a complex set of instructions to do his work after all.
 
  • #132
I think the major premise of the Searle argument has been bypassed. He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics, they have used more general systems than syntactic ones to do it, such as neural nets. So the criticism is like some old argument against Galilean dynamics; whatever you could say for it in terms of the knowledge of the time, by now it's just a quaint historical curiosity.
 
  • #133
selfAdjoint said:
I think the major premise of the Searle argument has been bypassed.

How so?


He argued that semantics was essential to consciousness and that syntax could not generate semantics. The Chinese room was just an attempt to illustrate this position. At the time, decades ago, it was a valid criticism of AI, which had focussed on more and more intricate syntax.

But the AI community took the criticism to heart and has spent those decades investigating the representation of semantics, they have used more general systems than syntactic ones to do it, such as neural nets.

The concept of neural networks in computer science is still just another complex set of instructions acting on input (albeit formal instructions of a different flavor than the days of yore); so it still doesn't really answer the question of "what else do you have?" Nor does it really address my counterexample of running the "right" program (the robot and program X; see post #131).
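For what it's worth, a neural network's forward pass is itself just more arithmetic applied to input values; a minimal sketch in Python (with arbitrary example weights chosen for illustration) shows the kind of operations involved. A trained network differs only in where the numbers came from, not in the character of the steps.

[code]
import math

# A tiny two-layer "neural net" reduced to its bare instructions:
# multiply inputs by weights, sum, squash. The weights are arbitrary
# example values chosen purely for illustration.
W_HIDDEN = [[0.5, -0.2], [0.1, 0.4]]   # weights into two hidden units
W_OUTPUT = [0.3, -0.7]                 # weights into one output unit

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs):
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUTPUT, hidden)))

print(forward([1.0, 0.0]))  # a number comes out; no step involves meaning
[/code]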

But perhaps you're thinking of something else. Are you proposing the following:
Creating a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands questions in Chinese and gives answers to them. Surely then we would have to say that the computer understands...?
 
  • #134
Tisthammerw said:
TheStatutoryApe said:
So here's the burden of proof: Give me one example of something that you understand that a computer can't learn. Just one. Prove your theory.
I literally understand the meaning of words. It would appear that a computer cannot learn to literally understand meaning of words (confer the Chinese room thought experiment).
Your reply:
TheStatutoryApe said:
The use of spoken human language in this thought experiment is cheating.
I don't see how. You asked, and I answered. Spoken human language appears to be something a computer cannot understand.
For one, you have misquoted me; the first quote there was from someone else. And while that doesn't make much difference, the fact that you don't seem to be paying attention, and the fact that you conveniently don't quote any of my answers to the questions you claim I am not answering, constitutes a problem with having any real discussion with you. If you don't agree with my answers, that's quite alright, but please give me a response telling me the issues that you have with them. It would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the Chinese Room and I do not agree that a complex set of instructions isn't enough.

Learn to make a substantial argument rather than lean on someone else's as if it were a universal fact.

I gave you answers to your questions. If you want to find them and make a real argument against them I will indulge you further in this but not until then.
Thank you for what discussion we have had so far. I was not aware of the Chinese room argument until you brought it up and I read up on it.
 
  • #135
TheStatutoryApe said:
For one, you have misquoted me; the first quote there was from someone else.

I apologize that I got the quote mixed up. Nonetheless the second quote was yours.


And while that doesn't make much difference, the fact that you don't seem to be paying attention, and the fact that you conveniently don't quote any of my answers to the questions you claim I am not answering, constitutes a problem with having any real discussion with you.

Please tell me where you answered the following questions, found at the end of the quote below:

Tisthammerw said:
By the way, you haven't answered my questions regarding my latest thought experiment (the robot and program X). Let's review:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the “right” program with learning algorithms etc. (let's call it “program X”) there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

You claimed that just having “the right hardware and the right program” would be enough. Clearly, just having the “right” program doesn't work. You mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

I await your answers.

Where did you answer these questions?

Note what happened below:

TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.

I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example proves: we have the “right” program and still no understanding).

But notice that you cut out the part of the thought experiment where I asked the questions. See post #128 for yourself if you don’t believe me. You completely ignored the questions I asked.

I will however answer one of your questions I failed to answer earlier.

So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books?

Part of it is that he can't learn binary code the same way he can learn English. Suppose for instance you use this rule:

If you see 11101110111101111
replace with 11011011011101100

And you applied this rule many times. How could you know what the sequence 11101110111101111 means merely by executing the instruction over and over again? How would you know, for instance, that you're answering “What is 2+2?” or “What is the capital of Minnesota?” It doesn’t logically follow that Bob would necessarily know the meaning of the binary code merely by following the rulebook any more than the man in the Chinese room would necessarily know Chinese. And ex hypothesi he doesn't know what the binary code means when he follows the rulebook. Are you saying such a thing is logically impossible? If need be, we could add that he has a mental impairment that renders him incapable of learning the meaning of binary code even though he can do fantastic calculations (a similar thing is true in real life for some autistic savants and certain semantics of the English language). So we still have a clear counterexample here (see below for more on this) of running the “right” program without literal understanding.
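As a concrete picture of the point, Bob's rulebook can be thought of as nothing more than a table of bit-string substitutions (the single entry below reuses the made-up strings above). The procedure runs the same whether the input encodes "What is 2+2?" or a question about Minnesota, and nothing in it reveals which.

[code]
# Bob's rulebook as pure substitution: if you see this bit string, write that one.
# The entry reuses the illustrative strings from above; whatever they might
# encode is invisible to the procedure.
RULES = {
    "11101110111101111": "11011011011101100",
}

def follow_rulebook(bits):
    """Apply the substitution rule; no step depends on what the bits mean."""
    return RULES.get(bits, bits)  # unknown strings pass through unchanged

print(follow_rulebook("11101110111101111"))
[/code]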


If you don't agree with my answers, that's quite alright, but please give me a response telling me the issues that you have with them. It would also help if you stopped simply invoking the Chinese Room as your argument when I am telling you that I do not agree with the Chinese Room and I do not agree that a complex set of instructions isn't enough.

The reason I use the Chinese room (and variants thereof) is because this is a clear instance of a complex set of instructions giving valid answers to input without literal understanding. I used what is known as a counterexample. A counterexample is an example that disproves a proposition or theory. In this case, the proposition that having a complex set of instructions is enough for literal understanding to exist. Note the counterexample of the robot and program X: we had the “right” set of instructions and it obviously wasn't enough. Do you dispute this? Do you claim that this man executing the program understands binary when he really doesn't?

You can point to the fact that humans can learn languages all you want, claim they are using syntactic rules etc. but that still doesn't change the existence of the counterexample. Question-begging and ignoratio elenchi is not the same thing as producing valid answers.


I gave you answers to your questions.

Really? Please tell me where you answered the questions I quoted.
 
  • #136
Neural networking directly addresses these issues.
 
  • #137
tishammerw: i don't think your argument against learning algorithms is conclusive...when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, but rather parallel processing. And with that you are not discussing the simple flow of 3-4 neurons like in spiking neurons but a system of billions of interactions, whether it be neural nets or GAs or RL.

On another thing...we have provided you with our statement that learning algorithms (with its complexity) with sensorimotor hookup would suffice for understanding. However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument...what is your "what else" that will support your argument. Heh we shouldn't have to come up with your side of the argument.
 
  • #138
pallidin said:
Neural networking directly addresses these issues.

Addresses what issues? And how exactly does it do so?
 
  • #139
neurocomp2003 said:
tishammerw: i don't think your argument against learning algorithms is conclusive...when you discuss such techniques you are not thinking along the lines of serial processing like if-then logic, but rather parallel processing.

Even parallel processing can do if-then logic. And we can say that the man in the Chinese room is a multi-tasker when he follows the instructions of the rulebook; still no literal understanding.


And with that you are not discussing the simple flow of 3-4 neurons like in spiking neurons but a system of billions of interactions, whether it be neural nets or GAs or RL.

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. Searle responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
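The "formal structure of the sequence of neuron firings" can itself be written down as rules; here is a minimal sketch in Python (the units, wiring, and threshold are invented for illustration) of what simulating firings amounts to: bookkeeping over numbers, which is exactly what the water-pipe version makes vivid.

[code]
# A toy simulation of "neuron firings": a unit fires when its weighted
# inputs cross a threshold. The wiring below is invented for illustration;
# scaling it up to billions of units changes the bookkeeping, not its kind.
CONNECTIONS = {              # unit -> list of (source unit, weight)
    "B": [("A", 1.0)],
    "C": [("A", 0.6), ("B", 0.6)],
}
THRESHOLD = 1.0

def step(firing):
    """Return the set of units that fire next, given the units firing now."""
    nxt = set()
    for unit, inputs in CONNECTIONS.items():
        if sum(w for src, w in inputs if src in firing) >= THRESHOLD:
            nxt.add(unit)
    return nxt

state = {"A"}                # unit A fires initially
for _ in range(3):
    state |= step(state)
print(sorted(state))         # ['A', 'B', 'C']
[/code]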


On another thing...we have provided you with our statement that learning algorithms (with its complexity) with sensorimotor hookup would suffice for understanding.

And I have provided you with a counterexample, remember? Learning algorithms, sensors, etc. and still no understanding.


However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument

My counterexample proved that not even the existence of learning algorithms in a computer program is sufficient for literal understanding. The man in the Chinese room used the learning algorithms of the rulebook (and we can make them very complex if need be) and still there was no literal understanding. Given this, I think it's fair for me to ask "what else"? As for what I personally believe, I have already given you my answer. But this belief is not necessarily relevant to the matter at hand: I provided a counterexample--care to address it?
 
  • #140
tishammerw: what counterexample? that Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of? I'm still lost with your counterexample...or is it that if something else can imitate the human and clearly not understand...and then doesn't this imply that humans may not "understand" at all? what makes us so special? why do you believe that humans "understand"? and where is this proof...wouldn't Searle's argument also argue against human understanding?

It is fair for you to ask "what else" but you must also answer the question...because to us all that is needed are learning algorithms that emulate the brain, nothing more.
If we were to state this "what else" then we would go against our beliefs, so is it fair for you to ask us to state this "what else" that YOU believe in? No! And thus you must provide us with this explanation.
 
  • #141
neurocomp2003 said:
tishammerw: what counterexample?

I have several, but I'll list two that seem to be the most relevant. Remember it was said earlier:

However it is your statement that such interaction does not lead to "understanding" ergo it should be YOU who provides us with the substance of "what else" not vice versa. We already have our "what else"=learning algos...and that is our argument

One of the counterexamples is the instance of a complex set of instructions including learning algorithms without literal understanding taking place. From post #103 (with a typo correction):

the program can be made so it changes itself based on the input it receives. But as I illustrated, this does not imply literal understanding. Note an example conversation of the Chinese room (translated into English):

Human: How are you doing?
Room: Just fine. What is your name?
Human: My name is Bob.
Room: Hello Bob.
Human: You've learned my name?
Room: Yes.
Human: What is it?
Room: Bob.

Learning has metaphorically taken place, and yet the person in the room really doesn't know the person's name; in fact he doesn’t understand anything at all regarding this conversation. The problem is that "learning algorithms" are just another set of instructions, thus really not anything fundamentally different from the Chinese room (the man using a complex set of instructions) and not at all an answer to the question "what else do you have?" besides a complex set of instructions acting on input for the computer to have literal understanding.

So even a program with learning algorithms is not sufficient for literal understanding to exist.
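To put the "learning algorithms are just more instructions" point in concrete terms, here is a minimal sketch in Python (the patterns are invented, loosely following the conversation above) of a rulebook that "learns" a name: one rule files a string away, another copies it back out, and no step involves understanding.

[code]
import re

MEMORY = {}  # the "learning": a slot the rules are allowed to write into

def room(utterance):
    """Answer by pattern-matching rules; storing a matched string is itself
    just another rule, not a step that involves understanding."""
    m = re.match(r"My name is (\w+)\.?$", utterance)
    if m:
        MEMORY["name"] = m.group(1)            # rule: copy this token into storage
        return "Hello " + MEMORY["name"] + "."
    if utterance == "What is it?":
        return MEMORY.get("name", "I don't know.")  # rule: copy it back out
    if utterance == "How are you doing?":
        return "Just fine. What is your name?"
    return "Yes."                              # catch-all rule

for line in ["How are you doing?", "My name is Bob.",
             "You've learned my name?", "What is it?"]:
    print("Human: " + line)
    print("Room: " + room(line))
[/code]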

It was said earlier:

we have provided you with our statement that learning algorithms (with its complexity) with sensorimotor hookup would suffice for understanding.

The other counterexample can be found in post #126, where I talk about the robot and program X. This is an instance in which the "right" program (you can have it possessing complex learning algorithms etc.) is run and yet there is still no literal understanding.

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.

TheStatutoryApe claimed just having “the right hardware and the right program” would be enough. Clearly having the “right” program doesn't work. He mentioned the “right” hardware. But what relevant difference could that make if the exact same operations are being done? Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magical ball of yarn? What?

So here we have an instance of the "right" program--learning algorithms and all--being run in a robot with sensors, and still no literal understanding. There is no real understanding even when this program is being run.

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?


that Searle's argument says that there is no literal understanding by the brain without this "something else" that you speak of?

Searle argued that our brains have unique causal powers that go beyond the simple (or even complex) manipulation of input.


I'm still lost with your counterexample...or is it that if something else can imitate the human and clearly not understand...and then doesn't this imply that humans may not "understand" at all? what makes us so special?

Because we humans have that "something else."


why do you believe that humans "understand"?

Well, I'm an example of this. I am a human, and I am capable of literal understanding whenever I read, listen to people, etc.


wouldn't Searle's argument also argue against human understanding?

No, because we humans have that "something else."


It is fair for you to ask "what else" but you must also answer the question

Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
 
  • #142
tishammerw: cool, i see your argument now...taking aside what we know from physics...do you believe the soul is made out of some substance in our universe? does it exist in some form of physicality (not necessarily what we understand of physics today) or do you believe it exists from nothing?

also do you really think you can "understand" what can be written...or can you see it as a complex emergent behaviour that gives you this feel for having a higher cognitive path than robots? The instinct to associate one word form with some complex pattern of inputs?

as for your supporting arguments to Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame...i could do the same thing with children. And they nod their heads in agreement though they will literally not understand...though as time goes forth they will grasp that concept. You do not think that a computer can do the same and grasp this concept over time? Do children not imitate their adult surroundings? I think you have neglected the true concept of learning by imitation and learning by interaction with the adults around you.
 
  • #143
neurocomp2003 said:
tishammerw: cool, i see your argument now...taking aside what we know from physics...do you believe the soul is made out of some substance in our universe?

No.


does it exist in some form of physicality (not necessarily what we understand of physics today) or do you believe it exists from nothing?

I believe the soul is incorporeal. Beyond that there is only speculation (as far as I know).


also do you really think you can "understand" what can be written...or can you see it as a complex emergent behaviour that gives you this feel for having a higher cognitive path than robots?

The ability to understand likely relies on a number of factors (including learning “algorithms”). So the answer is “yes” if you're asking me if the mechanics is complex, "no" if you're asking me if it magically “emerges” through some set of physical parts.


as for your supporting arguments to Searle (the counterexamples)...how can you take a finite fragment of life (that is, t0-t1) and state that a computer clearly cannot understand because of this finite time frame

I'm not sure what you're asking here. If you're asking me why I believe that computers (at least with their current architecture: a complex system of rules acting on input etc.) cannot understand, given my finite time in the universe, my answer would be "logic and reason"--with the variants of the Chinese room thought experiment as my evidence.


...i could do the same thing with children. And they nod their heads in agreement though they will literally not understand...though as time goes forth they will grasp that concept. You do not think that a computer can do the same and grasp this concept over time?

No, because it lacks that "something else" humans have. Think back to the robot and program X counterexample. Even if program X (with its diverse and complex set of learning algorithms) is run for a hundred years, Bob still won’t understand what's going on. The passage of time is irrelevant because it still doesn't change the logic of the circumstances.
 
  • #144
Tisthammerw said:
Where did you answer these questions?
Note what happened below:
TheStatutoryApe said:
One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough.
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis.
I responded that while people are obviously capable of understanding (there's no dispute there), my claim is that a complex set of instructions--while perhaps necessary--is not sufficient for understanding (as this example proves: we have the “right” program and still no understanding).
Note what happened above.
You conveniently did not quote my full answer...
That very man which you have placed inside the box does process that very same kind of information that you are talking about and uses it meaningfully on a regular basis. It's sensory information, which is syntactic. The man's brain takes in syntactic information, that is, information that has no more meaning than its pattern structure and context, with no intrinsic meaning to be understood, and it deciphers the information without any meaningful thought or understanding whatsoever in order to produce those Chinese characters that he's looking at. The understanding of what the "picture" represents is an entirely different story, but just attaining the "picture", that is, the sensory information, is easily done by processes the man's brain is already carrying out that do not require meaningful thoughts or output of him as a human. So I don't see the problem with allowing the man sensory input from outside. All that the man in the box has access to is the syntax of the information being presented. So if the man's brain is already capable of working by syntactic rules to produce meaningful output, why are you saying that he should not be able to decipher information and find meaning in it based solely on the syntactic rules in the books? It all depends on the complexity of the language being used. Any spoken human language is incredibly complex and takes a vast reserve of experiential data (learned rules of various sorts) to process, and experiential data is syntactic as well.
Give the man in the room a simpler language to work with, then. Start asking the man in the room math questions. What is one plus one? What is two plus two? The man in the room will be able to understand math given enough time to decipher the code and be capable of applying it.
Do you see that you have not addressed all of this?
I understand that it's a bit of a hodgepodge, so let me condense it down to the point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.
Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example for the argument to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning but I am not bothering with going that far yet. Based on the rules of the C.R. and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing" if not understanding what it is seeing.
Can we agree on this?
As I already stated, there remains the matter of understanding what is seen, but if we could, let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output. Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking it to "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.
Perhaps if we can move through this point by point it will make it easier to communicate. We'll start with whether or not the CR homunculus can "see", not understand but just see, and formulate the CR environment so that it is in "learning mode" instead of being forced to respond to input.

One other thing though...

Tisthammerw said:
Fair enough, but I have answered this question before. I personally believe this "something else" is the soul (Searle believes it is the brain’s unique causal powers, but I believe the physical world cannot be the source of them). Whether you agree with my belief however is irrelevant to the problem: you must still find a way out of the counterexamples if you wish to rationally maintain your position. And I don't think that can be done.
Perhaps to help you understand a bit more where I am coming from in this, I do consider the idea of there being a sort of "something more", but not in the same manner that you do. Instead of "soul" I simply call it a "mind". The difference is that I do not believe that this is a dualistic thing. A more appropriate name for it might be "infospace", a sort of holographic matrix of information that has no tangible substance to it. My perception of it is not dualistic because I believe that it is wholly dependent upon a physical medium, whether that be a brain or a machine. I believe that the processes of computers exist in "infospace". I see the difference between the "mind-space" and the purely computational "infospace" of a computer as nothing but a matter of structure and complexity.
I'm sure you don't agree with this idea, at least not completely, but hopefully it will help you understand better the way I perceive the AI problem and the comparison of human to machine.
 
  • #145
TheStatutoryApe said:
Note what happened above.
You conveniently did not quote my full answer...

Initially, I (wrongly) dismissed it as not adding any real substance to the text I quoted.

Do you see that you have not addressed all of this?

In post #135 I addressed the question you asked, and responded (I think) to the gist of the text earlier.


I understand that it's a bit of a hodgepodge, so let me condense it down to my point that I don't think you have addressed.
You say that giving the homunculus sensory input via program X will only give the homunculus more script that it cannot meaningfully understand. The basis of this is that the homunculus can only draw conclusions based on the syntax of the information.
First off let's cut out the idea of the homunculus understanding what it sees since this is not what I am trying to prove yet. I am only trying to prove that it can actually see the outside world utilizing this program X.

That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.


Human sensory input itself is syntactic information which the brain translates into visual data (I'm just going to use vision as an example to keep this simple). The human brain accomplishes this feat in a minute fraction of a second without any meaningful understanding taking place. There are other parts of the brain that will give the data meaning, but I am not bothering with going that far yet. Based on the rules of the CR, and relating it to the manner in which humans receive sensory information, we should be able to deduce that the homunculus in the CR should be capable of at least "seeing", if not understanding, what it is seeing.
Can we agree on this?


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.


In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.


As I already stated, there remains the matter of understanding what is seen, but let's put that and all other matters on the back burner for the moment and see if we can agree on what I have proposed here. Let's also dispense with the idea of the homunculus formulating output based on the input, since sensory input does not necessitate output.

Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.


Let's say that the sensory input is solely for the benefit of the homunculus and its learning program. Instead of throwing it directly into conversations in Chinese and asking it to "sink or swim" in regard to its understanding of the conversation, let us say that we are going to take it to school first and give it the opportunity to learn beforehand.

Again, while we can teach the homunculus a new language this doesn't have any bearing on the purpose of the counterexample: this (the robot and program X experiment) is a clear instance in which the “right” program is being run and yet there is still no literal understanding. And you still haven't answered the questions I asked regarding this thought experiment.

You can modify the thought experiment all you want, teach the homunculus a new language etc. but it still doesn't change the fact that I've provided a counterexample. "The right program with the right hardware" doesn't seem to work. Why? Because I provided a clear instance in which the "right program" was run on the robot and still there was no literal understanding. To recap:

One could claim that if a robot (with cameras, microphones, limbs etc.) were given the "right" program with learning algorithms etc. (let's call it "program X") there could exist literal understanding. But I have a response to that. Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.
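For concreteness, here is a minimal sketch (hypothetical opcodes and rules; it is only an illustration, not the unspecified program X) of the kind of purely mechanical rule-following Bob performs: a lookup table of rules applied to bit patterns, where valid output is produced without any step requiring knowledge of what the symbols mean.

```python
# Minimal sketch of purely mechanical rule-following (hypothetical opcodes; an
# illustration, not the actual "program X"): a lookup table of rules applied to
# bit patterns, where no step requires knowing what any symbol means.
RULEBOOK = {
    "00": lambda a, b: a & b,           # rule 00: combine the registers with AND
    "01": lambda a, b: a | b,           # rule 01: combine the registers with OR
    "10": lambda a, b: a ^ b,           # rule 10: combine the registers with XOR
    "11": lambda a, b: (a + b) & 0xFF,  # rule 11: 8-bit addition
}

def follow_rules(tape, a=0b0101, b=0b0011):
    """Read the tape two symbols at a time and apply whatever rule the book gives."""
    for i in range(0, len(tape), 2):
        a = RULEBOOK[tape[i:i + 2]](a, b)   # pure symbol manipulation, like Bob
    return a

print(follow_rules("001110"))  # valid output is produced with no understanding
```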

One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being performed. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?
 
  • #146
Tisthammerw - so the soul isn't made of anything detectable, or not yet detectable, but still persists as a single classifiable object. Is that what you're saying?

You stated "understanding" is not magically emergent, yet you say the soul is not physically detectable ("incorporeal"). Are these statements not contradictory?

As for the finite time... I didn't mean your time but the time of the counterexample, in which, btw, you referred to Bob as the robot though in your example Bob was the human. Such a finite example of a robot's life...
But isn't human "understanding" built through many years of learning? And thus you would need to take a grander example (many pages rather than 5 lines) in order to give me an idea of what you are talking about with the pseudo-understanding, because if I captured that instance with two humans rather than a human and a robot then I could say that both could be robots.

As for the learning that I described in my last post... I wasn't talking about learning algorithms or programming techniques but about the concept of learning from a sociological/psychological standpoint...
 
  • #147
neurocomp2003 said:
Tisthammerw - so the soul isn't made of anything detectable, or not yet detectable, but still persists as a single classifiable object. Is that what you're saying?

Er, sort of. It is indirectly detectable; we can rationally infer its existence. The soul exists, but the precise metaphysical properties may be beyond our current understanding.


You stated "understanding" is not magically emergent, yet you say the soul is not physically detectable ("incorporeal"). Are these statements not contradictory?

I don't see why they would be contradictory.


As for the finite time... I didn't mean your time but the time of the counterexample, in which, btw, you referred to Bob as the robot though in your example Bob was the human.

No, I was referring to Bob the human. Any other implication was unintentional. And in any case it is as I said; even if the counterexample were run for a hundred years Bob wouldn't understand anything.


But isn't human "understanding" built through many years of learning?
And thus you would need to take a grander example (many pages rather than 5 lines) in order to give me an idea of what you are talking about with the pseudo-understanding, because if I captured that instance with two humans rather than a human and a robot then I could say that both could be robots.

Huh?


As for the learning that I described in my last post... I wasn't talking about learning algorithms or programming techniques but about the concept of learning from a sociological/psychological standpoint...

Well, yes we humans can learn. But learning algorithms for computers seem insufficient for the job of literal understanding.
 
  • #148
Tisthammerw said:
That doesn't seem possible given the conditions of this thought experiment. Ex hypothesi he doesn't see the outside world at all; he is only the processor of the program.
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole. The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it. Even a human's capacity for understanding is based on its whole system acting as a single entity. If a human had never experienced eyesight this would leave a large gap in its ability to understand human language. If you stripped a human down to nothing but a brain it would be in the same exact situation that you insist a computer is in, because it is now incapable of developing meaningful understanding of the outside world. Any sensory system that you give a computer should be treated exactly as the ones for a human, as part of the whole rather than just another source of meaningless script, because those tools are part of the system's corpus as a whole, just like a human's.

Tisthammerw said:
The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.
It is true that if you lay down a bunch of binary in front of a human they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact it translates syntactic sensory information at a furious pace continuously, and that information is more complex than binary. The problem is that the CR asks the human to translate it with a portion of his brain ill-suited to the task. You might as well ask your Pac-Man to perform calculus or your Texas Instruments calculator to play Pac-Man. If you intend to ask the man in the CR to interpret syntactic sensory data as fast and efficiently as possible you may as well let him use the portions of his brain that are suited to the task and give him a video feed. This would only be fair, and the information he would be receiving would still be syntactic in nature.

I either missed it or didn't understand it but we do agree that translation of sensory data is a purely syntactic process right? Not the recognition but just the actual "seeing" part right?

How about some experimental evidence that may back this up. In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their eyes adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances without the need of any meaningful thought on the part of the subjects.
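As a loose illustration of that kind of adjustment (a deliberately toy model, not a description of the actual experiment or of real visual processing), a purely syntactic remapping rule can "adapt" to inverted input from feedback alone:

```python
# Toy sketch: an input-remapping rule that flips itself when perception keeps
# disagreeing with feedback. No semantics anywhere -- just an adjusted mapping.
orientation = 1                      # +1 = pass input through, -1 = invert it

def perceive(raw):
    return orientation * raw         # the only "interpretation" is this remap

def adapt(raw, feedback):
    """Flip the mapping whenever perception and feedback disagree."""
    global orientation
    if perceive(raw) != feedback:
        orientation *= -1

# Inverting goggles make the raw signal the negative of the world (raw = -actual).
for raw, actual in [(-3, 3), (-7, 7), (-2, 2)]:
    adapt(raw, actual)
print(perceive(-5))                  # prints 5: the mapping has re-inverted itself
```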

How about one that involves AI? A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time", where they scurried about the room, and then "feeding time" when they were low on power, where they sought out a power source and recharged. They were capable of figuring out the layout of the room they were in so as to avoid running into objects when they "played", and of remembering where the power sources were for when it was time to "feed". The room could get changed around and the robots would adapt.
Even with just regard to this last bit would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?
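A toy sketch of that "sense, remember, adapt" behaviour might look like the following (purely hypothetical code, not the actual robots described above):

```python
# Toy sketch of "sense, remember, adapt": wander during "play time", head for a
# remembered power source during "feeding time", re-learn if the room changes.
import random

class RoomBot:
    def __init__(self, room):
        self.room = room          # dict mapping (x, y) -> "wall" | "charger" | "open"
        self.pos = (0, 0)
        self.battery = 20
        self.memory = {}          # everything the robot has learned about the room

    def sense(self):
        """Record what occupies the current cell -- syntactic input, nothing more."""
        self.memory[self.pos] = self.room.get(self.pos, "open")

    def step_toward(self, target):
        """Move one cell toward a remembered location."""
        (x, y), (tx, ty) = self.pos, target
        if x != tx:
            return (x + (1 if tx > x else -1), y)
        if y != ty:
            return (x, y + (1 if ty > y else -1))
        return self.pos

    def tick(self):
        self.sense()
        self.battery -= 1
        chargers = [p for p, what in self.memory.items() if what == "charger"]
        if self.battery < 5 and chargers:          # "feeding time"
            if self.pos == chargers[0]:
                self.battery = 20
                return
            nxt = self.step_toward(chargers[0])
        else:                                       # "play time": wander
            x, y = self.pos
            nxt = random.choice([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])
        if self.memory.get(nxt) != "wall":          # avoid remembered obstacles
            self.pos = nxt

# If the room layout changes, only `room` changes; the same sense/remember loop
# rebuilds the robot's map of walls and power sources.
bot = RoomBot({(2, 0): "charger", (1, 1): "wall"})
for _ in range(50):
    bot.tick()
```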

Tisthammerw said:
Bob (the homunculus in the robot and program X scenario) does indeed formulate output (based on program X) given the input.
A computer only produces output when its program suggests that it should, AI or not. It isn't necessary, so I see no need to continue forcing the AI to produce output whenever it receives any kind of input here in the CR.

I'll have to finish later; I need to get going.
 
  • #149
TheStatutoryApe said:
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole.

Ah, the old systems reply. The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possesses real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese? That doesn't strike me as plausible. Second, Searle's response was to suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn't understand the language. So the systems reply doesn't seem to work at all.


The homunculus is supposed to represent the processing power of the whole system, not just a lone processor amidst it.

Well, in the Chinese room he is the processing power of the whole system.


The human being can learn and understand in normal conditions. But in this circumstance, he does not understand the meaning of the binary digits even though he can do all of the necessary mathematical and logical operations. Thus, he cannot see or know what the outside world is like using program X. Depending on what you’re asking, the answer is “yes” in the first case, “no” in the latter.

In the case of the original Chinese room thought experiment, I agree that the homunculus can see the Chinese characters even though he can’t understand the language.

It is true that if you lay down a bunch of binary in front of a human they are not likely to understand it. This does not mean, though, that the human brain is incapable of deciphering raw syntactic information. As a matter of fact it translates syntactic sensory information at a furious pace continuously, and that information is more complex than binary.

That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:

One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands then, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.

So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
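For what it's worth, here is a minimal sketch (arbitrary toy numbers, not a model of any real brain) of what "simulating the formal structure of neuron firings" amounts to computationally; each step is just an arithmetic update rule that Searle's pipe operator could, in principle, carry out by hand:

```python
# Toy integrate-and-fire sketch: each time step is nothing but arithmetic rules
# over numbers, the sort of bookkeeping a man with pipes and valves could do by hand.
import random

N = 5                                            # number of simulated "neurons"
potentials = [0.0] * N
weights = [[random.uniform(0.0, 0.3) for _ in range(N)] for _ in range(N)]
THRESHOLD, LEAK = 1.0, 0.9

def tick(inputs):
    """One step: leak, add input and incoming spikes, fire past threshold, reset."""
    global potentials
    spiked = [p >= THRESHOLD for p in potentials]
    updated = []
    for i in range(N):
        v = potentials[i] * LEAK + inputs[i]
        v += sum(weights[j][i] for j in range(N) if spiked[j])  # spikes received
        updated.append(0.0 if spiked[i] else v)                 # reset after firing
    potentials = updated
    return spiked

for _ in range(20):
    tick([random.uniform(0.0, 0.5) for _ in range(N)])
```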


I either missed it or didn't understand it but we do agree that translation of sensory data is a purely syntactic process right?

No (see above and below for more info).


Not the recognition but just the actual "seeing" part right?

The "seeing" of objects I do not believe to be purely syntactic (though I do believe it involves some syntactic processes within the brain).


How about some experimental evidence that may back this up. In another thread Evo reminded me of an experiment that was run where the subjects were given eyewear that inverted their vision. After a period of time their eyes adjusted and they began to see normally with the eyewear on. There was no meaningful understanding involved, no intentionality, no semantics. The brain simply adjusted the manner in which it interpreted the syntactic sensory data to fit the circumstances without the need of any meaningful thought on the part of the subjects.

Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive. One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.


How about one that involves AI? A man created small robots on wheels that were capable of using sensors to sense their immediate surroundings and tell if there was a power source nearby. They were programmed to have "play time", where they scurried about the room, and then "feeding time" when they were low on power, where they sought out a power source and recharged. They were capable of figuring out the layout of the room they were in so as to avoid running into objects when they "played", and of remembering where the power sources were for when it was time to "feed". The room could get changed around and the robots would adapt.
Even with just regard to this last bit would you still contend that a computer would be unable to process syntactic sensory data, learn from it, and utilize it?

I believe computers can process syntactic data, conduct learning algorithms and do successful tasks--much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese)--but neither entails literal perception of sight (in the first case) or meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
 
  • #150
Tisthammerw said:
Ah, the old systems reply. The systems reply goes something like this:
This does not address my objection whatsoever. I am not saying that the whole system understands Chinese. I'm not saying that combining the man with the book and pen and paper will make him understand Chinese. The situation would be a bit more accurate with regard to paralleling a computer, though.
The objection I had was in regard to the manner in which you are separating the computer from the sensory input. My entire last post was in regard to sensory input. I told you in the post before that this is what I wanted to discuss before we move on. Pay attention and stop distracting from the issues I am presenting.
If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you for you to decipher, you wouldn't be able to do it. Your eyes work because they are part of the system as a whole. You're telling me that the "eyes" of the computer are separate from it and just deliver input for the processor to formulate output for. In your argument its "eyes" are a separate entity processing data and sending information on to the man in the room. Are there little men in the eyes processing information just like the man in the CR? Refusing to allow the AI to have eyes is just a stubborn manner by which to preserve the CR argument.
This is nowhere near an accurate picture. This is one of the reasons I object to you stating that the computer must produce output based on the sensory input. You're distracting from the issue of the computer absorbing and learning by saying that it is incapable of anything other than reacting, when this isn't even accurate. Computers can "think" and simply absorb information and process it without giving immediate reactionary output. As a matter of fact most computers "think" before they act nowadays. Computers can cogitate information and analyze its value; I'll go into this more later.
Are you really just unaware of what computers are capable of nowadays?
With the way that this conversation is going I'm inclined to think that you are a Chinese man in an English room formulating output based on rules for arguing the Chinese Room Argument. Please come up with your own arguments instead of pulling out stock arguments that don't even address my points.

Tisthammerw said:
Well, in the Chinese room he is the processing power of the whole system.
He should be representative of the system as a whole, including the sensory apparatus. If you were separated from your sensory organs and made to interpret sensory information from an outside source you would be stuck in the same situation the man in the CR is. You are not a homunculus residing inside your head, nor is the computer a homunculus residing inside its shell.

Tisthammerw said:
That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.

BTW, don't forget the brain simulation reply:
No. Your hypothetical system does not allow the man in the room to use the portions of his brain suited for processing the sort of information you are sending him. Can you read the script on a page by smelling it? How easily do you think you could tell the difference between a piece by Beethoven and one by Mozart with your fingertips? How about if I asked you to read a book only utilizing the right side of your brain? Are any of these a fair challenge? The only one that you might be able to pull off is the one with your fingertips, but either way you are still not hearing the music, are you?
It has nothing to do with not having the "right program". The human brain does have the right program, but you are refusing to allow the man in the room to use it, just like you are refusing to allow the computer to have "eyes" of its own; rather it's outsourcing the job to another little man in another little room somewhere who only speaks Chinese.

Tisthammerw said:
One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands then, right?

Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person’s brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
Even here yet again you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain. Yet again you force us to divorce the man in the room from the entirety of the system by creating some crude mock-up of a neural net rather than allowing him to utilize the one already in his head. Why create the mock-up when he has the real thing with him already? Creating these intermediaries only hinders the man. You continually set him up to fail by not allowing him to reach his goal in the most well suited and efficient manner at his disposal. If anyone were to actually design computers the way you (or Searle) design your rooms, which are supposed to parallel them, they'd be fired.

Tisthammerw said:
Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive.
Here you seem to misunderstand the CR argument. The property of the information that the man in the CR is able to understand is the syntax: the structure, the context, the patterns. This isn't just the manner in which it arrives; it is the manner in which he works with it and perceives it. He lacks only the semantic property. Visual information is nothing but syntactic. There is no further information there except the structure, context, and pattern of the information. You do not have to "understand" what you are looking at in order to "see" it. The man in the box does not understand what the Chinese characters are that he is looking at, but he can still perceive them. He lacks only the ability to "see" the semantic property, that is all.

Tisthammerw said:
One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.
You do understand why the brain surgeon cannot perceive colour, right? It's a lack of the proper hardware, or rather wetware in this case. The most common problem that creates colour blindness is that the eyes lack the proper rods and cones (I forget exactly which ones do what, but the case is something of this sort nonetheless). If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain. If the software is not present then theoretically she could undergo some sort of operation to add it, a software upgrade for her neural processor. Funnily enough, your own example is perfect in demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception! So why is it that the proper software and hardware is necessary for a human to do these special processes that you attribute to it, but the right software and hardware is not enough to help a computer? Does the human have a magic ball of yarn? What? LOL!
And I already know what you are going to say. You'll say that the human does have a magic ball of yarn which you have dubbed a "soul". Yet you cannot tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "freewill" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality". So what are these things, what do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything. Of course you will say that the CR proves that there must be "Something More". So what if I were to just take a cue from you and say that all we need to do is find a magic ball of yarn called "AI" and imbue a computer with it? I can't tell you what it does except to say that it gives the computer "Intrinsic Intentionality" and/or "Freewill". Will you accept this answer to your question? If you won't, then you cannot expect me to accept your magic ball of yarn either, so both arguments are then useless and invalid for the purpose of our discussion since they yield no results.

Tisthammerw said:
I believe computers can process syntactic data, conduct learning algorithms and do successful tasks--much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese) but that neither entails literal perception of sight (in the first case) or meaning (in the second case).

And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
Obviously it doesn't understand things the way we do, but what about understanding things the way a hamster does? You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall there is not an instruction that says "when you come to a wall turn right". It can turn either right or left and it makes a decision to do one or the other. Of course this is a rather simplistic example, so let's bring it up a notch.
Earlier in this discussion Deep Blue was brought up. You responded to that in a very similar manner as you did to this, but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasparov. The problem, though, is that you are wrong. You cannot create such rules for making a computer play chess and have the computer be successful, at least not against anyone who plays chess well, and especially not against a world champion such as Kasparov. You cannot simply program it with rules such as "when this is the board position move king's knight one to king's bishop three". If you made this sort of program and expected it to respond properly in any given situation you would have to map out the entire game tree. Computers can do this far faster than we can, and even they, at current maximum processing speed, would take at least hundreds of thousands of years to do it. By that time we will be dead and unable to write out the answers for every single possible board position. So we need to make shortcuts. I could go on and on about how we might accomplish this, but how about I tell you the way that I understand it is actually done instead.
The computer is taught how to play chess. It is told the board setup and how the pieces move, take each other, and so on. Then it is taught strategy such as controlling the center, using pieces in tandem with one another, hidden check, and so forth. So far the computer has not been given any set of instructions on how to respond to any given situation such as the setup in the CR. It is only being taught how to play the game, more or less in the same fashion that a human learns how to play the game, except much faster. The computer is then asked, based on the rules presented for the game and the goals presented to it, to evaluate possible moves and pick one that is the most advantageous. This is pretty much what a human does when a human plays chess. So since the computer is evaluating options and making decisions, would you still say that it cannot understand what it is doing and is only replacing one line of code with another line of code as it says to do in its manual?
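To make the "evaluate options and decide" point concrete, here is a minimal, purely illustrative sketch, with tic-tac-toe standing in for chess to keep it tiny; it is nothing like Deep Blue's actual search, which was vastly more elaborate. The shared point is that no table maps positions to moves: each move is chosen by looking ahead and scoring the outcomes.

```python
# Toy sketch: move choice by search and evaluation rather than a lookup table.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def best_move(board, player):
    """Return (score, move) for `player`: +1 if X wins, -1 if O wins, 0 for a draw.
    Nothing here says "in this position, play that square" -- the move is chosen
    because looking ahead shows it leads to the best reachable outcome."""
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None                                   # draw
    best_score, best = None, None
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = best_move(child, "O" if player == "X" else "X")
        if best_score is None or \
           (player == "X" and score > best_score) or \
           (player == "O" and score < best_score):
            best_score, best = score, m
    return best_score, best

print(best_move(" " * 9, "X"))   # the program "decides" its opening move by search
```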
 
