Tisthammerw said:
Ah, the old systems reply. The systems reply goes something like this:
This does not address my objection whatsoever. I am not saying that the whole system understands Chinese. I'm not saying that combining the man with the book and pen and paper will make him understand Chinese. That combination would, however, be a bit more accurate as a parallel to a computer.
The objection I had was with the manner in which you are separating the computer from the sensory input. My entire last post was about sensory input. I told you in the post before that this is what I wanted to discuss before we move on. Pay attention and stop sidestepping the issues I am presenting.
If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you to decipher, you wouldn't be able to do it. Your eyes work because they are part of the system as a whole. You're telling me that the "eyes" of the computer are separate from it and just deliver input for the processor to turn into output. In your argument its "eyes" are a separate entity processing data and sending information on to the man in the room. Are there little men in the eyes processing information just like the man in the CR? Refusing to allow the AI to have eyes of its own is just a stubborn way to preserve the CR argument.
This is nowhere near an accurate picture. This is one of the reasons I object to you stating that the computer must produce output based on the sensory input. You're distracting from the issue of the computer absorbing and learning by saying that it is incapable of anything other than reacting, which isn't even accurate. Computers can "think", simply absorbing information and processing it without giving immediate reactionary output. As a matter of fact, most computers "think" before they act nowadays. Computers can cogitate on information and analyze its value; I'll go into this more later.
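To make that point concrete, here's a minimal sketch (the class name and the data are purely my own illustration, not any particular real system) of a program that absorbs input and updates its internal state without giving any reactionary output at all:

```python
# Illustrative sketch only: a process that ingests data and refines an
# internal model without emitting any reactionary output.
from collections import Counter

class PassiveLearner:
    """Absorbs observations and updates internal state; never 'reacts'."""

    def __init__(self):
        self.frequencies = Counter()   # internal model of what it has seen

    def absorb(self, observation):
        # Processing happens here, but no output is produced.
        self.frequencies[observation] += 1

    def most_salient(self, n=3):
        # Only when explicitly queried does it report anything.
        return self.frequencies.most_common(n)

learner = PassiveLearner()
for token in ["wall", "door", "wall", "window", "wall"]:
    learner.absorb(token)          # no reaction, just accumulation
print(learner.most_salient())      # [('wall', 3), ('door', 1), ('window', 1)]
```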
Are you really just unaware of what computers are capable of nowadays?
With the way this conversation is going, I'm inclined to think that you are a Chinese man in an English room formulating output based on rules for arguing the Chinese Room Argument. Please come up with your own arguments instead of pulling out stock arguments that don't even address my points.
Tisthammerw said:
Well, in the Chinese room he is the processing power of the whole system.
He should be representative of the system as a whole, including the sensory apparatus. If you were separated from your sensory organs and made to interpret sensory information from an outside source, you would be stuck in the same situation as the man in the CR. You are not a homunculus residing inside your head, nor is the computer a homunculus residing inside its shell.
Tisthammerw said:
That may be the case, but it still doesn't change the fact of the counterexample: we have an instance in which the “right” program is being run and still there is no literal understanding. And there are questions you haven’t yet answered. Do you believe that replacing Bob with the robot’s normal processor would create literal understanding? If so, please answer the other questions I asked earlier.
BTW, don't forget the brain simulation reply:
No. Your hypothetical system does not allow the man in the room to use the portions of his brain suited to processing the sort of information you are sending him. Can you read the script on a page by smelling it? How easily do you think you could tell the difference between a piece by Beethoven and one by Mozart with your fingertips? How about if I asked you to read a book using only the right side of your brain? Are any of these a fair challenge? The only one you might be able to pull off is the one with your fingertips, but either way you are still not hearing the music, are you?
It has nothing to do with not having the "right program". The human brain does have the right program, but you are refusing to allow the man in the room to use it, just like you are refusing to allow the computer to have "eyes" of its own; instead it outsources the job to another little man in another little room somewhere who only speaks Chinese.
Tisthammerw said:
One interesting response to Searle's Chinese room thought experiment is the brain simulation reply. Suppose we create a computer that simulates the actual sequence of neuron firings at the synapses of a Chinese speaker when he understands stories in Chinese and gives answers to them. Surely then we would have to say that the computer understands, right?
Searle says that even getting this close to the brain is not sufficient to produce real understanding. He responds with a modified form of the thought experiment. Suppose we have a man operate a complex series of water pipes and valves. Given the Chinese symbols as input, the rulebook tells him which valves to turn off and on. Each water connection corresponds to a synapse in the Chinese person's brain, and at the end of the process the answer pops out of the pipes. Again, no real understanding takes place. Searle claims that the formal structure of the sequence of neuron firings is insufficient for literal understanding to take place. And in this case I agree with him.
So it seems that even the raw syntactic processes of the human brain are insufficient for literal understanding to exist. Can humans understand? Absolutely, but more is going on here than the formal structure of neuron firings, syntactic rules, etc., as my counterexamples demonstrate. Searle, for instance, claims that the human brain has unique causal powers that enable real understanding.
Even here, yet again, you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain. Yet again you force us to divorce the man in the room from the entirety of the system by creating some crude mock-up of a neural net rather than allowing him to utilize the one already in his head. Why create the mock-up when he has the real thing with him already? Creating these intermediaries only hinders the man. You continually set him up to fail by not allowing him to reach his goal in the most well-suited and efficient manner at his disposal. If anyone actually designed computers the way you (or Searle) design the rooms that are supposed to parallel them, they'd be fired.
Tisthammerw said:
Here's my take on this. Syntax can be the means to provide “input” but syntax itself (I believe) is not sufficient for the self to literally perceive.
Here you seem to misunderstand the CR argument. The property of the information that the man in the CR is able to understand is the syntax: the structure, the context, the patterns. This isn't just the manner in which the information arrives; it is the manner in which he works with it and perceives it. He lacks only the semantic property. Visual information is nothing but syntactic. There is no further information there except the structure, context, and pattern of the information. You do not have to "understand" what you are looking at in order to "see" it. The man in the box does not understand what the Chinese characters are that he is looking at, but he can still perceive them. He lacks only the ability to "see" the semantic property, that is all.
Tisthammerw said:
One interesting story is the thought experiment of the color-blind brain scientist. She is a super-brilliant brain surgeon who knows everything about the brain and its "syntactical rules." But even if she carries out all the syntactic procedures and algorithms in her head (like the homunculus memorizing the blueprints of the water-pipes and simulating each step in his head), she still cannot perceive color. She could have complete knowledge of a man's brain states while he experiences a sunset and still not perceive color.
You do understand why the brain surgeon cannot perceive colour, right? It's a lack of the proper hardware, or rather wetware in this case. The most common cause of colour blindness is that the eyes lack (or have defective) cones of the proper types; the cones are what gather colour information, while the rods handle low-light vision. If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain. If the software is not present, then theoretically she could undergo some sort of operation to add it, a software upgrade for her neural processor. Funnily enough, your own example is perfect for demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception! So why is it that the proper software and hardware are necessary for a human to do these special processes that you attribute to it, but the right software and hardware are not enough to help a computer? Does the human have a magic ball of yarn? What? LOL!
And I already know what you are going to say. You'll say that the human does have a magic ball of yarn which you have dubbed a "soul". Yet you cannot tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "free will" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality". So what are these things and what do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything. Of course you will say that the CR proves that there must be "Something More". So what if I were to just take a cue from you and say that all we need to do is find a magic ball of yarn called "AI" and imbue a computer with it? I can't tell you what it does except to say that it gives the computer "Intrinsic Intentionality" and/or "Free Will". Will you accept this answer to your question? If you won't, then you cannot expect me to accept your magic ball of yarn either, so both arguments are useless and invalid for the purpose of our discussion since they yield no results.
Tisthammerw said:
I believe computers can process syntactic data, conduct learning algorithms and do successful tasks--much like the person in the Chinese room can process the input, conduct learning algorithms, and do successful tasks (e.g. communicating in Chinese) but that neither entails literal perception of sight (in the first case) or meaning (in the second case).
And at the end of the day, we still have the counterexamples: complex instructions acting on input and still no literal understanding.
Obviously it doesn't understand things the way we do, but what about understanding things the way a hamster does? You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall there is not an instruction that says "when you come to a wall, turn right". It can turn either right or left, and it makes a decision to do one or the other. Of course this is a rather simplistic example, so let's bring it up a notch.
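Here's a toy sketch of what I mean (the sensor values and function names are hypothetical, purely for illustration): the robot isn't handed a canned "turn right at a wall" rule; it weighs what its sensors report and decides.

```python
# Toy sketch (hypothetical sensor values): the robot is not told
# "turn right at a wall"; it compares its options and picks one.
import random

def choose_turn(clearance_left, clearance_right):
    """Compare the clearance in each direction and decide; break ties by chance."""
    if clearance_left > clearance_right:
        return "left"
    if clearance_right > clearance_left:
        return "right"
    return random.choice(["left", "right"])

# The decision depends on what the sensors report, not on a canned rule.
print(choose_turn(clearance_left=0.8, clearance_right=2.5))  # prints "right"
```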
Earlier in this discussion Deep Blue was brought up. You responded to that in much the same way as you did to this, but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasparov. The problem, though, is that you are wrong. You cannot create such rules for making a computer play chess and have the computer be successful, at least not against anyone who plays chess well, and especially not against a world champion such as Kasparov. You cannot simply program it with rules such as "when this is the board position, move king's knight one to king's bishop three". If you made this sort of program and expected it to respond properly in any given situation, you would have to map out the entire game tree. Computers can do this far faster than we can, and even at current maximum processing speeds it would take at least hundreds of thousands of years. By that time we would be dead and unable to write out the answers for every single possible board position. So we need shortcuts. I could go on and on about how we might accomplish this, but how about I tell you the way I understand it is actually done instead.
The computer is taught how to play chess. It is told the board setup and how the pieces move, take each other, and so on. Then it is taught strategy, such as controlling the center, using pieces in tandem with one another, hidden check, and so forth. So far the computer has not been given any set of instructions on how to respond to any given situation, like the setup in the CR. It is only being taught how to play the game, more or less in the same fashion that a human learns how to play the game, except much faster. The computer is then asked, based on the rules of the game and the goals presented to it, to evaluate possible moves and pick the one that is the most advantageous. This is pretty much what a human does when a human plays chess. So since the computer is evaluating options and making decisions, would you still say that it cannot understand what it is doing and is only replacing one line of code with another line of code as it says to do in its manual?
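To illustrate the evaluate-and-choose idea (this is emphatically not Deep Blue's actual code; chess is far too big to sketch here, so I'm using the trivial game of Nim as a stand-in, with a simple lookahead search written purely for illustration):

```python
# Minimal sketch of "evaluate the options and pick the most advantageous one."
# Stand-in game: Nim (take 1-3 stones per turn; whoever takes the last stone wins).
# The evaluate-and-choose loop is the same idea as in chess, just on a tiny scale.

def search(stones, depth):
    """Return the best achievable score for the player to move: +1 win, -1 loss."""
    if stones == 0:
        return -1                      # the previous player took the last stone: we lost
    if depth == 0:
        return 0                       # horizon reached: call it even (crude evaluation)
    best = -1
    for take in (1, 2, 3):
        if take <= stones:
            best = max(best, -search(stones - take, depth - 1))
    return best

def choose_move(stones, depth=10):
    """Score every legal move by looking ahead, then pick the highest-scoring one."""
    moves = [take for take in (1, 2, 3) if take <= stones]
    return max(moves, key=lambda take: -search(stones - take, depth - 1))

print(choose_move(7))   # prints 3: leaving 4 stones is a losing position for the opponent
```

The point is that nowhere in there is a table saying "in position X, play move Y"; the program is given the rules and a goal, then weighs its options each turn, which is exactly the kind of evaluation and decision I'm describing.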