TheStatutoryApe said:
You seem to be ignoring some very important parts of my argument.
Like what? You made the point about a learning computer, and I addressed that.
Rather than making ridiculous comments about magic balls of yarn
I don't think it's ridiculous. I asked what else one could add to a computer to make it understand, and there doesn't appear to be anything other than "magic."
You are assuming here that the baby has a soul. There is no proof of the existence of a soul
See this web page for why (in part) I believe there is evidence for the soul.
Anyway, my main point in bringing up the soul (and I should've mentioned this earlier) is to offer it as a possible explanation of why humans are capable of understanding and machines are not. Some people claim that if humans can understand, we can build machines that understand also, but that is not necessarily true.
and even if it does exist there is no proof that this soul would be necessary for a being to be sentient. You are only assuming this a priori.
Not really. I am using the Chinese room as one piece of evidential support. I ask you again: what could be added to the computer, other than a set of rules for manipulating input, to make it understand?
Does a chimp have a soul?
I believe that any sentience requires the incorporeal, but that is another matter.
So really the question is, I guess: do you believe only humans have the capacity for sentience, or only living things?
So far it seems that only living things have the capacity for sentience. I have yet to find a satisfactory way of getting around the Chinese room thought experiment.
You see, an open-ended program wasn't my only criterion. As I stated earlier, you seem to be ignoring very important parts of my argument, and now I'll add that you are ignoring the implications of a computer being capable of learning.
Well, I did address the part about computer learning, remember? You seem to be ignoring some very important parts of my argument.
If the man in the room is capable of learning he can begin to pick up on the pattern of the language code
That's a bit of question begging. The symbols mean nothing to him. Consider this rule (using a made-up language):
If you see @#$%, replace with ^%@af
Would you understand the meaning of @#$% merely because you've used the rule over and over again? I admit that maybe he can remember input-output patterns, but that's it. The man may be capable of learning a new language, but this clearly requires something other than a complex set of rules for input/output processing. My question: so what else do you have?
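To make this concrete, here is a minimal sketch (in Python, purely for illustration; the rule table is made up) of the kind of rule the man follows. Nothing in the program refers to meaning; it only matches and replaces shapes:

    # A sketch of pure symbol manipulation, as in the Chinese room.
    # The rule table is hypothetical; each rule maps one string of
    # symbols to another with no reference to meaning.

    RULES = {
        "@#$%": "^%@af",  # "If you see @#$%, replace with ^%@af"
    }

    def respond(message):
        # Apply every substitution rule; the program only compares
        # and copies shapes, never meanings.
        for pattern, replacement in RULES.items():
            message = message.replace(pattern, replacement)
        return message

    print(respond("@#$%"))  # prints ^%@af, produced without any understanding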
One of the main points in my argument that I mentioned you did not comment on was sensory information input and experience.
Well, the same holds true for my modified Chinese room thought experiment. The complex set of instructions tells the man what to do when new input (the Chinese messages) is received. New procedures and rules are created (ultimately based on the rulebook acting on input, which represents a computer program with learning algorithms), but the man still doesn't understand a word of Chinese.
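Here is a hypothetical sketch of what the modified room amounts to, again in Python. The program adds new rules based on the input it receives, so its behavior changes over time, yet every new rule is still just a pairing of one symbol string with another:

    # A hypothetical sketch of the modified Chinese room: a rulebook
    # that grows new rules from input, standing in for a program with
    # learning algorithms. The "learning" is purely syntactic: it
    # records which reply follows which message, nothing more.

    class LearningRoom:
        def __init__(self, rulebook):
            self.rulebook = dict(rulebook)  # the initial instructions

        def observe(self, message, reply):
            # "Learn" by adding a new rule: a pairing of symbol strings.
            # No meaning attaches to either string.
            self.rulebook[message] = reply

        def respond(self, message):
            # Look the message up by its shape alone; echo it if unknown.
            return self.rulebook.get(message, message)

    room = LearningRoom({"@#$%": "^%@af"})
    room.observe("ni hao", "ni hao ma")  # a new rule is created from input
    print(room.respond("ni hao"))        # prints "ni hao ma", still no understanding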
This goes hand in hand with the learning ability. If the man in the box was capable, whenever he saw a word, of having some sort of sensory input that would give him an idea of the meaning of the word, then he would begin to learn the language, no?
Absolutely, but note that this requires something other than a set of rules manipulating input. It's easy to say learning is possible when we already have a fully intelligent, thinking, sentient human being. But you seem to forget that I am attacking the idea that it's possible to have a fully intelligent, thinking, sentient computer in the first place. My point is that if we limit ourselves to input/output processing, as we do in the Chinese room, we simply don't get real understanding. And as my modified Chinese room illustrates, we can't get real understanding even with learning algorithms. So I ask again: what else do you have?