Let's recap some terms before moving on:
Using the Chinese Room thought experiment as a case in point, here is my
definition of understanding.
When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean that the man in the room actually
knows what the Chinese words mean. When I say he knows what they mean, I am saying that he
perceives the meaning of the words he sees/hears, or, to put it another way, that
he is aware of what the Chinese words mean.
Let’s recap my
definition of consciousness.
Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics, the entity has consciousness.
moving finger said:
Tisthammerw said:
If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.
Ah, I see now. Therefore an agent can have the characteristic only of “sensation”, but at the same time NOT be able to perceive, or to think, or to be aware, and still (by your definition) it would necessarily be conscious?
Not quite. Go to http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=sensation to once again read definition 1b of sensation.
Therefore by your definition even the most basic organism which has “sensation” (some plants have sensation, in the sense that they can respond to stimuli) is necessarily conscious? I think a lot of biologists would disagree with you.
You have evidently badly misunderstood what I meant by sensation. Please look up the definition of sensation again (1b). In light of what I mean when I use the terms, it is clear that plants do not possess consciousness.
Tisthammerw said:
Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).
Incorrect. If the CR also defines “awareness” as implicitly meaning “conscious awareness”, and it is not conscious, it would necessarily answer “No”.
It would necessarily answer “Yes” because,
ex hypothesi, the program (the rulebook) is designed to simulate understanding, remember? (Again, please keep in mind what I mean when I use the term “understanding.”)
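To illustrate the rulebook paradigm in question (a complex set of instructions acting on input to produce output), here is a minimal, purely hypothetical sketch in Python. The rules and strings are invented for illustration only; the point is just that the lookup is syntactic, so a rulebook designed to simulate understanding will answer “yes” when asked whether it understands, without any grasp of the meanings involved.

```python
# Minimal, hypothetical sketch of the "rulebook" paradigm: a purely
# syntactic lookup mapping input symbols to output symbols.
# The rules and strings below are invented for illustration only.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你懂中文吗？": "是的，我懂。",    # "Do you understand Chinese?" -> "Yes, I do."
}

def chinese_room(input_symbols: str) -> str:
    """Return whatever output the rulebook prescribes for the input.

    The lookup never consults the meaning of the symbols; it only
    matches their shape, which is the whole point of the thought
    experiment.
    """
    return RULEBOOK.get(input_symbols, "对不起，我不明白。")  # default: "Sorry, I don't follow."

# Ex hypothesi the rulebook simulates understanding, so it answers "yes":
print(chinese_room("你懂中文吗？"))  # -> 是的，我懂。
```

Nothing in this lookup perceives or is aware of anything; it simply produces the output the rules prescribe.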
Tisthammerw said:
he consciously knows all the rules, consciously carries them out etc.
Here you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.
It depends what you mean by “consciously understanding the rules.” He understands the rules in the sense that he knows what the rules mean (see my definition of “understanding”). He does
not understand the rules in the sense that, when he applies the rules, he actually understands Chinese.
Tisthammerw said:
Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.
You define “understanding” as requiring consciousness, thus it is hardly surprising that your definition of understanding requires consciousness! That is a classic tautology.
That's essentially correct. Note however that my definition of understanding isn't merely “consciousness”; rather, it is about knowing what the words mean. At least we (apparently) agree that understanding--in the sense that I mean when I use the term--requires consciousness.
Tisthammerw said:
Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that?
By definition, if one chooses to define understanding such that understanding requires consciousness, then it is necessarily the case that for any agent to possess understanding it must also possess consciousness. I see no reason why a machine should not possess both consciousness and understanding.
Well then let me provide you with a reason: the Chinese room thought experiment. This is a pretty good counterexample to the claim that a “complex set of instructions acting on input etc. is sufficient for literal understanding to exist.” Unless you wish to claim that the man in the Chinese room understands Chinese (again, in the sense that I use it), which is pretty implausible.
But this is not the point – I dispute that consciousness is a necessary pre-requisite to understanding in the first place.
You yourself may mean something different when you use the term “understanding” and that's okay I suppose. But please recognize what
I mean when I use the term.
Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?
The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.
Please see my response above. Additionally, Tournesol made a very good point when he said: “Definitions are not things which are true and false so much as conventional or unusual.” We both may mean something different when we use the term “understanding,” but neither of our definitions is necessarily “false.” And this raises a good question: I have defined what
I mean when I use the term “understanding,” so what’s
your definition?
By the way, you haven't really answered my question here. Given my definition of understanding, is it the case that computers cannot have understanding in
this sense of the word? From your response about understanding and consciousness in machines, the answer almost seems to be “yes,” but it's a little unclear.
I have answered your question.
You didn't really answer the question here, at least not at this point (you seem to have done so later in the post).
Now please answer mine, which is as follows :
Can you SHOW that “understanding” requires consciousness, without first ASSUMING that understanding requires consciousness in your definition of “understanding”?
Remember, tautologies are by definition true.
Can I show that understanding requires consciousness? It all depends on how you define “understanding.” Given
my definition, i.e. given what
I mean when I use the term, we seem to agree that understanding requires consciousness. (Tautology or not, the phrase “understanding requires consciousness” is every bit as sound as “all bachelors are unmarried”). You may use the term “understanding” in a different sense, and I'll respect your own personal definition. Please respect mine.
Now, to the question at hand:
(in other words, can you express your argument such that it is not a tautology?)
I don't know how to, but I don't think it matters. Why not? The argument is still
perfectly sound even if you don't like how I expressed it. What more are you asking for?
Tisthammerw said:
“i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”
Using MY definition of “perceive” and “be aware”, yes, I believe computers can (in principle) perceive and be aware of what words mean.
Well, how about
my definitions of those terms? I cited explicit dictionary definitions, if you recall.
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?
I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.
The Chinese room thought experiment, the robot, and program X are very good reasons, since they serve as effective counterexamples (again, using my definitions of the terms).
Tisthammerw said:
My full reply was in fact :
“I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.”
I read your reply, but that reply did not give a clear “yes” or “no” to my question. So far, your answer seems to be “No, it is not the case that a computer cannot perceive the meaning of words...” but this still isn't entirely clear, since your answer came in the following context:
Using MY definition of “perceive” and “be aware”
I was asking the question using
my definitions of the terms, not yours. Given the terms as
I have defined them, is the answer yes or no? (Please be clear about this.) You said:
moving finger said:
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?
I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.
So is the answer a “No” as it seems to be? (Again, please keep in mind what I mean when I use the terms.)