Can Artificial Intelligence ever reach Human Intelligence?

AI Thread Summary
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes: 51 votes (56.7%)
  • No: 39 votes (43.3%)
  • Total voters: 90
  • #151
tishammerw: At what age does a being's soul arise? I think you posted before that you were not sure... but if you are not sure, how can you quantify its existence? Also, you say that the soul is more metaphysical than physical but is still maintained within the brain. Does this mean that some physical structure of the brain creates this phenomenon? If not, how does the soul become limited to the brain? That is to say, why doesn't it float around outside the body? What constrains it inside the head?

By the way, this might be more of a personal question, but I was wondering if you have children or have ever helped raise children.
 
  • #152
Sorry I had to go again and couldn't finish thoughtfully. I'll continue.

Something I have failed to bring up yet: the scenario in the CR of a computer program with a large enough set of instructions, telling the computer to replace certain lines of script with other lines of script, making it indistinguishable from a human, is not possible in reality. Any significantly long conversation, or just a person testing it to see whether it is a computer of this sort, will reveal it for what it is. Just as in the case of mapping out the game tree for chess, it would be equally impossible to map out a "conversation tree" sufficient to cover even a significant portion of the possible conversational scenarios. It's fine as a hypothetical scenario, because a hypothetical can allow something that is impossible. BUT if you were to come across a computer in reality that could carry on a conversation indistinguishable from a human, you would have to assume that the computer was capable of some level of semantic understanding. I see no other way of getting around the problem.
 
  • #153
Had to take off again.

So if it is impossible to create a program with a sufficient syntactic rulebook, like the one in Searle's Chinese Room, to be indistinguishable from a human in conversation due to the sheer vastness of the "conversation tree," then likewise, due to the sheer vastness of the game tree for chess, the Chinese Room should predict that a computer will not be able to play a good game of chess.
Whether or not you want to agree that Deep Blue is capable of any sort of "understanding," Deep Blue and programs like it are still proof that AI has broken out of the Chinese Room.
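To put rough numbers on the "sheer vastness" of the chess game tree invoked above, here is a minimal back-of-envelope sketch. The branching factor, game length, and search speed are my own ballpark assumptions for illustration, not figures from the thread.

```python
import math

# Ballpark figures assumed for illustration only: ~35 legal moves per position
# and ~80 plies per game are commonly quoted rough values for chess, and
# 200 million positions/second is the speed usually cited for Deep Blue.
branching_factor = 35
plies = 80
positions_per_second = 200e6

log10_positions = plies * math.log10(branching_factor)              # ~123
log10_seconds = log10_positions - math.log10(positions_per_second)  # time to enumerate
log10_years = log10_seconds - math.log10(60 * 60 * 24 * 365)

print(f"full game tree ~ 10^{log10_positions:.0f} positions")
print(f"brute-force enumeration ~ 10^{log10_years:.0f} years")
```

Even with generous assumptions, exhaustive enumeration comes out to roughly 10^108 years, which is why both posters agree that brute-force mapping of the tree is out of the question.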
 
  • #154
TheStatutoryApe said:
This does not address my objection whatsoever.

Please consider the context of the quote:

Tisthammerw said:
TheStatutoryApe said:
This is a product of flaws in the thought experiment that are misleading. We are focusing on the homunculus in the CR rather than on the system as a whole.

Ah, the old systems reply. The systems reply...

My point is that the "system" approach doesn't seem to work. You may have included additional claims (and I did address other parts of the post) but I felt the need to explain the systems reply anyway.


The objection I had was in regard to the manner in which you are separating the computer from the sensory input.

I didn't really separate the computer from the sensory input though (in the robot and program X scenario). On the contrary, program X receiving sensory input is an essential part of the thought experiment.


If I were to rip your eyeballs out, somehow keep them functioning, and then have them transmit data to you to decipher, you wouldn't be able to do it.

Not necessarily. If the data is transmitted in a form that my physical brain would recognize as it normally does, I would be able to see. The eyes are a separate organ from the brain, but the eye can convert what it sees into signals and pass them off to the brain where I can see and act accordingly (sound familiar?).


Your eyes work because they are part of the system as a whole.

And the robot's cameras are also part of the system as a whole.


Refusing to allow the AI to have eyes is just a stubborn way to preserve the CR argument.

But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...

But then if you wish to claim that the robot with its normal processor could understand you should answer the questions I asked earlier.


Are you really just unaware of what computers are capable of nowadays?

I am a computer science major and am at least roughly aware of the capability of computers. They can do many impressive things, but I know of no computer accomplishment that would lead me to believe they are capable of understanding given the evidence of how computers work, the evidence of the thought experiments etc.


Your hypothetical system does not allow the man in the room to use the portions of his brain suited for the processing of the sort of information you are sending him.

If you are asking me if he literally sees the outside world, you are correct. If you are saying I am not allowing him to process the information (i.e. operate the program) you are incorrect. He clearly does so, and we have an instance in which the "right" program is being run and still there is no literal understanding.


It has nothing to do with not having the "right program".

That's not quite what you said in post #121. You said it was just a matter of having "the right hardware and the right program." I supplied the "right" program, still no understanding. And my subsequent questions criticized the usefulness of joining the "right hardware" to this "right program;" questions you have not answered.

Even here yet again you fail to address my objection while using some stock argument. My objection was that you are not allowing the man in the room to properly utilize his own brain.

He is using his own brain to operate the program. Is he using his brain in such a way he can learn a new language? No, but that is beside the point. I'm not claiming a human being can't learn and literally understand. If you wish to object to my thought experiment, please answer my questions. The fact that a human being is capable of seeing and learning does not imply that my argument is unsound. That is my objection to your objection.


You do understand why the brain surgeon cannot perceive colour, right?

Well, I didn't say that...

It's a lack of the proper hardware, or rather wetware in this case.

Even still, she can have complete knowledge of the non-color-blind brain, know its every rule of operation and the sequence of neurons firing etc. and still not see color.

The most common problem that creates colour blindness is that the eyes lack the proper rods and cones (I forget exactly which ones do what, but the case is something of this sort nonetheless). If she were to undergo some sort of operation to add the elements necessary for gathering colour information to her eyes, a wetware upgrade, then she should be able to see in colour, assuming that the proper software is present in her brain.

True, but you're missing the point...

Funnily enough, your own example is perfect for demonstrating that even a human needs the proper software and hardware/wetware to be capable of perception!

Something I never disputed.

So why is it that the proper software and hardware are necessary for a human to do these special processes that you attribute to it, but the right software and hardware are not enough to help a computer?

I claim that the "right program" and "right hardware" are not sufficient for a computer because of my thought experiment regarding the robot and program X, which you have consistently ignored.


And I already know what you are going to say. You'll say that the human does have a magic ball of yarn which you have dubbed a "soul". Yet you cannot tell me the properties of this soul and what exactly it does without invoking yet more magic balls of yarn like "free will" or maybe Searle's "Causal Mind" or "Intrinsic Intentionality". So what are these things? What do they do? Will you invoke yet more magic balls of yarn? Maybe even the cosmic magic ball of yarn called "God"? None of these magic balls of yarn prove anything.

I can directly perceive my own free will, and thus can rationally believe in the existence of the soul. Just because I am not able to discern the precise mechanics of how they work does not mean I cannot have a rational basis to believe in them.

In any case my personal beliefs are not relevant. My counterexamples still remain as do my unanswered questions.


You seem to misunderstand the way AI works in instances such as these. The AI is not simply following instructions. When the robot comes to a wall there is not an instruction that says "when you come to a wall turn right". It can turn either right or left and it makes a decision to do one or the other.

I'd be interested in knowing the mechanisms by which this "decision" works. Sometimes it's deterministic (e.g. an "if-then" type of thing), or perhaps it is "random," but even "random" number generators are actually built upon deterministic rules (and hence are actually pseudorandom). Are you even aware of how this works?
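As an illustration of the pseudorandomness point, here is a minimal sketch of a linear congruential generator, one classic deterministic recipe behind "random" numbers. The left/right robot decision and the specific constants are my own illustrative assumptions, not code from any actual robot or library.

```python
# A toy linear congruential generator (LCG): "random"-looking output produced
# by a fully deterministic rule. The constants are the well-known Numerical
# Recipes parameters; the left/right "decision" mirrors the wall-following
# robot example from the quoted post and is purely illustrative.
class TinyLCG:
    def __init__(self, seed: int):
        self.state = seed

    def next(self) -> int:
        # Same seed in, same sequence out: nothing non-deterministic here.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

rng = TinyLCG(seed=42)
for _ in range(5):
    turn = "right" if rng.next() % 2 == 0 else "left"
    print("robot 'decides' to turn", turn)
```

Run it twice with the same seed and the robot "decides" identically both times, which is the sense in which such choices remain rule-governed.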


Earlier in this discussion Deep Blue was brought up. You responded to that in a very similar manner as you did to this, but I never got back to discussing it. You seem to think that a complex set of syntactic rules is enough for Deep Blue to have beaten Kasparov. The problem, though, is that you are wrong. You cannot create such rules for making a computer play chess and have the computer be successful.

It depends on what you mean by "syntactic" rules. If you are referring to a complex set of instructions (the kind that computer programs can use) that work as a connected and orderly system to produce valid output, then you are incorrect. You can indeed create such rules for making a computer play chess and have it be this successful. As I said, Deep Blue did (among other things) use an iterative deepening search algorithm with alpha-beta pruning. The program of Deep Blue was indeed a complex set of instructions, and computer programs (which are complex sets of instructions) can do some pretty impressive stuff, as I said before. I never thought it would be you who would underestimate that.
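For readers unfamiliar with the technique named above, here is a minimal sketch of depth-limited minimax with alpha-beta pruning. It searches a tiny hard-coded toy tree rather than chess positions, and it is my own illustration of the general idea, not Deep Blue's actual program (which also layered iterative deepening, i.e. repeated searches to increasing depths, on top of specialized hardware and a far richer evaluation).

```python
# Minimal minimax with alpha-beta pruning over a toy game tree (nested lists
# of leaf scores). An illustrative sketch of the general technique only.
def alphabeta(node, alpha, beta, maximizing):
    if isinstance(node, (int, float)):
        return node                              # leaf: static evaluation score
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                    # the minimizer will never allow this line
                break                            # prune the remaining children
    else:
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
    return value

# The maximizing player picks the branch whose worst case (opponent's best reply) is best.
tree = [[3, 5], [6, 9], [1, 2]]
print(alphabeta(tree, float("-inf"), float("inf"), True))   # -> 6
```

The pruning step is what lets such a search look many plies ahead without examining every branch of the tree.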


At least not against anyone who plays chess well, and especially not against a world champion such as Kasparov. You cannot simply program it with rules such as "when this is the board position, move king's knight one to king's bishop three". If you made this sort of program and expected it to respond properly in any given situation, you would have to map out the entire game tree.

Deep Blue didn't map out the entire game tree, but its search algorithms did go as deep as 14 moves. Being able to see 14 moves ahead is a giant advantage. From there it could pick the "best" solution.


Computers can do this far faster than we can, and even they, at current maximum processing speed, would take at least hundreds of thousands of years to do it. By that time we will be dead and unable to write out the answers for every single possible board position. So we need to make shortcuts. I could go on and on about how we might accomplish this, but how about I tell you the way I understand it is actually done instead.
The computer is taught how to play chess. It is told the board setup and how the pieces move, take each other, and so on. Then it is taught strategy, such as controlling the center, using pieces in tandem with one another, hidden check, and so forth.

And this is done by...what? Hiring a computer programmer to write the right set of instructions.
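As a hedged sketch of what "writing the right set of instructions" for chess strategy can look like once it is written down, here is a toy static evaluation function. The board encoding, piece values, and centre bonus are my own illustrative assumptions and bear no relation to Deep Blue's real evaluation.

```python
# Toy static evaluation: material count plus a small bonus for occupying the
# centre. The board is a hypothetical dict mapping squares ("e4") to pieces
# ("wP" = white pawn, "bQ" = black queen); positive scores favour White.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}
CENTER_SQUARES = {"d4", "d5", "e4", "e5"}

def evaluate(board: dict) -> float:
    score = 0.0
    for square, piece in board.items():
        sign = 1 if piece[0] == "w" else -1
        score += sign * PIECE_VALUES[piece[1]]   # strategy rule 1: count material
        if square in CENTER_SQUARES:
            score += sign * 0.25                 # strategy rule 2: control the centre
    return score

# Example: White is up a pawn, and that pawn sits on a centre square.
print(evaluate({"e1": "wK", "e8": "bK", "e4": "wP"}))   # -> 1.25
```

A search routine such as the alpha-beta sketch earlier would call a function like this at its leaf positions, so "taught strategy" ultimately cashes out as more instructions of exactly this kind.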


So far the computer has not been given any set of instructions on how to respond to any given situation, such as the setup in the CR.

Actually, the Chinese room thought experiment can be given this type of programming. Remember my variants of the Chinese room were rather flexible and went well beyond mere "if-then" statements.


The computer is then asked based on the rules presented for the game and the goals presented to it to evaluate possible moves and pick one that is the most advantageous.

Like the search algorithms of Deep Blue?


This is pretty much what a human does when a human plays chess. So since the computer is evaluating options and making decisions, would you still say that it cannot understand what it is doing and is only replacing one line of code with another line of code as it says to do in its manual?

Short answer, yes.

Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.
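To make the variant above concrete, here is a deliberately meaning-free sketch of a "rulebook follower": its rules involve a loop and an arithmetic step on the input symbols, yet nothing in the program attaches any chess (or Chinese) meaning to what it manipulates. The bit strings, the rule, and the canned replies are all my own illustrative assumptions, not anything from Searle or IBM.

```python
# A purely formal rule follower: it loops over input symbols, does arithmetic
# on them, and copies out a canned reply string. At no point is any meaning
# attached to the symbols being shuffled.
CANNED_REPLIES = ["Pawn to Queen's Four", "Knight to King's Bishop Three",
                  "Bishop to Queen's Knight Five", "Castle kingside"]

def follow_rulebook(input_bits: str) -> str:
    # Rule 1: sum the 1-bits in the input (a mechanical arithmetic step).
    total = sum(int(bit) for bit in input_bits if bit in "01")
    # Rule 2: use the total to pick an entry from the reply table.
    return CANNED_REPLIES[total % len(CANNED_REPLIES)]

print(follow_rulebook("0110 1011"))   # -> "Knight to King's Bishop Three"
```

A man in the room could execute these two rules by hand with pencil and paper, which is the sense in which the thought experiment asks whether rule-following alone amounts to understanding.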
 
  • #155
Tisthammerw said:
Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.
Again you relegate the man to the position of a singular processor of information, utilizing portions of his brain ill-suited for the process he is performing. Certainly you realize that the man, going through these voluminous manuals and using only his critical faculties for every byte of information, is going to take an exceedingly ponderous time for every single move made? How long do you think it would take? Hours? Weeks? Months? Don't you think that if you allowed the man to utilize the full processing capacity of his brain, so that the information processing moved at the same pace as his own mind, he might start to catch on and find meaning in the patterns?


Remember, Searle's argument was that the syntactic patterns of the information were not enough to come to an understanding. The fact that the man is unable to decipher meaning in the patterns when put into the shoes of the computer is supposed to prove this. If we prove that a man can be put in those shoes and find an understanding, then this ruins the proof of the argument. Simply stating that the man is expected to be capable of understanding because he already is capable of it does not save the proof from being invalidated, because the proof hinges on the man not being able to understand. Otherwise you admit that the Chinese Room is a useless argument.
 
  • #156
Tisthammerw said:
But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...
That's the point. Bob should represent the whole system, including the cameras. You are just taking out a Pentium chip and replacing it with a human. He is not representing the sum of the parts, just a processor.

Tisthammerw said:
I can directly perceive my own free will...
You can tell me that, and the man in the Chinese room will tell me that he can speak Chinese too. Many a cognitive science major will tell you that your perceptions are just illusions, like the illusion that the man in the CR understands Chinese. This is why I have told you that, from a cognitive science perspective, this argument argues against the idea that there is something special about humans better than it argues that there is, since you cannot prove or quantify in any meaningful scientific fashion that this "something else" exists given the circumstances.
 
  • #157
The Chinese Room

Tisthammerw said:
The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possess real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?
Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.

Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language. The fact that he is able to carry out a rational conversation in Chinese is demonstration that he (unconsciously) understands Chinese.

The error in Searle’s reasoning is the assumption that “understanding a language” requires “consciousness”. It is easy to see how this assumption is made, since to date our only experiences of agents with the capacity to understand language have been conscious agents, but I would submit that this need not necessarily always be the case in the future.

In fact the systems reply does work very well indeed.

MF
 
  • #158
moving finger said:
Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.


IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language. The fact that he is able to carry out a rational conversation in Chinese is demonstration that he (unconsciously) understands Chinese.

The error in Searle’s reasoning is the assumption that “understanding a language” requires “consciousness”. It is easy to see how this assumption is made, since to date our only experiences of agents with the capacity to understand language have been conscious agents, but I would submit that this need not necessarily always be the case in the future.

In fact the systems reply does work very well indeed.

MF
The problem here is that human language has what Searle might refer to as a "semantic" property, the meaning of the words. The meanings of these words, for the most part, are attached to things of the outside world which the man in the box has no access to, and therefore no reference by which to learn those meanings. This is why I have argued here so strongly for allowing the man in the box sensory input. The way Searle sets up his argument, though, sensory information from, say, a camera will just be in another language which the man in the room will not understand. Personally I think that this is unfair and unrealistic.

Searle's Chinese Room may have started out as a sincere thought experiment, but since then, I think, the room has been shaped specifically to make the man in the room fail in his endeavour to understand what is happening.
 
  • #159
TheStatutoryApe said:
Something I have failed to bring up yet: the scenario in the CR of a computer program with a large enough set of instructions, telling the computer to replace certain lines of script with other lines of script, making it indistinguishable from a human, is not possible in reality.

Why not?


Any significantly long conversation, or just a person testing it to see whether it is a computer of this sort, will reveal it for what it is.

Ah, so you’re criticizing the reliability of the Turing test for strong AI, instead of the technological possibility of a program being able to mimic a conversation.


Just as in the case of mapping out the game tree for chess, it would be equally impossible to map out a "conversation tree" sufficient to cover even a significant portion of the possible conversational scenarios.

This requires some explanation (especially on what you mean by “conversation tree”).


It's fine as a hypothetical scenario, because a hypothetical can allow something that is impossible. BUT if you were to come across a computer in reality that could carry on a conversation indistinguishable from a human, you would have to assume that the computer was capable of some level of semantic understanding.

I don't see why, given the counterexample of the Chinese room. Note that I didn't specify exactly what kinds of instructions were used. It doesn't have to be a giant "if-then" tree. The person in the room can use the same kinds of rules (for loops, arithmetic etc.) that the computer can. Thus, if a computer can simulate understanding so can the man in the Chinese room. And yet still there is no literal understanding.


TheStatutoryApe said:
Whether or not you want to agree that Deep Blue is capable of any sort of "understanding," Deep Blue and programs like it are still proof that AI has broken out of the Chinese Room.

This requires some justification, especially given the fact that we haven't even been able to produce the Chinese Room (yet).


Tisthammerw said:
Let's do another variant of the Chinese room thought experiment. The questions are asking what to do in a given chess scenario. Using a complex set of instructions found in the rulebook, the man in the room writes down an answer (e.g. "Pawn to Queen's Four"). We can even have him carry out the same mathematical and logical operations the Deep Blue program does in binary code, and still he won't understand what's going on.

Again you relegate the man to the position of a singular processor of information, utilizing portions of his brain ill-suited for the process he is performing.

Perhaps so, but you're missing the point. This is a clear instance of a program simulating chess without having any real understanding. You can make arguments showing how a human being can understand, but this has little relevance to the counterexample.

Certainly you realize that the man, going through these voluminous manuals and using only his critical faculties for every byte of information, is going to take an exceedingly ponderous time for every single move made?

And thus of course real computers would do it much faster. But this doesn't change the point of the argument (e.g. simulation of chess without real understanding) and if need be we could say that this person is an extraordinary autistic savant capable of processing inordinate amounts of information rapidly.


Remember, Searle's argument was that the syntactic patterns of the information were not enough to come to an understanding. The fact that the man is unable to decipher meaning in the patterns when put into the shoes of the computer is supposed to prove this. If we prove that a man can be put in those shoes and find an understanding, then this ruins the proof of the argument.

This doesn't work for a variety of reasons. First, we are not disputing whether or not a man can learn and understand, so showing that a man can understand by itself has little relevance. Second, even though we can find ways a human can literally understand, this doesn’t change the fact that we have a clear instance of a complex set of instructions giving valid output (e.g. meaningful answers, good chess moves) without literal understanding; and so the counterexample is still valid. Third, you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).


But I am allowing the AI to have eyes. That's an essential part of my thought experiment. It's just that I replaced the part of the robot that would normally process the program with Bob...

That's the point.

Well then, please answer my questions regarding what happens when we replace Bob with the robot's ordinary processor. You haven't done that. Let's recap the unanswered questions:

Tisthammerw said:
One could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But if you claim this, several important questions must be answered, because it isn’t clear why that would make a relevant difference if the exact same operations are being performed. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?

I (still) await your answers.


Bob should represent the whole system...

Remember, I already responded to the systems reply (e.g. post #149). But I can do so again in a way that's more fitting for this thought experiment. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions above.


I can directly perceive my own free will...

You can tell me that, and the man in the Chinese room will tell me that he can speak Chinese too.

Er, no he won't. He’ll tell you he won't understand Chinese ex hypothesi, remember?


Many a cognitive science major will tell you that your perceptions are just illusions

And many a cognitive science major will tell me that my perceptions of free will are correct. The existence of free will falls into the discipline of metaphysics, not science (though there is some overlap here). Here's the trouble with the "illusion" claim: if I cannot trust my own perceptions, on what basis am I to believe anything, including the belief that free will does not exist? Hard determinism gets itself into some major intellectual difficulties when examined closely.
 
  • #160
moving finger said:
Tisthammerw said:
The systems reply goes something like this:

It’s true that the person in the room may not understand. If you ask the person in the room (in English) if he understands Chinese, he will answer “No.” But the Chinese room as a whole understands Chinese. Surely if you ask the room if it understands Chinese, the answer will be “Yes.” A similar thing would be true if a computer were to possess real understanding. Although no individual component of the computer possesses understanding, the computer system as a whole does.

There are a couple of problems with this reply. First, does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?

Why is it necessary for the room to be conscious in order for it to understand Chinese? I do not see that consciousness is a necessary prerequisite for understanding a language.

It depends on how you define "understanding," but I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning. Suppose we knock a Chinese person unconscious. Will he understand anything we say to him? Suppose we speak to a pile of books. Will they understand anything? How about an arrangement of bricks?


Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

IMHO this is incorrect reasoning. The man is not conscious of the fact that he understands the language, and yet he is perfectly capable of carrying out rational conversations in the language.

Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi. (If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.) Similarly, a person can do the logical/arithmetic operations a computer can without understanding what the strings of binary digits mean.
 
  • #161
Tisthammerw said:
This doesn't work for a variety of reasons. First, we are not disputing whether or not a man can learn and understand, so showing that a man can understand by itself has little relevance. Second, even though we can find ways a human can literally understand, this doesn’t change the fact that we have a clear instance of a complex set of instructions giving valid output (e.g. meaningful answers, good chess moves) without literal understanding; and so the counterexample is still valid. Third, you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).
You don't seem to understand how a thought experiment works. The thought experiment is supposed to present an analogous situation. The man in the room is supposed to represent a computer. The argument is supposed to show that even a man attempting to understand Chinese while in the shoes of a computer is unable to do so, and hence a computer obviously will not be able to understand Chinese. If we can show that the room can be manipulated in a way that reflects the situation of a computer and allows the man to understand Chinese, then the proof of the CR (the man not being able to understand while in the shoes of a computer) has been invalidated. If you don't understand this then there is no more point in discussing the CR.
 
  • #162
TheStatutoryApe said:
You don't seem to understand how a thought experiment works.

How's that?


The thought experiment is supposed to present an analogous situation. The man in the room is supposed to represent a computer.

True.


The argument is supposed to show that even a man attempting to understand Chinese while in the shoes of a computer is unable to do so, and hence a computer obviously will not be able to understand Chinese.

That's not how I'm using this particular thought experiment. I'm using it as a counterexample: complex instructions are yielding valid output and still no literal understanding. Thus, a complex set of instructions yielding valid output does not appear sufficient for literal understanding to exist. This is of course analogous to a computer (since a computer uses a complex set of instructions, etc.), but that doesn't change my purpose for the thought experiment.


If we can show that the room can be manipulated in a way that reflects the situation of a computer and allows the man to understand Chinese, then the proof of the CR (the man not being able to understand while in the shoes of a computer) has been invalidated.

True--if it reflects the situation of a computer. Given your remarks and my subsequent arguments, it isn't clear that this is the case, nor have you answered my questions regarding it (e.g. the robot and program X). You seem to forget what I said in the post you replied to:

Tisthammerw said:
you have constantly failed to connect analogies of a human understanding with a computer literally understanding in ways that would overcome my counterexamples (e.g. the robot and program X, and you still haven't answered my questions regarding this).
 
  • #163
TheStatutoryApe said:
The problem here is that human language has what Searle might refer to as a "semantic" property, the meaning of the words. The meanings of these words, for the most part, are attached to things of the outside world which the man in the box has no access to, and therefore no reference by which to learn those meanings.
But it is not necessary for "the man in the box" to learn any meanings. The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand. "the man in the box" is simply functioning as part of the input/output mechanism for the CR. The only reason the CR can understand Chinese is because it must already be loaded with (programmed with) enough data which allow it to form relationships between words, which allow it to draw analogies, which allow it to associate words, phrases and sentences with each other, in short which allow it to grasp the meaning of words. It is not necessary for the CR to have direct access to any of the things in the outside world which these words represent in order for it to understand Chinese. If I lock myself in a room and cut off all access to the outside world, do I suddenly lose the ability to understand English? No, because my ability to understand (developed over a period of years) is now a part of me, it is internalised, my ability to understand is now "programmed" within my brain, and it continues to operate whether or not I have any access to the outside world.

TheStatutoryApe said:
This is why I have argued here so strongly for allowing the man in the box sensory input.
This is not necessary. Sensory input may be required as an (optional) way of learning a language in the first place, but once an agent has learned a language (the CR has already learned Chinese), continued sensory input is not required to maintain understanding.

TheStatutoryApe said:
Searle's Chinese Room may have started out as a sincere thought experiment, but since then, I think, the room has been shaped specifically to make the man in the room fail in his endeavour to understand what is happening.
imho the reason Searle's CR argument continues to persuade some people is because the focus continues to be (wrongly) on just the "man in the box" rather than on the entire CR.

MF
 
  • #164
Hi Tisthammerw

Tisthammerw said:
I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning.
With respect, this is anthropocentric reasoning and is not necessarily correct.
Ask the Chinese Room (CR) any question in Chinese, and it will respond appropriately. Ask it if it understands Chinese, it will respond appropriately. Ask it if it grasps the meanings of words, it will respond appropriately. Ask it if it understands semantics, it will respond appropriately. Ask it any question you like to "test" its ability to understand, to grasp the meanings of words, and it will respond appropriately. In short, the CR will behave just as any human would behave who understands Chinese. On what basis then do we have any right to claim that the CR does NOT in fact understand Chinese? None.

Why should "ability to understand" be necessarily associated with consciousness? Yes, humans (perhaps) need to be conscious in order to understand language (that is the way human agents are built), but that does not necessarily imply that consciousness is a pre-requisite of understanding in all possible agents.

A human being can perform simple arithmetical calculations, such as adding two integers together and generating a third integer. A simple calculator can do the same thing. But an unconscious human cannot do this feat of addition - does that imply (using your anthropocentric reasoning) that consciousness is a pre-requisite for the ability to add two numbers together? Of course not. We know the calculator is not conscious and yet it can still add two numbers together. The association between consciousness and the ability to add numbers together is therefore an accidental association peculiar to human agents; it is not a necessary association in all agents. Similarly I argue that the association between understanding and consciousness in human agents is an accidental association peculiar to humans; it is not a necessary association in all possible agents.

Tisthammerw said:
Suppose we knock a Chinese person unconscious. Will he understand anything we say to him?
Perhaps not, because consciousness and ability to understand are accidentally associated in humans. The same Chinese person will also not be able to add two numbers together whilst unconscious, but that does not imply that a simple pocket calculator must necessarily be conscious in order to add two numbers together.

Tisthammerw said:
Suppose we speak to a pile of books. Will they understand anything? How about an arrangement of bricks?
I hope you are being flippant here (if I thought you were being serious I might start to doubt your ability to understand English, or at least your ability to think rationally). Neither a pile of books nor a pile of bricks has the ability to take the information we provide (the sounds we make) and perform any rational processing of this information in order to derive any kind of understanding. Understanding is a rational information-processing exercise; a static agent cannot be in a position to rationally process information and therefore cannot understand. The pile of books here is in the same position as the unconscious Chinese man - neither can understand what we are saying, and part of the reason for this is that they have no way of rationally processing the information we are providing.

In the following (to avoid doubt) we are talking about a man who internalises the rulebook.
Tisthammerw said:
Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi.
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.

Your argument works only if you define “understanding” as necessarily implying “conscious understanding” (again this is an anthropocentric perspective). If one of your assumptions is that any agent who understands must also be conscious of the fact that it understands (as you seem to be saying), then of course by definition an agent can understand only if it is also conscious of the fact that it understands. But I would challenge your assumption. To my mind, it is not necessary that an agent be conscious of its own understanding in order for it to be able to understand, just as an agent does not need to be conscious of the fact that it is carrying out an arithmetic operation in order to carry out arithmetic operations.

Tisthammerw said:
(If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.)
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessarily implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).

Tisthammerw said:
Similarly, a person can do the logical/arithmetic operations a computer can without understanding what the strings of binary digits mean.
Again you are implicitly assuming an anthropocentric perspective in that “understanding what the strings of binary digits mean” can only be done by an agent which is conscious of the fact that it understands.

With respect,

MF
 
  • #165
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings.
Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them. Perhaps we'll have to agree to disagree on that point.

The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand.
Ah, the systems reply. A couple problems here. Does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese? That doesn’t strike me as plausible. Second, Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

I'm using the dictionary definition of "grasping the meaning of." Without consciousness there is nothing to grasp meaning.
With respect, this is anthropocentric reasoning and is not necessarily correct.
Anthropocentric? I never said only humans are capable of understanding.

Ask the Chinese Room (CR) any question in Chinese, and it will respond appropriately. Ask it if it understands Chinese, it will respond appropriately. Ask it if it grasps the meanings of words, it will respond appropriately. Ask it if it understands semantics, it will respond appropriately. Ask it any question you like to "test" its ability to understand, to grasp the meanings of words, and it will respond appropriately. In short, the CR will behave just as any human would behave who understands Chinese. On what basis then do we have any right to claim that the CR does NOT in fact understand Chinese?
Ask the man inside the room if he understands Chinese. The reply will be in the negative. Also, see above regarding the systems reply.

Which (I think) is a giant non sequitur for literal understanding. He can simulate a conversation using a complex set of rules manipulating input (e.g. if you see X replace with Y), but he clearly doesn't know the meaning of any Chinese word ex hypothesi.
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.

(If you wish to dispute this, you’ll have to show that such a thing is logically impossible, and that will be difficult to prove.)
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessally implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).
To me it seems pretty self-evident (if you understand what consciousness is). Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness. If something (e.g. an arrangement of bricks) does not know the meaning of words, it cannot possess literal understanding of them. Calling my belief “anthropocentric” doesn't change the logic of the circumstances. An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.

And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
 
  • #166
Tisthammerw said:
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings.
Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them. Perhaps we'll have to agree to disagree on that point.
With respect, you have quoted me out of context here. Please check again my full reply in post #163 above on this point, which runs thus :
moving finger said:
But it is not necessary for "the man in the box" to learn any meanings. The facility to understand is not resident within "the man in the box". It is the entire CR which has the facility to understand. "the man in the box" is simply functioning as part of the input/output mechanism for the CR.
In other words (and at the risk of repeating myself), the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth; he could be replaced by a simple mechanism); rather, the ability "to understand" is an emergent and dynamic property of the entire contents of the box, of which the man forms only a minor part. This is why it is not necessary for the man in the box to know the meanings of any words.
In the same way, individual neurons in your brain participate in the process of understanding that takes place in your brain, but the ability "to understand" is an emergent and dynamic property of your brain, of which each neuron forms only a minor part. It is not necessary (and indeed makes no sense) for any one neuron to "know the meanings" of any words.
If you cannot see and understand this then (with respect) I am afraid that you have missed the whole point of the CR argument.
Tisthammerw said:
Does the combination of the book, paper, pen, and the person somehow magically create a separate consciousness that understands Chinese?
With respect, you seem to be ignoring my replies above – see for example post #164. I have made it quite clear that imho the association between “consciousness” and “ability to understand” (viz consciousness is a necessary pre-requisite to understanding) may be a necessary but accidental relationship in homo sapiens, and this does not imply that such a relationship is necessary in all possible agents. Please read again my analogy with the simple calculator, which runs as follows :
moving finger said:
A human being can perform simple arithmetical calculations, such as adding two integers together and generating a third integer. A simple calculator can do the same thing. But an unconscious human cannot do this feat of addition - does that imply (using your anthropocentric reasoning) that consciousness is a pre-requisite for the ability to add two numbers together? Of course not. We know the calculator is not conscious and yet it can still add two numbers together. The association between consciousness and the ability to add numbers together is therefore an accidental association peculiar to human agents; it is not a necessary association in all agents. Similarly I argue that the association between understanding and consciousness in human agents is an accidental association peculiar to humans; it is not a necessary association in all possible agents.
Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
Again, I have already answered this in my post #164 above, thus :
moving finger said:
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Tisthammerw said:
Anthropocentric? I never said only humans are capable of understanding.
Homo sapiens is the only species that we “know” possesses consciousness. To be more correct, the only individual that I know possesses consciousness is myself. I surmise that other humans possess consciousness, but I challenge anyone to prove to another person that they are conscious. In the case of non-human species, I have no idea whether any of them are conscious or not. And I challenge anyone to prove that any non-human species is indeed conscious.
Tisthammerw said:
Ask the man inside the room if he understands Chinese. The reply will be in the negative.
See my first reply in this post. It makes no difference whether the man inside the room understands Chinese or not, this is the whole point. It is the entire room which possesses the understanding of Chinese. I do not wish to repeat the argument all over again, so please read the beginning of this post again.
Tisthammerw said:
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook. Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail. The body of Searle thus understands Chinese. The only thing he does not possess is CONSCIOUS awareness of the fact that he understands Chinese. He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese, but he is not CONSCIOUS of the fact that he knows the meanings of these words.
All of this assumes that Searle could INTERNALISE the rulebook and implement the rulebook internally within his person without being conscious of the details of what he is doing – whether this is possible in practice or not I do not know (but it was Searle who suggested the internalisation, not me!)
Tisthammerw said:
That isn't logical.
Imho it is completely logical.
Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is).
It also seems self-evident to me that understanding is an emergent property of a dynamic system, as is consciousness, and the two may be associated (as in Homo sapiens), but there is in principle no reason why they must be associated in all possible agents.
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow. All you have shown here is that consciousness is associated with understanding in Homo sapiens. You have NOT shown that understanding is impossible without consciousness in all possible agents.
Tisthammerw said:
If something (e.g. an arrangement of bricks) does not know the meaning of words it cannot possesses literal understanding of them.
An arrangement of bricks is a static entity. Understanding is a dynamic process. Please do not try to suggest that my arguments imply a static arrangement of bricks possesses understanding.
Tisthammerw said:
An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.
I never said it could.
Please do try not to misread or misquote. I said that imho an agent need not necessarily be conscious in order to understand the meaning of words. You have not proven otherwise.

(for the avoidance of doubt, in the following we are discussing a man who internalises the rulebook)
Tisthammerw said:
And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
Once again you seem not to bother reading my posts. I have answered (in post #164) as follows :
moving finger said:
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Your argument works only if you define “understanding” as necessarily implying “conscious understanding” (again this is an anthropocentric perspective). If one of your assumptions is that any agent who understands must also be conscious of the fact that it understands (as you seem to be saying), then of course by definition an agent can understand only if it is also conscious of the fact that it understands. But I would challenge your assumption. To my mind, it is not necessary that an agent be conscious of its own understanding in order for it to be able to understand, just as an agent does not need to be conscious of the fact that it is carrying out an arithmetic operation in order to carry out arithmetic operations.
With respect, if you wish to base your argument on this assumption, the onus is in fact on you to show that “understanding” necessarily implies “conscious understanding” in all agents (and is not simply an anthropocentric perspective).
Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)? If you cannot, then your argument that the CR does not understand is based on faith or belief, not on rationality.
As always, with respect,
MF
 
  • #167
I believe one day AI will become far more powerful than the human brain. I cannot explain why I believe this with words; I just think that, given enough time, it will happen.
 
  • #168
tomfitzyuk said:
I believe one day AI will become far more powerful than the human brain. I cannot explain why I believe this with words; I just think that, given enough time, it will happen.

Back around the 70's I agreed with this. My reasoning was that hardware and software were both being improved at an exponential pace, and there was no obvious upper limit to their power short of Planck's constant, while human brains were evolving at a much slower pace.

But since the "gene explosion" of the 80's I have revised my view. Nowadays it appears there is a Moore's law analog for tinkering with our own genetic inheritance, so our great grandkids, if we survive, may become smarter at the same or greater pace than AI's are.

Added: In view of the new posting guidelines, I should specify my definitions. Obviously I consider human intelligence to be simply a function of brain (and other body) structure and action, under the control of genes and gene expression. So the human side and the AI side, for me, are comparable. If you want to develop AI intelligence to any degree, I see no theoretical reason why you should not be able to, given sufficient time and skill. In particular I reject, as I have posted many times before, the idea that Goedelian incompleteness, or anything Chaitin has demonstrated about digital limitations, constitutes a hard cap. Brains are not necessarily digital, and AIs need not be.
 
Last edited:
  • #169
moving finger said:
With respect, you have quoted me out of context here.
Yes and no. I took that part of the quote and responded to it, at the time not knowing you were using the systems reply. I subsequently responded to the systems reply (neglecting to modify the previous quote) and this made my response to the first quote relevant (see below).


Still, I admit that your complaint has some validity. I thus apologize.


In other words (and at the risk of repeating myself), the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth, he could be replaced by a simple mechanism), the ability "to understand" is an emergent and dynamic property of the entire contents of the box
But I can use the same response I did last time. Let the man internalize the contents of the Chinese room; suppose the man memorizes the rulebook, the stacks of paper etc. He still doesn't understand the Chinese language. Thus,

Without knowing any meanings to any of the words it seems to make little sense to claim he can understand them.
And my response applies.


With respect, you seem to be ignoring my replies above – see for example post #164. I have made it quite clear that imho the association between “consciousness” and “ability to understand”...
With respect, you seem to have ignored my replies above - see for example post #165. I have made it quite clear that I believe this relationship is necessary in all possible agents by virtue of what consciousness means. Please read my explanation, which runs as follows:


Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is). Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness. If something (e.g. an arrangement of bricks) does not know the meaning of words it cannot possesses literal understanding of them. Calling my belief “anthropocentric” doesn't change the logic of the circumstances. An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.

Now, moving on…


Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.

Again, I have already answered this in my post #164 above
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese. He is able to respond to any question we put to him in Chinese, to rationally process the Chinese words and symbols, and to respond appropriately in Chinese. Whether he is conscious of the fact that he is doing this is irrelevant.
Again, I have already answered this in my post #165 above


(I have reproduced my response for your convenience.)

Tisthammerw said:
moving finger said:
The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.
To which you responded (in post #166):


Tisthammerw said:
That isn't logical.
Imho it is completely logical.
Imho you need to look up the law of noncontradiction.
Homo Sapiens is the only species that we “know” possesses consciousness.
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).


To be more correct, the only individual that I know who possesses consciousness is myself. I surmise that other humans possess consciousness, but I challenge anyone to prove to another person that they are conscious.
True, there is that epistemological problem of other minds. But for computers we at least have logic and reason to help guide us (e.g. the Chinese room and variants thereof).


Tisthammerw said:
Ask the man inside the room if he understands Chinese. The reply will be in the negative.
See my first reply in this post. It makes no difference whether the man inside the room understands Chinese or not, this is the whole point. It is the entire room which possesses the understanding of Chinese. I do not wish to repeat the argument all over again, so please read the beginning of this post again.
See my previous post. It makes a big difference whether the man (when he internalizes and becomes the system) understands Chinese or not. This is the whole point. I do not wish to repeat the argument all over again, so please read the beginning of my posts again.


But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook.
I'm not sure you've been reading my posts correctly. Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"? If you're a physicalist (as I suspect) the answer would seem to be no.


Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.
Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea." And of course, he is right. He doesn't know a word of Chinese.

He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese
You certainly haven't been reading my posts correctly. When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now). Please don’t twist the meaning of what I say again. This is getting tiresome.


Now, given my definition of the word “understand,” does the man understand a single word of Chinese? No, that is obviously not the case here. The man does not know a word of Chinese. If you ask him (in English) if he understands Chinese, his honest answer will be “no.” The part(s) of him that possesses understanding of words does not understand a single word of Chinese. When I say he “knows the meaning of the words” I did not mean he can use a giant rulebook written in English on how to manipulate Chinese characters to give valid output. (Note: “valid” in this case means that the output constitutes satisfactory answers [i.e. to an outside observer the answers seem “intelligent” and “rational”] to Chinese questions.)


Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow.

Yes it does. This is how I define consciousness. Given how I defined consciousness and understanding, it logically follows that literal understanding requires consciousness (regardless of what species one is). Look back at my definitions. When I say a computer cannot understand, I mean the definition of understanding that I have used. I'm not saying that a computer can't “understand” in some other sense (e.g. metaphorical, or at least metaphorical using my definition).
Please do try not to misread or misquote.
Ditto.


(for the avoidance of doubt, in the following we are discussing a man who internalises the rulebook)
Tisthammerw said:
And you've avoided my request. Ex hypothesi he has no knowledge of what any Chinese word means. He utters sounds but has no idea what they mean. Again, you'll have to show that such a thing is logically impossible, and you haven't done anything close to that. Nor is such a claim of logical impossibility plausible.
Once again you seem not to bother reading my posts. I have answered (in post #164) as follows :
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese...

Once again you seem not to bother reading my posts. I have answered this (in post #165) as follows:


Tisthammerw said:
moving finger said:
I disagree. The man in this case does indeed understand Chinese, but he is NOT CONSCIOUS OF THE FACT that he understands Chinese.
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.
But then, you seemed to already know I responded to this. So why are you pretending that I didn't?


Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)?


I could say, "Once again you seem not to bother reading my posts" and quote you my argument of why consciousness is a necessary pre-requisite for understanding in all possible agents, but I think that game is getting old. Please understand the terms as I have used and defined them. Once an intelligent person does that, I think it’s clear that my argument logically follows.
 
Last edited:
  • #170
moving finger said:
the ability "to understand" is not resident solely within the man in the box (the man is there simply to pass written messages back and forth, he could be replaced by a simply mechanism), the ability "to understand" is an emergent and dynamic property of the entire contents of the box
Tisthammerw said:
Let the man internalize the contents of the Chinese room; suppose the man memorizes the rulebook, the stacks of paper etc. He still doesn't understand the Chinese language.
I disagree. Let us assume that it is “possible” for the man to somehow internalise the rulebook and to use this rulebook without indeed being conscious of all the details. The physical embodiment of the man now takes the place of the CR, and the physical embodiment of the man therefore DOES understand Chinese. The man may not be conscious of the fact the he understands Chinese (I have explained this before several times) but nevertheless he (as a physical entity) does understand.
Tisthammerw said:
Thus, without knowing any meanings to any of the words it seems to make little sense to claim he can understand them.
But he DOES know the meaning of the words (in the example where he internalises the rulebook) – even though he is not CONSCIOUS of the fact that he knows the meanings.
Tisthammerw said:
And my response applies.
And your response does not apply.
Tisthammerw said:
To me it seems pretty self-evident (if you understand what consciousness is).
Do you claim to understand what consciousness is?
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this is faulty logic. You have shown (possibly) that some of the characteristics of consciousness may have something in common with some of the characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.
Tisthammerw said:
An entity--human or otherwise--cannot know the meaning of words without knowing the meaning of words.
That’s obvious. But in your example (where the human agent internalises the rulebook), the physical embodiment of the agent DOES know the meaning of words, in the same way that the CR knew the meaning of words. The difference being (how many times do we have to go round in circles?) neither the CR nor the agent are conscious of the fact that they know the meanings of words.
Tisthammerw said:
Searle’s response was that suppose the person internalizes all of the system: the man memorizes the rulebook, the stacks of paper and so forth. Even though the man can conduct a conversation in Chinese, he still doesn’t understand the language.
But he doesn't know the meaning of any Chinese word!
Yes he DOES! He is not CONSCIOUS of the fact that he knows the meaning of any Chinese word, but the physical embodiment of that man “knows Chinese”.
Tisthammerw said:
Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, read my responses again. In all of this you are assuming that “understanding” entails “conscious understanding” and that “knowing” entails “conscious knowing”, which is an assumption not a fact.
Tisthammerw said:
Imho you need to look up the law of noncontradiction.
Imho you need to look up what entails a logical proof. You have not proven that consciousness is a necessary pre-requisite for understanding, you have assumed it (implicit in your definition of understanding). Your assumption may be incorrect.
moving finger said:
Homo Sapiens is the only species that we “know” possesses consciousness.
Tisthammerw said:
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).
Is “perception” all that is required for consciousness? I don’t think so, hence your conclusion is a non sequitur.
moving finger said:
To be more correct, the only individual that I know who possesses consciousness is myself. I surmise that other humans possess consciousness, but I challenge anyone to prove to another person that they are conscious.
Tisthammerw said:
True, there is that epistemelogical problem of other minds.
Thank you. Then you must also agree that you do not “know” whether your cat is conscious or not.
Tisthammerw said:
It makes a big difference whether the man (when he internalizes and becomes the system) understands Chinese or not.
In the case of the internalised rulebook, if you ask (in Chinese) the “entity” which has internalised the rulebook whether it understands Chinese then it WILL reply in the positive. Just as it will reply rationally to any Chinese question. Whether the man is “conscious” of the fact that he understands Chinese is not relevant.
Tisthammerw said:
But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words?
No, you are not reading my posts correctly.
I am saying (in this very abstract theoretical case) that IF Searle could successfully internalise and implement the rulebook within his body, then the physical body of Searle understands Chinese (because he has internalised the rulebook) – ask him any question in Chinese and he will provide a rational response in Chinese, using that internalised rulebook.
Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.
Searle has not consciously assimilated the rulebook, and there is nothing in Searle’s consciousness which understands Chinese. But there is more to Searle than Searle’s consciousness, and some physical part of Searle HAS necessarily internalised the rulebook and is capable of enacting the rulebook – it is THIS part of Searle (which is not conscious) which understands Chinese, and not the “Searle consciousness”.
Thus the answer to your question ‘Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?’ is “yes, there is a difference.” Your question contains an implicit assumption that "Searle understands Chinese" actually means "Searle’s consciousness understands Chinese", whereas "Searle's physical body understands Chinese" does not necessitate that his consciousness understands Chinese.
moving finger said:
Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.
Tisthammerw said:
Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea."
It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.
He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese
Tisthammerw said:
When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).
you implicitly assume that understanding requires consciousness, but you have not shown this to be the case (except by defining understanding to suit your conclusion)
Tisthammerw said:
Now, given my definition of the word “understand,” does the man understand a single word of Chinese?
I dispute your definition. I do not agree that an agent must necessarily be “aware of the fact that it understands” in order to understand.
Tisthammerw said:
No, that is obviously not the case here. The man does not know a word of Chinese. If you ask him (in English) if he understands Chinese, his honest answer will be “no.”
Asking a question in English is not a test of understanding of Chinese. Why do you refuse to ask him the same question in Chinese?
Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.
No, this does not follow. As already explained above, you have shown (possibly) that some of the characteristics of consciousness may have something in common with some of the characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.
Tisthammerw said:
Given how I defined consciousness and understanding, it logically follows that literal understanding requires consciousness (regardless of what species one is).
Naturally, if one defines “X” as a being a pre-requisite of “Y” then it is trivial to show that X is a prerequisite of Y. But I have disputed your definition of understanding.
moving finger said:
Now, can you substantiate your claim that consciousness is a necessary pre-requisite for understanding in all possible agents (not simply in homo sapiens)?
Tisthammerw said:
I could say, "Once again you seem not to bother reading my posts" and quote you my argument of why consciousness is a necessary pre-requisite for understanding in all possible agents, but I think that game is getting old.
Your argument is invalid because you implicitly assume in your definition of understanding that understanding requires consciousness. I dispute your definition of understanding.
With respect
MF
 
  • #171
Some parts I have already addressed in my previous post, so I'll trim some of that.



Do you claim to understand what consciousness is?

Well, this is what I mean by consciousness (see my quote below):

Tisthammerw said:
Consciousness is the quality or state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. So by definition of what consciousness is, literal understanding requires consciousness.


No, this is faulty logic. You have shown (possibly) that some of the characteristics of consciousness may have something in common with some of the characteristics of understanding, but this does not imply that consciousness is a necessary pre-requisite for understanding.

It does, given how I defined consciousness and understanding. If a person did not possess the aspects of consciousness as I defined it (e.g. the aspects of perception and awareness), it would be impossible to have literal understanding (given how I defined understanding). To recap what I said earlier:

When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).

So exactly why doesn't my argument (regarding consciousness being necessary for understanding) logically follow, given the definition of the terms used?


Imho you need to look up what entails a logical proof.

Let's look at what I said in context.

But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical.

Let's see the response:


Tisthammerw said:
moving finger said:
Tisthammerw said:
That isn't logical.
Imho it is completely logical.

Imho you need to look up the law of noncontradiction.

Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?



Tisthammerw said:
Not at all (given how I defined it earlier). My pet cat for instance possesses consciousness (e.g. capable of perception).

Is “perception” all that is required for consciousness? I don’t think so, hence your conclusion is a non sequitur.

My conclusion logically follows given how I defined consciousness. You yourself may have something different in mind, but please be aware of how I am using the term.


Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?

There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.

So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?


Searle has not consciously assimilated the rulebook

Technically that's untrue. He has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. He just doesn't understand any word of Chinese (given how I defined understanding...).


But there is more to Searle than Searle’s consciousness, and some physical part of Searle HAS necessarily internalised the rulebook and is capable of enacting the rulebook – it is THIS part of Searle (which is not conscious) which understands Chinese, and not the “Searle consciousness”.

The part that has internalized the rulebook is his conscious self, remember?

moving finger said:
Ask him any question you like to test his understanding of Chinese, and he will respond accordingly. There is no test of understanding that he will fail.

Tisthammerw said:
Really? There is no test of understanding that he will fail? Let's ask him (in English) what Chinese word X means, and he will (quite honestly) reply "I have no idea."

moving finger said:
It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.

I think I'll have to archive this response in my “hall of absurd remarks” given how I explicitly defined the term “understanding.”

Seriously though, given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese? Do you think he's lying when he says he doesn't know what the Chinese word means?

moving finger said:
He KNOWS THE MEANING OF WORDS in the sense that he can respond rationally and intelligently to questions in Chinese

You certainly haven't been reading my posts correctly. When I say “understand” I mean “grasp the meaning of” (as I suggested earlier). When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean (as I suggested earlier). When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean (as I am suggesting now).

To which you have replied:

you implicitly assume that understanding requires consciousness, but you have not shown this to be the case (except by defining understanding to suit your conclusion)

Indeed I have done that, but this doesn't change the fact of my conclusion. Given how I defined understanding, consciousness is a prerequisite. And my claim is that computers as we know them (as in the robot and program X story) cannot possibly have literal understanding in the sense that I am referring to simply by “running the right program.” Could it have understanding in some other, metaphorical sense (at least, metaphorical to my definition)? Maybe, but that is another issue. My original point about a computer not being able to perceive the meaning of words (i.e. "understand") stands as valid. The computer cannot literally understand any more than the man in the Chinese room understands a word of Chinese.


Your argument is invalid because you implicitly assume in your definition of understanding that understanding requires consciousness.

So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).


I dispute your definition of understanding.

Too bad for you. But this is what I mean when I use the term “understanding.” Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means. This is clearly not the case with the man in the Chinese room. He doesn't understand a word of Chinese. Again, perhaps computers can have understanding in some metaphorical sense, but it seems that a computer cannot understand in the sense that I mean when I use the term.

It sounds like our disagreement has been a misunderstanding of terms. Can we agree that a computer cannot “understand” given what I mean when I use the word?
 
  • #172
Tisthammerw said:
So exactly why doesn't my argument (regarding consciousness being necessary for understanding) logically follow, given the definition of the terms used?
Allow me to paraphrase your argument, to ensure that I have the correct understanding of what you are trying to say.
According to you (please correct me if I am wrong),
Consciousness = sensation, perception, thought, awareness
Understanding = grasp meaning of = knows what words mean = perceives meaning of words = is aware of truth of words
Firstly, with respect, as I have mentioned already, in the case of consciousness this is a listing of some of the “components of consciousness” rather than a definition of what consciousness “is”. It is rather like saying “a car is characterised by wheels, body, engine, transmission”. But this listing is not a definition of what a car “is”, it is simply a listing of some of the components of a car.
Secondly, I do not see how you make the transition from “Consciousness = sensation, perception, thought, awareness” to the conclusion “consciousness is a necessary pre-requisite for understanding”. Simply because consciousness and understanding share some characteristics (such as “awareness”)? But to show that two concepts share some characteristics is not tantamount to showing that one is a necessary pre-requisite of the other. A car and a bicycle share the common characteristic that both entities have wheels, but this observation tells us nothing about the relationship between these two entities.
Tisthammerw said:
Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?
Your argument is based on a false assumption, which is that “he knows the meaning of the words without knowing the meaning of the words” – and I have repeated many times (but you seem to wish to ignore this) this is NOT what is going on here. Can you see why your argument is invalid?
Tisthammerw said:
My conclusion logically follows given how I defined conscoiusness.
With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.
Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
moving finger said:
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.
Tisthammerw said:
So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?
I did not say it does not take place in his brain. Are you perhaps assuming that brain is synonymous with consciousness?
Let Searle (or someone else) first tell me “where he has internalised the rulebook”, and I will then be able to tell you where the understanding takes place (this is Searle’s thought experiment, after all)
Tisthammerw said:
The part that has internalized the rulebook is his conscious self
I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not necessarily as a part of his consciousness. In the same way, memories in the brain exist as a part of us, but are not necessarily part of our consciousness (unless and until such time as they are called into consciousness and are processed there).
(In the same way, the man in the CR participates in the Chinese conversation, but need not be consciously aware of that fact).
moving finger said:
It should be obvious to anyone with any understanding of the issue that asking him a question in English is NOT a test of his ability to understand Chinese.
Tisthammerw said:
I think I'll have to archive this response in my “hall of absurd remarks” given how I explicitly defined the term “understanding.”
Then you would be behaving illogically. What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?
Tisthammerw said:
given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese?
First define “person”. With respect, I suggest by “person” you implicitly mean “consciousness”, and we both agree that the consciousness that calls itself “Searle” does not understand Chinese. Does that make you happy?
Nevertheless, there is a part of the physical body of Searle (which is not part of his consciousness) which does understand Chinese. This is the “internalised rulebook”. You obviously will not accept this, because in your mind you are convinced that consciousness is a necessary pre-requisite for understanding – but this is something that you have (with respect) assumed, and not shown rigorously.
Tisthammerw said:
Do you think he's lying when he says he doesn't know what the Chinese word means?
The consciousness calling itself Searle does not know the meaning of a word of Chinese.
But there exists a part of the physical body of Searle (which is not conscious) which does understand Chinese – this is the part that has internalised the rulebook.
Tisthammerw said:
Given how I defined understanding, consciousness is a prerequisite.
You have not shown that consciousness is a prerequisite, you have assumed it, and I explained why above.
Tisthammerw said:
The computer cannot literally understand any more than the man in the Chinese room understands a word of Chinese.
Are you referring once again to the original CR argument, where the man is simply passing notes back and forth? If so, this man indeed does not understand Chinese, nor does he need to.
Tisthammerw said:
So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).
Do you agree your argument is based on a tautology?
moving finger said:
I dispute your definition of understanding.
Tisthammerw said:
But this is what I mean when I use the term “understanding.”
Then we will have to agree to disagree, because it’s not what I mean
Tisthammerw said:
Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means.
Let me re-phrase that :
“Thus (using your definition) if a consciousness understands a Chinese word, it is necessarily the case that the consciousness is aware of what the Chinese word means.”
I agree with this statement.
But imho the following is also correct :
“If an agent understands a Chinese word, it is not necessarily the case that consciousness is associated with that understanding.”
This is clearly the case with the Chinese Room. The man is not conscious of understanding a word of Chinese.
Tisthammerw said:
Can we agree that a computer cannot “understand” given what I mean when I use the word?
If you mean “can we agree that a non-conscious agent cannot understand given the assumption that consciousness is a necessary pre-requisite of understanding” then yes I agree that this follows - but this is a trivial argument (in fact a tautology).

The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
With the greatest respect,
MF
 
  • #173
moving finger said:
Allow me to paraphrase your argument, to ensure that I have the correct understanding of what you are trying to say.
According to you (please correct me if I am wrong),
Consciousness = sensation, perception, thought, awareness
Understanding = grasp meaning of = knows what words mean = perceives meaning of words = is aware of truth of words

Fairly accurate, except that the last part should be "is aware of the truth of what the words mean."


Firstly, with respect, as I have mentioned already, in the case of consciousness this is a listing of some of the “components of consciousness” rather than a definition of what consciousness “is”.

I wouldn't say that. If an entity has a state of being such that it includes the characteristics I described, the entity has consciousness (under my definition of the term).


Secondly, I do not see how you make the transition from “Consciousness = sensation, perception, thought, awareness” to the conclusion “consciousness is a necessary pre-requisite for understanding”.

Simple. Understanding (as how I defined it) requires that the entity be aware of what the words mean (this would also imply a form of perception, thought etc.). This would imply the existence of consciousness (under my definition of the term). I’ll recap the definitions near the end of this post.


Tisthammerw said:
Can you see why the denial of what I said can be taken as a violation of the law of noncontradiction?

Your argument is based on a false assumption, which is that “he knows the meaning of the words without knowing the meaning of the words”

But I was not discussing the argument in the section I was referring to. As I mentioned in post #171 (https://www.physicsforums.com/showpost.php?p=790665&postcount=171), I said, "But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical." To which you replied, "Imho it is completely logical." And I thus said, "Imho you need to look up the law of noncontradiction" in post #169.


Tisthammerw said:
My conclusion logically follows given how I defined consciousness.

With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.

Not at all. My argument went as follows (some premises were implicit):

  1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
  2. My cat does possess those attribute(s).
  3. Therefore my cat has consciousness.


Tisthammerw said:
Is there a difference between "Searle understands Chinese" and "Searle's physical body understands Chinese"?
moving finger said:
There is an implicit difference, yes, because most of us (you and I included) when we talk about “Searle” implicitly assume that the “consciousness” that calls himself Searle is synonymous with the “physical body of Searle”. But “the consciousness that calls himself Searle” is not synonymous with the entire physical embodiment of Searle. Cut off Searle’s arm, and which one is now Searle – the arm or the rest of the body containing Searle’s consciousness? Searle would insist that he remains within the conscious part, his arm is no longer part of Searle, but logically the arm has a right to be also called part of the physical embodiment of Searle even though it has no consciousness.

So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?

Your response:

I did not say it does not take place in his brain.

Then perhaps you can understand why I asked the question.


Are you perhaps assuming that brain is synonymous with consciousness?

Are you?


Let Searle (or someone else) first tell me “where he has internalised the rulebook”, and I will then be able to tell you where the understanding takes place (this is Searle’s thought experiment, after all)

In the physical plane, it would be the brain would it not?


Tisthammerw said:
The part that has internalized the rulebook is his conscious self

I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not necessarily as a part of his consciousness.

Perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. What do you mean by it?


Then you would be behaving illogically. What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?

Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.


Tisthammerw said:
given how I defined understanding, isn't it clear that this person obviously doesn't know a word of Chinese?
First define “person”.

An intelligent, conscious individual.

With respect, I suggest by “person” you implicitly mean “consciousness”, and we both agree that the consciousness that calls itself “Searle” does not understand Chinese. Does that make you happy?

Happier anyway.


Nevertheless, there is a part of the physical body of Searle (which is not part of his consciousness) which does understand Chinese.

That is not possible under my definition of understanding. There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.


Tisthammerw said:
So, my argument is invalid because it is a tautology? Tautologies are by definition true and are certainly logically valid (i.e. if the premise is true the conclusion cannot fail to be true).

Do you agree your argument is based on a tautology?

It depends on what you mean by "tautology." If you are referring to an argument that is true by virtue of the definitions involved due to a repetition of an idea(s) (e.g. "all bachelors are unmarried"), then I agree that my argument is a tautology.

Tisthammerw said:
Thus (using my definition) if a person understands a Chinese word, it is necessarily the case that the person is aware of what the Chinese word means.

Then we will have to agree to disagree, because it’s not what I mean

Let me re-phrase that :
“Thus (using your definition) if a consciousness understands a Chinese word, it is necessarily the case that the consciousness is aware of what the Chinese word means.”
I agree with this statement.
But imho the following is also correct :
“If an agent understands a Chinese word, it is not necessarily the case that consciousness is associated with that understanding.”
This is clearly the case with the Chinese Room.

This clearly cannot be the case with the Chinese Room--if we use my definition of understanding. He cannot even in principle perceive the meaning of any Chinese word.

Let’s recap my definition of understanding.

When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean. When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean.

Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.


The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.

Then do you also disagree with the belief that all bachelors are unmarried? Remember what I said before about tautologies...

To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

(Note: “valid” in this case means that the output constitutes satisfactory answers [i.e. to an outside observer the answers seem “intelligent” and “rational”] to Chinese questions.)
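For concreteness, the kind of program being discussed here (a fixed set of instructions mapping Chinese input symbols to "valid" Chinese output symbols) can be caricatured in a few lines. The following is only an illustrative Python sketch with invented phrasebook entries and function names; it is not meant to settle whether such a system "understands":

Code:
# Toy "Chinese Room" style responder: purely syntactic lookup with no
# grasp of what the symbols mean. The rulebook entries are invented
# placeholders for illustration.
RULEBOOK = {
    "你好吗?": "我很好,谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗?": "会,我说中文。",     # "Do you speak Chinese?" -> "Yes, I speak Chinese."
}

def chinese_room(question: str) -> str:
    """Return the scripted reply for a known question, else a stock evasion."""
    return RULEBOOK.get(question, "请换一种说法。")  # "Please rephrase."

print(chinese_room("你会说中文吗?"))  # prints a fluent-looking reply

The output can look like competent Chinese, yet the program does nothing beyond string matching; whether that could ever count as understanding is exactly what is in dispute above.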
 
Last edited by a moderator:
  • #174
I said yes to the poll question. If we weren't created by God and there is no metaphysical component to our intelligence, if we are nothing but biological machines, then the answer is definitely yes. If there is a metaphysical component to us, then maybe yes, maybe no; but if no, it would come pretty darn close, close enough to fool almost anyone, like in Blade Runner.

I am of the belief that we operate by knowing rules. Everything we do is governed by rules. There is a group of AI researchers that believe this too and are trying to create intelligence by loading their construct with as many rules as they can. Most of what we are is rules and facts. Rules and facts can simulate whatever there is of us that isn't rules and facts and make AI appear to be self aware and intelligent (random choice for example or emotion). If you don't believe this, name something people do that isn't or couldn't be governed by rules and facts.
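As a rough sketch of the "rules and facts" approach described above (the predicates and rules below are invented for illustration and are not taken from any actual project), forward-chaining over a small fact base might look like this:

Code:
# Minimal "facts + rules" sketch: facts are (predicate, subject) pairs,
# and each rule says "if all premises hold for a subject, conclude the
# consequent for that subject". Purely illustrative.
FACTS = {("bird", "tweety"), ("cat", "felix")}
RULES = [
    ({"bird"}, "has_feathers"),
    ({"bird"}, "can_fly"),
    ({"cat"}, "has_fur"),
]

def infer(facts, rules):
    """Forward-chain: keep applying rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, consequent in rules:
            for _, subject in list(facts):
                if all((p, subject) in facts for p in premises) and (consequent, subject) not in facts:
                    facts.add((consequent, subject))
                    changed = True
    return facts

print(infer(FACTS, RULES))  # derives has_feathers/can_fly for tweety, has_fur for felix

Whether piling up enough rules and facts of this kind amounts to intelligence, or only to a convincing simulation of it, is the question being argued in this thread.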
 
  • #175
And we're still attempting to define this...

OK, I don't claim to be a neuroscientist, so bear with me.

In order to understand consciousness we need to understand the processes that come into play.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness

Assuming sufficient technological advance, we can grant any of these characteristics to a machine, including, but not limited to: sensation, perception (learning through observation, e.g. point at a chair and say "chair").

As far as I can tell, TH, your definition of consciousness is the ability to "understand" words and meaning through an associative process, which is the way we perceive it. Our brain processes input from our external senses, then compares it to our past experiences before determining a reaction, if any. E.g., when we hear the word chair, our ears send this signal to our brain, which then searches for that word, and if found associates it with the visual, aural, and other sensory input from memory. Then we sit in the chair. If we had never heard the word "chair" before, then our brain processes this as an unknown, and as a response will cause us to attempt to ascertain what this object is, what its use is, what it feels like, etc.

That's a very rough overview, but it will do. What you are saying is that a machine understands the word, due to the word "chair" being in its memory chip. But it's the same process. The machine's video perceives a chair. The CPU analyzes the external input and runs it against its memory banks to see if it knows this object. If so, it reacts accordingly; if not, it attempts to ascertain the purpose of the object. It's the same process. Unless you're talking about a specific aspect of understanding, such as emotion, there is no difference.
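A crude sketch of the "perceive, compare against memory, react or investigate" loop described above (the feature labels and the matching threshold are made up for illustration; real perception is of course far more involved):

Code:
# Toy associative recognition: compare observed features against stored
# memories and either react or investigate. All values are placeholders.
MEMORY = {
    "chair": {"legs": 4, "flat_seat": 1, "backrest": 1},
    "table": {"legs": 4, "flat_seat": 1, "backrest": 0},
}

def similarity(observed, remembered):
    """Count how many remembered features match the observed ones."""
    return sum(1 for k in remembered if observed.get(k) == remembered[k])

def perceive(observed):
    """Return the best-matching memory, or flag the object as unknown."""
    best = max(MEMORY, key=lambda name: similarity(observed, MEMORY[name]))
    if similarity(observed, MEMORY[best]) >= 2:
        return "recognised as " + best + ": react accordingly"
    return "unknown object: investigate what it is and what it is for"

print(perceive({"legs": 4, "flat_seat": 1, "backrest": 1}))  # -> chair
print(perceive({"legs": 0, "flat_seat": 0, "backrest": 0}))  # -> unknown

On this view the machine and the brain are both running some version of this compare-and-react loop; the disagreement is over whether the loop by itself is "understanding".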

TH, your Chinese room is inflexible and does not take into account that the Chinese man, as it relates to our purpose, IS capable of learning Chinese. What you're referring to is that even if the Chinese man knew the words, he couldn't associate the reference that the words make. However, as it relates to AI, he is capable of learning the words after being taught a few letters. So through deduction and trial and error, he will deduce the alphabet, then the meaning of the words. And when I say meaning, I mean through association (point at a chair: "this is a chair"). Then he will leave the room and, through description and observation, be able to deduce their meanings.

Yes, if we stick by the strict rules of the Chinese room, it makes sense. But the Chinese room contradicts the capabilities of AI. Therefore it cannot fully explain any limitations of AI.
 
  • #176
Tisthammerw said:
If an entity has a state of being such that it includes the characteristics I described, the entity has consciousness (under my definition of the term).
I understand. Your definition of consciousness is thus “any agent which possesses all of the characteristics of sensation, perception, thought and awareness is by definition conscious”, is that it?
We would then have to ask for the definitions of each of those characteristics – what exactly do we mean by “sensation, perception, thought, and awareness”? (without defining any of these words in terms of consciousness, otherwise we simply have a tautology).
Tisthammerw said:
Understanding (as how I defined it) requires that the entity be aware of what the words mean (this would also imply a form of perception, thought etc.). This would imply the existence of consciousness (under my definition of the term).
If, as you say, perception, thought etc are implicitly included in “awareness”, are you now suggesting that “awareness alone necessarily implies consciousness”? Should we revise the definition of consciousness above?
If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”. But defining awareness as “conscious awareness” makes the definition of “consciousness in terms of awareness” a tautology (“consciousness is characterised by a state of conscious awareness”), and therefore not very useful in terms of our epistemology.
All we achieve with a tautology is the following:
“If I define understanding as requiring conscious awareness then it follows that understanding requires consciousness”
This doesn’t really tell us very much does it?
The problem is that I do not agree that understanding requires conscious awareness. Thus we disagree at the level of your initial assumptions.
Tisthammerw said:
As I mentioned in post #171, I said, "But he doesn't know the meaning of any Chinese word! Are you saying he knows the meaning of the words without knowing the meaning of the words? That isn't logical." To which you replied, "Imho it is completely logical." And I thus said, "Imho you need to look up the law of noncontradiction" in post #169.
In which case I humbly apologise for this error on my part. My statement “Imho it is completely logical” was intended to respond to what I took be the implication that my own argument was not logical. What I should have said is that I do not agree with your assumption “he knows the meaning of the words without knowing the meaning of the words”, therefore (since the premise is disputed) your conclusion “that isn’t logical” is invalid.

moving finger said:
With respect, you have not shown how you arrive at the conclusion “my pet cat possesses consciousness”, you have merely stated it.
Tisthammerw said:
Not at all. My argument went as follows (some premises were implicit):
1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
2. My cat does possesses those attribute(s).
3. Therefore my cat has consciousness.
Your argument is still based on an implicit assumption – step 2.
If we assume your definition of consciousness is sufficient (I dispute that it is), then how do you “know” that your cat is aware?
Your earlier argument (as I have pointed out) already implies that “perception, thought etc” are subsumed into “awareness” – thus the acid test of consciousness (according to your own definition) should be the presence not of perception alone, but of awareness alone. Can you show that your cat is indeed “aware” (you need to define aware first)?
Tisthammerw said:
So where does this alleged understanding take place if not in Searle's brain? His arm? His stomach? What?
Tisthammerw said:
In the physical plane, it would be the brain would it not?
It could be, but then it’s not my thought experiment. If someone tells me he has internalised the rulebook, it is surely not up to me to guess where this internalised rulebook sits, is it?
Tisthammerw said:
The part that has internalized the rulebook is his conscious self
I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness. Consciousness is not a fixed or a physical object, it cannot "contain" anything in permanent terms, much less a rulebook or the contents of a rulebook. Consciousness is a dynamic and ever-changing process, and as such it may gain access to information contained in physical objects (such as a rulebook, or in memories, or in sense perception) but it does not contain any such objects, and it does not contain any permanent information.
Tisthammerw said:
Perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives. What do you mean by it?
His consciousness “participated in” the physical process of internalisation of the rulebook, but the rulebook does not sit “in his consciousness”. Consciousness is a dynamic and ephemeral process, it is not something that can “internalise something within itself”. What happens if we knock Searle unconscious? Is the rulebook destroyed? No, it continues to exist. When Searle regains consciousness, he can once again access the rulebook, not because his consciousness recreates it from nothing but because the rulebook now physically exists within his entity (but not in his consciousness).
moving finger said:
What part of “grasp the meaning of a word in Chinese” (ie an understanding of Chinese, by your own definition) would necessarily mean that an agent could respond to a question in English?
Tisthammerw said:
Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.
We have the same problem. By “aware” you implicitly mean “consciously aware”. If you define “awareness” as “conscious awareness” then I dispute that an agent needs to be consciously aware in order to have understanding. The internalised rulebook does NOT understand English (it is another part of Searle which “understands English”). Asking the internalised rulebook a question in English would be a test only of whether it understands English, not a test of whether it understands per se.
Tisthammerw said:
There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.
I think we keep covering the same ground. The basic problem (correct me if I am wrong) is that you define understanding as requiring conscious awareness. I dispute that. Most of our disagreement stems from that.
The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
Tisthammerw said:
Then do you also disagree with the belief that all bachelors are unmarried? Remember what I said before about tautologies...
Are you asking whether I agree with the definitions of your terms here, or with your logic, or with your conclusion?
If we agree on the definition of terms then if we follow the same logic it is a foregone conclusion that we will agree on the conclusion. The problem is that in the case of understanding and awareness we do not agree on the definition of terms.
Tisthammerw said:
the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?
Do we agree on what exactly?
I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.

With respect

MF
 
  • #177
Psi 5 said:
I said yes to the poll question. If we weren't created by God and there is no metaphysical component to our intelligence, if we are nothing but biological machines, then the answer is definitely yes.

Searle is not arguing that no artificial device can understand or be conscious;
he is arguing that no device can do so solely by virtue of executing rules.


I am of the belief that we operate by knowing rules. Everything we do is governed by rules. There is a group of AI researchers that believe this too and are trying to create intelligence by loading their construct with as many rules as they can. Most of what we are is rules and facts. Rules and facts can simulate whatever there is of us that isn't rules and facts and make AI appear to be self-aware and intelligent (random choice, for example, or emotion). If you don't believe this, name something people do that isn't or couldn't be governed by rules and facts.

Well, Searle's argument goes specifically against that conclusion.
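For concreteness, here is a minimal sketch (in Python) of the "rules and facts" approach quoted above: a handful of stored facts plus condition-to-conclusion rules, applied by forward chaining until nothing new can be derived. The facts, rules, and function names are invented for illustration only and do not describe any actual AI project.

```python
# Minimal sketch of a "rules and facts" system (illustrative only).
# The facts and rules below are made up for the example.

facts = {"socrates is a man", "it is raining"}

# Each rule pairs a condition (a fact that must already be known)
# with a conclusion (a new fact to add).
rules = [
    ("socrates is a man", "socrates is mortal"),
    ("it is raining", "the ground is wet"),
]

def forward_chain(facts, rules):
    """Repeatedly apply the rules until no new facts can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, conclusion in rules:
            if condition in derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(facts, rules))
# The question in this thread: does deriving "socrates is mortal" this way
# involve any understanding, or is it pure symbol manipulation?
```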
 
  • #178
Tournesol said:
Searle is not arguing that no artificial device can understand or be conscious;
he is arguing that no device can do so solely by virtue of executing rules.
...

Well Searle, tell me something you or anyone else does that isn't governed by rules.
 
  • #179
It's hard to say that computers can be conscious if I can't be sure that other people are.
 
  • #180
-Job- said:
It's hard to say that computers can be conscious if I can't be sure that other people are.
This is the whole point.
Unless and until we establish a "test for X", we cannot definitively say "this agent possesses X", where X could be consciousness or intelligence or understanding.
To develop a "test for X" implicitly assumes a definition of X.
And so far we seem unable to agree on definitions.

With respect.

MF
 
  • #181
I would like to point out that "intelligence" and "consciousness" are two totally different concepts. "intelligence" can be detected behaviourally, consciousness cannot (unless you REDEFINE the concept). "consciousness" means that observations are somehow "experienced", which is something completely internal to the subject being conscious, and has no effect on the behaviour of the BODY of the conscious being.
This is what makes solipsism possible: you're the only conscious being around. All other bodies around you, which you call people, BEHAVE in a certain way which is quite equivalent to how YOUR BODY behaves, but they are not necessarily *conscious*. They are intelligent, yes, because they can solve problems (= behavioural). But they are not conscious. There's no way to find out.
You could take the opposite stance, and claim that rocks are conscious. They behave as rocks, but they are conscious of their behaviour. They experience pain when you break some of their crystals. There's no way to find out, either.
We only usually claim that people are conscious and rocks aren't, by analogy of our own, intimate, experience.
You can even have unconscious structures, such as bodies, type long texts about consciousness. That doesn't prove that they are conscious.
The problem is that many scientific disciplines have redefined consciousness into something that has behavioural aspects, such as "brain activity", or "intelligence" or other things, but that's spoiling the original definition which is the internal experience of observation.
 
  • #182
vanesch said:
I would like to point out that "intelligence" and "consciousness" are two totally different concepts.
OK. But I don't think anyone suggested that they were similar concepts, did they?
vanesch said:
"intelligence" can be detected behaviourally, consciousness cannot (unless you REDEFINE the concept). "consciousness" means that observations are somehow "experienced", which is something completely internal to the subject being conscious, and has no effect of the behaviour of the BODY of the conscious being.
This is what makes solipsism possible: you're the only conscious being around. All other bodies around you, which you call people, BEHAVE in a certain way which is quite equivalent to how YOUR BODY behaves, but they are not necessarily *conscious*. They are intelligent, yes, because they can solve problems (= behavioural). But they are not conscious. There's no way to find out.
What do we conclude from this?

Properties that we define in subjective terms, such as consciousness, cannot be objectively tested. Such properties can only be inferred or assumed.

Properties that we define in objective terms, such as intelligence, can be objectively tested.

Thus : Is understanding subjective, or objective?

MF
 
  • #183
moving finger said:
Thus : Is understanding subjective, or objective?
Again, it depends on what you mean by "understanding". If by "understanding" you mean possessing enough organized information about a concept that you can use it in a problem-solving task, then "understanding" is part of "intelligence" and as such a more or less objective property, related to behavioural properties; the behavioural properties a teacher tests to see if his students "understand" the concepts he's teaching them. This requires no consciousness.
You can also mean by "understanding" the "aha experience" that goes with a certain concept; this is subjective of course (and can be wrong! You can have the feeling you understand something and be totally off), and probably related to consciousness. But it has no functional, behavioural role and is not necessary for demonstrating problem-solving skills.
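As a crude illustration of that behavioural sense of "understanding", here is a toy test in Python, assuming an invented agent interface (a function that answers a question string): the agent counts as understanding addition only if it can apply the concept to problems it has not seen before. Nothing about consciousness is tested or implied.

```python
# Toy behavioural "understanding test": check whether an agent can USE a
# concept (here, addition) on novel problems. Agents and problems are
# invented for illustration.
import random

def behavioural_understanding_test(agent, trials: int = 20) -> bool:
    """Return True if the agent solves unseen addition problems reliably."""
    for _ in range(trials):
        a, b = random.randint(0, 1000), random.randint(0, 1000)
        if agent(f"What is {a} + {b}?") != a + b:
            return False
    return True

def student_with_the_concept(question: str) -> int:
    # Parses the question and actually adds the numbers.
    a, b = question.removeprefix("What is ").removesuffix("?").split(" + ")
    return int(a) + int(b)

memorised = {"What is 2 + 2?": 4, "What is 3 + 5?": 8}
def student_with_rote_answers(question: str) -> int:
    # Only "knows" a few memorised question/answer pairs.
    return memorised.get(question, 0)

print(behavioural_understanding_test(student_with_the_concept))   # True
print(behavioural_understanding_test(student_with_rote_answers))  # almost certainly False
```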
 
  • #184
Psi 5 said:
Well Searle, tell me something you or anyone else does that isn't governed by rules.
Following rules is not the same as existing in virtue of following rules.
You are confusing necessary conditions with sufficient conditions.
 
  • #185
Zantra said:
Tisthammerw said:
Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness
assuming sufficient technological advance, we can grant any of these characteristics to a machine, including, but not exclusive to: sensation, perception (learning through observation, e.g. point at a chair and say "chair")

I disagree, at least with our current architecture. Confer the Chinese Room: a successful conversation, all without understanding. I believe we can program a machine to say "chair" but I don't believe the computer will understand any more than the man in the Chinese Room understands Chinese. Note also the story of the robot and program X. Even when the “right” program is being run, he doesn’t see or hear anything going on in the outside world.


TH, your Chinese Room is inflexible and does not take into account that the man in the room, as it relates to our purpose, IS capable of learning Chinese.

Which is not something anybody is disputing. The point is that the model of a complex set of rules acting on input etc. is not sufficient. Recall also the robot and program X counterexample. Even with the "right" program being run there is still no literal understanding (as I have defined it). Unless you can disprove this counterexample (and I don't think that can be done) the belief that the above model is capable of literal understanding has no rational basis.
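To make the model under discussion concrete, here is a toy sketch of the "complex set of rules acting on input to produce output" paradigm that the Chinese Room targets: a purely syntactic lookup from input strings to canned reply strings. The placeholder strings and the function name are invented; a real Turing-test-passing rulebook would have to be unimaginably larger, which is part of the point made earlier in the thread.

```python
# Toy "Chinese Room" rulebook: output is produced purely by matching input
# symbols against stored rules. The strings are placeholders, not real Chinese.

RULEBOOK = {
    "symbol-string-1": "reply-string-A",
    "symbol-string-2": "reply-string-B",
}

def chinese_room(input_symbols: str) -> str:
    """Return a reply by rule lookup alone; no symbol's meaning is represented."""
    return RULEBOOK.get(input_symbols, "default-reply")

print(chinese_room("symbol-string-1"))  # "reply-string-A", with no semantics involved
```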
 
  • #186
moving finger said:
I understand. Your definition of consciousness is thus “any agent which possesses all of the characteristics of sensation, perception, thought and awareness is by definition conscious”, is that it?

It’s amazing how quickly you can (unintentionally) distort my views. Let's look at a quote from the post you just responded to:

Tisthammerw said:
Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.

If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.


We would then have to ask for the definitions of each of those characteristics – what exactly do we mean by “sensation, perception, thought, and awareness”?

I'm getting a bit tired of playing the dictionary game, since we can go on like this forever (I define term A using words B, you ask what B means, I define it in terms of C, you ask what C means...). Go to www.m-w.com to look up the words. For “sensation” I mean definitions 1a and 1b. For “awareness” (look up “aware”) I mean definition 2. For “perception” (look up “perceive”) I mean definition 1a and 2. For “thought” I mean definition 1a.

Now if you still don’t know what I’m talking about even with a dictionary, I don’t know if I can help you.


If, as you say, perception, thought etc are implicitly included in “awareness”, are you now suggesting that “awareness alone necessarily implies consciousness”?

Please be careful not to distort what I am saying. I am saying that if an entity has perception, thought, etc., then that entity has consciousness. I didn't say awareness in the context you used (though it could be argued that perception and thought imply some sort of awareness).



If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”.

Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).


All we achieve with a tautology is the following :
“If I define understanding as requiring conscious awareness then it follows that understanding requires consciousness”
This doesn’t really tell us very much, does it?

If we define bachelors as being unmarried then it follows that all bachelors are unmarried.

Maybe it doesn't tell us much, but it doesn't change the fact that the statement is true and deductively valid. And frankly, I don't think that “knowing what the words mean” is such an unusual definition for “understanding” words.


Tisthammerw said:
Not at all. My argument went as follows (some premises were implicit):
1. If my cat possesses key characteristic(s) of consciousness (e.g. perception) then my cat possesses consciousness (by definition).
2. My cat does possess those attribute(s).
3. Therefore my cat has consciousness.

Your argument is still based on an implicit assumption – step 2.
If we assume your definition of consciousness is sufficient (I dispute that it is), then how do you “know” that your cat is aware?

Yes, we all know the problem of other minds. I concede the possibility that all the world is an illusion etc. But we could say that our observations (e.g. of my cat's behavior) are sufficient to rationally infer consciousness unless we have good reason to believe otherwise. Because of the Chinese Room and variants thereof, we do have good reason to believe otherwise when it comes to computers.



The part that has internalized the rulebook is his conscious self

I disagree. His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness.

I don't know how you can disagree here, given what I described. Ex hypothesi he consciously knows all the rules, consciously carries them out etc. But as I said, perhaps we are confusing each other's terms. When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives.

Tisthammerw said:
Because understanding Chinese words (as I have defined it) means he is aware of what the Chinese words mean, and thus (since he knows and understands English) he can tell me in English if he understands Chinese.

We have the same problem. By “aware” you implicitly mean “consciously aware”. If you define “awareness” as “conscious awareness” then I dispute that an agent needs to be consciously aware in order to have understanding.

First, be careful what you attribute to me. Second, remember my definition of understanding. Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.


Tisthammerw said:
There is no part of Searle--stomach, arm, liver, or whatever--that is aware of what the Chinese words mean.

I think we keep covering the same ground. The basic problem (correct me if I am wrong) is that you define understanding as requiring conscious awareness.

My definition of understanding requires consciousness (or at least, consciousness as how I defined it).


I dispute that.

Then please read my posts again if you dispute how I have defined it (such as https://www.physicsforums.com/showpost.php?p=791706&postcount=173). Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that? As I said earlier:

Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

Well, do we? (If we do, then we may have little to argue about.)


The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.

It isn't a premise, it's a logically valid conclusion (given what I mean when I use the terms).


The problem is that in the case of understanding and awareness we do not agree on the definition of terms.

Well, this is what I mean when I use the term understanding. Maybe you mean something different, but this is what I mean. So please answer my question above.


the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

Do we agree on what exactly?

On what I just described, “i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”

Again, you may mean different things when you use the words “understanding” and “consciousness.” My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?


I agree with your logic, but I disagree with your definition of the term understanding

So is that a yes?
 
  • #188
StykFacE said:
1st time post here... thought I'd post up something that causes much debate... but a good topic. ;-) (please keep it level-minded and not a heated argument)
Question: Can Artificial Intelligence ever reach Human Intelligence?
please give your thoughts... i vote no.

It's as if saying Fake can almost be Real.

Artificial Intelligence can always mimic Human Intelligence but NEVER would Human Intelligence mimic an Artificial Intelligence!

Artificial Intelligence is modelled on Human Intelligence, whereas Human Intelligence is the model for Artificial Intelligence.

People sometimes say that machines are smarter than human beings, but hey, who makes what? I did not say "who makes who?", since AI is certainly not a "who". Incomparable, isn't it? :smile:
 
  • #189
Oh... and I forgot a TINY thing! REAL can NEVER be FAKE!
 
  • #190
Oh... and I forgot a TINY thing! REAL can NEVER be FAKE!
 
  • #191
Sorry to post twice... my PC hangs :smile:
 
  • #192
vanesch said:
If by "understanding" you mean possessing enough organized information about a concept that you can use it in a problem-solving task, then "understanding" is part of "intelligence" and as such a more or less objective property, related to behavioural properties; the behavioural properties a teacher tests to see if his students "understand" the concepts he's teaching them. This requires no consciousness.
I would agree with this. I see no reason why a machine necessarily could not possesses this type of understanding.

MF
 
  • #193
Tournesol said:
Following rules is not the same as existing in virtue of follwing rules.
Does a machine which follows rules necessarily "not exist in virtue of following rules"?

MF
 
  • #194
Tisthammerw said:
If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.
Ah, I see now. Therefore an agent can have the characteristic only of “sensation”, but at the same time NOT be able to perceive, or to think, or to be aware, and still (by your definition) it would necessarily be conscious?
Therefore by your definition even the most basic organism which has “sensation” (some plants have sensation, in the sense that they can respond to stimuli) is necessarily conscious? I think a lot of biologists would disagree with you.
moving finger said:
If you ask the CR (in Chinese) “are you aware of what these words mean?”, its reply will depend on how it defines “awareness”. If awareness is defined as “conscious awareness” then (if it is not conscious) it will necessarily reply “no”.
Tisthammerw said:
Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).
Incorrect. If the CR also defines “awareness” as implicitly meaning “conscious awareness”, and it is not conscious, it would necessarily answer “No”.
moving finger said:
His conscious self may have “participated in the process of internalisation”, but once internalised, the internalised version of the rulebook exists within Searle but not as a part of his consciousness.
Tisthammerw said:
he consciously knows all the rules, consciously carries them out etc.
Here you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.
Tisthammerw said:
When I say he consciously internalized the rulebook, I mean that he has consciously memorized the rulebook, consciously knows all the rules, and consciously applies those rules to the input he receives.
Again you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.
Tisthammerw said:
Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.
You define “understanding” as requiring consciousness, thus it is hardly surprising that your definition of understanding requires consciousness! That is a classic tautology.
Tisthammerw said:
Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that?
By definition, if one chooses to define understanding such that understanding requires consciousness, then it is necessarily the case that for any agent to possess understanding it must also possess consciousness. I see no reason why a machine should not possess both consciousness and understanding. But this is not the point – I dispute that consciousness is a necessary pre-requisite to understanding in the first place.
Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?
The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.
moving finger said:
The whole point is that I disagree with the basic premise that “consciousness is a necessary pre-requisite of understanding”.
Tisthammerw said:
It isn't a premise, it's a logically valid conclusion (given what I mean when I use the terms).
Your definition of understanding is a premise.
You cannot show that “understanding requires consciousness” without first assuming that “understanding requires consciousness” in your definition of consciousness. Your argument is therefore a tautology.
Thus it does not really tell us anything useful.
moving finger said:
The problem is that in the case of understanding and awareness we do not agree on the definition of terms.
Tisthammerw said:
Well, this is what I mean when I use the term understanding. Maybe you mean something different, but this is what I mean. So please answer my question above.
I have answered your question. Now please answer mine, which is as follows :
Can you SHOW that “understanding” requires consciousness, without first ASSUMING that understanding requires consciousness in your definition of “understanding”?
(in other words, can you express your argument such that it is not a tautology?)
moving finger said:
Do we agree on what exactly?
Tisthammerw said:
On what I just described, “i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”
Using MY definition of “perceive” and “be aware”, yes, I believe computers can (in principle) perceive and be aware of what words mean.
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?
I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.
moving finger said:
I agree with your logic, but I disagree with your definition of the term understanding
Tisthammerw said:
So is that a yes?
My full reply was in fact :
“I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.”
If by your question you mean “do I agree with your conclusion?”, then I think I have made that very clear. NO.
May your God go with you
MF
 
  • #195
moving finger said:
Does a machine which follows rules necessarily "not exist in virtue of following rules"?
MF

No, not necessarily.
 
  • #196
moving finger said:
The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.

Definitions are not things which are true and false so much
as conventional or unusual.

Conventionally, we make a distinction between understanding and know-how.
A lay person might know how to use a computer, but would probably not claim
to understand it in the way an engineer does.
 
  • #197
Let's recap some terms before moving on:

Using the Chinese Room thought experiment as a case in point, let’s recap my definition of understanding.

When I say “understand” I mean “grasp the meaning of.” When I say “grasp the meaning of” I mean he actually knows what the Chinese words mean. When I say he knows what they mean, I am saying that he perceives the meaning of the words he sees/hears, or to put it another way, that he is aware of the truth of what the Chinese words mean.


Let’s recap my definition of consciousness.

Consciousness is the state of being characterized by sensation, perception (e.g. of the meaning of words), thought (e.g. grasping the meaning of words), awareness (e.g. of the meaning of words), etc. By the definition in question, if an entity possesses any of these characteristics the entity has consciousness.



moving finger said:
Tisthammerw said:
If a person has any of the characteristics of sensation, perception etc., not necessarily all of them. For instance, a person could perceive the meaning of words in his mind without sensing pain, the fur of a kitten etc.

Ah, I see now. Therefore an agent can have the characteristic only of “sensation”, but at the same time NOT be able to perceive, or to think, or to be aware, and still (by your definition) it would necessarily be conscious?

Not quite. Go to http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=sensation to once again read definition 1b of sensation.


Therefore by your definition even the most basic organism which has “sensation” (some plants have sensation, in the sense that they can respond to stimuli) is necessarily conscious? I think a lot of biologists would disagree with you.

You have evidently badly misunderstood what I meant by sensation. Please look up the definition of sensation again (1b). In light of what I mean when I use the terms, it is clear that plants do not possess consciousness.


Tisthammerw said:
Well, actually it will reply "yes" if we are to follow the spirit of the CR (simulating understanding, knowing what the words mean, awareness of what the words mean etc.).

Incorrect. If the CR also defines “awareness” as implicitly meaning “conscious awareness”, and it is not conscious, it would necessarily answer “No”.

It would necessarily answer “Yes” because ex hypothesi the program (of the rulebook) is designed to simulate understanding, remember? (Again, please keep in mind what I mean when I use the term “understanding.”)


Tisthammerw said:
he consciously knows all the rules, consciously carries them out etc.

Here you are assuming that “consciously knowing the rules” is the same as both (a) “consciously applying the rules” AND (b) “consciously understanding the rules”. In fact, only (a) applies in this case.

It depends what you mean by “consciously understanding the rules.” He understands the rules in the sense that he knows what the rules mean (see my definition of “understanding”). He does not understand the rules in the sense that, when he applies the rules, he actually understands Chinese.


Tisthammerw said:
Isn't it clear that understanding as I have explicitly defined it requires consciousness? If not, please explain yourself.

You define “understanding” as requiring consciousness, thus it is hardly surprising that your definition of understanding requires consciousness! That is a classic tautology.

That's essentially correct. Note, however, that my definition of understanding isn't merely “consciousness”; rather, it is about knowing what the words mean. At least we (apparently) agree that understanding--in the sense that I mean when I use the term--requires consciousness.


Tisthammerw said:
Now I'm not saying you can't define “understanding” in such a way that a computer could have it. But what about understanding as I have defined it? Could a computer have that?

By definition, if one chooses to define understanding such that understanding requires consciousness, then it is necessarily the case that for any agent to possess understanding it must also possess consciousness. I see no reason why a machine should not possess both consciousness and understanding.

Well then let me provide you with a reason: the Chinese room thought experiment. This is a pretty good counterexample to the claim that a “complex set of instructions acting on input etc. is sufficient for literal understanding to exist.” Unless you wish to claim that the man in the Chinese room understands Chinese (again, in the sense that I use it), which is pretty implausible.


But this is not the point – I dispute that consciousness is a necessary pre-requisite to understanding in the first place.

You yourself may mean something different when you use the term “understanding” and that's okay I suppose. But please recognize what I mean when I use the term.


Tisthammerw said:
To reiterate my point: the Chinese room (and its variants) strongly support my claim that programmed computers (under the model we’re familiar with; i.e. using a complex set of instructions acting on input to produce “valid” output)--even when they pass the Turing test--cannot literally understand (using my definition of the term); i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean. Do we agree on this?

The whole point is (how many times do I have to repeat this?) I DO NOT AGREE WITH YOUR DEFINITION OF UNDERSTANDING.

Please see my response above. Additionally, Tournesol made a very good point when he said: “Definitions are not things which are true and false so much as conventional or unusual.” We both may mean something different when we use the term “understanding,” but neither of our definitions is necessarily “false.” And this raises a good question: I have defined what I mean when I use the term “understanding,” so what’s your definition?

By the way, you haven't really answered my question here. Given my definition of understanding, is it the case that computers cannot have understanding in this sense of the word? From your response regarding understanding and consciousness in machines, the answer almost seems to be “yes,” but it’s a little unclear.


I have answered your question.

You didn't really answer the question here, at least not yet (you seem to have done it more so later in the post).


Now please answer mine, which is as follows :
Can you SHOW that “understanding” requires consciousness, without first ASSUMING that understanding requires consciousness in your definition of “understanding”?

Remember, tautologies are by definition true.

Can I show that understanding requires consciousness? It all depends on how you define “understanding.” Given my definition, i.e. given what I mean when I use the term, we seem to agree that understanding requires consciousness. (Tautology or not, the phrase “understanding requires consciousness” is every bit as sound as “all bachelors are unmarried”). You may use the term “understanding” in a different sense, and I'll respect your own personal definition. Please respect mine.

Now, to the question at hand:

(in other words, can you express your argument such that it is not a tautology?)

I don't know how to, but I don't think it matters. Why not? The argument is still perfectly sound even if you don't like how I expressed it. What more are you asking for?


Tisthammerw said:
“i.e. computers cannot perceive the meaning of words, nor can computers be aware of what words mean.”

Using MY definition of “perceive” and “be aware”, yes, I believe computers can (in principle) perceive and be aware of what words mean.

Well, how about my definitions of those terms? I made some explicit citations in the dictionary if you recall.


Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?

I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.

The Chinese room thought experiment, the robot and program X are very good reasons since they serve as effective counterexamples (again, using my definitions of the terms).


Tisthammerw said:
So is that a yes?

My full reply was in fact :
“I agree with your logic, but I disagree with your definition of the term understanding (which you define as requiring conscious awareness, rather than showing that it requires conscious awareness), therefore I disagree with your conclusion.”

I read your reply, but that reply did not give a clear “yes” or “no” to my question. So far, your answer seems to be “No, it is not the case that a computer cannot perceive the meaning of words...” but this still isn't entirely clear since you said “No” in the following context:


Using MY definition of “perceive” and “be aware”

I was asking the question using my definitions of the terms, not yours. Given the terms as I have defined them, is the answer yes or no? (Please be clear about this.) You said:

moving finger said:
Tisthammerw said:
My question is this, given what I mean when I use the words, is it the case that the computer lacks understanding in my scenarios? Do you agree that computers cannot perceive the meaning of words, nor can computers be aware of what words mean (at least with the paradigm of complex set of rules acting on input etc.)?

I see no reason why a computer cannot in principle be conscious, cannot in principle understand, or be aware, or perceive, etc etc.

So is the answer a “No” as it seems to be? (Again, please keep in mind what I mean when I use the terms.)
 
  • #198
As someone with basic AI programming experience, my vote goes to the no camp.

An intelligence is not defined by knowledge, movement or interaction. An intelligence is defined by the ability to understand, to comprehend.

I have never seen nor heard of an algorithm that claims to implement understanding. I have thought about that one for years and I still don't know where I would even begin.
 
  • #199
As someone with AI programming experience as well, I'd have to say yes, though I don't think we're on the right path in the industry at the moment.

Programmers and those who understand human emotion are almost mutually exclusive. That's the real reason we've not seen artificial intelligence become real intelligence yet, IMHO. Most people excel at either emotional or logical pursuits and believe their method superior to the other. Software engineers lean toward logic.

IMO emotion is the key to actual intelligence.

To think that somehow no other intelligence can arise is just a vestige of geocentrism, or otherwise human-centric beliefs that have been around since man first walked the earth. "Nothing can be as good as us, ever."

Basically this argument is almost religious in nature. Are we just machines made from different material than we're used to seeing machines made of, or are we somehow special?

Are we capable of creating AI that is no longer artificial in the sense that it can match some insect intelligence? Yes we can. Can we see examples of intelligence at every stage between insect and human? Yes we can, if you keep up with scientific news.

So someone tell me how this is not just a question of: are humans super special in the universe, or just another animal? Just another complex meat machine...

Know your own motivations behind your beliefs and you may find your beliefs changing.



Oh, and by the way, I do have a vague idea where to start: pleasure and displeasure. We first have to set up the single sliding scale that millions of years of survival of the fittest have boiled motivation down to. The basis of motivation. A computer has no motivation.
The ability to change certain parts of the self would be part of the next step (while abhorrence of changing the core must be high on the displeasure list).

Truth tables in which things link together, and in which links of experience or trusted sources become a sliding scale of truth or falsehood.

Faith: the ability to test and use something that is not fully true as though it were.

The reason gambling and other random-success situations become obsessive is that intelligence constantly searches for black and white: to be able to set an 83% to a virtual 100%.
The black-and-white search is the reason for the "terrible twos" in children. They simply want to set in stone the truth of what they can and cannot do. They need that solid truth to make the next logical leap. (You have to take for granted that a chair will hold you before you can learn how to properly balance to stand in a chair.) They make tests that stand upon what they consider "facts" (a virtual 100%), though nothing is ever truly 100% true. When parents reward and discipline at random, the child must hold as truth the only reliable thing it has: its own feelings. The child's mind is forever scarred with the inability to grasp truth that lies outside itself. (And those of you not overly politically correct will notice the intelligence gap in children that are poorly trained.)

Pigeons given an item that releases food every time they peck it will peck it only when they need food. Given the same situation, except that it drops food at random, the bird will become obsessed and create a pile of food as it tries to determine reliability and truth.


Human and animal intelligence is the model; we just haven't identified all the pieces. We haven't fully quantified what emotion is and does, and why it developed. (Though I have some good conjecture I'll keep to myself.)
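For illustration only, here is a minimal sketch of the ideas above, assuming a single pleasure/displeasure scalar as the basis of motivation and beliefs held with a sliding confidence that is treated as "virtually true" once it passes a threshold (the 83% figure above is reused). The class name, update rule, and numbers are invented for the example, not a description of any working system.

```python
# Sketch of a motivated agent: one sliding pleasure/displeasure scale plus
# beliefs with confidences that become "virtually true" past a threshold.
# All names and numbers are invented for illustration.

class MotivatedAgent:
    def __init__(self, virtual_truth_threshold: float = 0.83):
        self.pleasure = 0.0        # single sliding scale of motivation
        self.beliefs = {}          # proposition -> confidence in [0, 1]
        self.threshold = virtual_truth_threshold

    def experience(self, proposition: str, supported: bool, reward: float) -> None:
        """Nudge a belief's confidence toward what experience showed, and
        adjust the motivation scale by the reward (pleasure or displeasure)."""
        current = self.beliefs.get(proposition, 0.5)
        target = 1.0 if supported else 0.0
        self.beliefs[proposition] = current + 0.2 * (target - current)
        self.pleasure += reward

    def acts_as_true(self, proposition: str) -> bool:
        """'Faith': treat a belief as if it were true once confident enough."""
        return self.beliefs.get(proposition, 0.0) >= self.threshold

agent = MotivatedAgent()
for _ in range(10):
    agent.experience("the chair will hold me", supported=True, reward=0.1)
print(agent.acts_as_true("the chair will hold me"))  # True once confidence passes 0.83
```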
 
  • #200
You are getting down to the definition of self and self-awareness.

Emotion is perhaps a method of generating an intelligence; however, it is still only a 'responsive mechanism'. That is, an emotional change represents a reaction to something.

I think the layer we would be interested in sits above that: the layer which can comprehend something that will cause an emotional change.

So, I would have to say that emotions are not going to lead to that breakthrough, as emotion and intelligence are radically different concepts.

AI is trying to create a self-awareness; this must be able to self-analyse in the third person and comprehend it. Such a thing is not possible; even using neural nets and fuzzy logic I have never even seen a simplistic algorithm.

I feel that the main problem with AI is that the field has never really answered a basic set of questions:

1. What is intelligence?
2. What is the role of the universal framework (physics, etc) in the manifestation of intelligence?
3. What is self?
4. How do I recognise what self is?
5. How can I mimic it?

Unless accurate answers are established for the basic questions, any further research is just shots in the dark.
 
