Can computers understand? Can understanding be simulated by computers?

  • Thread starter: quantumcarl
  • Tags: China
Summary:
The discussion centers on John Searle's "Chinese Room" argument, which posits that computers cannot genuinely understand language or concepts, as they merely follow formal rules without comprehension. Critics argue that understanding may be an emergent property of complex systems, suggesting that the entire system, including the individual and the book, could possess understanding despite individual components lacking it. The conversation also explores the potential of genetic algorithms in artificial intelligence, questioning whether such systems can achieve a form of understanding without consciousness. Some participants believe that a sufficiently complex algorithm could surpass human understanding in specific contexts, while others maintain that true understanding requires consciousness. The debate highlights the need for clear definitions of "understanding" and "consciousness" to facilitate meaningful discussion on the capabilities of computers.
  • #121
quantumcarl said:
Yes, the understanding of a language would not only be the personal understanding of the language but the personal experience of comprehending the language. Comprehension is close to being another compound verb... like the word understanding, only, comprehension describes the ability to "apprehend" a "composition" of data. A personal understanding of the language would be based on personal experiences with the language.
It is rare that I hear someone claiming to "understand a language". The more common declaration of language comprehension is "I speak Yiddish" or "Larry speaks Cantonese" or "Sally knows German".
To say "I understand Yiddish" is a perfect example of the incorrect use of english and represents the misuse and abuse of the word "understand".
The word, "understand", represents the speaker's or writer's position (specifically the position of "standing under") with regard to a topic.
The word "understand" describes that the individual has experienced the phenomenon in question and has developed their own true sense of that phenomenon with the experiencial knowledge they have collected about it. This collection of data is the "process of understanding" or the "path to understanding" a phenomenon. This is why I dispute MF's claim that there are shades of understanding. There is understanding and there is the process of attaining an understanding. Although, I suppose the obviously incorrect use of the word understanding could be construed as "shady".
When two people or nations "reach an understanding" during a dispute, they match their interpretations of events that have transpired between them. They find common threads in their interpretations. These commonalities are only found when the two parties' experiences are interpreted similarly by the two parties. There then begins to emerge a sense of truth about certain experiences that both parties have experienced. After much examination and investigation... an understanding (between two parties) is attained by the light of a common, cross-party interpretation of the phenomenon, or specific components of the phenomenon, in question.
I completely agree. I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?
 
  • #122
Reply to post #102 :
Tisthammerw said:
To recap the definitions I’ll be using:
In terms of a man understanding words, this is how I define understanding:
* The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.
This particular definition of understanding requires consciousness. The definition of consciousness I’ll be using goes as follows:
* Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:
* Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
So, if a person is aware of the meaning of words, then by definition the individual possesses consciousness. A cautionary note: it is entirely possible to create other definitions of the word “understand.” One could define “to grasp the meaning of” in such a way that would not require consciousness, though this form of understanding would perhaps be more metaphorical (at least, metaphorical to the definition supplied above). For the purposes of this thread (and all others regarding this issue) these are the definitions I’ll be using, in part because “being aware of what the words mean” seems more applicable to strong AI.
A note at this point. As you know, I do not agree with your definition of understanding. My definition of understanding does not require consciousness. In my responses I may therefore refer to TH-Understanding and MF-Understanding to distinguish between understanding as defined by Tisthammerw and understanding as defined by MF respectively.
Tisthammerw said:
Let's call the “right” program that, if run on the robot, would produce literal understanding “program X.” Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.
You have merely “asserted” that no understanding is taking place. How would you “show” that no understanding (TH or MF type) is taking place? What test do you propose to carry out to support your assertion? In the absence of any test, your assertion remains just that – an assertion.
Tisthammerw said:
So it seems that even having the “right” rules and the “right” program is not enough even with a robot.
So it seems your thought experiment is based on “I assert the system as described does not possess understanding, hence I conclude it does not possess understanding”.
Where is your evidence? Where have you “shown” that it does not understand?
Tisthammerw said:
Some strong AI adherents claim that having “the right hardware and the right program” is enough for literal understanding to take place. In other words, it might not be enough just to have the right program. A critic could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But it isn’t clear why that would be a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?
The above paragraph is not relevant until the first question has been answered – ie How would you “show” that no understanding is taking place? What test do you propose to carry out to support your assertion?
MF
 
  • #123
TheStatutoryApe said:
Searle builds the CR in such a manner that it is a reactionary machine spitting out preformulated answers to predetermined questions. The machine is not "thoughtful" in this process, that is it does not contemplate the question or the answer, it merely follows its command to replace one predetermined script with another preformulated script.
I’m not sure I fully agree here. Why do you think the specified machine is not “thoughtful”? Is this stipulated by Searle?
If the human brain is algorithmic then it follows that human thought can be encoded into a machine. Such a machine would be thoughtful in the same way that the human brain is thoughtful.
TheStatutoryApe said:
I do not believe that this constitutes "understanding" of the scripts contents.
I agree if the machine is not thoughtful then it does not understand. Where we seem to disagree is on whether the machine is thoughtful.
TheStatutoryApe said:
First of all as already pointed out the machine is purely reactionary.
I’m not sure exactly what you mean by reactionary. If you simply mean “lacking in thoughtfulness” then as I have said I disagree this is necessarily the case.
TheStatutoryApe said:
It doesn't "think" about the questions or the answers it only "thinks" about the rote process of interchanging them, if you would like to even consider that thinking.
Did Searle actually specify as one of the conditions of his thought experiment that the machine was not permitted to “think about the questions”? This certainly is not a precondition of the Turing test, so if this is indeed one of Searle’s conditions then his test is clearly a false example or corruption of the Turing test.
TheStatutoryApe said:
The whole process is predetermined by the designer who has already done all of the thinking and formulating (this part is important to my explanation but I will delve into that deeper later).
I don’t see how this is relevant, but will defer to later.
TheStatutoryApe said:
The CR never considers the meanings of the words only the predetermined relationships between the scripts.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
TheStatutoryApe said:
In fact the designer does not even give the CR the leeway by which to consider these things, it is merely programmed to do the job of interchanging scripts.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
TheStatutoryApe said:
Next, the CR is not privy to the information that is represented by the words it interchanges. Why? It isn't programmed with that information. It is only programmed with the words.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
The rest of your post is rather lengthy, but I get the gist of it.
What we need to clarify, imho, is whether the machine (or the CR) is in fact “deliberately constrained by Searle” in the way you have suggested. If this is indeed the case then I would agree with you that the machine does NOT possess anything that could be called true understanding, but I would also suggest that it would fail the Turing test (e.g. adequate cross-examination of its understanding of semantics would reveal that it did not truly understand semantics).
TheStatutoryApe said:
The designer instead of a program could create a simple mechanical device. It could be a cabinet. On a table next to the cabinet there could be several wooden balls of various sizes. On these balls we can have questions printed, these parallel the predetermined questions in the CR program. There can be a chute opening on top of the cabinet where one is instructed to insert the ball with the question of their choice, paralleling the slot where the message from the outside world comes to the man in the CR. Inside the cabinet the chute leads to an inclined track with two rails that widen as the track progresses. At the point where the smallest of the balls will fall between the rails the ball will fall and hit a mechanism that will drop another ball through a chute and into a cup outside the cabinet where the questioner is awaiting an answer. On this ball there is printed an answer corresponding to the question printed on the smallest of the balls. The same is done for each of the balls of varying sizes. The cabinet is now answering questions put to it in fundamentally the same manner as the CR. The only elements lacking are the vastness of possible questions and answers and the illusion that the CR does not know the questions before they are asked.
So does that cabinet possess understanding of the questions and answers? Or does it merely reflect the understanding of the designer?
My belief is that such a simplistic cabinet would fail a properly conducted Turing test. However, this is not to say that understanding could not in principle arise in a sufficiently complex mechanical processing device (the neural pathways in a human brain and the electronic gates in silicon chips are but two substrates for storing and processing information - the same processes of understanding could in principle be enacted by a mechanical device or even by an operator consulting a rule-book)
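To make concrete what I mean by the cabinet "simply taking input and translating that into output", here is a rough computational sketch of what the cabinet (and the bare CR rulebook) reduces to. The questions and answers are my own invented placeholders, not anything Searle or you have specified:
```python
# A rough sketch (invented placeholder questions/answers) of what the cabinet,
# and the bare CR rulebook, reduce to: a fixed mapping from recognised input
# symbols to pre-formulated output symbols. No state, no inference.

RULEBOOK = {
    "What colour is the sky?": "Blue.",
    "What is two plus two?": "Four.",
    "How are you today?": "Very well, thank you.",
}

def cabinet(question: str) -> str:
    """Drop the answer ball that corresponds to the question ball, if any."""
    return RULEBOOK.get(question, "(no answer ball drops)")

print(cabinet("What is two plus two?"))     # a recognised ball: pre-set answer
print(cabinet("What did I just ask you?"))  # unrecognised: nothing sensible
```
Cross-examination of the kind the second call illustrates is exactly why I believe such a device would fail a properly conducted Turing test.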
TheStatutoryApe said:
Now on the difference between a "perfect simulation" and the real thing.
I guess this depends really on your definition of "simulation". Mine personally is anything that is made to fool the observer into believing it is something other than it really is. A "perfect simulation" I would classify as something that fools the observer in every way until the simulation is deconstructed showing that the elements that seemed to be present were in fact not present. If something can be deconstructed completely and still be indistinguishable from the real thing then I would call that a "reproduction" as opposed to a "simulation".
Good point. Reproduction is a better word.
TheStatutoryApe said:
In the case of the CR I would say that once you have deconstructed the simulation you will find that the process occurring is not the same as the process of "understanding" as you define it (or at least as I define it) even though it produces the same output.
Yes, but I would not call the CR “as you just described it” a perfect simulation; it is instead a deliberately constrained translation device. I would not claim that it possesses understanding, and also would not claim that it would pass a properly constructed Turing test.
TheStatutoryApe said:
And if it produces the same output then what is the difference between the processes? If all you are concerned about is the output then none, obviously. If you care about the manner in which they work then there are obviously differences. Also which process is more effective and economical? Which process is capable of creative and original thinking? Which process allows for free will (if such a thing exists)? Which process is most dynamic and flexible? I could go on for a while here. It stands that the processes are fundamentally different and allow for differing possibilities though are equally capable of communicating in Chinese.
OK, but simply having “two agents with fundamentally different processes” does not allow us to conclude that one agent understands and the other does not. I have no idea whether your process of understanding is the same as mine – indeed I am sure that our respective processes differ in many ways – but that does not mean that one of us understands and the other does not.
The proof of the pudding is in the eating – this is what the Turing test is all about – if an agent “demonstrates consistently and repeatedly in the face of all reasonable tests that it understands” then it understands. This is what I thought the CR argument (being supposedly based on the Turing test) was supposed to show. This is the only way I have of knowing whether another human being understands!

Excellent standard of post by the way,

MF
 
  • #124
As I've stated before, Searle has specifically built a construct that will fail to "understand". Also I do not believe that the CR was meant specifically to target the Turing test but to target a particular AI program of the time. He may have actually built it with parameters mirroring that of the program. I'll see if I can find the description of Searle's manuals for the CR...
Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response." Those giving you the symbols "call the first batch 'a script' [a data structure with natural language processing applications], "they call the second batch 'a story', and they call the third batch 'questions'; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'"
This is probably the closest thing to the original I could find quickly.

First let me comment on some points you are questioning. Specifically the CR's "thoughtfulness" or lack thereof and whether or not Searle specified this. The fact is that Searle does not specify anything about thoughtfulness on the part of the CR. The fact that he leaves this out automatically means the CR is incapable. If a program is to think it must be endowed with the ability and nothing in Searle's construction of the CR grants this, save for the human, but the human really only represents a processor.
The "program" by the definition quoted above is completely static and I think we have already agreed that a static process does not really qualify as "understanding". The human is a dynamic factor but I do not believe Searle intends the human to tamper with the process occurring. If the human were to experiment by passing script that is not part of the program then it will obviously not be simulating a coherant conversation in chinese any longer. This is why I prefer my cabinet because it does fundamentally the same thing without muddling the experiment by considering the possible actions and qualities of the human involved.

MF said:
Yes, but I would not call the CR “as you just described it” a perfect simulation; it is instead a deliberately constrained translation device. I would not claim that it possesses understanding, and also would not claim that it would pass a properly constructed Turing test.
Yes I tried to point this out before but I guess you still did not understand what I meant about the way in which the CR works. I stated that figuring all of the possible questions and all of the possible answers (or stories) then cross referencing them in such a way that the CR could simulate a coherent conversation in Chinese would be impossible in reality. Since you stated that we are only talking about the principle rather than reality I decided to consider that if such a vast reserve of questions and answers were possible in theory it could carry on a coherent conversation and pass the Turing test. I suspended deciding whether or not the theory would actually work since it would be hard to argue that it wouldn't if we have a hypothetically infinite instruction manual.
Come to think of it though we should find that it is incapable of learning. Should we happen to find a piece of information that it is not privy to, it will not understand it or produce valid output regarding it. You could adjust the CR so that it can take in the new words and apply them to the existing rules, but without giving it the ability to actually learn it will only parrot the new information back utilizing the same contexts that it was initially given. Then there's an issue of memory. Will it remember my name when I tell it my name and then be able to answer me when I ask it later? Of course once you introduce these elements we are no longer talking about Searle's CR and are coming much closer to what you and I would define as "understanding".
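To illustrate the memory point with a toy example (the names and phrasing are invented for illustration and are not part of Searle's set-up): the bare CR has nowhere to put the fact that you told it your name, whereas even a trivially stateful program does, though of course this alone is still a long way from understanding:
```python
# Toy illustration (invented, not Searle's program): the bare CR-style lookup
# keeps no state, so telling it your name leaves no trace. Adding even one
# dynamic variable lets it answer "What is my name?" later.

class StatefulRoom:
    def __init__(self):
        self.memory = {}  # the kind of dynamic element the bare CR lacks

    def reply(self, message: str) -> str:
        if message.startswith("My name is "):
            self.memory["name"] = message[len("My name is "):]
            return "Pleased to meet you."
        if message == "What is my name?":
            return self.memory.get("name", "You have not told me.")
        return "I have no rule for that."

room = StatefulRoom()
print(room.reply("My name is Wei"))    # stored in the dynamic variable
print(room.reply("What is my name?"))  # answered from memory: "Wei"
```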
The place where I think we can logically disagree with Searle's (and Tisthammerw's) conclusions about the CR is where he seems to believe that no matter how you change it, it will always lack understanding. This is of course in part due to the issue of his conclusions in regard to syntax and semantics.
MF said:
OK, but simply having “two agents with fundamentally different processes” does not allow us to conclude that one agent understands and the other does not. I have no idea whether your process of understanding is the same as mine – indeed I am sure that our respective processes differ in many ways – but that does not mean that one of us understands and the other does not.
The proof of the pudding is in the eating – this is what the Turing test is all about – if an agent “demonstrates consistently and repeatedly in the face of all reasonable tests that it understands” then it understands. This is what I thought the CR argument (being supposedly based on the Turing test) was supposed to show. This is the only way I have of knowing whether another human being understands!
The two of us may not think alike and learn in the same fashion but I believe the very basic elements of how we "understand" are fundamentally the same.
In the case of the CR as built by Searle the process theoretically yields the same results in conversation but likely does not yield the same results in other areas.
 
  • #125
MF said:
I’m not sure exactly what you mean by reactionary. If you simply mean “lacking in thoughtfulness” then as I have said I disagree this is necessarily the case.
Ah! This reminds me. I read an essay not that long ago on the issue of consciousness that a friend sent me. I'm not sure if I'll be able to find it again but I will try. You may find it interesting.
It was written by a determinist who I believe was very against the notion of "free will" and consciousness as being any sort of special property. He based this partly on the idea that the majority of the tasks we do daily (in his opinion all of them) are done by rote, not requiring meaningful thought for any of them. In this I think he had a point, though he, either intentionally or naively, did not discuss any tasks that seem very obviously to require meaningful thought, such as problem solving.
He touches on the homunculus concept interlinked with his own personal theory on how the notion of "self" came to be. He believes that it has only been around for perhaps a couple thousand years. This latter bit of course is based on rather flimsy evidence, which I actually found rather amusing.

At any rate, the concept that we do not invoke meaningful thought for the majority of our daily activities I thought was a good point and noteworthy in such a discussion as we are having. I'll see if I can find it, and if I can't I'll try to give you a summary of his argument as best I can remember it.
 
  • #126
TheStatutoryApe said:
I completely agree. I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?

If the Artificial Intelligence community lacks the social responsibility and imagination to produce an alternative word to describe being "understood" by a computer, then I would recommend the Artificial Intelligence community qualify the "type of understanding" they are referring to by terming it "Artificial Understanding", in the same manner that "Artificial Intelligence" distinguishes a quality of intelligence when compared to the human intelligence that created Artificial Intelligence.

Then, I'd like to see some tests whereby the human members of the Artificial Intelligence community attempt to prove that understanding can take place in a currently existing computer... perhaps by replacing one of the AI community member's psychotherapists with a computer or replacing a member's closest friend with a computer. If this involves downloading all the experiential information of the therapist or friend into the computer... by all means, knock yourself out.
 
  • #127
TheStatutoryApe said:
As I've stated before, Searle has specifically built a construct that will fail to "understand". Also I do not believe that the CR was meant specifically to target the Turing test but to target a particular AI program of the time. He may have actually built it with parameters mirroring that of the program. I'll see if I can find the description of Searle's manuals for the CR...
Here is a direct quote from Searle in his 1997 book “The Mystery of Consciousness” :
John Searle said:
Imagine I am locked in a room with a lot of boxes of Chinese symbols (the “database”). I get small bunches of Chinese symbols passed to me (questions in Chinese), and I look up in the rule book (the “program”) what I am supposed to do. I perform certain operations on the symbols in accordance with the rules (that is, I carry out the steps in the program) and give back small bunches of symbols (answers to the questions) to those outside the room.
The critical part is the phrase "I perform certain operations on the symbols in accordance with the rules". How do we interpret this? Does it mean that Searle simply manipulates a static information database (in which case I agree no dynamic variables, no thoughtfulness, no learning), or does it mean that Searle can create and manipulate dynamic variables as part of the process, and possibly also cause changes to the program itself (in which case I would argue that there could be learning, and this also opens the door to thoughtfulness)? It all hinges on whether the database and program are static parts of the process, or whether both database and program are dynamic (i.e. changeable).
TheStatutoryApe said:
First let me comment on some points you are questioning. Specifically the CR's "thoughtfulness" or lack thereof and whether or not Searle specified this. The fact is that Searle does not specify anything about thoughtfulness on the part of the CR. The fact that he leaves this out automatically means the CR is incapable. If a program is to think it must be endowed with the ability and nothing in Searle's construction of the CR grants this, save for the human, but the human really only represents a processor.
With respect I think this is a matter of interpretation. If Searle did not “rule it out” but also “did not rule it in” then it is not clear to me that the CR is indeed incapable of thinking. “Thinking” is a process of symbol manipulation, and I think we already agree that the enactment of the CR includes the manipulation of symbols (because we agree the CR "understands" syntax). I see nothing that allows us to conclude “the symbol manipulation is purely syntactic, and there is no real thinking taking place”.
TheStatutoryApe said:
The "program" by the definition quoted above is completely static and I think we have already agreed that a static process does not really qualify as "understanding".
By static program and process I assume you mean that the program is not itself modified as part of the enactment of the CR?
Again I think this may be open to interpretation. I do agree that dynamic processing/manipulation of information and dynamic evolution of the program would be essential to enable thoughtfulness and learning.

If the CR as described is simply manipulating a static database, with input and output, but with no dynamic variables and no modification of the program then there could be no thoughtfulness and no learning.

As I suggested above, Searle’s brief description of the process of “performing certain operations on the symbols” is very vague, and does not specify whether any dynamic variables are created and manipulated as part of the process. I would agree that the absence of dynamic variables, and the absence of any “evolution” of the program itself (dynamic programming?), would be important missing elements, and such elements are probably essential for any “thinking” to take place, but it is not clear from Searle’s description that both dynamic variables and dynamic programming are indeed missing.
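To pin down the distinction I am drawing, here is a crude sketch (an invented example, not anything Searle specifies): dynamic variables are state that changes as the process runs, while what I loosely called dynamic programming is the rule set itself being modified at run time:
```python
# Crude sketch (invented example, nothing Searle specifies) of the distinction:
# "dynamic variables" = state that changes as the process runs;
# "dynamic programming" (in my loose sense) = the rule set itself can change.

rules = {"hello": "hello to you"}   # the "program": fixed unless modified below
state = {}                          # dynamic variables

def step(message: str) -> str:
    # Self-modification: a "define X as Y" input adds a new rule to the program.
    if message.startswith("define "):
        key, _, value = message[len("define "):].partition(" as ")
        rules[key] = value
        return "rule added"
    # Dynamic variable: remember the most recent input.
    state["last"] = message
    return rules.get(message, "no rule")

print(step("hello"))               # uses an existing rule
print(step("define sky as blue"))  # modifies the program itself
print(step("sky"))                 # the newly learned rule now applies
```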
TheStatutoryApe said:
The human is a dynamic factor but I do not believe Searle intends the human to tamper with the process occurring. If the human were to experiment by passing script that is not part of the program then it will obviously not be simulating a coherent conversation in Chinese any longer.
Agreed, but it is the human which “enacts” the process, turning the static program + static data into a dynamic process.
TheStatutoryApe said:
This is why I prefer my cabinet because it does fundamentally the same thing without muddling the experiment by considering the possible actions and qualities of the human involved.
OK, I understand. In the cabinet example the “process” is clearly dynamic, but again it would appear that the database is static and there are no dynamic “variables”, and certainly no dynamic programming – again the cabinet is simply taking input and translating that into output. No dynamic variables, no thinking possible.
TheStatutoryApe said:
figuring all of the possible questions and all of the possible answers (or stories) then cross referencing them in such a way that the CR could simulate a coherent conversation in Chinese would be impossible in reality.
Perhaps so, but it is supposed to be a thought experiment.
TheStatutoryApe said:
Since you stated that we are only talking about the principle rather than reality I decided to consider that if such a vast reserve of questions and answers were possible in theory it could carry on a coherent conversation and pass the Turing test. I suspended deciding whether or not the theory would actually work since it would be hard to argue that it wouldn't if we have a hypothetically infinite instruction manual.
That is an interesting point – if the instruction manual were potentially infinite then I guess it is conceivable that it could in principle perfectly simulate understanding, and pass the Turing test.
TheStatutoryApe said:
Come to think of it though we should find that it is incapable of learning.
Yes, this follows from the absence of any dynamic variables. But “inability to learn” does not by itself equate with “inability to understand”.
TheStatutoryApe said:
Should we happen to find a piece of information that it is not privy to, it will not understand it or produce valid output regarding it. You could adjust the CR so that it can take in the new words and apply them to the existing rules, but without giving it the ability to actually learn it will only parrot the new information back utilizing the same contexts that it was initially given. Then there's an issue of memory. Will it remember my name when I tell it my name and then be able to answer me when I ask it later? Of course once you introduce these elements we are no longer talking about Searle's CR and are coming much closer to what you and I would define as "understanding".
Yes, to do the above we would need to introduce dynamic variables.
TheStatutoryApe said:
The two of us may not think alike and learn in the same fashion but I believe the very basic elements of how we "understand" are fundamentally the same.
In the case of the CR as built by Searle the process theoretically yields the same results in conversation but likely does not yield the same results in other areas.
If the CR does not contain dynamic variables or dynamic programming then I agree it would be simply a “translating machine” with no understanding. Whether such a room could pass the Turing test is something I’m still not sure about.
MF
 
  • #128
moving finger said:
Tisthammerw said:
When calling it analytic, I have been specifically referring to my own definitions. I don’t know why you have insisted on ignoring this.

I have been ignoring nothing – I have been responding literally to the statement referred to. The statement “understanding requires consciousness” is a stand-alone statement.

If by “stand-alone” you mean it can be known as analytic without defining the terms, I disagree. The terms have to be defined if they are to be considered analytic. And you have not been responding to the statement as it has been referred to: “understanding requires consciousness” refers to the terms as I have defined them.


Imho what you meant to say (should have said) is “understanding as defined by Tisthammerw requires consciousness” is an analytic statement (then I would of course have agreed)

With all due respect, what the @#$% do you think I’ve been saying this whole time? At least, what the @#$% do you think I meant regarding statements like:

Tisthammerw said:
My definition of understanding is what you have called “TH-understanding.” Therefore, my conclusion could be rephrased as “consciousness is required for TH-understanding.” I have said time and time again that the statement “understanding requires consciousness” is analytic for my definitions; not necessarily yours.
[post #109 of this thread]

Tisthammerw said:
Remember, I was talking about the analytic statement using the terms as I have defined them.
[post #99 of this thread]

Tisthammerw said:
To reiterate, my “argument” when it comes to “understanding requires consciousness” is merely to show that the statement is analytical (using the terms as I mean them).
[post #99 of this thread]

Tisthammerw said:
[Justifying the analytic statement]
The first premise is the definition of understanding I'll be using….

To see why (given the terms as defined here) understanding requires consciousness

Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms. You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.
[post #88 of this thread]

Tisthammerw said:
[Explicitly defines what I mean by understanding and consciousness]

To see why (given the terms as defined here) understanding requires consciousness…

Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement

….

My definition of understanding requires consciousness. Do we agree? Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that. Does your definition of understanding require consciousness? I'm not claiming that either. But understanding in the sense that I use it would seem to require consciousness.

[post #75 of this thread]

And this is just this thread, I am not counting all the times I explained this to you previously. So you can understand why I find your claim “I have been ignoring nothing” a bit hard to swallow.


Tisthammerw said:
That “TH-understanding requires consciousness” is an analytic statement is of course what I’ve been claiming for quite some time now.

With respect, this is incorrect.

With respect, you are wrong. You have called my definition of understanding “TH-understanding” but methinks I have been rather clear in pointing out that “understanding requires consciousness” is referring to my definition of the term “understanding.”


I hope you can see and understand the difference in the statements “TH-Understanding requires consciousness” and “understanding requires consciousness”? They are NOT the same.

They are the same if I explicitly point out that the definition of “understanding” I am using is my own explicitly defined definition (what you have called TH-understanding).


The last part, “’Understanding requires consciousness’ is a synthetic statement” is not quite true methinks, because it does not seem to be the sort of thing that can be determined by observation.

We may not yet be able to agree on a “test for understanding”, but that does not mean that such a test is impossible.

Doesn’t mean it’s possible either. My point is that you seem to have no grounds for claiming the statement “understanding requires consciousness” to be a synthetic statement.


Why should I accept that it is analytic just because you choose to define understanding differently?

Because I am claiming that “understanding requires consciousness” is analytic if we use my definition of the terms; I’m not claiming this statement is universally analytical for all people’s definitions of those terms.


Tisthammerw said:
Note: I do tend to read your complete answers, what makes you think I have not done so in this case (whatever case you are insinuating)?

Then I must assume that you are misreading or misunderstanding?

Then I must assume that question be better applied to you (see above for an example)?




Tisthammerw said:
The “premise” in this case is “understanding requires consciousness” but since this claim of mine rather explicitly refers to my definition of the terms (and not necessarily everybody else’s) you can understand my response. Please read my complete responses more carefully.

I am, as far as I can be, rigorous and methodical in my interpretation of logic and reasoning. If you assert that the statement “understanding requires consciousness” is analytic then I take this assertion at face value.

You would do better to read it in context.

Tisthammerw said:
My method of attack isn’t showing that computers can’t possess consciousness, merely that they do not possess what you call “TH-understanding.”

How have you shown this?

Post #102, but then you respond to this later.


Tisthammerw said:
“Does the combination of the book, the paper, and the man somehow magically create a separate consciousness that understands Chinese? That isn’t plausible.” So the analytic statement can be useful for my rejoinders.

Why should it need to be conscious? MF-Understanding does not require consciousness in order to enable the CR to understand Chinese.

But I am not referring to your definition of understanding here, I am referring to mine. Please read what I say in context.


Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.

Firstly, if you read my entire post #91 of this thread (which I know you have done, because you have just told me that you DO read all my posts carefully) then you will see that “after cogitation” I refined my idea of “what kind of perception is required” for understanding. The only part of perception necessary for understanding is the kind of introspective perception of the kind “I perceive the truth of statement X”. The sense-dependent type of perception which most humans think of when “perceiving” is simply a means of information transfer and is NOT a fundamental part of understanding in itself.

So, is the answer yes? I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual. Hence my previous remark saying:

Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=perceive).
 
  • #129
moving finger said:
Reply to post #102 :
A note at this point. As you know, I do not agree with your definition of understanding.

You don't have to mean the same thing I do when I use the word “understanding,” just know what I’m talking about here.


Tisthammerw said:
Let's call the “right” program that, if run on the robot, would produce literal understanding “program X.” Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.

You have merely “asserted” that no understanding is taking place. How would you “show” that no understanding (TH or MF type) is taking place? What test do you propose to carry out to support your assertion?

In the context of post #102 I am of course referring to what you have called “TH-understanding.” But to answer your question, simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world. If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.

Do you wish to propose the systems reply for this thought experiment? If so, let me give yet another response, this time one similar to Searle’s. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.
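To be clear about what Bob is doing in these scenarios, here is a toy sketch (the rules are invented purely for illustration and are vastly simpler than program X would be): Bob applies formal rules to bit strings, and nothing in that activity tells him what any of the bits are about:
```python
# Toy sketch of Bob-as-processor (invented rules, far simpler than "program X"):
# the operator applies purely formal rules to bit strings. Nothing in the rules
# says what any bit pattern means.

RULES = {
    "0110": "1001",  # "when I see 0110 I write 1001" - the meaning is unknown to Bob
    "1100": "0011",
    "1111": "0000",
}

def bob_process(input_bits: str) -> str:
    """Apply the rulebook mechanically, four bits at a time."""
    chunks = [input_bits[i:i + 4] for i in range(0, len(input_bits), 4)]
    return "".join(RULES.get(chunk, chunk) for chunk in chunks)

# Bob can do this flawlessly while honestly reporting that he has no idea
# whether the output encodes Chinese, a limb movement, or anything else.
print(bob_process("01101100"))  # -> "10010011"
```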
 
  • #130
TheStatutoryApe said:
I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
Imho the real difficulty with directly programming experiential data is that the experiential data is completely subjective – the experiential data corresponding to “MF sees red” is peculiar to MF, and may be completely different to the experiential data corresponding to “TheStatutoryApe sees red”. Thus there is no universal set of data which corresponds to “this agent sees red”. I am not suggesting that "programming a machine with experiential data" would be a trivial task - but it would be possible in principle.
TheStatutoryApe said:
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?
Ask the same agent “how do you know what x-rays are?” – will it be stumped because it has never seen x-rays? No.
Does “experiencing the sight of red” necessarily help an agent to “know what red is”? I suggest it does not.
I believe the issue of experiential data is irrelevant to the question of understanding. If I have never experienced seeing red, does that affect my understanding in any way? I would argue not.
I have no experience of seeing x-rays, yet I understand (syntactically and semantically) what x-rays are. The “red” part of the visible spectrum is simply another part of the electromagnetic spectrum, just as x-rays are. Why do I need to “experience seeing red” in order to understand, both syntactically and semantically, what red is if I do not need to experience x-rays in order to understand what x-rays are?
The source of confusion lies in the colloquial use of the verb “to know”. We say “I know that 2 + 2 = 4”, and we also say that “I know what red looks like”, but these are two completely different kinds of knowledge. The former contributes to understanding, but imho the latter contributes nothing to understanding.
When Mary “knew all there was to know about red, but had never seen red”, and she then “experienced the sight of red”, did she necessarily understand any more about red than she did before she had first seen red? Some might say “yes, of course, she finally knew what red looked like”. But “knowing what red looks like” is not anything to do with understanding, it is simply “knowing what red looks like”.
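As a crude illustration of the point (the stored propositions below are simplified placeholders of my own), an agent whose knowledge of red is purely propositional answers "how do you know what red is?" in exactly the same way it answers the corresponding question about x-rays, with no appeal to any experience of seeing red:
```python
# Crude illustration (simplified placeholder propositions): purely propositional
# knowledge about "red" and about "x-rays" is stored in the same static form,
# so questions about either are answered in the same way.

KNOWLEDGE = {
    "red": [
        "red is visible light with a wavelength of roughly 620-750 nm",
        "ripe tomatoes and arterial blood are commonly described as red",
    ],
    "x-rays": [
        "x-rays are electromagnetic radiation with wavelengths of about 0.01-10 nm",
        "x-rays pass through soft tissue and are used in medical imaging",
    ],
}

def how_do_you_know(topic: str) -> str:
    facts = KNOWLEDGE.get(topic)
    if not facts:
        return "I have no stored propositions about " + topic + "."
    return "My knowledge of " + topic + " rests on propositions such as: " + "; ".join(facts)

print(how_do_you_know("red"))     # no "experience of seeing red" is consulted
print(how_do_you_know("x-rays"))  # and the answer has exactly the same form
```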
MF
 
  • #131
Tisthammerw said:
You don't have to mean the same thing I do when I use the word “understanding,” just know what I’m talking about here.
In the context of post #102 I am of course referring to what you have called “TH-understanding.” But to answer your question, simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world. If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.
Do you wish to propose the systems reply for this thought experiment? If so, let me give yet another response, this time one similar to Searle’s. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).
If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.

There's something else that is providing clues to the human in the Chinese Room with regard to the content and meaning of the calligraphy being handed to him under the door.

Chinese characters describe certain phenomena in the world. Sometimes the characters resemble the object or concept they describe because the character is a drawing of the object. Historically, the first Chinese characters were actual drawings of the object or concept they described. Like a hieroglyph. As their writing evolved, the drawings remained as a central theme around each character.

If Searle or the human in place of Searle has any personal experience with drawing or deciphering images that have been stylized by drawing... they will... consciously or unconsciously, be interpreting the Chinese character to mean "a man beside a house" or "a river with mountains", and this would demonstrate an understanding of the meaning of a Chinese character.

What I have pointed out here is another flaw in the Chinese Room thought experiment, and brings us to what I perceive as a total of three fatal flaws we could point out to John Searle about his experiment.

Using much less universal languages such as Finnish or Hungarian may provide a better control for the experiment. However, I maintain that, in this experiment's set-up, the terminology is incorrect and this fact renders the experiment and any of its conclusions invalid.
 
  • #132
quantumcarl said:
There's something else that is providing clues to the human in the Chinese Room with regard to the content and meaning of the calligraphy being handed to him under the door.
Chinese characters describe certain phenomena in the world. Sometimes the characters resemble the object or concept they describe because the character is a drawing of the object. Historically, the first Chinese characters were actual drawings of the object or concept they described. Like a hieroglyph. As their writing evolved, the drawings remained as a central theme around each character.
If Searle or the human in place of Searle has any personal experience with drawing or deciphering images that have been stylized by drawing... they will... consciously or unconsciously, be interpreting the Chinese character to mean "a man beside a house" or "a river with mountains", and this would demonstrate an understanding of the meaning of a Chinese character.

I find that a tad implausible (at least, if we are using my definition of understanding) and a bit speculative, and at the very least it's not a necessary truth. Searle's point is that the man can simulate a conversation in Chinese using the rulebook etc. without knowing the language, and I think this is true (it is logically possible for the man to faithfully use the rulebook without understanding Chinese). Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.

And if need be we could give the person in the room a learning disability preventing him from learning a Chinese character merely through looking at it (and when I look at Chinese words, I sometimes think I have that learning disability).


What I have pointed out here is another flaw in the Chinese Room thought experiment, and brings us to what I perceive as a total of three fatal flaws we could point out to John Searle about his experiment.

And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what’s in dispute is not whether humans can learn, but whether computers can. Even if we teach computers language the “same” way we do with humans (e.g. we introduce sights and sounds to a computer programmed with learning algorithms) that doesn't appear to work, at least in part because we're still following the same model of a complex set of instructions manipulating input etc. The story of program X seems to illustrate that point rather nicely.
 
  • #133
Hi Tisthammerw
I will again trim the fat that keeps coming into the posts.
Tisthammerw said:
When calling it analytic……
With respect, most of your post is along the lines that you seem to expect me to agree that the statement “understanding requires consciousness” is analytic, when it most definitely is synthetic. You seem to think that “understanding” is the same as “TH-Understanding”. We can spend the rest of our lives arguing back and forth on this, and I have better things to do than spend my time on arguing with someone who refuses to see the truth. Thus I move on. Perhaps you should do so too.
Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera) store it in its databanks, process it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
Perceiving in the sense-perception meaning of the word is the process of acquiring, interpreting, selecting, and organising (sensory) information. If the computer that you refer to is acquiring, interpreting, selecting, and organising (sensory) information then by definition it is perceiving. Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual.
I think some of your definitions are unusual but you are of course entitled to your opinion.
Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).
I am not saying this, where did you get this idea?
I am saying that there are different meanings of “perception”. The statement “I perceive the truth of statement X” is a meaningful statement in the English language, but it has nothing whatsoever to do with experiential sense-perception, it has everything to do with introspective perception. Does your dictionary explain this to you? No? Then with respect perhaps you need to look for a better dictionary which offers less-fuzzy definitions?

MF
 
  • #134
Tisthammerw said:
simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world.
Thus you have established that the individual called Bob (or to be precise, Bob’s consciousness) does not understand. But you have not shown that the system (of which Bob’s consciousness is simply one component) does not possess understanding.
Tisthammerw said:
If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.
If one thinks understanding is magic then this of course explains why one might think it is not plausible. One cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.
Tisthammerw said:
Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).
When you say in the above “Bob understands nothing”, are you referring now to Bob’s consciousness, or to the entire system which is cyborg Bob (which also now incorporates the rulebook)?
If the former, then my previous reply once again applies - one cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.
If the latter, how have you shown that the entire system which is cyborg Bob (and not just Bob’s consciousness) does not possess understanding?
Tisthammerw said:
If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.
I have answered your questions. Please now answer mine.
MF
 
  • #136
Passive vs Creative Understanding

To make some inroads into “understanding what is going on in understanding” I suggest that we need to distinguish between different forms or qualities of understanding. I suggest that we can categorise examples of understanding as either passive or creative.

Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other. In the case of our example of Pythagoras’s theorem, it would be perfectly possible for an agent to “learn by rote” the detailed explanation of Pythagoras’s theorem, and therefore to “fool” an interrogator into thinking that it understands when in fact all it is doing is repeating (regurgitating) pre-existing information. To passively understand (or to simulate understanding of) subject X, an agent needs only a static database and static program (in other words, an agent needs neither a dynamic database nor a dynamic program in order to exhibit passive understanding).

Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition therefore the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?
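For concreteness, a rough Python sketch of the static-versus-dynamic distinction drawn above. The class names and the sample facts are made up for illustration, and the sketch captures only the database difference, not the harder question of whether either agent genuinely understands anything:

class PassiveAgent:
    # Answers only from a fixed, pre-loaded database (static database, static program).
    def __init__(self, knowledge):
        self.knowledge = dict(knowledge)   # copied once, never modified
    def answer(self, question):
        return self.knowledge.get(question, "I don't know")

class CreativeAgent(PassiveAgent):
    # Same answering rule, but the database can grow during the exchange (dynamic database).
    def learn(self, question, new_answer):
        self.knowledge[question] = new_answer

rote = PassiveAgent({"Pythagoras": "a^2 + b^2 = c^2 for right triangles"})
learner = CreativeAgent({})
print(rote.answer("Pythagoras"))      # pre-loaded; could have been learned by rote
print(learner.answer("Pythagoras"))   # "I don't know" -- nothing acquired yet
learner.learn("Pythagoras", "a^2 + b^2 = c^2 for right triangles")
print(learner.answer("Pythagoras"))   # now answers, but only because it was told

Note that the “creative” agent here still only accepts what it is handed, which is the simulation worry raised below.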

I suggest that the classic Turing test is usually aimed at testing only passive, and not creative, understanding – the Turing test interrogator asks the agent questions in order to test the agent’s existing understanding of already “known” subjects. I suggest also that the CR as defined by Searle is also aimed at testing only passive, and not creative, understanding (In his description of the CR Searle makes no reference to any ability of the room to learn any new information, knowledge or understanding).

But we have seen that it is possible, at least in principle, for an agent to “simulate” passive understanding, and thereby to “fool” both the Turing test interrogator and the CR interrogator.

It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding. It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding. The interrogator must allow the agent to demonstrate that it is able to develop new understanding through its own processing of new information and knowledge, putting this new information and knowledge into correct context and association with pre-existing information and knowledge, and not simply “learn new knowledge by rote”.

IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

Constructive criticism please?

MF
 
  • #137
moving finger said:
With respect, most of your post is along the lines that you seem to expect me to agree that the statement “understanding requires consciousness” is analytic, when it most definitely is synthetic. You seem to think that “understanding” is the same as “TH-Understanding”.

That is (as I’ve said many times before) the definition of understanding I am using when I claim that “understanding requires consciousness.” I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?


Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
Perceiving in the sense-perception meaning of the word is the process of acquiring, interpreting, selecting, and organising (sensory) information. If the computer that you refer to is acquiring, interpreting, selecting, and organising (sensory) information then by definition it is perceiving.

So, is that a yes? Is this an example of a computer acquiring, selecting etc., and thus perceiving, even though it does not possess consciousness?


Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?

That’s what I’m asking you.


Tisthammerw said:
I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual.

I think some of your definitions are unusual but you are of course entitled to your opinion.

My definition is found in Merriam-Webster’s dictionary, remember? If anything, it’s your definition that is unconventional (an entity perceiving without possessing consciousness).


Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).

I am not saying this, where did you get this idea?

You. You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.


The statement “I perceive the truth of statement X” is a meaningful statement in the English language, but it has nothing whatsoever to do with experiential sense-perception

True, hence (if you recall) I defined “perceive” as definitions 1a and 2 in Merriam-Webster’s dictionary. (Which definition applies is clear in the context of the sentence.)


Does your dictionary explain this to you?

Yes (see above).


No?

That is incorrect. My answer is yes.
 
  • #138
moving finger said:
Tisthammerw said:
simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world.

Thus you have established that the individual called Bob (or to be precise, Bob’s consciousness) does not understand. But you have not shown that the system (of which Bob’s consciousness is simply one component) does not possess understanding.

At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.


Tisthammerw said:
If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.

If one thinks understanding is magic then this of course explains why one might think it is not plausible. One cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.

I only call it “magic” because that is what it sounds like, given its rank implausibility. Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese? As for the assessment of understanding in the system by asking Bob’s consciousness, see below.


Tisthammerw said:
Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

When you say in the above “Bob understands nothing”, are you referring now to Bob’s consciousness, or to the entire system which is cyborg Bob (which also now incorporates the rulebook)?

Both, actually. Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.

So, now that we have that cleared up, please answer the rest of my questions in post #102.


If the former, then my previous reply once again applies - one cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands

I don’t think this reply applies, given that Bob’s consciousness is the only consciousness in the system. Think back to the Chinese room (and variants thereof). Does the system understand (in the sense of TH-understanding)? Does the combination of the man, the rulebook etc. create a separate consciousness that understands Chinese?


just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.

But I wasn’t asking Bob’s individual neurons “What does Chinese word X mean?” I was asking Bob’s consciousness. And Bob’s honest reply is “I have no idea.”


If the latter, how have you shown that the entire system which is cyborg Bob (and not just Bob’s consciousness) does not possess understanding?

See above regarding Bob’s consciousness being the only consciousness in the system, unless you claim that the combination of…


I have answered your questions. Please now answer mine.
MF

You’ve only partially answered my questions (because apparently some clarification was needed). And what are your questions? Are there any more you would like answered that weren’t asked here?
 
  • #139
moving finger said:
[defines his terms]

Constructive criticism please?

Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.

HA! NOW YOU KNOW HOW IT FEELS!

Moving on:

moving finger said:
Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other.

That was kind of Searle's point. Just because an entity (as a computer program) can simulate understanding doesn't mean the computer actually has it.


Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition therefore the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?

Yes. Variants of the Chinese room include learning a person's name etc. by the man in the room having many sheets of paper to store data and having the appropriate rules (think Turing machine).
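For concreteness, a toy Python version of that variant, assuming a hypothetical rulebook function with the scratch paper represented as a dict; everything below is pure symbol shuffling, which is the point:

scratch_paper = {}   # the "many sheets of paper" the man can write on

def rulebook(message):
    # A few rote rules of the kind a Turing-machine-style rulebook could encode.
    if message.startswith("my name is "):
        scratch_paper["name"] = message[len("my name is "):]        # store the symbols
        return "nice to meet you, " + scratch_paper["name"]
    if message == "what is my name?":
        return scratch_paper.get("name", "you have not told me")    # copy the symbols back out
    return "please rephrase"

print(rulebook("my name is Wei"))     # -> nice to meet you, Wei
print(rulebook("what is my name?"))   # -> Wei

The "room" now appears to have learned a name, though nothing in it knows what a name is.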


It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding.

See above. The Chinese room can be modified as such, and yet there is still no (TH) understanding.


IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

The answer is no (at least when it comes to TH-understanding).
 
  • #140
Tisthammerw said:
I find that a tad implausible (at least, if we are using my definition of understanding) and a bit speculative, and at the very least it's not a necessary truth. Searle's point is that the man can simulate a conversation in Chinese using the rulebook etc. without knowing the language, and I think this is true (it is logically possible for the man to faithfully use the rulebook without understanding Chinese).

I see. Like in the example I gave of the person who only understands Mongolian but is translating French into English using translation texts, the only understanding that is present is the understanding the Mongolian speaker has of everything else... excluding the two languages, French and English.

Tisthammerw said:
Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.

I see what you're getting at. I will still enter the contaminating effect of the pictorial nature of the Chinese language as a potential error in the experiment. The very fact that "subliminal influence" is a real influence in the bio-organic (human, etc.) learning process suggests that what remains of the "pictorial nature" of Chinese calligraphy in modern Chinese calligraphy also presents a contamination to the experiment.

This brings me to the categories of "sub-consciousness" and consciousness. Your statement that "understanding requires consciousness" is made true by the fact that if the man in the room were unconscious he would not only have no understanding of a language... but no understanding of his task or his circumstances.

With regard to the sub-conscious, however, I doubt we can repeat what I've determined about "consciousness" and understanding above. I mentioned "subliminal influences" and these, I believe, rely on what is termed the "sub-conscious". The subconscious is a powerful function of the brain because it is able to compile information from every source simultaneously. It also organizes the information in its non-stop attempt to assist with the continued survival and existence of the organism which includes a brain.


Tisthammerw said:
And if need be we could give the person in the room a learning disability preventing him from learning a Chinese character merely through looking at it (and when I look at Chinese words, I sometimes think I have that learning disability).

Ha!

To do this experiment properly I believe we need a sorting machine... like at the post office. These machines read the ZIP code and sort each piece of mail accordingly. So, in this case, we'd slip a piece of paper under the door into my "MAIL ROOM thought experiment".

The piece of paper would have a ZIP code on it. When the piece of paper ended up in Palm Beach... we could ask: did the room "know" or "understand" that it was to send my piece of paper with a ZIP code on it to Palm Beach?
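For concreteness, a Python sketch of that mail room, assuming a made-up routing table (the codes below are placeholders, not real postal data); the room does nothing but look symbols up:

routing_table = {"11111": "Palm Beach bin", "22222": "Springfield bin"}   # placeholder codes

def sort_mail(zip_code):
    return routing_table.get(zip_code, "unsorted bin")   # pure table lookup; no notion of places

print(sort_mail("11111"))   # -> Palm Beach bin

The paper reliably ends up in the right bin, yet it seems odd to say the sorter "knew" where Palm Beach is.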

Tisthammerw said:
And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what’s at the dispute is not whether humans can learn, but whether computers can.

Yes... I see. Computers are programmed. They don't really have a choice about whether they are programmed or not. Humans learn. Some have the motivation to do so... some do not. There are many, many factors behind the function of learning. I realize that at some fictitious point a computer might have the power to gather its own data and program itself... but I prefer to deal with actualities rather than "what ifs".
 
  • #141
quantumcarl said:
Tisthammerw said:
Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.


I will still enter the contaminating effect of the pictorial nature of the Chinese language as a potential error in the experiment. The very fact that "subliminal influence" is a real influence in the bio-organic (human, etc.) learning process suggests that what remains of the "pictorial nature" of Chinese calligraphy in modern Chinese calligraphy also presents a contamination to the experiment.

That's still a little too speculative for me. If we ran some actual experiments showing people picking up Chinese language merely by reading the text you might have something then.


Tisthammerw said:
And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what’s at the dispute is not whether humans can learn, but whether computers can.

Yes... I see. Computers are programmed. They don't really have a choice about whether they are programmed or not. Humans learn. Some have the motivation to do so... some do not. There are many, many factors behind the function of learning. I realize that at some fictitious point a computer might have the power to gather its own data and program itself...

And that's kind of what the story of program X talks about. Let program X stand for any program (with all the learning algorithms you could ever want) that, if run, would supposedly produce understanding. Yet when the program is run, there is still no understanding (at least, not as I've defined the term).
 
  • #142
Tisthammerw said:
That's still a little too speculative for me. If we ran some actual experiments showing people picking up Chinese language merely by reading the text you might have something then.

OK. I agree. And if we can rule out my speculation about the subconscious understanding the man in the CR is experiencing... we can rule out the speculation that involves undeveloped and imagined "bots" or "programs" (which do not exist, at present) that specifically simulate human understanding.


Tisthammerw said:
And that's kind of what the story of program X talks about. Let program X stand for any program (with all the learning algorithms you could ever want) that, if run, would supposedly produce understanding. Yet when the program is run, there is still no understanding (at least, not as I've defined the term).

Yes, Program X would be a collection of 6 billion mail rooms sending bits (of paper) to wherever the zip code has determined the bits go. The only understanding that would develop out of this program is the understanding it helped to foster in a bio-organic system such as a human's... and a conscious one at that!
 
  • #143
moving finger said:
To make some inroads into “understanding what is going on in understanding” I suggest that we need to distinguish between different forms or qualities of understanding. I suggest that we can categorise examples of understanding as either passive or creative.

Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other. In the case of our example of Pythagoras’s theorem, it would be perfectly possible for an agent to “learn by rote” the detailed explanation of Pythagoras’s theorem, and therefore to “fool” an interrogator into thinking that it understands when in fact all it is doing is repeating (regurgitating) pre-existing information. To passively understand (or to simulate understanding of) subject X, an agent needs only a static database and static program (in other words, an agent needs neither a dynamic database nor a dynamic program in order to exhibit passive understanding).

Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition therefore the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?

I suggest that the classic Turing test is usually aimed at testing only passive, and not creative, understanding – the Turing test interrogator asks the agent questions in order to test the agent’s existing understanding of already “known” subjects. I suggest also that the CR as defined by Searle is also aimed at testing only passive, and not creative, understanding (In his description of the CR Searle makes no reference to any ability of the room to learn any new information, knowledge or understanding).

But we have seen that it is possible, at least in principle, for an agent to “simulate” passive understanding, and thereby to “fool” both the Turing test interrogator and the CR interrogator.

It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding. It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding. The interrogator must allow the agent to demonstrate that it is able to develop new understanding through its own processing of new information and knowledge, putting this new information and knowledge into correct context and association with pre-existing information and knowledge, and not simply “learn new knowledge by rote”.

IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

Constructive criticism please?


MF
On the surface it makes plenty of sense. I myself at the beginning of this discussion may have attributed some level of understanding to a pocket calculator which would fit with the description here of "passive understanding". The problem I see with this is that if I were to write out an equation on a piece of paper we could similarly refer to that piece of paper along with the equation as possessing "passive understanding".
Of course a calculator works off of an active process, so there is a difference at least in this, though I'm not sure I can bring myself (at least not any longer) to think this makes that much of a difference when it comes to whether or not it possesses any sort of understanding. An abacus or a slide rule presents an active process, but I still wouldn't consider them to possess any more understanding for it. I think that the idea I presented previously that such devices only "reflect the understanding of the designer" is probably much more logical than to state that they actually possess any kind of understanding in and of themselves.
I'm still not quite settled on it.
Actually I'm not even sure exactly how a calculator's program works. If I did I might still attribute some sort of understanding to it.
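For what it's worth, here is a toy Python illustration of how a simple calculator program can work: a small rule table maps each operator key to an arithmetic routine, which is then blindly applied. Real calculators parse whole keystroke sequences, which this sketch skips, and whether this lookup-and-apply process deserves the word "understanding" is exactly what is in question:

import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def calculate(a, op, b):
    return OPS[op](a, b)    # rule lookup, then blind application

print(calculate(3.0, "*", 4.0))   # -> 12.0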
 
  • #144
Tisthammerw said:
Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.
HA! NOW YOU KNOW HOW IT FEELS!
Tisthammerw, you are just being very silly here. I NEVER claimed that your definitions were wrong, simply that I did not agree with them. You are entitled to your opinion, and I to mine.
Doesn't bother me that you are being silly. Does it bother you?
Will reply to rest of your post later.
MF
 
  • #145
Tisthammerw said:
I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?
Yes, and it follows from this that the statement “understanding requires consciousness” is synthetic (because, as you have agreed, it is NOT clear that all possible forms of understanding DO require consciousness). Do you understand this?

moving finger said:
Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
That’s what I’m asking you.
With respect, this is NOT what you are asking me. You are asking me whether the computer you have in mind “perceives”. I don’t know what computer you are referring to. This is YOUR example of a computer, not mine. How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
A computer which acquires, interprets, selects, and organises (sensory) information is perceiving in my definition of the word perceiving.

Tisthammerw said:
My definition is found in Merriam-Webster’s dictionary, remember?
Do you want to play that silly “I use the X dictionary so I must be right” game again?

Tisthammerw said:
You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.
You are taking an anthropocentric position again. A human agent requires consciousness in order to be able to report that it perceives anything. One cannot conclude from this that all possible agents require consciousness in order to be able to perceive.

The same applies to understanding. A human agent requires consciousness in order to be able to report that it understands anything. One cannot conclude from this that all possible agents require consciousness in order to be able to understand.

MF
 
  • #146
Tisthammerw said:
At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.
To be correct, we have established that Bob’s consciousness does not understand, and that Bob’s consciousness is a “component of program X”; it is not “running program X”.

Tisthammerw said:
Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese?
No, I never said that the “agent that understands” is necessarily conscious. My definition of understanding does not require consciousness, remember?

Tisthammerw said:
Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.
If you are asking “can a non-conscious agent possess TH-Understanding?” then I agree, it cannot. That is clear by definition of TH-Understanding. Is that what you have shown?

That’s pretty impressive, I suppose. “A non-conscious agent cannot possess TH-Understanding, because TH-Understanding is defined as requiring consciousness”. Quite a philosophical insight! :smile:

Can we move on now to discussing something that is NOT tautological?

MF
 
  • #147
moving finger said:
The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other.
Tisthammerw said:
That was kind of Searle's point. Just because an entity (as a computer program) can simulate understanding doesn't mean the computer actually has it.
You’ve got it. This is why I am looking at ways of possibly improving the basic Turing test, and the CR experiment, to enable us to distinguish between an agent which “understands” and one which is “simulating understanding”.
moving finger said:
Would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?
Tisthammerw said:
Variants of the Chinese room include learning a person's name etc. by the man in the room having many sheets of paper to store data and having the appropriate rules (think Turing machine).
But “learning a person’s name” is NOT necessarily new understanding, it is simply memorising. This is exactly why I say later in my post :
moving finger said:
It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding.
Tisthammerw said:
The Chinese room can be modified as such, and yet there is still no (TH) understanding.
There never will be TH-Understanding as long as there is no consciousness. If you insist that the agent must possess TH-Understanding then it must by definition be conscious. I am not interested in getting into tautological time-wasting again :rolleyes:
MF
 
  • #148
TheStatutoryApe said:
On the surface it makes plenty of sense.
Thank you

TheStatutoryApe said:
I myself at the beginning of this discussion may have attributed some level of understanding to a pocket calculator which would fit with the description here of "passive understanding". The problem I see with this is that if I were to write out an equation on a piece of paper we could similarly refer to that piece of paper along with the equation as possessing "passive understanding".
Hmmmm. But to me it seems that understanding is a "process". I do not enact a process by writing an equation on a piece of paper (any more than I "run a computer program" by printing out the program). I don't see how we could claim that any static entity (such as an equation, or pile of bricks, or switched-off computer) has any type of understanding (passive or otherwise), because there are no processes going on.

TheStatutoryApe said:
Of course a calculator works off of an active process, so there is a difference at least in this, though I'm not sure I can bring myself (at least not any longer) to think this makes that much of a difference when it comes to whether or not it possesses any sort of understanding. An abacus or a slide rule presents an active process, but I still wouldn't consider them to possess any more understanding for it.
To say that "understanding is a process" is not the same as saying that "all processes possesses understanding". It simply means that "being a process" is necessary, but not sufficient, for understanding. This would explain your "an abacus does not understand" position.

TheStatutoryApe said:
I think that the idea I presented previously that such devices only "reflect the understanding of the designer" is probably much more logical than to state that they actually possess any kind of understanding in and of themselves.
I agree that designed agents can and do reflect to some extent the understanding of the designer. But is this the same as saying that "no designed agent can be said to understand"? I think not.

MF
 
  • #149
In the interest of space, I'll combine posts.

moving finger said:
Tisthammerw said:
Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.

HA! NOW YOU KNOW HOW IT FEELS!

Tisthammerw, you are just being very silly here. I NEVER claimed that your definitions were wrong, simply that I did not agree with them.

You implied it when you said that you do not agree that my statement was an analytic statement and did not agree with my definitions. In these circumstances, “disagree” usually means believing that the statement in question is false. In any case, I admit the above was an exaggeration, but not by much. You claimed a lot of what I said was wrong because you blatantly misunderstood what I was saying (e.g. ignoring the fact that I was not referring to everybody's definition when I said that “understanding requires consciousness” was an analytic statement).

moving finger said:
Tisthammerw said:
I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?

Yes, and it follows from this that the statement “understanding requires consciousness” is synthetic (because, as you have agreed, it is NOT clear that all possible forms of understanding DO require consciousness). Do you understand this?

No I do not. Why does the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic? The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.

Tisthammerw said:
Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?


That’s what I’m asking you.

With respect, this is NOT what you are asking me.

Yes I am. I am asking whether or not this scenario meets your definitions of those characteristics.


You are asking me whether the computer you have in mind “perceives”. I don’t know what computer you are referring to.

I am asking if the computer I described “perceives” using your definition of the term. But since you seem to have forgotten the scenario (even though I described it in the very post you responded to), let me refresh your memory:

Tisthammerw said:
Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.

There. Now please answer my questions.


How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?

Because I have described it to you.


Tisthammerw said:
I think some of your definitions are unusual but you are of course entitled to your opinion.

My definition is found in Merriam-Webster’s dictionary, remember? If anything, it’s your definition that is unconventional (an entity perceiving without possessing consciousness).

Do you want to play that silly “I use the X dictionary so I must be right” game again?

No, I am merely pointing out that my definitions are not the ones that are unconventional. Please read what I say in context.


Tisthammerw said:
You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.

You are taking an anthropocentric position again. A human agent requires consciousness in order to be able to report that it perceives anything. One cannot conclude from this that all possible agents require consciousness in order to be able to perceive.

Fine, but that still doesn’t answer my question:

Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).

moving finger said:
Tisthammerw said:
At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.

To be correct, we have established that Bob’s consciousness does not understand, and Bob’s consciousness is a “component of program X”, it is not “running program X”.

I think you may have misunderstood the situation. Programs are nothing more than a set of instructions (albeit often a complex set of instructions). Bob is not an instruction; he is the processor of program X.
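For concreteness, a minimal Python sketch of that program/processor distinction, with a made-up instruction format: the program is inert data, and the run() loop below plays the role Bob plays in the thought experiment:

program_x = [("load", 2), ("add", 3), ("print", None)]   # the "rulebook": pure instructions, just data

def run(program):
    # The processor: it follows the rules mechanically without interpreting them any further.
    acc = 0
    for op, arg in program:
        if op == "load":
            acc = arg
        elif op == "add":
            acc += arg
        elif op == "print":
            print(acc)

run(program_x)   # -> 5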


Tisthammerw said:
Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese?
No, I never said that the “agent that understands” is necessarily conscious. My definition of understanding does not require consciousness, remember?

As I have said before, this story is referring to TH-understanding, remember? I am not, repeat NOT using your definition here. Can a computer that runs program X have TH-understanding? I say no, and the whole point of this argument is to justify that claim.


Tisthammerw said:
Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.

If you are asking “can a non-conscious agent possess TH-Understanding?” then I agree, it cannot.

That is not what I am asking here in regards to this argument. Here is what I am asking: Can a computer (the model of a computer being manipulating input via a complex set of instructions etc.) have TH-understanding? This is what the argument is about. Now how about addressing its relevant questions?


Tisthammerw said:
Variants of the Chinese room include learning a person's name etc. by the man in the room having many sheets of paper to store data and having the appropriate rules (think Turing machine).

But “learning a person’s name” is NOT necessarily new understanding, it is simply memorising.

That would depend on how you define “understanding.” If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
 
  • #150
More definitions of the concept "Understanding"

understanding

A. noun

1. reason, understanding, intellect: the capacity for rational thought or inference or discrimination; "we are told that man is endowed with reason and capable of distinguishing good from evil"
Category tree: psychological_feature > cognition; knowledge; noesis > ability; power > faculty; mental_faculty; module > reason, understanding, intellect

2. understanding, apprehension, discernment, savvy: the cognitive condition of someone who understands; "he has virtually no understanding of social cause and effect"
Category tree: psychological_feature > cognition; knowledge; noesis > process; cognitive_process; mental_process; operation; cognitive_operation > higher_cognitive_process > knowing > understanding, apprehension, discernment, savvy
Narrower terms: realization; realisation; recognition / insight; brainstorm; brainwave / hindsight / grasping / appreciation; grasp; hold / smattering / self-knowledge / comprehension

3. sympathy, understanding: an inclination to support or be loyal to or to agree with an opinion; "his sympathies were always with the underdog"; "I knew I could count on his understanding"
Category tree: psychological_feature > cognition; knowledge; noesis > attitude; mental_attitude > inclination; disposition; tendency > sympathy, understanding

4. agreement, understanding: the statement (oral or written) of an exchange of promises; "they had an agreement that they would not interfere in each other's business"; "there was an understanding between management and the workers"
Category tree: abstraction > relation > social_relation > communication > message; content; subject_matter; substance > statement > agreement, understanding
Narrower terms: oral_contract / entente; entente_cordiale / submission / written_agreement / gentlemen's_agreement / working_agreement / bargain; deal / sale; sales_agreement / unilateral_contract / covenant / conspiracy; confederacy

B. adjective

1. understanding: characterized by understanding based on comprehension and discernment and empathy; "an understanding friend"
 
