Can computers understand? Can understanding be simulated by computers?

  • Thread starter quantumcarl
  • Start date
  • Tags
    China
In summary, the conversation discusses the concept of the "Chinese Room" thought experiment and its implications on human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #71
quantumcarl said:
"High empathy"? Please explain. Is there such thing as a "low empathy"?
I would argue that there are degrees of empathy. One person can show "more empathy" than another. Or do you believe that empathy is an "all or nothing affair"? Do you believe that it is simply black and white, either one has complete empathy, or one has none at all?
In the same way, there are degrees of understanding. For example, agent "A" could claim to understand something about quantum physics, but "A" might nevertheless acknowledge that "A" does not understand as much as agent "B", who is a quantum physics expert.
It would be wrong to conclude that both A and B had the same degree of understanding of quantum physics. It would also be wrong to conclude that A had no understanding and B had understanding.
quantumcarl said:
As far as I know, empathy is empathy. It is an ability to understand the circumstances influencing another human being, as well as the ability to identify with objects and animals other than humans. It is a part of understanding and a powerful by-product of consciousness.
You believe that empathy comes in binary? Either one has complete empathy, or one has none at all? Nothing in between?
quantumcarl said:
You don't need to empathize with a Francophone to understand the French language?
Of course you do. Otherwise you wouldn't be learning French. As soon as the vowels and all those damn silent letters start forming in your mouth... and you have to twist an accent out of your tongue... you are on the path to empathizing with the French people... like it or not. You are assuming their role and method of communication. When you assume the role or... "walk in their shoes" (so to speak) you are truly standing under them... or... understanding the people and their language.
You seem to think that complete empathy is necessary for any kind of understanding.
You are entitled to your rather strange opinion, but I do not share it.
It can be argued that person X who understands the French language AND empathises strongly with the French people has a better understanding of the French language than person Y who also understands the French language but does not empathise strongly with the French people, but it would be wrong to conclude from this that Y does not understand the French language at all.
quantumcarl said:
Understanding describes a function in humans that is more complex than the simple ability to repeat words in a correct sequence so that communication in French or math or medicine is achieved. That is called comprehension, and it is properly used by the Italians when they ask you if you "comprende?" as in "can you comprehend what I am saying?"
Perhaps you need to invent a new English word to encapsulate what you believe to be the case. As far as I am concerned, there are "degrees of understanding", it is not a "black and white" affair. Just as there are degrees of comprehension.
If "understanding of Z" was a black and white affair, then it should be possible to test a person's "understanding of Z" and always achieve either 0% or perfect 100% score (either they do understand Z, or they do not). The world does not work this way (even if you would want your ideal world to work like this, it doesn't).
quantumcarl said:
There is a reason there are different words to describe different functions... the differences between the meanings of words are slight... but they are there for a reason. Terminologies offer subtle shades that help to distinguish the speaker's or writer's references and descriptions.
Single words allow for subtle shades (which you seem to deny). If I say "it is snowing outside", that could mean anything from "there are a few snowflakes drifting about" to "there is a whiteout out there, you cannot see anything because of the blizzard".
An Eskimo might have different words for these two different types of snow, but in English "it is snowing outside" would be correct in both cases. "it is snowing outside" allows different shades in meaning. In the same way "X understands Y" allows for different shades in meaning - it might mean that X has a basic understanding of Y, it might mean that X is an expert in Y.
quantumcarl said:
That is why you see cell differentiation in the plant and animal kingdoms. Different cells function in different ways. They don't work in other organs or tissues. They must be used in the context they have evolved to serve. Much in the way languages develop specific terminology to describe specific functions.
I will pass on this, I cannot see the relevance.
quantumcarl said:
The alien term for understanding is different from the North American term "understanding". The alien term describes a completely different function... they may use telepathy... they may have greater experiences... they may hook up with parallel dimensions to ascertain the function of "ravlinz". For humans, and I'm not sure yet what the components of understanding are... but for humans we use experience, consciousness, empathy and knowledge in a slap-dash mixture that we call "understanding".
And I would still claim that BOTH the alien and the human understand, they just do it in different ways.
may your God go with you
MF
 
  • #72
Tournesol said:
It is a test based on an explanation; I am saying we have to solve the hard problem first, before we can have a genuine test.
In other words, in absence of an explanation, it makes no sense to test for consciousness? There is thus no logical basis for Searle’s conclusion that the CR does not possess consciousness, correct?
Tournesol said:
If manipulating symbols is all there is to understanding, and if consciousness is part of understanding, then there should be a conscious awareness of Chinese in the room (or in Searle's head, in the internalised case).
That is a big “if”. For Searle’s objection to carry any weight, it first needs to be shown that consciousness is necessary for understanding. This has not been done (except by “circulus in demonstrando”, which results in a fallacious argument)
Tournesol said:
You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding... unless you claim that human understanding has nothing to do with consciousness either.
I am claiming that consciousness is not necessary for understanding in all possible agents. Consciousness may be necessary for understanding in humans, but it does not follow from this that this is the case in all possible agents.
To conclude from this that “understanding without consciousness is half baked” is an unsubstantiated anthropocentric (one might even say prejudiced?) opinion.
Tournesol said:
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.
Tournesol said:
If you understand something , you can report that you know it, explain how you know it. etc. That higher-level knowing-how-you-know is consciousness by definition.
I dispute that an agent needs to know in detail “how it knows” in order for it to possess an “understanding of subject X”.
“To know” is “to possess knowledge”. A computer can report that it “knows” X (in the sense that the knowledge X is contained in its memory and processes); it might (if it is sufficiently complex) also be able to explain how it came about that it possesses that knowledge. By your definition such a computer would then be conscious?
I think not. imho what you suggest may be necessary, but is not sufficient, for consciousness.
Allow me to speculate.
Consciousness also requires a certain level of internalised self-representation, such that the conscious entity internally manipulates (processes) symbols for “itself” which it can relate to other symbols for objects and processes in the “perceived outside world”; in doing this it creates an internalised representation of itself in juxtaposition to the perceived outside world, resulting in a self-sustaining internal model. This model can have an unlimited number of possible levels of self-reference, such that it is possible that “it knows that it knows”, “it knows that it knows that it knows” etc.
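The speculative model above, an internal symbol for the self held alongside symbols for the perceived outside world, with arbitrarily nested levels of self-reference, can be caricatured in a few lines of Python. This is only an illustrative toy under the assumptions stated in the post; the `Agent` class and its names are hypothetical, not anything from the literature.

```python
# Toy sketch of an internalised self-model with nested self-reference.
# The agent holds symbols for outside-world objects plus a symbol for
# itself, and can apply "knows" to its own output, giving
# "it knows that it knows", "it knows that it knows that it knows", etc.

class Agent:
    def __init__(self, name):
        self.name = name
        # internal model: symbols for perceived outside-world objects
        self.world_model = {"tree", "rock"}
        # the agent's own symbol sits inside the model, in juxtaposition
        # to the symbols for the perceived outside world
        self.world_model.add(name)

    def knows(self, proposition):
        """Return a higher-level proposition about the agent's own knowing."""
        return f"{self.name} knows that {proposition}"

agent = Agent("A")
level1 = agent.knows("it is raining")
level2 = agent.knows(level1)  # one more level of self-reference
print(level2)  # A knows that A knows that it is raining
```

Nothing in the toy is conscious, of course; it only illustrates that unlimited levels of self-reference are structurally cheap, which is why the post treats them as necessary but not sufficient.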
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Tournesol said:
Consciousness.
moving finger said:
Is this merely your opinion, or can you provide any evidence that this is necessarily the case?
Tournesol said:
It is a matter of definition -- it is part of how we distinguish understanding from mere know-how.
I see. We first define understanding such that consciousness is necessary to understanding. And from our definition of understanding, we then conclude that understanding requires consciousness. Is that how it's done?
Tournesol said:
Write down a definition of "red" that a blind person would understand.
Are you suggesting that a blind person would not be able to understand a definition of “red”? Sense-experience (the ability to experience the sensation of red) is a particular kind of knowledge, and is not synonymous with “understanding the concept of red”. Compare with the famous “What Mary Didn’t Know” thought experiment.
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
I disagree. I do not need to have the power of flight to understand aerodynamics. Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”). Experiential information may be used as an aid to understanding in some agents, but I dispute that experiential information is necessary for understanding in all agents.
moving finger said:
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
Tournesol said:
They don't fully lack it, they don't fully have it. But remember that a computer is much more restricted.
More restricted in what sense?
Tournesol said:
The latter is critical to the ordinary, linguistic understanding of "red".
I dispute that an agent which simply “experiences the sight of red” necessarily understands anything about the colour red. I also dispute that “experiencing the sight of red” is necessary to achieve an understanding of red (just as I do not need to be able to fly in order to understand aerodynamics).
Tournesol said:
If the "information processing" sense falls short of full human understanding, and I maintain it does, the argument for strong AI founders and Searle makes his case.
And I maintain it does not. I can converse intelligently with a blind person about the colour “red”, and that person can understand everything there is to know about red, without ever “experiencing the sight of red”. Your argument seems to be that “being able to see red” is necessary for an understanding of red, which is like saying “being able to fly” is necessary for an understanding of flight.
moving finger said:
If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
Tournesol said:
They are necessary to learn the meaning of sensory language ITFP.
They are aids to understanding in the context of some agents (eg human beings), because that is exactly how human beings acquire some of their information. It is not obvious to me that “the only possible way that any agent can learn is via sense-experience”, is it to you?
Tournesol said:
if you lack the requisite sense, you cannot attach meaning to sensory language ITFP. If you disagree, define "red" in such a way that a person blind from birth could understand it.
We’ve covered this one already.
May your God go with you
MF
 
  • #73
MF said:
Semantic understanding arises from symbol manipulation. I would claim that the CR could carry out such symbol manipulation.
Perhaps our definitions are a bit mixed here. Since we are talking about Searle's CR I would suggest using his definitions. According to the CR argument, symbol manipulation is a purely "syntactic" process (regarding only patterns of information), and this cannot yield a "semantic understanding" (semantic: regarding the meaning of the symbols, which is not emergent from the "syntax" [pattern] of the symbols).
The problem that I see with his reasoning, as I've stated on the other two threads regarding the CR, is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that including this addendum I agree with his definitions, though obviously not his conclusions (that syntactic information cannot yield semantic understanding).
I would have to disagree with you though that "semantic" understanding arises from symbol manipulation in and of itself. I'd have to call it "syntactic" understanding, if any sort of understanding, unless you are implying more in your definition that isn't explicitly stated. I would agree that "semantic" understanding can develop based on "syntactic" understanding coupled with experience (or memory; I'm not sure which term would be best suited for my usage here, but experience seems to fit better for me personally).
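The syntactic/semantic distinction being drawn here can be made concrete with a toy sketch. A purely "syntactic" process, in Searle's sense, maps input symbol patterns to output symbol patterns by rule, with no reference to what any symbol means. The tiny rulebook below is a hypothetical stand-in, invented for illustration; Searle's thought experiment specifies no actual rules.

```python
# Caricature of the Chinese Room's rulebook: pure pattern-to-pattern lookup.
# The process operates only on the *shape* of the input string (syntax);
# nothing in it refers to what any symbol means (semantics).

RULEBOOK = {
    "你好吗": "我很好",        # on seeing this pattern, emit that pattern
    "你会说中文吗": "会",
}

def chinese_room(input_symbols: str) -> str:
    # The "man in the room" matches shapes against the rulebook and copies
    # out the prescribed response; he need not know what either string says.
    # (The fallback reply means "please say it again".)
    return RULEBOOK.get(input_symbols, "请再说一遍")

print(chinese_room("你好吗"))  # 我很好
```

Whether a vastly larger version of this lookup, coupled with memory of past exchanges, would amount to "semantic" understanding is exactly the point under dispute in the thread.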
MF said:
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possess “sight” in order to understand Chinese?
In fact, are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
I think I should clarify what I mean by "experience". As I noted earlier I'm not entirely sure if "experience" is better a word to use for what I mean than the word "memory".
What I mean by experience is not the instant of experience as it occurs but rather the accumulation of knowledge through experience (i.e. "Moving Finger has experience in debating"). I see memory as being static imprints of information and experience as an aggregate of memories cross-referenced.
Now "sense-experience". My example of something based on basic sensory information is only a matter of trying to simplify my point and use an "experience" common and easy to understand for us humans with sight. I focus on sensory information because, as far as we know, this is the only manner in which we humans can gather information with which to develop "experience" (with regard to my earlier definition). I do not preclude the possibility of some entity (even a human) gathering information in some other fashion with which to develop an experience and understanding of something. I only mean that the information must reach that entity in some fashion. In the case of a blind person, they have other senses by which to gather information and could possibly come to understand in some sense or another what "red" is, but they will not, without help, be able to understand the experience of "red" that those with sight possess. Here is where the problem comes in...
As I stated before, the purpose of language is to communicate the thoughts in a person's head. When a person says the word "red" they generally are not referring to the particular portion of the light spectrum which corresponds to the colour red but to their own personal experience of the colour red.
I think I'm tangenting a bit here. Consider this. A blind person has had the colour red described to them in whatever manner possible. That blind person then undergoes a procedure and is endowed with vision. Based solely on that formerly blind person's knowledge gained about the colour red while blind, will that person be able to identify "red" when he/she sees it?
So with regard to your last line: our senses are, as far as we know, our only source for gaining information. Once we have that information we have it, and our senses are no longer necessary to have an understanding of that information (this in regard to the absurd argument of whether or not we still understand if put into a sensory-deprivation chamber). No one has said that your eyes are the source of understanding of the colour red, nor has anyone stated that our particular sensing organs are a unique prerequisite for understanding. So please stop with this strawman.
MF said:
The CR possesses information inside itself; the CR communicates with words.
I never said that it doesn't possess information, did I?
MF said:
Your argument seems to be based on the suggestion that the tools in the CR “lack any meaning” because their only purpose “is to shuffle about words”.
I disagree. The purpose of the tools in the CR is “to understand Chinese”.
From what does “meaning” arise?
I would suggest that “meaning” arises simply from a process of symbol manipulation.
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
I never said that the tools lack meaning, in fact I said...
TheStatutoryApe said:
The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about and hence they lack any meaning aside from the process of shuffling them about as far as the CR is concerned.
There is "syntactic" meaning there, just no "semantic". Also, I am referring to Searle's argument here specifically and its flaws. In Searle's argument, no matter the flaws, the CR possesses no understanding. It's built that way. The purpose of the CR is not to "understand Chinese"; it's to mimic the understanding of Chinese.
"Meaning" would be a difficult word to pin down and I have not tried to nor will I attempt to at this juncture. I've never stated that there exists no "meaning" in the CR. I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using. Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read chinese.
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding. It would not be able to hold a coherent and indistinguishable conversation otherwise.
 
  • #74
moving finger said:
This may be true of simple pocket calculators, but there is no reason in principle why a "learning calculating machine" could not exist.
MF
Missing my point. I never stated that they couldn't exist or that they couldn't possibly "understand". I'm not arguing the possibility of machines being able to "understand", I'm asking questions about what QuantumCarl, or anyone else here for that matter, thinks are required properties for "understanding".

Do you agree that a dynamic process is necessary for "understanding"?
Do you think that when a human works math problems there is a fundamental difference in the process between the human and a calculator(a normal calculator)? If so what?
Do you think "learning" would be the significant dynamic property required for "understanding" or something else?
 
  • #75
quantumcarl said:
Biography:
John Searle is an American philosopher who is best known for his work on the human mind and human consciousness. According to Searle, the human mind and human consciousness cannot be reduced simply to physical events and brain states.

Searle is actually a physicalist. He claims that the human brain has unique causal powers that enable real understanding, thus going beyond formal rules manipulating input.

And the thought experiment isn't called the China room, it's called the Chinese room.

Does the Chinese room possess understanding? It all depends on how you define understanding. In terms of a man understanding words, here is the definition I’ll be using:


  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.

So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means. It is interesting to note that this particular definition of understanding requires consciousness. The definition of consciousness I’ll be using goes as follows:

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:

  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Given this particular definition of understanding, it seems clear that the man in the Chinese room does not know a word of Chinese. What about the systems reply? That the Chinese room as a whole understands Chinese? Searle’s response works well here. Let the man internalize the room and become the system (e.g. he memorizes the rulebook). He may be able to simulate a Chinese conversation, but he still doesn’t understand the language.

Tournesol said:
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.

That’s what I’ve been telling moving finger. But he doesn’t seem to understand the situation.

Whether or not consciousness is a definitional quality of understanding depends on how you define understanding. In my definition, it certainly is the case (and I suspect the same is true for yours). In moving finger’s definition, that is (apparently) not the case.

moving finger said:
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.

Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious. Moving finger, please see post #210 regarding this criticism. Or to save you the trip, I can reproduce my response here.

My definition of understanding requires consciousness. Do we agree? Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that. Does your definition of understanding require consciousness? I'm not claiming that either. But understanding in the sense that I use it would seem to require consciousness. Do we agree? It seems that we do. So why have we been arguing about this?

You have claimed that “understanding requires consciousness” is circulus in demonstrando, a tautology and a fallacious argument. But please understand what’s going on here. Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrando"? Obviously not. Analytic statements are not fallacious.
 
  • #76
moving finger said:
I would argue that there are degrees of empathy. One person can show "more empathy" than another. Or do you believe that empathy is an "all or nothing affair"? Do you believe that it is simply black and white, either one has complete empathy, or one has none at all?

I'd say one is either able to empathize or not. Empathy is generally considered a human trait. However, there are humans who lack the ability through conditioning and genetics.

moving finger said:
In the same way, there are degrees of understanding. For example, agent "A" could claim to understand something about quantum physics, but "A" might nevertheless acknolwedge that "A" does not understand as much as agent "B", who is a quantum physics expert.
It would be wrong to conclude that both A and B had the same degree of understanding of quantum physics. It would also be wrong to conclude that A had no understanding and B had understanding.

You use the word understanding here like you know what it means. In my case I would be saying that "A" has more knowledge about a particular field of quantum physics than "B". Then I would say that there is or can be an understanding between "A" and "B" where they can help each other with certain areas of further study. This will boost their collective comprehension of the ideas of quantum physics.


moving finger said:
You believe that empathy comes in binary?


moving finger said:
You seem to think that complete empathy is necessary for any kind of understanding.

I have indicated nothing of the kind. I did write that I think empathy may be a component of understanding along with some other components.


moving finger said:
You are entitled to your rather strange opinion, but I do not share it.

When you read my opinion, you are sharing it. Like it or not.


moving finger said:
It can be argued that person X who understands the French language AND empathises strongly with the French people has a better understanding of the French language than person Y who also understands the French language but does not empathise strongly with the French people, but it would be wrong to conclude from this that Y does not understand the French language at all.

The act of learning French is an act of empathy with the culture that created the language. That's it.


moving finger said:
As far as I am concerned, there are "degrees of understanding", it is not a "black and white" affair. Just as there are degrees of comprehension.

If I see something that looks like a bear in the woods, I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?
 
  • #77
quantumcarl said:
I'd say one is either able to empathize or not.
You are entitled to your opinion, and we will have to agree to disagree.
quantumcarl said:
I would be saying that "A" has more knowledge about a particular field of quantum physics than "B". Then I would say that there is or can be an understanding between "A" and "B" where they can help each other with certain areas of further study. This will boost their collective comprehension of the ideas of quantum physics.
Can there be, in your opinion, any "understanding" between "A" and "B" where "A" is a Vulcan and "B" is a human being?
quantumcarl said:
When you read my opinion, you are sharing it. Like it or not.
That depends on the intended meaning of "sharing" in the context in which the word was used.
quantumcarl said:
The act of learning french is an act of empathy with the culture that created the language. Thats it.
Your opinion again. Curious.
quantumcarl said:
If I see something that looks like a bear in the woods, I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?
Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
MF
 
  • #78
Consider this my edit page please:

moving finger said:
You believe that empathy comes in binary?

I don't know where you get that idea. You're grasping at straws that I haven't put out. Perhaps you're referring to another post by someone else... or yourself.

Originally Posted by quantumcarl
If I see something that looks like a bear in the woods, I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?

"Answer" posted by MF:

Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
MF

You seem unable to answer my question. I'm asking how you define degrees of understanding. You're the one proposing they exist and yet, their definition eludes you so far.

I can empathize with your "answer a question with a question" defence because it is a difficult question.

As it goes, in my part of the world, understanding is only understanding when the math or the medical info or the dialect is true information and properly learned. If it is Bulle Shiite and improperly assimilated, then even if the person understands the jumble of information in their own head, no one else will. And after some experiences with this perplexing situation, the person will realize they actually did not understand one bit of the information in their head. The person was taught misinformation, and the information has led to a misunderstanding of the topic.

So my definition of understanding is beginning to include these elements:

QuantumCarl's guide to Understanding

Correct (true) information
Experience (of that information)
Empathy (of the information)
Consciousness (of all of the above)

Welcome Tisthammerw, you have raised some good points. I think the Chinese Room experiment has bitten off more than it can chew with regard to the definition of "understanding".

I agree that its definition belongs in the realm of relative semantics; however, this discussion has brought, and can continue, in my view, to bring the many uses of the word a little closer together. As I've always stated, terminology exists because professionals need terms that identify origin and function. Words that offer a clear picture of what they describe also offer sound progress and swift decision in the increasingly murky milieu of mankind. Thanks!
 
  • #79
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument symbol manipulation is a purely "syntactic" process (regarding only patterns of information) and that this can not yield a "semantic understanding" (semantic: regarding the meaning of the symbols which is not emergent from the "syntax"[pattern] of the symbols).
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding “by definition” then the entire CR argument becomes fallacious (circulus in demonstrando).
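For readers who want a concrete picture of what a purely "syntactic" process means here, the following is a toy sketch of my own (not Searle's actual rulebook, and the phrases chosen are arbitrary examples): a responder that maps input symbol patterns to output symbol patterns by lookup alone, with nothing in the program representing what any symbol means.

```python
# A toy "Chinese Room" responder: pure pattern-to-pattern lookup.
# Nothing here represents what any symbol *means*; the program only
# matches the shape of an input string to the shape of an output string.
RULEBOOK = {
    "你好": "你好!",           # when handed this pattern, hand back that pattern
    "你会说中文吗?": "会。",
}

def respond(symbols: str) -> str:
    """Return the reply the rulebook pairs with the input pattern,
    or a stock fallback pattern when no rule matches."""
    return RULEBOOK.get(symbols, "请再说一遍。")

print(respond("你好"))   # prints: 你好!
print(respond("猫"))     # prints: 请再说一遍。
```

Whether such a lookup process could, scaled up with a vast enough rulebook, ever amount to semantic understanding is precisely what the CR debate is about; the sketch only illustrates what "syntax without semantics" looks like in the simplest case.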
TheStatutoryApe said:
I would agree that "semantic" understanding can develop based on "syntactic" understanding coupled with experience (or memory, I'm not sure which term would be best suited for my usage here but experience seems to fit better for me personally).
Therefore symbol manipulation (with the right information/knowledge base) CAN give rise to semantic understanding? Doesn’t this contradict what you said above?
I agree information and knowledge are also required – I assumed this as a given but can state it explicitly if it helps. “Memory” and “experience” are simply particular (anthropocentric) forms of information and knowledge.
TheStatutoryApe said:
I think I should clarify what I mean by "experience". As I noted earlier I'm not entirely sure if "experience" is a better word to use for what I mean than the word "memory".
What I mean by experience is not the instant of experience as it occurs but rather the accumulation of knowledge through experience (i.e. "Moving Finger has experience in debating"). I see memory as being static imprints of information and experience as an aggregate of memories cross-referenced.
In AI terms, simple memory might be equated with information, and experience with knowledge (knowledge = information plus rules of correlation between the information).
TheStatutoryApe said:
In the case of a blind person they have other senses by which to gather information and could possibly come to understand in some sense or another what "red" is but they will not, without help, be able to understand the experience of "red" that those with sight possess. Here is where the problem comes in...
You use the curious phrase “understand the experience of red”. If I experience seeing red then all I have achieved is that I have experienced seeing red. The experience of seeing red does not in itself convey any understanding, therefore to say that one is able to “understand the experience of red” simply by seeing red is imho misleading.
I dispute that “experiencing seeing red” necessarily endows an “understanding” of red, or that an agent which cannot experience red cannot therefore understand red. Just as the experience of flying does not endow an understanding of flying, and an agent which cannot fly can nevertheless understand flight.
TheStatutoryApe said:
As I stated before the purpose of language is to communicate the thoughts in a person's head. When a person says the word "red" they generally are not referring to the particular portion of the light spectrum which corresponds to the colour red but their own personal experience of the colour red.
They can relate the word red to a particular subjective sense-experience, yes. But this in itself is not “understanding”.
If I instead say the word “X-ray” (another part of the electromagnetic spectrum), are you then saying that I do not understand what the word means because I have no sense-experience of seeing X-rays?
My understanding of red, and my understanding of X-ray, arise from the information and knowledge that I possess, which allows me to put these concepts into rational contextual relationships with other concepts to derive meaning – in other words semantics. I may be blind, but I can understand red just as much as I can understand X-ray.
TheStatutoryApe said:
A blind person has had described to them, in whatever manner possible, what the colour red is. That blind person then undergoes a procedure and is endowed with vision. Based solely on that formerly blind person's knowledge gained about the colour red while blind, will that person be able to identify "red" when he/she sees it?
This is the infamous Mary argument (Mary knows everything there is to know about red, but has never experienced seeing red). The reason the argument is fallacious is because “experiencing red” is not synonymous with “knowing what red is”. Experiencing red is experiencing red, period. Does Mary know or understand any less about X-rays because she has never experienced seeing X-rays? Of course not.
TheStatutoryApe said:
So with regard to your last line. Our senses are, as far as we know, our only source for gaining information.
Yes, but this is a peculiarly human limitation, and need not necessarily be the case in all possible agents. I can even speculate about a possible future where humans “acquire information” by direct transfer into the brain, bypassing all the sense organs. Would you say that such information is somehow invalid because it is not experiential information?
TheStatutoryApe said:
Once we have that information we have it and our senses are no longer necessary to have an understanding of that information
Senses do not convey or create understanding, they only act as conduits for information transfer.
Understanding is a process that takes place within the brain (or brain equivalent) when it processes information and knowledge in a particular way.
TheStatutoryApe said:
No one has said that your eyes are the source for understanding of the colour red,
Pardon? What did you just say above?
TheStatutoryApe said:
nor has anyone stated that our particular sensing organs are a unique prerequisite for understanding.
Excellent, so we agree that experiential information is but one possible “source of information” (not of understanding), and an agent does not necessarily need to experience seeing red (or X-rays) in order to understand what is meant by the term red (or X-rays)?
TheStatutoryApe said:
There is "syntactic" meaning there just no "semantic".
It has not been shown that there is no semantic meaning present, except possibly by definition (which as we have seen results in a fallacious argument)
TheStatutoryApe said:
The purpose of the CR is not to "understand chinese" it's to mimic the understanding of chinese.
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, please explain why a perfect simulation of a process necessarily differs in any way from the original process.
TheStatutoryApe said:
I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using.
Perhaps you have asserted this, but you have not shown it.
I can assert anything I wish, but in absence of rational and logical argument that is simply my opinion.
TheStatutoryApe said:
Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read chinese.
Where has this been shown, and why should it matter anyway?
You and I do not share “perfect definitions of all the words we use” (we dispute some meanings of words in this thread), but that does not entitle either of us to accuse the other of not understanding English.
TheStatutoryApe said:
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding.
And how would you know whether or not it is really capable of understanding, and not just (as you suggest) simulating understanding? How would you tell the difference?
TheStatutoryApe said:
It would not be able to hold a coherent and indistinguishable conversation otherwise.
The CR can hold a coherent and intelligent conversation (I'm not sure what you mean by an “indistinguishable conversation”). What do we conclude from this?
May your God go with you
MF
 
  • #80
One more note:

I've noticed that no one, other than myself, has broken down the word understanding into its two roots:

under

standing

Imagine who came up with this word and what it represented to them when they brought these two roots together.

Any thoughts?
 
  • #81
TheStatutoryApe said:
Do you agree that a dynamic process is necessary for "understanding"?
It is not that a “dynamic process is necessary for understanding”; understanding IS a dynamic process (which is why questions such as “can a pile of bricks understand” demonstrate a profound lack of understanding of the meaning of “understanding” on the part of the questioner)

TheStatutoryApe said:
Do you think that when a human works math problems there is a fundamental difference in the process between the human and a calculator(a normal calculator)? If so what?
There are many fundamental differences, yes. For example, in the case of the human agent the process is likely to be ill-defined, rather erratic and irrational (depending on the complexity of the problem), ie it will not necessarily follow a rational and easily reproducible algorithm, and it will be accompanied by many auxiliary or associated side-processes. In the case of the simple machine agent the algorithm is likely to be (in comparison) very rational, straightforward and easily reproducible. Is this what you mean?

TheStatutoryApe said:
Do you think "learning" would be the significant dynamic property required for "understanding" or something else?
Please define what you mean by “learning”. It could mean “acquiring new knowledge”. Understanding requires a knowledge-base, therefore (following the above definition) an agent with the capacity to understand, but which does not understand because it lacks a suitable knowledge-base, might achieve understanding by learning. Is this what you mean?

May your God go with you

MF
 
  • #82
moving finger said:
You believe that empathy comes in binary?
quantumcarl said:
I don't know where you get that idea.
Let me explain. In other words, Quantumcarl (QC) believes an agent either has perfect empathy (1) or it does not have any empathy at all (0). Nothing “in between” is possible? Yes or no?
moving finger said:
Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
quantumcarl said:
You seem unable to answer my question.
You misunderstand (or jump to conclusions). I “chose” not to answer your question, just as it seems that you choose not to answer most of the questions in my last post.
quantumcarl said:
I'm asking how you define degrees of understanding.
Then why didn’t you just say so?
“degrees of understanding” means “two agents can understand subject X, yet one agent may have more understanding of X than the other”.
Consider the statement “Agent A possesses more understanding of subject X than does Agent B, and yet both agents still possesses some understanding of subject X”. These are “degrees of understanding”. Quantumcarl’s philosophy would seem to be that the above statement is necessarily false (ie the situation described is impossible).
quantumcarl said:
it is a difficult question.
Perhaps for QC, not for MF. It’s a very easy question, and I just answered it. Now I’ll wait to see if you answer the questions I posed in my earlier post.
quantumcarl said:
understanding is only understanding when the math or the medical info or the dialect is true information and properly learned.
Classical error.
“True information”? What is that? Does QC possess true information, or does QC just think/believe that QC does? How would QC find out?
A) When QC saw the bear, did he possess true information?
B) When QC realized it was a jacket and not a bear, did he now possess true information?
C) When QC realized it was a blanket and not a jacket, did he now possess true information?
All we can ever possess is epistemic information. We may try to infer ontic facts from it, but we never have direct access to ontic information. Thus the best we can ever achieve is to “believe that we have true information”. In each case of A, B and C above, QC believed (at the time) that he possessed true information.
If QC insists that QC must possess true information in order to understand (as opposed to simply believing that QC possesses true information) then QC will never be able to demonstrate that QC understands, because QC will never be able to demonstrate unequivocally and objectively that QC possesses true information.
quantumcarl said:
If it is Bulle Shiite and improperly assimilated then, even if the person understands the jumble of information in their own head, no one else will.
Does QC claim to possess true information? How would QC test this?
quantumcarl said:
Correct (true) information
Does QC claim to possess true information? How would QC test this?
If QC is unable to prove that QC possesses true information, does it follow that QC does not understand anything?
quantumcarl said:
Experience (of that information)
Experience is a possible source of information. Experience provides information. But I dispute that an agent needs experience in order to understand.
May your God go with you
MF
 
  • #83
Tisthammerw said:
Does the Chinese room possess understanding? It all depends on how you define understanding.
Excellent start!
Now allow me to summarise the fundamental problem as I see it.
Take the conditional statement :
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?
How do we tackle this problem?
First, to construct an argument, we need to state our premises.
We might DEFINE UNDERSTANDING such that understanding requires consciousness. Since in this case we have not SHOWN that understanding requires consciousness, but instead we have DEFINED understanding this way, this definition then becomes one of our premises.
What this gives us is then :
1 Premise : We define understanding such that it requires consciousness
2 IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
3 Consciousness is necessary for understanding (from Premise 1)
4 Hence an agent which does not possess consciousness also does not possess understanding (from 2,3)
The above argument is an example of “circulus in demonstrando”, ie we have assumed what we wish to prove (that consciousness is necessary for understanding) in our premises, and (though the logic of the argument is perfect) it is a fallacious argument.
Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument, a statement, and a premise?
Tisthammerw said:
My definition of understanding requires consciousness. Do we agree?
I agree that you have chosen to define understanding such that it requires consciousness.
Tisthammerw said:
Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that.
Excellent.
Tisthammerw said:
Does your definition of understanding require consciousness? I'm not claiming that either.
Excellent.
Tisthammerw said:
But understanding in the sense that I use it would seem to require consciousness. Do we agree?
You have defined it so
Tisthammerw said:
It seems that we do. So why have we been arguing about this?
You misunderstand. We are NOT arguing about your premise “understanding requires consciousness”.
We seem to disagree on whether the following ARGUMENT is fallacious or not :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
This argument is a perfect example of “circulus in demonstrando”, ie the conclusion of the argument is already assumed in the premises, which is accepted in logic as being a fallacious argument.
Tisthammerw said:
You have claimed that “understanding requires consciousness” is circulus demonstrato, a tautology and a fallacious argument.
Again, I have NOT claimed the statement “understanding requires consciousness” is “circulus in demonstrando” – you seem confused about the difference between a statement and an argument.
Let me repeat again :
The following argument is an example of “circulus in demonstrando”, and is fallacious :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
Tisthammerw said:
Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrado"?
Let’s look at it logically.
“all bachelors are unmarried” is not necessarily an argument. It could be a statement or a premise, or both.
To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.

Let's do this.
First one must define what one means by the terms “bachelor”, and “unmarried”. (you may object "this is obvious", but that is beside the point. Strictly all terms in an argument must be clearly defined and agreed).
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”
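To make the shape of the point explicit, here is an editorial schematic (my rendering, not MF's own notation) that covers both the bachelor argument and the consciousness argument:

```latex
% Schematic of the circular argument form.
% Reading 1: P = "a bachelor is an unmarried male",
%            C = "all bachelors are unmarried".
% Reading 2: P = "understanding requires consciousness",
%            C = "a non-conscious agent cannot understand".
\begin{align*}
&\textbf{Premise:}\quad P \ \text{(true by definition)}\\
&\textbf{Conclusion:}\quad C\\
&\text{where } C \text{ merely restates part of } P
\end{align*}
% The inference is valid, but because C is already contained in P
% the argument establishes nothing beyond what it assumed:
% circulus in demonstrando.
```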

Check it out yourself in any good book on logic, if you don’t believe me.
May your God go with you
MF
 
Last edited:
  • #84
Tournesol said:
But consciousness is a defintional quality of understanding
Tisthammerw said:
That’s what I’ve been telling moving finger.
Tisthammerw said:
Whether or not consciousness is a definitional quality of understanding depends on how you define understanding. In my definition, it certainly is the case (and I suspect the same is true for yours). In moving finger’s definition, that is (apparently) not the case.
Let’s try to take this one step at a time, to see if we can make progress.

Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?

“whether or not consciousness is necessary for understanding is a matter of definition

True or false?

MF

(ps MF says imho it is true)
 
  • #85
Hi quantumcarl

I am conscious that our debate often gets so convoluted that we maybe lose sight of exactly what the issues are that we are debating.

Just to be sure that I properly understand (or should that be comprehend?) your position, and to make sure that I am not attacking something that is simply in my imagination, could you please examine the following statement :

Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

Many thanks

MF
 
  • #86
quantumcarl said:
So my definition of understanding is beginning to include these elements:
QuantumCarl's guide to Understanding
Correct (true) information
Experience (of that information)
Empathy (of the information)
Consciousness (of all of the above)

Since we seem to be in "I'll show you mine if you show me yours" mode, then here is my quick shot at defining the verb "To Understand" :

To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")

To know and to comprehend implies in turn possessing, manipulating, and making rational use of syntactic and semantic information and knowledge, to translate one meaning or interpretation into another according to a set of rules or laws.

Note I have (deliberately) omitted any reference to consciousness, awareness, experience, truth, and empathy, because (imho) I do not see any of these as necessary elements of understanding.
(this is not to say that, for example, empathy might not emerge from understanding - indeed it might - but I do not see empathy as a necessary prerequisite for understanding)

I don't expect that QC, Tournesol or Tisthammerw will agree with the definition - but that's OK, because I don't agree with their definitions either :smile:

May your God go with you

MF
 
Last edited:
  • #87
moving finger said:
Hi quantumcarl

I am conscious that our debate often gets so convoluted that we maybe lose sight of exactly what the issues are that we are debating.

Just to be sure that I properly understand (or should that be comprehend?) your position, and to make sure that I am not attacking something that is simply in my imagination, could you please examine the following statement :

Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

Many thanks

MF

That's how I see it... as in... true.

Because, until a human or simulated human or partially simulated human or machine grasps a full understanding of a subject... there is no proper understanding of the subject.

Take the 5 blind men and the elephant as an example. Each man has his opinion and his set of data to describe the elephant. One thinks it's like a snake. One thinks it's like sandpaper. One thinks it's like a hairy rhino, and so on. None of them understand that this is an elephant. What they do understand is that they are attempting to discern what the animal or phenomenon is... (because each man has consciousness and an empathy toward the function of researching the subject).

When the 5 men get together and discuss what they have gathered from their experience, they form a knowledge constructed from each other's accounts. However, I doubt that they have a conscious understanding of the elephant... that it could squash them all in a second, that it eats 2 tonnes of leaves a day and so on.

I believe there are steps toward understanding something and they include the evolution of the human brain. I believe there are steps toward learning empathy... and they include experiencing the ownership of a human brain.
 
  • #88
moving finger said:
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?

It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.


How do we tackle this problem?
First, to construct an argument, we need to state our premises.
We might DEFINE UNDERSTANDING such that understanding requires consciousness. Since in this case we have not SHOWN that understanding requires consciousness, but instead we have DEFINED understanding this way

Whether or not understanding requires consciousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism. After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.


Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument, a statement, and a premise?

Yes, but I also understand that you have phrased my analytic statement in the form of an argument. This can be done to justify the analytic statement.


I agree that you have chosen to define understanding such that it requires consciousness.

Okay, so we agree that “understanding requires consciousness” (given the definitions I am using) is an analytic statement.

You misunderstand. We are NOT arguing about your premise “understanding requires consciousness”.
We seem to disagree on whether the following ARGUMENT is fallacious or not :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
This argument is a perfect example of “circulus in demonstrando”, ie the conclusion of the argument is already assumed in the premises, which is accepted in logic as being a fallacious argument.

Well, in the context of my analytic statement “understanding requires consciousness” here is the “argument” I am using:

The first premise is the definition of understanding I'll be using (in terms of a man understanding words):

  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.

So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means.

The second premise is the definition of consciousness I’ll be using:

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

My conclusion: understanding requires consciousness.

To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:

  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms. You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.


Tisthammerw said:
Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrado"?

Let’s look at it logically.
“all bachelors are unmarried” is not necessarily an argument. It could be a statement or a premise, or both.

Nonetheless it can be phrased as an argument, just as you yourself have done with my analytic statement “understanding requires consciousness” and just as I have done above.


To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.

Let's do this.
First one must define what one means by the terms “bachelor”, and “unmarried”. (you may object "this is obvious", but that is beside the point. Strictly all terms in an argument must be clearly defined and agreed).
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”

You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.” I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence on phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true; (b) the argument is perfectly sound anyway; and (c) this analytic statement is itself a premise to a larger and more relevant argument that you seem to be avoiding: the one regarding the Chinese room thought experiment.

Let’s take my example above regarding the “understanding requires consciousness” argument above. The conclusion logically follows from the premises, and the premises are true. The argument is sound. Doesn’t it seem odd then to call the argument “fallacious”?

Additionally, what about the matter at hand? That of the Chinese room thought experiment? Note what I said earlier:

Tisthammerw said:
Given this particular definition of understanding, it seems clear that the man in the Chinese room does not know a word of Chinese. What about the systems reply? That the Chinese room as a whole understands Chinese? Searle’s response works well here. Let the man internalize the room and become the system (e.g. he memorizes the rulebook). He may be able to simulate a Chinese conversation, but he still doesn’t understand the language.

For the next post:

moving finger said:
Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?

“whether or not consciousness is necessary for understanding is a matter of definition

True or false?

If I am understanding you correctly, then the answer is true: whether or not consciousness is necessary for understanding depends on how you define “consciousness” and “understanding.”

moving finger said:
here is my quick shot at defining the verb "To Understand" :

To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")

There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives the entity possesses consciousness (using my definition of the word “consciousness”). BTW, I use “perceive” definition 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?
 
Last edited:
  • #89
I don't mean to butt in.. but i am going to...(throws ass in) this is all turning into a clever game of "a play on words". I think you guys should try and work together a little bit better instead of always trying to prove one another wrong on seemingly aggressive stances. Might get somewhere. Just my 2 cents, take it or leave it. Only trying to do what's morally right... but then again... what's the meaning of moral correctness? lol
 
  • #90
*sigh*
MF, you seem not to be "understanding" quite a bit of what I am saying. In several instances you seem to think I am saying or implying almost the exact opposite of what I mean. Hopefully I can clarify a few things here...

moving Finger said:
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument, symbol manipulation is a purely "syntactic" process (regarding only patterns of information), and this cannot yield "semantic understanding" (semantic: regarding the meaning of the symbols, which is not emergent from the "syntax" [pattern] of the symbols).
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding “by definition” then the entire CR argument becomes fallacious (circulus in demonstrando).
I realize that I was referring to Searle's conclusions, but I was referring to them along with his definitions (which I have underlined in the above quote this time around). Also, I do not fully agree with his definitions and conclusions, as I pointed out here...
The problem that I see with his reasoning, as I've stated on the other two threads regarding the CR, is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that, including this addendum, I agree with his definitions though obviously not his conclusions (that syntactic information cannot yield semantic understanding).
But for one reason or another you do not seem to have taken notice of this or the fact that I am simply restating Searle's argument and not my own when you say this here...
Therefore symbol manipulation (wth the right information/knowledge base) CAN give rise to semantic understanding? Doesn’t this contradict what you said above?
No it doesn't when you actually pay attention to what I am saying. It contradicts Searle's argument which I was pointing out I do not agree with and what about it I don't agree with.

MF said:
They can relate the word red to a particular subjective sense-experience, yes. But this in itself is not “understanding”.
If I instead say the word “X-ray” (another part of the electromagnetic spectrum), are you then saying that I do not understand what the word means because I have no sense-experience of seeing X-rays?
My understanding of red, and my understanding of X-ray, arise from the information and knowledge that I possess, which allows me to put these concepts into rational contextual relationships with other concepts to derive meaning – in other words, semantics. I may be blind, but I can understand red just as much as I can understand X-ray.
___________________________________________________

Yes, but this is a peculiar human limitation, and need not necessarily be the case in all possible agents. I can even speculate of a possible future where humans “acquire information” by direct transfer into the brain, bypassing all the sense organs. Would you say that such information is somehow invalid because it is not experiential information?
Here you seem very much to ignore the definition I explicitly laid out for you on what I mean by experience. I gave that definition specifically to avoid this problem.
In response to your question above: my definition of "experience", which you alternately refer to as "knowledge", accepts this as valid, it being a manner of acquiring and cross-referencing information.

This here is a misunderstanding I thought was funny...
MF said:
TheStatutoryApe said:
Once we have that information we have it and our senses are no longer necessary to have an understanding of that information
Senses do not convey or create understanding, they only act as conduits for information transfer.
Understanding is a process that takes place within the brain (or brain equivalent) when it processes information and knowledge in a particular way.
TheStatutoryApe said:
No one has said that your eyes are the source for understanding of the colour red,
Pardon? What did you just say above?
You are more or less saying what I had been stating, though you have taken the quotes a bit out of context. I was saying that the eyes are only necessary for receiving the information, not for understanding, and that once the information has been received the eyes are no longer necessary. Or, more exactly, that the acquisition of information is necessary to understanding but is not the process of understanding in and of itself. Elsewhere in my post I even stated that there are other manners by which a person can acquire information about "red" which are still experiential in nature (by my definition of experience).

Pretty much the majority of your last post has shown you seeing disagreements where there are none. One thing I can think of that may help in this would be if you were to treat each of my points as a whole rather than trying to dissect them line by line and out of context.
Now let's see if we can wade through the rest of your post...

MF said:
TheStatutoryApe said:
There is "syntactic" meaning there just no "semantic".
It has not been shown that there is no semantic meaning present, except possibly by definition (which as we have seen results in a fallacious argument)

TheStatutoryApe said:
The purpose of the CR is not to "understand Chinese"; it's to mimic the understanding of Chinese.
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, please explain why a perfect simulation of a process necessarily differs in any way from the original process.

TheStatutoryApe said:
I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using.
Perhaps you have asserted this, but you have not shown it.
I can assert anything I wish, but in absence of rational and logical argument that is simply my opinion.

TheStatutoryApe said:
Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read Chinese.
Where has this been shown, and why should it matter anyway?
You and I do not share “perfect definitions of all the words we use” (we dispute some meanings of words in this thread), but that does not entitle either of us to accuse the other of not understanding English.

TheStatutoryApe said:
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding.
And how would you know whether or not it is really capable of understanding, and not just (as you suggest) simulating understanding? How would you tell the difference?

TheStatutoryApe said:
It would not be able to hold a coherent and indistinguishable conversation otherwise.
The CR can hold a coherent and intelligent conversation. (not sure what you mean by “indistinguishable conversation”?). What do we conclude from this?
First off, I am not arguing my own position here; I am arguing that of Searle. His CR is built specifically to fail at understanding, hence it does not understand. While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possess understanding, I do not disagree with this. The only thing which I am discussing here is Searle's original unaltered Chinese Room.
Searle's original unaltered Chinese Room is built in such a way that it is purely reactive and only spits out preformulated responses to predetermined questions. Searle creates a hypothetical manual which supposedly can contain enough preformulated responses and predetermined questions that the responses you get from it are indistinguishable from the responses you would get from a human.
The problem here is that this is patently impossible. The number of possible questions and answers is astronomical: it would greatly exceed the number of possible positions on a chess board, and the mapping of a full chess game tree itself would take thousands of years. This is not even taking into account the time it would take to program which responses are the best responses to which questions, or how long it would take a computer to search such a database for each question and its proper response.
So in reality we would have to conclude that if a computer can speak indistinguishably from a human then it must be capable of "understanding", but this does not mean that Searle's CR actually possesses understanding. Searle's CR is simply a hypothetical model that could not actually be achieved in reality.
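For concreteness, the "preformulated responses" model described above can be sketched as a bare lookup table. This is only an illustrative toy, not Searle's own construction (the rulebook entries and fallback string are invented), but it makes both points visible: the room matches symbols to symbols with no grasp of their meaning, and the table needed to cover every possible question explodes combinatorially.

```python
# A toy "Chinese Room" as a pure lookup table. All entries are hypothetical
# placeholders; the point is that responses are produced by symbol matching
# alone, with no interpretation of what the symbols mean.

RULEBOOK = {
    "你好吗?": "我很好。",      # "How are you?" -> "I am well."
    "你会说中文吗?": "会。",    # "Do you speak Chinese?" -> "Yes."
}

def chinese_room(question: str) -> str:
    """Return a canned response by symbol matching; no meaning is involved."""
    return RULEBOOK.get(question, "请再说一遍。")  # fallback: "Please say that again."

# A rough sense of the blow-up: with a 3,000-character vocabulary and questions
# of up to 20 characters, the number of distinct question strings is bounded by:
upper_bound = sum(3000 ** n for n in range(1, 21))
print(f"~10^{len(str(upper_bound)) - 1} possible question strings")  # ~10^69
```

The estimate is crude (most strings are gibberish), but even a tiny fraction of 10^69 dwarfs any table that could be written down, which is the practical objection raised above.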
 
  • #91
moving finger said:
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?
Tisthammerw said:
It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.
And if one uses another definition of these terms then the answer could be no.
This (with respect) tells us nothing useful, except that the answer to the question depends on one’s definition of understanding. Period.
Tisthammerw said:
Whether or not understanding requires consciousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism.
What criticism is that? The point I am trying to make is that “the conclusion depends on the definition”. Tisthammerw can use his definition and conclude that understanding is impossible in a non-conscious agent, MF can use his definition and conclude understanding is possible in a non-conscious agent. Each conclusion is equally valid. This gets us nowhere.
Tisthammerw said:
After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.
First define your terms, then construct your argument. Then ask yourself whether or not it is a fallacious argument.
Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
moving finger said:
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise in your argument.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument and a statement and a premise?
Tisthammerw said:
Yes, but I also understand that you have phrased my analytic statement in the form of an argument. This can be done to justify the analytic statement.
I can make the statement “the moon is made of cheese”. Is that statement true or false? How would we know? The only way to show whether it is true or false is to construct an argument to show how I arrive at the statement “the moon is made of cheese”. If my argument is “the moon is made of cheese because I define cheese as the main ingredient of moons” then the argument is circular, and fallacious.
Can you construct a non-fallacious (ie non-circular) argument to show whether your statement “understanding requires consciousness” is true or false?
Tisthammerw said:
in the context of my analytic statement “understanding requires consciousness” here is the “argument” I am using:
The first premise is the definition of understanding I'll be using (in terms of a man understanding words):
* The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.
So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means.
The second premise is the definition of consciousness I’ll be using:
* Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
My conclusion: understanding requires consciousness.
The conclusion is contained in the premises, hence circular, hence the argument is fallacious.
Tisthammerw said:
Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms.
Tisthammerw “asserts” that the premises are true – MF disputes that the premises are true.
Regardless of whether the premises are true or not, the argument as it stands is still circular, hence still fallacious.
The argument “all bachelors are unmarried because a bachelor is defined as an unmarried man” does not necessarily contain false premises, but the argument is still circular, hence fallacious.
One CANNOT prove anything useful with a circular argument, because the conclusion is already contained in the premises. This is the whole reason why circular arguments are fallacious.
Tisthammerw said:
You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.
The argument is fallacious because it is circular, by definition.
The veracity of your premises is a matter of opinion. My opinion is different to yours.
moving finger said:
To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.
Let's do this.
First one must define what one means by the terms “bachelor”, and “unmarried”. (you may object "this is obvious", but that is beside the point. Strictly all terms in an argument must be clearly defined and agreed).
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"If we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”
Tisthammerw said:
You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.”
To draw a conclusion from premises, one must make an argument. If you think it is “strange” to construct an argument in logic in order to draw conclusions then I must ask where did you learn your logic? How else would you draw a conclusion?
The statement “understanding requires consciousness” is just that – a statement. Tisthammerw asserts this statement is true. MF asserts that it is not necessarily true. How can we know who is right?
Tisthammerw said:
I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence of phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true
Where have I admitted that the stand-alone statement “understanding requires consciousness” is true?
Tisthammerw said:
(b) the argument is perfectly sound anyway
The argument is circular, hence by definition fallacious. Have you really studied circular arguments? They are generally accepted in logic as being fallacious. Perhaps you follow different rules of logic to the rest of us?
Tisthammerw said:
(c) this analytical statement is itself a premise to larger and more relevant argument that you seem to be avoiding: the one regarding the Chinese room thought experiment.
I have lost count of the number of times that I have said “I disagree with your premise”. I am tired of repeating it.
Tisthammerw said:
Let’s take my example above regarding the “understanding requires consciousness” argument. The conclusion logically follows from the premises, and the premises are true.
There you go again. What did I just say? What part of “I disagree with your premise” is unclear?
Tisthammerw said:
The argument is sound. Doesn’t it seem odd, then, to call the argument “fallacious”?
A circular argument is fallacious, by definition. You have agreed that your argument is circular.
moving finger said:
Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?
“whether or not consciousness is necessary for understanding is a matter of definition”
True or false?
Tisthammerw said:
If I am understanding you correctly, then the answer is true: whether or not consciousness is necessary for understanding depends on how you define “consciousness” and “understanding.”
Thank you. We do agree on this. That is a step forward.
moving finger said:
here is my quick shot at defining the verb "To Understand" :
To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")
Tisthammerw said:
There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives, the entity possesses consciousness (using my definition of the word “consciousness”).
What about your definition of the word “perceive”?
BTW, I use “perceive” definitions 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?
Is that the online version of the dictionary you refer to? With respect, there are many more definitions of “to perceive” than are contained in this dictionary. If one consults much larger and more comprehensive dictionaries one will find a number of alternative definitions of the verb.

In the Webster dictionary to perceive is defined in a number of ways, one of them being :
To obtain knowledge of through the senses; to receive impressions from by means of the bodily organs; to take cognizance of the existence, character, or identity of, by means of the senses; to see, hear, or feel;

In another dictionary (The New Penguin English Dictionary 2000) I find the following :
To perceive : To become aware of something through the senses, esp to see or observe; to regard somebody or something as something specified (eg she is perceived as being intelligent).

In yet another (The Collins Dictionary) I find :
To perceive : To become aware of (something) through the senses, to recognise or observe
Perception : The process by which an organism detects and interprets information from the external world by means of sensing receptors

There are thus very clear and accepted meanings of “to perceive” and "perception" which do not imply conscious perception.

The word perceive actually derives from the Latin “percipere”, which means “to seize” or “to take”. Again, there is no requirement for consciousness contained in the roots of the word.

In psychology and the cognitive sciences, the word perception (= the act of perceiving) is defined as “the process of acquiring, interpreting, selecting, and organising (sensory) information”. It follows from this that “to perceive” is to acquire, interpret, select, and organise (sensory) information.

--------------------------------------------------------------

Actually, having cogitated on this issue for a little longer, I do not see that "perception" (ie the processing of data received from external sense-receptors) is a necessary part of understanding per se. I can imagine a completely self-contained agent which "understands Chinese", but has no sense-receptors at all - hence it could not "perceive", and yet could still claim to understand Chinese. Therefore on reflection I now delete the requirement "to perceive" from my list of "necessary items" for understanding. :smile:

(on the other hand, there are other possible meanings to "perceive", for example "to perceive the truth of something, such as a statement" - an agent which understands is able to "perceive the truth of" things, therefore it necessarily perceives in this sense of the word)

With respect, we can argue about this until the cows come home. In the end, Tournesol is right (I don’t find myself agreeing with him often, so this is wonderful) – there is no “right” or “wrong” definition of a word in language, there are only more or less accepted definitions. And there are perfectly acceptable definitions of “to perceive” which do not associate perception with consciousness.

with respect

MF
 
Last edited:
  • #92
dgoodpasture2005 said:
I don't mean to butt in.. but i am going to...(throws ass in) this is all turning into a clever game of "a play on words". I think you guys should try and work together a little bit better instead of always trying to prove one another wrong on seemingly aggressive stances. Might get somewhere. Just my 2 cents, take it or leave it.
dgoodpasture2005 is right.

What we have is the following :
Tisthammerw defines understanding such that consciousness is necessary for understanding
Quantumcarl defines understanding such that "being human" is necessary for understanding
MF defines understanding such that neither consciousness nor "being human" is necessary for understanding
We could call these TH-understanding, QC-understanding and MF-understanding respectively.
Only conscious agents can have TH-understanding.
Only human agents can have QC-understanding.
and the Chinese Room may have MF-understanding.

May your God go with you

MF
 
  • #93
TheStatutoryApe said:
Hopefully I can clarify a few things here...
OK, let’s try to understand each other.
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument, symbol manipulation is a purely "syntactic" process (regarding only patterns of information), and this cannot yield "semantic understanding" (semantic: regarding the meaning of the symbols, which is not emergent from the "syntax" [pattern] of the symbols).
moving finger said:
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding “by definition” then the entire CR argument becomes fallacious (circulus in demonstrando).
TheStatutoryApe said:
I realize that I was referring to Searle's conclusions, but I was referring to them along with his definitions (which I have underlined in the above quote this time around). Also, I do not fully agree with his definitions and conclusions, as I pointed out here...
OK. Unfortunately I do not agree that the process of symbol manipulation (associated with the required information and knowledge) is necessarily a purely syntactic process, hence I disagree with his definition. Semantics is also all about symbol manipulation (just a different level or order of symbol manipulation to the syntactic level).
TheStatutoryApe said:
The problem that I see with his reasoning, as I've stated on the other two threads regarding the CR, is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that, including this addendum, I agree with his definitions though obviously not his conclusions (that syntactic information cannot yield semantic understanding).
OK, we seem to agree on the conclusion, but possibly for slightly different reasons.
The problem I have is understanding the following : If you agree with his definition “symbol manipulation is PURELY syntactic”, how can you then conclude that symbol manipulation gives rise to semantic understanding?
TheStatutoryApe said:
I was saying that the eyes are only necessary for receiving the information, not for understanding, and that once the information has been received the eyes are no longer necessary. Or, more exactly, that the acquisition of information is necessary to understanding
We seem to agree on most of this. However, I would not say that the “acquisition of” information is necessary to understanding; rather, I would say that the “possession of” information is necessary to understanding.
There are still areas where we seem to misunderstand each other, for example :
TheStatutoryApe said:
The purpose of the CR is not to "understand Chinese"; it's to mimic the understanding of Chinese.
moving finger said:
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, Please explain why a perfect simulation of a process necessarily differs in any way from the original process?
May I ask some questions, to improve our understanding?
Do you believe the CR understands the meanings of the words it is using?
Do you believe that the words mean to the CR what they mean to people who speak/read Chinese?
Do you believe the CR can hold a coherent and intelligent conversation in Chinese?
TheStatutoryApe said:
First off, I am not arguing my own position here; I am arguing that of Searle.
Whichever position you argue, your argument must be consistent and rigorous. Perhaps (with respect) some of the confusion between us arises because you on the one hand “argue the position of Searle”, but at the same time, when I disagree with this argument, your response is often to refer me to “your own definitions” of terms rather than Searle’s? Am I debating with TheStatutoryApe here, or with Searle’s stand-in? :smile:
TheStatutoryApe said:
His CR is built specifically to fail at understanding hence it does not understand.
I agree – but would phrase it slightly differently. He chooses his definitions associated with understanding such that understanding requires conscious awareness – which ensures that any agent that is not consciously aware fails to understand. By definition.
TheStatutoryApe said:
While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possess understanding, I do not disagree with this.
Altered in what way?
TheStatutoryApe said:
The only thing which I am discussing here is Searle's original unaltered Chinese Room.
Searle's original unaltered Chinese Room is built in such a way that it is purely reactive and only spits out preformulated responses to predetermined questions. Searle creates a hypothetical manual which supposedly can contain enough preformulated responses and predetermined questions that the responses you get from it are indistinguishable from the responses you would get from a human.
The problem here is that this is patently impossible. The number of possible questions and answers is astronomical: it would greatly exceed the number of possible positions on a chess board, and the mapping of a full chess game tree itself would take thousands of years. This is not even taking into account the time it would take to program which responses are the best responses to which questions, or how long it would take a computer to search such a database for each question and its proper response.
The CR is supposed to be a thought-experiment, to argue matters of principle. Whether it is practical or even possible to build such a room “in reality” or not is beside the point.
TheStatutoryApe said:
So in reality we would have to conclude that if a computer can speak indistinguishably from a human then it must be capable of "understanding", but this does not mean that Searle's CR actually possesses understanding. Searle's CR is simply a hypothetical model that could not actually be achieved in reality.
My above remark again applies here.
With respect
MF
 
  • #94
moving finger said:
Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

quantumcarl said:
That's how I see it... as in... true.

OK.

Now let us try a thought-experiment, which leads to another question.

Let us accept QC’s definition of understanding, ie that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey".

Suppose Mary comes along and claims that she understands the French language. (Mary is human by the way). Would you agree that it is possible that the statement “Mary understands the French language” could be a true statement?

(I assume you will answer “yes” to the above).

Thus, from QC’s definition of understanding, we have :

Mary EITHER has understanding of the French language, OR has no understanding of the French language - there are NO "shades of grey"

In other words, the statement “Mary has understanding of the French language” is either true or false.

Now my question – how would QC propose to test whether the statement “Mary has understanding of the French language” is true or false?

With respect

MF
 
  • #95
Tentative Conclusion

Here is my tentative conclusion to our review of Searle's Chinese Room Thought experiment.

Searle has used incorrect terminology to describe the central function of the Chinese Room.

Therefore, Searle's thought experiment is invalid.

I will concede, however, that the Chinese Room is capable of translation. In fact, I believe that is the word they're looking for when referring to "understanding" Chinese... in the case of Searle's Chinese Room.

In order to translate a language or art form or calligraphy... etc... one does not, necessarily, have to understand what one is translating.

Take, for instance, the translation of code during wartime. A translator will translate the code into another code, which is passed on to a higher authority who has an understanding of the translated but secondarily encoded code.
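The wartime example above can be made concrete with a toy re-encoding table. The code words and their mappings below are invented for illustration; the point is that the "translator" maps symbol to symbol by lookup alone and never decodes the underlying message.

```python
# A toy translator that re-encodes a message from one code into another.
# All code words are hypothetical; the translator needs no understanding
# of what the message is about, only the mapping table.

CODE_A_TO_B = {"ALPHA": "Z1", "BRAVO": "Z2", "CHARLIE": "Z3"}

def reencode(message):
    """Map each code word of code A to its counterpart in code B."""
    return [CODE_A_TO_B[word] for word in message]

print(reencode(["BRAVO", "ALPHA"]))  # -> ['Z2', 'Z1']
```

Only the "higher authority" holding the key to code B can recover the meaning; the translation step itself is pure symbol substitution.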



The word "understand" has been bastardized in recent times and does not belong, nor is it necessary, as a description of comprehending a language or math or logistics etc...

--------------------------------------------------------------

To understand means to literally stand under a topic... as in to stand in its shoes (an empathic metaphor) and become a part of its origins, functions, malfunctions and so on. One uses consciousness, empathy, previous and present experiences and knowledge of others' experiences to attain understanding. Until a person has discovered the whole story and all those stories that have made the story whole, they will not understand the story. And, even then, they will only understand the story according to their relative point of view.

Once more, my initial attempt at a conclusion for this thread...

Searle's CR thought experiment - its conclusion and its critics - erred in utilizing a term ("understanding") that, contextually, did not belong as part of the experiment. The term "translation" or "translating" would have been sufficient.

Do you think the CR can translate Chinese?

(Try "Babel Fish" at "AltaVista" for further research.)
 
  • #96
quantumcarl said:
In order to translate a language or art form or calligraphy... etc... one does not, necessarily, have to understand what one is translating.
This is an interesting statement from you, QC.
I agree that understanding is not a prerequisite for translation, but I would suggest that to perform an accurate translation of one complex language into another it helps to understand both languages.
In the same way, I argue that empathy is not a prerequisite for understanding, but in order to properly and fully understand a language it helps to have empathy with the people using that language.
quantumcarl said:
Take, for instance, the translation of code during wartime. A translator will translate the code into another code, which is passed on to a higher authority who has an understanding of the translated but secondarily encoded code.
imho this is exactly the purpose of each neuron or group of neurons in the brain - to accept incoming information (which it does not understand), to process it, then to pass information on to other neurons and groups of neurons. There is no microscopic part of the brain which "understands" what it is doing, just as none of the individual translators in your example "understands" what it is doing. Each component is simply following deterministic rules to generate output from input.
quantumcarl said:
One uses consciousness, empathy, previous and present experiences and knowledge of other's experiences to attain understanding. Until a person has discovered the whole story and all those stories that have made the story whole, they will not understand the story. And, even then, they will only understand the story according to their relative point of view.
imho consciousness and empathy are associated with understanding in humans, but it does not follow from this association that either consciousness or empathy are necessary to achieve understanding in all agents.
quantumcarl said:
Once more, my initial attempt at a conclusion for this thread...
Searle's CR thought experiment - its conclusion and its critics - erred in utilizing a term ("understanding") that, contextually, did not belong as part of the experiment. The term "translation" or "translating" would have been sufficient.
imho I disagree. When QC asks MF a question, and MF replies, one could argue that “all that happens is a process of translation – in the agent MF - from the question to the answer”, but though strictly correct, this would be too simplistic.
If one wishes to say that the CR is an example of no more than a translating machine, then I would assert that humans are also no more than a translating machine.
quantumcarl said:
Do you think the CR can translate Chinese?
"translate Chinese" into what?
By definition the CR understands only Chinese. It does not understand any other language.
MF
 
  • #97
moving finger said:
This is an interesting statement from you, QC.
I agree that understanding is not a prerequisite for translation, but I would suggest that to perform an accurate translation of one complex language into another it helps to understand both languages.

A person who uses Mongolian as a language can translate French into English using translation texts without ever possessing an understanding of either French or English, or of the origins of those two languages.


moving finger said:
In the same way, I argue that empathy is not a prerequisite for understanding, but in order to properly and fully understand a language it helps to have empathy with the people using that language.

What is the basis of your "argument"?
moving finger said:
imho
this is exactly the purpose of each neuron or group of neurons in the brain - to accept incoming information (which it does not understand), to process it, then to pass information on to other neurons and groups of neurons. There is no microscopic part of the brain which "understands" what it is doing, just as none of the individual translators in your example "understands" what it is doing. Each component is simply following deterministic rules to generate output from input.

In my opinion you are not a neuroscientist and have little or no knowledge or experience to back up this statement. I could be wrong, but even neuroscientists are unsure of how or why neurons behave in certain ways and are able to perform certain functions, in addition to performing functions that were previously performed by other, specified neurons.

moving finger said:
imho consciousness and empathy are associated with understanding in humans, but it does not follow from this association that either consciousness or empathy are necessary to achieve understanding in all agents.

If you are saying that understanding is not "guilty by association", that is, that it does not rely on empathy and consciousness to be defined, then, in my opinion, you are wrong.


moving finger said:
If one wishes to say that the CR is an example of no more than a translating machine, then I would assert that humans are also no more than a translating machine.

And that would be your opinion and your way of dealing with my statement. What I would add to your statement is that humans are organic translators with empathy and consciousness. And this is what distinguishes our method of translation from a machine's. We call it "understanding".

moving finger said:
"translate Chinese" into what?
By definition the CR understands only Chinese. It does not understand any other language.

MF

Thanks for reminding me.

The experiment is in error because the CR has a human (aka: conscious being) in the room... and it is also for that reason that I declare the Chinese Room Thought Experiment in error and defunct.

If the whole purpose of the experiment is to determine the difference between how a machine processes information and how a human processes information... the experiment lacks a control by using a human in the CR.
 
  • #98
It is a test based on an explanation; I am saying we have to solve the hard problem first, before we can have a genuine test.
In other words, in absence of an explanation, it makes no sense to test for consciousness? There is thus no logical basis for Searle’s conclusion that the CR does not possess consciousness, correct?
In the absence of an objective explanation there is no objective way of testing for consciousness. Of course there is still a subjective way; if you are conscious, the very fact that you are conscious tells you you are conscious. Hence Searle puts himself inside the room.
If manipulating symbols is all there is to understanding, and if consciousness is part of understanding, then there should be a conscious awareness of Chinese in the room (or in Searle's head, in the internalised case).
That is a big “if”. For Searle’s objection to carry any weight, it first needs to be shown that consciousness is necessary for understanding. This has not been done (except by “circulus in demonstrando”, which results in a fallacious argument)
That consciousness is part of understanding is established by the definitions of the words and the way language is used. Using words correctly is not fallaciously circular argumentation: "if there is a bachelor in the room, there is a man in the room" is a perfectly valid argument. So is "if there is a unicorn in the room, there is a horned animal in the room". The conceptual, definitional, logical correctness of an argument (validity) is a separate issue to its factual, empirical correctness (soundness).
If the conclusion to an argument were not in some way contained in its premises, there would be no such things as logical arguments in the first place.
The problem with circular arguments is not that the premise contains the conclusion; the problem is that it does so without being either analytically true (e.g. by definition) or synthetically true (factually, empirically).
You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding... unless you claim that human understanding has nothing to do with consciousness either.
I am claiming that consciousness is not necessary for understanding in all possible agents. Consciousness may be necessary for understanding in humans, but it does not follow from this that this is the case in all possible agents.
To conclude from this that “understanding without consciousness is half baked” is an unsubstantiated anthropocentric (one might even say prejudiced?) opinion.
As I have stated several times, the intelligence of an artificial intelligence needs to be pinned to human intelligence (albeit not in a way that makes it trivially impossible) in order to make the claim of "artificiality" intelligible. Otherwise, the computer is just doing something -- something that might as well be called information-processing, or symbol manipulation. No-one can doubt that computers can do those things, and Searle doesn't either. Detaching the intelligence of the CR from human intelligence does nothing to counteract the argument of the CR; in fact it is suicidal to the strong AI case.
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.
Is "bachelors are unmarried because bachelors are unmarried" viciously circular too? Or is it -- as every logician everywhere maintains -- a necessary, analytical truth?
Quote:
Originally Posted by Tournesol
If you understand something , you can report that you know it, explain how you know it. etc. That higher-level knowing-how-you-know is consciousness by definition.
I dispute that an agent needs to in detail “know how it knows” in order for it to possesses an “understanding of subject X”.
“To know” is “to possess knowledge”. A computer can report that it “knows” X (in the sense that the knowledge X is contained in its memory and processes), it might (if it is sufficiently complex) also be able to explain how it came about that it possesses that knowledge. By your definition such a computer would then be conscious?
Maybe. The question is whether syntax is sufficient for semantics.
I think not. imho what you suggest may be necessary, but is not sufficient, for consciousness.
Allow me to speculate.
Consciousness also requires a certain level of internalised self-representation, such that the conscious entity internally manipulates (processes) symbols for “itself” which it can relate to other symbols for objects and processes in the “perceived outside world”; in doing this it creates an internalised representation of itself in juxtaposition to the perceived outside world, resulting in a self-sustaining internal model. This model can have an unlimited number of possible levels of self-reference, such that it is possible that “it knows that it knows”, “it knows that it knows that it knows” etc.
Not very relevant.
If it is a necessary but insufficient criterion for consciousness, and the CR doesn't have it, the CR doesn't have consciousness.
I see. We first define understanding such that consciousness is necessary to understanding. And from our definition of understanding, we then conclude that understanding requires consciousness. Is that how it's done?
How else would you do it? Test for understanding without knowing what "understanding" means? Beg the question in the other direction by re-defining "understanding" to not require consciousness?
Write down a definition of "red" that a blind person would understand.
Are you suggesting that a blind person would not be able to understand a definition of “red”?
No, I am suggesting that no-one can write a definition that conveys the sensory, experiential quality. (Inasmuch as you can write down a theoretical, non-experiential definition, a blind person would be able to understand it.)
Thus the argument that all words can be defined in entirely symbolic terms fails, thus the assumption that symbol-manipulation is sufficient for semantics fails.
Sense-experience (the ability to experience the sensation of red) is a particular kind of knowledge, and is not synonymous with “understanding the concept of red”. Compare with the infamous “What Mary Didn’t Know” thought experiment.
Well, quite. It is a particular kind of knowledge, and someone who lacks that particular kind of knowledge lacks full semantics. You seem to be saying that non-experiential knowledge ("red light has a wavelength of 700nm") *is* understanding, and all there is to understanding, and experience is something extraneous that does not belong to understanding at all (in contradiction to the conclusion of "What Mary Didn't Know").
Of course, that would be circular and question-begging.
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
I disagree. I do not need to have the power of flight to understand aerodynamics.
To theoretically understand it.
Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”).
How remarkably convenient. Tell me, is that true analytically, by definition, or is it an observed fact?
Experiential information may be used as an aid to understanding in some agents, but I dispute that experiential information is necessary for understanding in all agents.
How can it fail to be necessary for a semantic understanding of words that refer specifically to experiences?
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
They don't fully lack it, they don't fully have it. But remember that a computer is much more restricted.
More restricted in what sense?
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
The latter is critical to the ordinary, linguistic understanding of "red".
I dispute that an agent which simply “experiences the sight of red” necessarily understands anything about the colour red.
Well, that is just wrong; they understand just what Mary doesn't: what it looks like. It may well be the case that they don't know any of the stuff that Mary does know. However, I do not need to argue that non-experiential knowledge is not knowledge.
I also dispute that “experiencing the sight of red” is necessary to achieve an understanding of red (just as I do not need to be able to fly in order to understand aerodynamics).
You don't need to fly in order to understand aerodynamics *theoretically*. However, if you can do both you clearly have more understanding than someone who can only do one or the other or neither.
(Would you want to fly in a plane piloted by someone who had never been in the air before?)
If the "information processing" sense falls short of full human understanding, and I maintain it does, the arguemnt for strong AI founders and Searle makes his case.
And I maintain it does not. I can converse intelligently with a blind person about the colour “red”, and that person can understand everything there is to know about red, without ever “experiencing the sight of red”.
No they can't. They don't know what Mary doesn't know.
If what you say is true, it would be impossible for anyone ever to learn from, or be surprised by, an experience. Having been informed that caviare is sturgeon eggs, they would not be surprised by the taste of caviare. But "sturgeon eggs" conveys almost nothing about the taste of caviare.
Your argument seems to be that “being able to see red” is necessary for an understanding of red, which is like saying “being able to fly” is necessary for an understanding of flight.
It is necessary for full understanding.
If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
They are necessary to learn the meaning of sensory language in the first place.
They are aids to understanding in the context of some agents (eg human beings), because that is exactly how human beings acquire some of their information. It is not obvious to me that “the only possible way that any agent can learn is via sense-experience”, is it to you?
I didn't claim sensory experience was the only way to learn, simpliciter. I claim that experience is necessary for a *full* understanding of *sensory* language, and that an entity without sensory experience therefore lacks full semantics.
If you are going to counter this claim as stated, you need to rise to the challenge and show how a *verbal* definition of "red" can convey the *experiential* meaning of "red" (ie show that if Mary had access to the right books -- the ones containing this magic definition -- she would have had nothing left to learn).
 
  • #99
moving finger said:
Tisthammerw said:
It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.

And if one uses another definition of these terms then the answer could be no.

Well, yes. I have said many times that the answer to the question depends on the definition of “understanding” and “consciousness” used.


Tisthammerw said:
Whether or not understanding requires consiousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism.

What criticism is that?

Criticisms like “circulus in demonstrando” and that the argument is “fallacious.”

Tisthammerw said:
After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.
First define your terms, then construct your argument. Then ask yourself whether or not it is a fallacious argument.

Very well. I define a bachelor to be “a male who is not married.” I define unmarried to be “not married.” Therefore, all bachelors are unmarried (this is an analytic statement). This argument is sound, so I don’t think it’s appropriate to call it fallacious.


I can make the statement “the moon is made of cheese”. Is that statement true or false? How would we know?

I’m not sure what relevance this has, but I would say the statement is false. We can “know” it is false by sending astronauts up there.

The only way to show whether it is true or false is to construct an argument to show how I arrive at the statement “the moon is made of cheese”. If my argument is “the moon is made of cheese because I define cheese as the main ingredient of moons” then the argument is circular, and fallacious.

I’m not really sure you can call it fallacious, because in this case “the moon is made of cheese” is an analytic statement due to the rather bizarre definition of “cheese” in this case. By your logic any justification for the analytic statement “all bachelors are unmarried” is fallacious.


Can you construct a non-fallacious (ie non-circular) argument to show whether your statement “understanding requires consciousness” is true or false?

Again, I don’t know of any other way to justify that the phrase “understanding requires consciousness” is an analytic statement in such a way that you wouldn’t consider the justification “fallacious.”


Tisthammerw said:
Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms.

Tisthammerw “asserts” that the premises are true – MF disputes that the premises are true.

Let me try this again. “This is what I mean by ‘understanding’…” is this premise true or false? Obviously it is true, because that is what I mean when I use the term. You yourself may use the word “understanding” in a different sense, but that has no bearing on the veracity of the premise, because what I mean when I use the term hasn’t changed.


Tisthammerw said:
You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.”

To draw a conclusion from premises, one must make an argument. If you think it is “strange” to construct an argument in logic in order to draw conclusions then I must ask where did you learn your logic?

I’m not saying that drawing a conclusion from premises is strange; I’m saying it is strange to call a logically sound argument, one that demonstrates a statement to be analytic, “fallacious.”


Tisthammerw said:
I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence of phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true

Where have I admitted that the stand-alone statement “understanding requires consciousness” is true?

Remember, I was talking about the analytic statement using the terms as I have defined them. Note for instance in post #221 where you said

moving finger said:
Given your definition of understanding, it logically follows that a non-conscious agent is unable to understand.


Tisthammerw said:
(b) the argument is perfectly sound anyway
The argument is circular, hence by definition fallacious.

Again, I find it very strange that you call a logically sound argument fallacious.


Have you really studied circular arguments?

Yes.


Perhaps you follow different rules of logic to the rest of us?

Different from you perhaps.

But let’s trim the fat here regarding this particular sort of circularity claim. In terms of justifying that a statement is analytic (by showing that the statement necessarily follows from the definitions of the terms), I deny that it is fallacious. If it were, all justifications for analytical statements would fail (as would most of mathematics). And in any case this is beside the point, since we already agree that the statement “understanding requires consciousness” is analytic.

To reiterate, my “argument” when it comes to “understanding requires consciousness” is merely to show that the statement is analytical (using the terms as I mean them). You can call it “fallacious’” if you want to but the fact remains that it is perfectly sound. And since we already agree that “understanding requires consciousness” is analytical, I suggest we simply move on.

Usually, circular arguments are fallacious and I recognize that. So you don’t need to preach to the choir regarding that point.


Tisthammerw said:
Let’s take my example above regarding the “understanding requires consciousness” argument above. The conclusion logically follows from the premises, and the premises are true.
There you go again. What did I just say? What part of “I disagree with your premise” is unclear?

It’s unclear how that has any bearing to the matter at hand (which I have pointed out many times). I understand that you don’t agree with my definition of “understanding” in that you mean something different when you use the term. But that is irrelevant to the matter at hand. I’m not saying computers can’t understand in your definition of the term, I’m talking about mine. Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.

You misquoted me slightly, so I’ve replaced the quote with what the post actually contains:


Tisthammerw said:
There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives the entity possesses consciousness (using my definition of the word “consciousness”). BTW, I use “perceive” definition 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?


Is that the online version of the dictionary you refer to? With respect, there are many more definitions of “to perceive” than are contained in this dictionary.

[lists a number of examples]

I never claimed otherwise, but that still doesn’t answer my question. What do you mean when you use the term “perceive”?


And there are perfectly acceptable definitions of “to perceive” which do not associate perception with consciousness.

Perhaps, but what’s your definition?

But getting to the more relevant point at hand, let’s revisit my question:

Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.
 
  • #100
MF said:
OK, we seem to agree on the conclusion, but possibly for slightly different reasons.
The problem I have is understanding the following : If you agree with his definition “symbol manipulation is PURELY syntactic”, how can you then conclude that symbol manipulation gives rise to semantic understanding?
I believe that all information, at its root, is syntactic. To believe otherwise would require that we believe information has some sort of Platonic essence which endows it with meaning. I believe that semantic meaning emerges from the aggregate syntactic context of information. The smallest bit of information has no meaning in and of itself except in relation to other bits of information. This relationship, lacking semantic meaning on either part, is necessarily a syntactic pattern. Once an entity has acquired a large enough amount of syntactic information and stored it in context, it has developed experience, or a knowledge base. When the entity can abstract or learn from these patterns of information it will find significance in the patterns. This significance equates to the "semantic meaning" of the information, so far as I believe.
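As a toy illustration of this idea (the mini-corpus below is invented, and this is only a sketch of the claim, not a theory of semantics), purely syntactic co-occurrence counts can surface "significant" patterns without any token carrying intrinsic meaning:

```python
# Toy illustration (invented mini-corpus): "significance" extracted from
# purely syntactic co-occurrence counts, with no meaning assigned to any
# token in isolation.
from collections import Counter
from itertools import combinations

corpus = [
    "cat chases mouse",
    "cat eats mouse",
    "dog chases cat",
]

# Count how often each pair of tokens appears in the same sentence.
pair_counts = Counter()
for sentence in corpus:
    for a, b in combinations(sorted(sentence.split()), 2):
        pair_counts[(a, b)] += 1

# Pairs that recur are "significant" patterns; the tokens still have no
# intrinsic meaning, only relations to other tokens.
significant = [pair for pair, n in pair_counts.items() if n > 1]
print(significant)  # [('cat', 'chases'), ('cat', 'mouse')]
```

The tokens are treated as opaque symbols throughout; any "significance" found is a fact about the pattern of relations, which is the point being argued.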

MF said:
We seem to agree on most of this. However I would not say that the “acquisition of” information is necessary to understanding, rather I would say that the “possession of” information is necessary to understanding.
It seems that I just prefer more "active" words.:smile:
To me they just seem to fit better.

MF said:
May I ask some questions, to improve our understanding?
Do you believe the CR understands the meanings of the words it is using?
Do you believe that the words mean to the CR what they mean to people who speak/read chinese?
Do you believe the CR can hold a coherent and intelligent conversation in Chinese?
No.
No.
Yes, but only given the leeway of it being a hypothetical situation.
I'll come back to this as soon as I am done with the rest of your post.

MF said:
Whichever position you argue, your argument must be consistent and rigorous. Perhaps (with respect) some of the confusion between us arises because you on the one hand “argue the position of Searle” but at the same time when I disagree with this argument your response is often to refer me to “your own definitions” of terms rather than Searle’s? Am I debating with TheStatutoryApe here, or with Searle’s stand-in?
I've been trying to cite Searle's argument and make my argument at the same time. Perhaps I haven't maintained a proper division between what is his argument and what is mine, but it seems you often think I agree with Searle when I do not. I tend to be under the impression that I have pointed out my disagreements with his arguments but apparently I haven't done a good enough job of that.

MF said:
I agree – but would phrase it slightly differently. He chooses his definitions associated with understanding such that understanding requires conscious awareness – which ensures that any agent that is not consciously aware fails to understand. By definition.
Personally I don't think it is a matter of definitions. The only one of his definitions that is problematic, in my opinion, is his insistence that "syntax" can not yield "semantics". This follows though from the conclusion of his argument. His definitions are coloured by the conclusions of his argument but his argument isn't one of definitions. His argument is a logical hypothetical construct. The fact that his construct does not parallel reality as it claims is the real problem.

MF said:
TheStatutoryApe said:
While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possesses understanding I do not disagree with this.
Altered in what way?
So far there have been several ideas, such as giving the man in the room access to sensory information via camera or allowing the man to be the entire system rather than just the processing chip. Every idea for altering the room though is constructed by Searle, or Tisthammerw, in a way that does not reflect reality properly and sets the man in the room up to fail. I know that you believe that the CR does in some sense possess understanding, which I do not agree with, but I will have to come back to this to discuss that later.
 
  • #101
TheStatutoryApe said:
So far there have been several ideas, such as giving the man in the room access to sensory information via camera or allowing the man to be the entire system rather than just the processing chip. Every idea for altering the room though is constructed by Searle, or Tisthammerw, in a way that does not reflect reality properly and sets the man in the room up to fail. I know that you believe that the CR does in some sense possess understanding, which I do not agree with, but I will have to come back to this to discuss that later.

I do believe the Chinese Room possesses a component that has understanding. That component is the man in the room. He may not understand that he is answering questions in Chinese with answers in Chinese but, he does possess a conscious understanding, nevertheless.

The man in the room understands he is in the room. He understands (experientially) the approx. temperature of the room and he has the ability to understand the effects the temperature in the room is having on him.

The man understands that, periodically, a piece of paper will slide under the door and he understands that he is to use the calligraphy on the paper to find, in a book, a matching symbol and its adjacent symbols. He understands that he is to copy down the adjacent symbols onto a blank piece of paper and pass this piece of paper under the door to where the previous piece of paper came from.

If the man does not understand that he is providing answers to questions then, with the evidence he has, he's not using it toward an understanding of his task... but I have a feeling he'd get the picture at some point.

All of this understanding is supported by how the man understands he must get up off his chair to reach the door of the Chinese Room. He must take a posture of sitting in order to sit on the chair in the room... and so forth and so on.
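The man's rule-following, as described, amounts to a table lookup. A minimal sketch, with a hypothetical rule book (the Chinese characters are opaque tokens that the clerk, like the code below, never interprets):

```python
# Toy sketch of the man's procedure: match the incoming symbols in a
# rule book and copy out the adjacent symbols. The rule book here is
# invented; the characters function only as opaque shapes.

RULE_BOOK = {
    "你好吗": "我很好",   # the clerk matches shapes, not meanings
    "你是谁": "我是房间",
}

def chinese_room(slip: str) -> str:
    """Find the incoming symbols in the book; return the adjacent ones."""
    return RULE_BOOK.get(slip, "对不起")  # a default reply for unknown slips

print(chinese_room("你好吗"))  # 我很好
```

Nothing in this procedure requires the clerk to know what any symbol means, which is exactly the intuition the thought experiment trades on.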


That's why I find Searle's Chinese Room Thought Experiment to be in error. In Searle's attempt to demonstrate the lack of understanding in certain systems, he has chosen a faulty set-up for the experiment. He has included systems at both ends of the experiment that possess the power of understanding.
This error includes using terminology (such as "understanding") that is semantic, relative and (recently) diluted in nature. These two errors serve to offer us an unclear depiction of what Searle's point really is... but I am still holding off on declaring his experiment "invalid".

For the time being, thank you.

PS. We have a rousing discussion going on here, thanks to all of you!
 
  • #102
TheStatutoryApe said:
Every idea for altering the room, though, is constructed by Searle, or Tisthammerw, in a way that does not reflect reality properly and sets the man in the room up to fail. I know that you believe the CR does in some sense possess understanding, which I do not agree with, but I will have to come back to discuss that later.

Part of the reason is that nobody disputes the fact that a man can learn another language. So opening up the door and teaching him Chinese only works if you can apply that analogy to computers. And when you don't apply it to computers, I modify the experiment to model the computer (and thus set up the man to fail).

To recap the definitions I’ll be using:

In terms of a man understanding words, this is how I define understanding:
  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.
This particular definition of understanding requires consciousness. The definition of consciousness I’ll be using goes as follows:
  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:
  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
So, if a person is aware of the meaning of words, then by definition the individual possesses consciousness. A cautionary note: it is entirely possible to create other definitions of the word “understand.” One could define “to grasp the meaning of” in such a way that it would not require consciousness, though this form of understanding would perhaps be more metaphorical (at least, relative to the definition supplied above). For the purposes of this thread (and all others regarding this issue) these are the definitions I’ll be using, in part because “being aware of what the words mean” seems more applicable to strong AI.

Suppose we have a robot with cameras, microphones, etc. Would the “right” program (with learning algorithms and whatever else you want) run on here produce literal understanding (keeping in mind my definitions)? My claim is no, and to justify it I appeal to the following thought experiment.

Let's call the “right” program that, if run on the robot, would produce literal understanding “program X.” Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place. So it seems that even having the “right” rules and the “right” program is not enough even with a robot.
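The claim about Bob is that he can carry out the same low-level operations as the robot's processor without the meaning of the data ever entering into the work. A minimal sketch, using an invented toy instruction set (an assumption for illustration; the argument doesn't depend on the details):

```python
# Bob, or a CPU, steps through instructions mechanically. Nothing in the
# procedure reveals what the values stand for, only how to combine them.
def run(program, memory):
    """Execute a list of (op, a, b, dest) instructions over a memory dict."""
    for op, a, b, dest in program:
        if op == "ADD":
            memory[dest] = memory[a] + memory[b]
        elif op == "CMP":
            memory[dest] = int(memory[a] == memory[b])
    return memory

# Bob computes the correct result whether "x" and "y" encode pixel values,
# Chinese characters, or nothing at all.
state = run([("ADD", "x", "y", "z")], {"x": 2, "y": 3})
print(state["z"])  # 5
```

The dispute in this thread is over whether running such operations, at sufficient scale and with the "right" program, amounts to understanding; the sketch only shows that the executor needs no grasp of the content to produce valid output.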

Some strong AI adherents claim that having “the right hardware and the right program” is enough for literal understanding to take place. In other words, it might not be enough just to have the right program. A critic could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s “normal” processor running the same program would. But it isn’t clear why that would be a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possess a magic ball of yarn? What?

For TheStatutoryApe: if you think the robot would possess literal understanding with the “normal” processor and the “right” program, please answer my above questions.
 
  • #103
Tisthammerw said:
For TheStatutoryApe: if you think the robot would possess literal understanding with the “normal” processor and the “right” program, please answer my above questions.
The problem here is that you have set up your questions in such a manner that the answers are obvious. You have already delineated everything and made sure that I must abide by what YOU believe is true. I have contended before that your constructs do not parallel reality as I see it. I have made suggestions about other things to consider but you do not believe that my suggestions parallel reality. We simply don't agree and that is why I have ceased arguing the matter with you.

If all you wish is for someone to tell you that your argument is logically consistent then sure, by all means, it is, and by the parameters of your argument I cannot refute you. The same goes for Searle's Chinese Room: according to the parameters of the argument itself I cannot refute it, and it is logically consistent.
I must maintain, though, that I do not agree with the parameters of either argument, and therefore I do not agree that the conclusions carry weight in reality.
That is the best I can do for you.
Now I have to get going and do some thinking before I come back so as to make my last attempt at explaining to Mr. Finger why I do not agree that the CR possesses "understanding".
Thank you for your time.
 
  • #104
TheStatutoryApe said:
Tisthammerw said:
For TheStatutoryApe: if you think the robot would possess literal understanding with the “normal” processor and the “right” program, please answer my above questions.

The problem here is that you have set up your questions in such a manner that the answers are obvious.

It is not at all obvious to me what your answers to my questions are. Please tell me what they are. Or do you really believe that a computer, even with the “right” program, cannot understand (which is my own answer to the argument)?


I have contended before that your constructs do not parallel reality as I see it. I have made suggestions about other things to consider but you do not believe that my suggestions parallel reality.

Well, one of your constructs was teaching the man Chinese: you open the door, introduce him to objects, speak Chinese words while pointing to the objects etc. You claimed this paralleled reality for a computer. It seemed you were saying that a robot (with cameras and microphones) with the “right” program would thus be able to understand Chinese. But this proposal completely ignores the problem presented by the robot and program X. Here we do have the “right” program in a machine with sensors etc. and still there is (I claim) no understanding.

Do you think that this robot with the “right” program and its “normal” processor would possess literal understanding? I would really like to know your answer here. And if the answer is “yes,” please answer my other questions.


I must maintain though that I do not agree with the parameters of either argument and therefore I do not agree that the conclusions carry weight in reality.

But how is that possible? Especially in the latter case where the robot is using its “normal” processor? The robot has eyes (cameras) ears (microphones) and the “right” program (with any and all learning algorithms that you could want). What more could you ask for? Is there “something else” required that I don’t know about?
 
Last edited:
  • #105
moving finger said:
I agree that understanding is not a prerequisite for translation, but I would suggest that to perform an accurate translation of one complex language into another it helps to understand both languages.
quantumcarl said:
A person who uses Mongolian as a language can translate French into English using translation texts without ever possessing an understanding of either French or English and the origins of those two languages.
It would seem you did not bother to read my reply.
I agreed that understanding is not a prerequisite for translation, but I suggested that to perform an accurate translation of one complex language into another it helps to understand both languages. You seemed to ignore this. Do you dispute it?

quantumcarl said:
What is the basis of your "argument"?
There is no evidence that empathy is necessarily required for understanding to take place.

moving finger said:
There is no microscopic part of the brain which "understands" what it is doing, just as none of the individual translators in your example "understands" what it is doing. Each component is simply following deterministic rules to generate output from input.
quantumcarl said:
I could be wrong but, even neuroscientists are unsure of how or why neurons behave in certain ways and are able to perform certain functions, in addition to performing functions that were previously performed by other, specified neurons.
Do you disagree with the statement “There is no microscopic part of the brain which "understands" what it is doing”?

quantumcarl said:
If you are saying that understanding is not guilty by the association of relying on empathy and consciousness to be defined, in my opinion, you are wrong.
“guilty by association”? What is that supposed to mean?

quantumcarl said:
We call it "understanding".
Correction... “You” call it understanding.

moving finger said:
“translate Chinese” into what?
quantumcarl said:
Thanks for reminding me.
Do you have an answer for the question?

quantumcarl said:
If the whole purpose of the experiment is to determine the difference between how a machine processes information and how a human processes information... the experiment lacks control by using a human in the CR.
Replace the human with a mechanical device – the CR performs exactly the same as before (because all the human is doing is manipulating symbols on paper – this is his sole function).

MF
 
