Can computers understand? Can understanding be simulated by computers?

AI Thread Summary
The discussion centers on John Searle's "Chinese Room" argument, which posits that computers cannot genuinely understand language or concepts, as they merely follow formal rules without comprehension. Critics argue that understanding may be an emergent property of complex systems, suggesting that the entire system, including the individual and the book, could possess understanding despite individual components lacking it. The conversation also explores the potential of genetic algorithms in artificial intelligence, questioning whether such systems can achieve a form of understanding without consciousness. Some participants believe that a sufficiently complex algorithm could surpass human understanding in specific contexts, while others maintain that true understanding requires consciousness. The debate highlights the need for clear definitions of "understanding" and "consciousness" to facilitate meaningful discussion on the capabilities of computers.
  • #51
Doctordick said:
I think you fundamentally understand my perspective even if you deny it.
With respect, which perspective is this? You have lost me here.

Doctordick said:
Asking probing questions is the essence of "getting the rest of the story" (that explanation I was talking about).
Asking probing questions is simply the means of testing, and trying to falsify, the hypothesis "this agent understands". I would not expect anyone to take for granted (for example) any claim that "my cat understands quantum physics" - the only way to establish whether such a claim is true or false is to put it to the test.

MF
 
  • #52
moving finger said:
If "yes", then I would ask probing questions to test the quality of the simulation (ie whether they understand or not). If they pass the test, then I would have to conclude that they understand (regardless of whether this is put forward as a simulation or as a genuine case of understanding - the proof of the pudding is in the eating).

That would be a Turing Test then. Of course the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).

IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
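Tournesol's description of rule-following "from the inside" can be made concrete with a toy sketch (my illustration, not Searle's own formulation; the rulebook entries are hypothetical placeholders for what would really be an enormous rule set):

```python
# Toy sketch of the Chinese Room: the "operator" follows purely formal
# rules (string -> string) with no access to what the symbols mean.
# The entries below are hypothetical placeholders.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "天空是什么颜色？": "天空是蓝色的。",  # "What colour is the sky?" -> "The sky is blue."
}

def operator(symbols: str) -> str:
    """Match the incoming scrawl to a rule and copy out the reply.

    The operator never consults meanings, only the shapes of the symbols.
    """
    # Fallback reply: "Sorry, I don't understand."
    return RULEBOOK.get(symbols, "对不起，我不明白。")
```

From the outside the exchange looks competent; from the inside, the operator is doing nothing but shape-matching, which is exactly the intuition the thought experiment trades on.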
 
  • #53
Tournesol said:
That would be a Turing Test then. Of course the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.

Searle's opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that while the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.

This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts. Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.

The more I look at Searle's experiment, the more it seems to lack controls and parallel comparisons.

Searle's opponent's said:
Thus, although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part.

Righty oh. Mind you, the person in the room has an "entire brain" and still does not understand written Chinese. They are simply performing a rote task, matching scrawls of ink on paper to scrawls of ink on paper. There is no understanding taking place. As for the room... it is very silent through all of this... exhibiting no understanding that there is even someone in the room.


Searle's Opposition said:
Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper.

So, we have seen behind the curtain. The great wizard really doesn't understand what he's telling us; however, the sum of the parts, the "emergent property" of the curtain plus the wizard and all his books, is what we should gullibly accept as "understanding".

I (still) disagree. If you have any comprehension of understanding... you already know why I disagree.
 
  • #54
doctordick said:
That means the story remains reasonable as one learns more.
If one is able to create such a story then it is the presumption of the listener that whoever created the story understood what they were talking about. Without the story, the feeling that one understands something is little more than self delusion.
I think you're mincing the concept here. The "explanation" or "story" is not requisite for "understanding". These elements are only necessary for an observer to ascertain whether a subject possesses "understanding". You might assert an observer-created universe, but then we'd get muddled in discussing whether or not the subject is capable of observing itself.

QC said:
Tournesol said:
That would be a Turing Test then. Of course the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
Searle's opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that while the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.
This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts. Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.

The more I look at Searle's experiment, the more it seems to lack controls and parallel comparisons.
But this isn't wrong. It's one of the fundamental flaws of the CR. It is exactly this which prevents the CR from "understanding" Chinese.
Language is purely representative. Words do not have an inherent semantic property. Those "applicable or non-applicable concepts" you mention are in fact all applicable to the understanding of any given language, because these concepts are what define the words. The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red". Since we are not telepaths, we use words as tools to communicate the information inside our brains. The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about, and hence they lack any meaning, as far as the CR is concerned, aside from the process of shuffling them about.
 
  • #55
moving finger said:
Oh dear. Might I humbly suggest (with respect) that I do sense here some “sentimentality” with respect to the concept “only humans have any right to claim understanding”.

Humans created the word understanding and they have the right to apply it to what they want. My position is that there is no generic use of the word understanding that applies to a pocket calculator. When you step on the gas in your car, your car does not understand that you want to move faster in a direction... it simply responds properly to your sensory input.
moving finger said:
Can you perhaps explain exactly why you wish to avoid such a scenario?

Until humans have an understanding of one another, French, Iranians, Iraqis, Sunnis, ****es, Egyptians and Mongolians included, I think we won't have a good comprehension of what understanding is. Since we do not have a good definition of the word understanding, and therefore employ the word without really knowing what it means, we should hold off on applying it to computers while it either evolves a more solid and universal meaning or disappears from the language completely.


moving finger said:
I know a few humans who could be described as “half-baked”, and in some cases I doubt their ability to understand certain things, but that would not allow me to conclude that homo sapiens in general is incapable of understanding.

I'm glad you are not being fallacious about humans.


moving finger said:
If it can be demonstrated that the “motobot” in question does indeed understand much more than other humans about (for example) the medical diagnosis of human ailments,
then why (apart from your emotional repugnance at the thought) would you NOT want to see people seeking medical assistance from such an agent?

I don't mind seeking med. assist. from a bot, however, the information and treatment, if any, from the bot would only demonstrate to me that the bot is a tool that is helping me understand my situation... I would not assume that the bot understands my situation. As far as I know motobot is only responding correctly to various stimuli and that, as far as I know, is not the definition of understanding.


moving finger said:
Whilst I agree that a (perfectly) simulated object is not synonymous with the original object, I do not see how a (perfectly) simulated process differs in any significant way from the original process.

And that must be frustrating for you.


moving finger said:
And you believe that machines cannot (in principle) “care” and “nourish”?
Apart from this, when I say “I understand Quantum Mechanics”, does that mean there is necessarily any care and nourishment associated with my understanding?
With respect
MF


That depends if you put any care into your study of QM and if you nourished certain relationships and concepts surrounding your studies.
 
  • #56
TheStatutoryApe said:
I think you're mincing the concept here. The "explanation" or "story" is not requisite for "understanding". These elements are only necessary for an observer to ascertain whether a subject possesses "understanding". You might assert an observer-created universe, but then we'd get muddled in discussing whether or not the subject is capable of observing itself.
But this isn't wrong. It's one of the fundamental flaws of the CR. It is exactly this which prevents the CR from "understanding" Chinese.
Language is purely representative. Words do not have an inherent semantic property. Those "applicable or non-applicable concepts" you mention are in fact all applicable to the understanding of any given language, because these concepts are what define the words. The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red". Since we are not telepaths, we use words as tools to communicate the information inside our brains. The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about, and hence they lack any meaning, as far as the CR is concerned, aside from the process of shuffling them about.

Does the machine understand that it is shuffling about ink on paper... or, in the case of a computer, does the computer understand that it is collecting data and computing an answer?

This is where understanding becomes a question of consciousness... so consciousness must be clearly defined as well...

I also believe the word "experience" must play into this discussion because all understanding is a result of experience.

Thanks!
 
  • #57
QC said:
Does the machine understand it is shuffling about ink on paper... or in the case of a computer, does the computer understand that it is collecting data and computing and answer?
Does a pocket calculator understand math? I'm thinking that you would say no (this is admittedly only an assumption). If so then considering a human who understands math (which I am assuming you would agree is possible) what is the fundamental difference between a human working a math problem and a calculator working a math problem?

I'm trying to determine for myself what the difference is as well and would like your input. I've read an essay whose logic would assert that a human does not utilize a "conscious" act in such an activity. I have a hard time refuting that, so I more or less accept it, but it makes the idea of "understanding" even more elusive.
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?
 
  • #58
Tournesol said:
That would be a Turing Test then.
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.
Tournesol said:
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator" (he uses the rulebook to answer questions in Chinese), but it does not follow that the man in the CR is also the "agent which possesses understanding". The operator in this case is following a lot of (to him) meaningless rules, because the understanding is in the entire CR, not in the operator.
In the same way, if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.
May your God go with you.
MF
 
  • #59
Searle’s opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that while the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.
quantumcarl said:
This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts.
Incorrect. In practice we observe that brains possess an understanding of more than “just Chinese”, but this once again is due to our anthropocentric perspective. I would argue that in principle a brain could exist which simply “understands Chinese” in the same way that the CR understands Chinese, with no understanding (for example) of non-Chinese topics.
quantumcarl said:
Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Can you give an example of a concept which (a) offers input to understanding the concept of Chinese, but at the same time (b) could not possibly be a part of the rules of the CR?
Searle's opponents said:
Thus, although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part.
quantumcarl said:
the person in the room has an "entire brain" and still does not understand written Chinese. They are simply performing a rote task, matching scrawls of ink on paper to scrawls of ink on paper. There is no understanding taking place.
In the same way, each neuron in the brain “performs a rote task”. Can you identify exactly where the “homunculus that understands” sits in the brain?
quantumcarl said:
As for the room... it is very silent through all of this... exhibiting no understanding that there is even someone in the room.
The CR is certainly NOT silent. Ask it any question in Chinese, and it will answer correctly and rationally. How is this “silent”?
Searle's Opposition said:
Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper.
quantumcarl said:
So, we have seen behind the curtain. The great wizard really doesn't understand what he's telling us; however, the sum of the parts, the "emergent property" of the curtain plus the wizard and all his books, is what we should gullibly accept as "understanding".
And if the explanation is unacceptable to you, then what (pray) do you accept as evidence that a human “understands”?
quantumcarl said:
I (still) disagree. If you have any comprehension of understanding... you already know why I disagree.
With respect, your statement here is tantamount to “I cannot explain what understanding is, but I KNOW what it is” …….and all the rest is handwaving……
May your God go with you
MF
 
  • #60
TheStatutoryApe said:
Language is purely representative. Words do not have an inherent semantic property.
Semantic understanding arises from symbol manipulation. I would claim that the CR could carry out such symbol manipulation.
TheStatutoryApe said:
The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red".
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possesses “sight” in order to understand Chinese?
In fact, are you suggesting even that an agent must possesses the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)

A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
TheStatutoryApe said:
Since we are not telepaths, we use words as tools to communicate the information inside our brains.
The CR possesses information inside itself; the CR communicates with words.
TheStatutoryApe said:
The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about, and hence they lack any meaning, as far as the CR is concerned, aside from the process of shuffling them about.
Your argument seems to be based on the suggestion that the tools in the CR “lack any meaning” because their only purpose “is to shuffle about words”.
I disagree. The purpose of the tools in the CR is “to understand Chinese”.
From what does “meaning” arise?
I would suggest that “meaning” arises simply from a process of symbol manipulation.
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
May your God go with you
MF
 
  • #61
quantumcarl said:
When you step on the gas in your car, your car does not understand that you want to move faster in a direction... it simply responds properly to your sensory input.
And it is simply from these rather simplistic analogies you conclude that “no machine can ever understand”?
quantumcarl said:
Until humans have an understanding of one another, French, Iranian, Iraqis, Sunnis, ****es, Egyptians and Mongolians included I think we won't have a good comprehension of what understanding is.
But (with respect) your argument is based on a premise which contradicts this statement, which is that “quantumcarl comprehends what understanding is”, and you define it such that only humans can possess understanding.
quantumcarl said:
Since we do not have a good definition of the word understanding, and therefore employ the word without really knowing what it means, we should hold off on applying it to computers while it either evolves a more solid and universal meaning or disappears from the language completely.
I disagree. The correct (logical) conclusion from your argument should be “we should hold off making any definitive statements about whether or not a machine can understand, until we understand what understanding is”. This seems not to be the position that you take.
quantumcarl said:
I don't mind seeking med. assist. from a bot, however, the information and treatment, if any, from the bot would only demonstrate to me that the bot is a tool that is helping me understand my situation...
And my GP (that’s General Practitioner over here in England, otherwise known as family doctor) is in a very real sense “a tool that is helping me understand my (medical) situation” – but so what?
quantumcarl said:
I would not assume that the bot understands my situation. As far as I know motobot is only responding correctly to various stimuli and that, as far as I know, is not the definition of understanding.
And similarly I have no idea whether my GP really “understands my situation” in the sense that he does not necessarily know all about my background, my childhood, my hopes, fears, beliefs, prejudices, fantasies, aberrations, fetishes……etc etc….. but that does not mean that I conclude from this that my GP “does not understand my medical condition”. Why would a bot need to fully “understand your situation” (any more than a human doctor does) in order to demonstrate an understanding of medicine?
moving finger said:
I do not see how a (perfectly) simulated process differs in any significant way from the original process.
quantumcarl said:
And that must be frustrating for you.
Actually it is very satisfying.
Are you suggesting that you do see how a (perfectly) simulated process differs in a significant way from the original process? Can you explain?
moving finger said:
when I say “I understand Quantum Mechanics”, does that mean there is necessarily any care and nourishment associated with my understanding?
quantumcarl said:
That depends if you put any care into your study of QM and if you nourished certain relationships and concepts surrounding your studies.
Why do you think a machine could necessarily not put care into its study of QM and could not nourish certain relationships and concepts surrounding its studies?
May your God go with you
MF
 
  • #62
TheStatutoryApe said:
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?
This may be true of simple pocket calculators, but there is no reason in principle why a "learning calculating machine" could not exist.

MF
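A minimal sketch of what such a "learning calculating machine" might look like (a hypothetical illustration, not a claim about any real calculator): it ships with no arithmetic rules at all and must infer each operation from worked examples, so its rule set is dynamic rather than fixed.

```python
# Hypothetical "learning calculator": it starts with an empty rule set
# and infers each binary operation from worked examples, unlike the
# "static" pocket calculator whose rules are built in at the factory.

class LearningCalculator:
    def __init__(self):
        self.ops = {}  # operator symbol -> learned function

    def teach(self, symbol, examples):
        """Infer an operation from (a, b, result) examples by testing
        a small hypothesis space of candidate rules; return the name
        of the rule that fits."""
        candidates = {
            "add": lambda a, b: a + b,
            "sub": lambda a, b: a - b,
            "mul": lambda a, b: a * b,
        }
        for name, fn in candidates.items():
            if all(fn(a, b) == r for a, b, r in examples):
                self.ops[symbol] = fn
                return name
        raise ValueError("no consistent rule found for these examples")

    def compute(self, a, symbol, b):
        """Apply a previously learned operation."""
        return self.ops[symbol](a, b)

calc = LearningCalculator()
calc.teach("+", [(1, 2, 3), (4, 5, 9)])   # learns addition from examples
calc.teach("*", [(2, 3, 6), (4, 5, 20)])  # learns multiplication
```

Whether acquiring rules this way amounts to "understanding" is of course the very question under debate; the sketch only shows that "static rules" is not an essential feature of calculating machines.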
 
  • #63
moving finger said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.

Yes: figure out how the brain produces consciousness physically, and
see if the AI has the right kind of physics to produce consciousness.

moving finger said:
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"

He is as a matter of definition.

moving finger said:
(he uses the rulebook to answer questions in Chinese), but it does not follow that the man in the CR is also the "agent which possesses understanding". The operator in this case is following a lot of (to him) meaningless rules, because the understanding is in the entire CR, not in the operator.

...and supposing the operator internalises the rules...


moving finger said:
In the same way, if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.

...and supposing consciousness requires understanding; then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.

This is Searle's reductio of the Systems Response.

Well, you say, understanding doesn't require consciousness.

But it does; there is a difference between competencies that are displayed
instinctively, or learned by rote, and those that are understood.
 
  • #64
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?

Consciousness.
 
  • #65
moving finger said:
Semantic understanding arises from symbol manipulation.

You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.


moving finger said:
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possess “sight” in order to understand Chinese?

It is perfectly reasonable to suggest that anyone needs normal vision in
order to fully understand colour terms in any language.

moving finger said:
In fact, are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)

The latter is critical to the ordinary, linguistic understanding of "red".

moving finger said:
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.

It does not mean they are completely incapable; it does not mean they are as capable as a sighted person.

Are you conceding that an AI's understanding would be half-baked?


moving finger said:
Our senses are aids to our understanding, they are not the sole and unique source of understanding.

They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it").

moving finger said:
The CR possesses information inside itself; the CR communicates with words.


moving finger said:
I would suggest that “meaning” arises simply from a process of symbol manipulation.

Are you claiming that is sufficient, or only necessary?

moving finger said:
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?

Presumably on the basis that while it has one necessary-but-insufficient ingredient, symbol manipulation, it lacks another: sensation.
 
  • #66
moving finger said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.
Tournesol said:
Yes: figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness.
What you suggest (with respect) is not a test, let alone a test of understanding. What you suggest is an explanation (of consciousness, not of understanding per se).
moving finger said:
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"
Tournesol said:
He is as a matter of definition.
? I’m not sure what you mean here. Are you saying that you disagree with my above statement?
moving finger said:
if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.
Tournesol said:
...and supposing consciousness requires understanding; then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.
This does not follow. Why should the man’s consciousness necessarily understand anything of what it is manipulating in the internalised rulebook (any more than the man in the CR understands anything of Chinese – the man in this case is consciously aware of manipulating Chinese characters, but he has no understanding of them)?
Tournesol said:
there is a difference between competencies that are displayed
instinctively, or learned by rote, and those that are understood.
None of the above shows that understanding requires consciousness, only that there is more to understanding than simply being able to repeat a few phrases.
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Tournesol said:
Consciousness.
Is this merely your opinion, or can you provide any evidence that this is necessarily the case?
moving finger said:
Semantic understanding arises from symbol manipulation.
Tournesol said:
You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.
What is missing (in your opinion)? Oh yes, consciousness. But I don’t see why consciousness is required.
moving finger said:
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possess “sight” in order to understand Chinese?
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
moving finger said:
are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)
Tournesol said:
The latter is critical to the ordinary, linguistic understanding of "red".
It has nothing to do with the information-processing sense of understanding what red is, it has only to do with the sense-experience of red.
moving finger said:
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Tournesol said:
It does not mean they are completely incapable; it does not mean they are as capable as a sighted person.
Are you conceding that an AI's understanding would be half-baked?
Where have I conceded that? But you seem to be implying that a blind person’s understanding would be half-baked.
moving finger said:
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
Tournesol said:
They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it")
I dispute they are a necessary condition. If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
Are you suggesting that a blind person does not have full human semantics?
Does this mean a blind person is incapable of understanding?
moving finger said:
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
Tournesol said:
Presumably on the basis that while it has one necessary-but-insufficient ingredient, symbol manipulation, it lacks another: sensation.
Sensation is a necessary ingredient of understanding? Therefore if you place me in a state of sensory-deprivation it follows that I will lose all understanding, is that correct?
May your God go with you
MF
 
  • #67
TheStatutoryApe said:
Does a pocket calculator understand math? I'm thinking that you would say no (this is admittedly only an assumption). If so, then considering a human who understands math (which I am assuming you would agree is possible), what is the fundamental difference between a human working a math problem and a calculator working a math problem?
I'm trying to determine for myself what the difference is as well and would like your input. I've read an essay whose logic would assert that a human does not utilize a "conscious" act in such an activity. I have a hard time refuting that, so I more or less accept it, but it makes the idea of "understanding" even more elusive.
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?

I have also had the idea that learning is a product of understanding. Learning implies experience that is stored and readily available even when the task does not require the learned experience. (yet as you say, it is the culmination of information in an entire brain that lends itself to understanding)

I have mentioned genetic algorithms as a set of programs that actually builds with the data it is fed in a manner that exhibits the same leaps and "ah ha" moments as a human brain.

A bit of history with the genetic algorithm: I was experimenting with the idea of using the genetic algorithm as a 24/7 research item that would theoretically test various untested and unformulated forms of treating cancer. I wanted to find examples of its use and only found one other person utilizing the program. This was a laser scientist who was working on Star Wars in the mid-nineties. He said it did help a lot with his math and geometric calculations as well as the simulations of situations he was setting up. In the end, as you know, the program was deemed a waste of money.

Theoretically I could set up a genetic algorithm and enter info on the fine art masters and info on how to reproduce or surpass their works; then, electronically, it could probably produce a masterpiece a day for as long as there was electricity available to it. Each piece would be as individual and as compositionally intricate as any of the masters' works (theoretically).
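(For concreteness, here is a minimal sketch of the kind of genetic algorithm being described: a population of candidate solutions is scored, the fittest survive, and children are bred by crossover and mutation. The bit-string fitness target below is purely illustrative; it is not the cancer-research or fine-art application mentioned above.)

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30, generations=100,
                      mutation_rate=0.01):
    # Start from a random population of bit-strings.
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # fittest first
        survivors = pop[:pop_size // 2]       # elitism: keep the top half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)                 # one-point crossover
            child = [bit ^ (random.random() < mutation_rate)  # occasional bit-flip
                     for bit in a[:cut] + b[cut:]]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Illustrative fitness: count the 1-bits ("OneMax").
best = genetic_algorithm(fitness=sum)
print(sum(best))  # converges toward the maximum of 20
```

Even this toy version shows the behaviour described above: the solution is never programmed in but "found", and the population can make sudden jumps when a lucky crossover combines two partial solutions.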

However, I could also frame a sand-dune with a camera lens and every day I would get a perfectly composed and individual visual piece of art ... if not every few seconds (in a wind storm!)...

My question, at the moment, is not whether a calculator, sand dune or extremely intricate computer "understands" math or art or biology or economics... the way we do (because I don't think it is a fair comparison)... but whether the calculator etc... experiences itself and the steps it is taking to offer us a correct response to stimuli.

As I understand it, understanding only comes from someone who understands a topic through their experience of it and through their experience as a consciously existing being. Understanding between humans is only possible because of the common experience they share which is... being human.

Are the sand-dune, computer or cell phone conscious of their experience and existence? If this can be convincingly demonstrated, (which is practically impossible to prove, even in humans) then, I think we have a foot-hold on what understanding is and whether it could equally be applied to a heap of silicon chips as well as the minerals of which a human is composed.

Sorry, out of time.
 
  • #68
quantumcarl said:
understanding only comes from someone who understands a topic through their experience of it and through their experience as a consciously existing being.
Why should this necessarily be the case?

quantumcarl said:
Understanding between humans is only possible because of the common experience they share which is... being human.
I think you are talking of a special kind of understanding here, one with high empathy. But I do not need to empathise with a Frenchman in order to understand the French language.

MF
 
  • #69
MF said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.

Yes: figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness.


What you suggest (with respect) is not a test, let alone a test of understanding. What you suggest is an explanation (of consciousness, not of understanding per se).

It is a test based on an explanation; I am saying we have to solve the hard
problem first, before we can have a genuine test.


Quote:
Originally Posted by moving finger
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"

Quote:
Originally Posted by Tournesol
He is as a matter of definition.


? I’m not sure what you mean here. Are you saying that you disagree with my above statement?

I am saying "the operator" means "the man in the room".



if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.

...and supposing understanding requires consciousness: then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.

This does not follow. Why should the man’s consciousness necessarily understand anything of what it is manipulating in the internalised rulebook (any more than the man in the CR understands anything of Chinese – the man in this case is consciously aware of manipulating Chinese characters, but he has no understanding of them)?

If manipulating symbols is all there is to understanding, and if consciousness
is part of understanding, then there should be a conscious awareness of
Chinese in the room (or in Searle's head, in the internalised case).

But, by the original hypothesis, there isn't.

You could claim that consciousness is not necessarily part of machine understanding;
but that would be an admission that the CR's understanding is half-baked
compared to human understanding... unless you claim that human understanding
has nothing to do with consciousness either.

But consciousness is a definitional quality of understanding, just as being
unmarried is a definitional quality of being a bachelor.



there is a difference between competencies that are displayed
instinctively, or learned by rote, and those that are understood.

None of the above shows that understanding requires consciousness, only that there is more to understanding than simply being able to repeat a few phrases.

If you understand something, you can report that you know it, explain how you
know it, etc. That higher-level knowing-how-you-know is consciousness by
definition.


The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?

Consciousness.

Is this merely your opinion, or can you provide any evidence that this is necessarily the case?

It is a matter of definition -- it is part of how we distinguish understanding
from mere know-how.


Quote:
Originally Posted by moving finger
Semantic understanding arises from symbol manipulation.

Quote:
Originally Posted by Tournesol
You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.


What is missing (in your opinion)? Oh yes, consciousness. But I don’t see why consciousness is required.


Write down a definition of "red" that a blind person would understand.

It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.


Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?

They don't fully lack it, they don't fully have it. But remember that a
computer is much more restricted.


Quote:
Originally Posted by moving finger
are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)

Quote:
Originally Posted by Tournesol
The latter is critical to the ordinary, linguistic understanding of "red".


It has nothing to do with the information-processing sense of understanding what red is, it has only to do with the sense-experience of red.

If the "information processing" sense falls short of full human understanding,
and I maintain it does, the argument for strong AI founders and Searle makes
his case. Remember, he is not attacking weak AI, the idea that computers
can come up with some half-baked approximation to human understanding.




Where have I conceded that? But you seem to be implying that a blind person’s understanding would be half-baked.

Yes.


Quote:
Originally Posted by moving finger
Our senses are aids to our understanding, they are not the sole and unique source of understanding.

Quote:
Originally Posted by Tournesol
They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it")


I dispute they are a necessary condition. If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.

They are necessary to learn the meaning of sensory language ITFP. Once learnt,
they are no longer necessary -- people who become blind in adulthood
do not forget the meanings of colour-words.

Are you suggesting that a blind person does not have full human semantics?
Yes -- neither does someone who has never been in love, given birth, tasted caviare and so on.
Of course they may have "good enough" semantics -- hardly anyone has full
semantics. But a silicon computer would be much more semantically limited than
a person.

Does this mean a blind person is incapable of understanding?

Not on the "good enough" basis. But the case of a computer, or chinese room,
is much more extreme.


Sensation is a necessary ingredient of understanding? Therefore if you place me in a state of sensory-deprivation it follows that I will lose all understanding, is that correct?

No: if you lack the requisite sense, you cannot attach meaning to sensory
language ITFP. If you disagree, define "red" in such a way that a person
blind from birth could understand it.
 
  • #70
moving finger said:
I think you are talking of a special kind of understanding here, one with high empathy. But I do not need to empathise with a Frenchman in order to understand the French language. MF

"High empathy"? Please explain. Is there such thing as a "low empathy"?

As far as I know, empathy is empathy. It is an ability to understand the circumstances influencing another human being as well as the ability to identify with objects and animals other than humans. It is a part of understanding and a powerful by-product of consciousness.

You don't need to empathize with a Francophone to understand the French language?

Of course you do. Otherwise you wouldn't be learning French. As soon as the vowels and all those damn silent letters start forming in your mouth... and you have to twist an accent out of your tongue... you are on the path to empathizing with the French people... like it or not. You are assuming their role and method of communication. When you assume the role or... "walk in their shoes" (so to speak) you are truly standing under them... or... understanding the people and their language.

Understanding describes a function in humans that is more complex than the simple ability to repeat words in a correct sequence so that communication in French or math or medicine is achieved. That is called comprehension and it is properly used by the Italians when they ask you if you "comprende?" as in "can you comprehend what I am saying?"

There is a reason there are different words to describe different functions... the differences between the meanings of words are slight... but they are there for a reason. Terminologies offer subtle shades that help to distinguish the speaker's or writer's references and descriptions.

That is why you see cell differentiation in the plant and animal kingdoms. Different cells function in different ways. They don't work in other organs or tissues. They must be used in the context they have evolved to serve. Much in the way languages develop specific terminology to describe specific functions.

The alien term for understanding is different from the North American term "understanding". The alien term describes a completely different function... they may use telepathy... they may have greater experiences... they may hook up with parallel dimensions to ascertain the function of "ravlinz". For humans, and I'm not sure yet what the components of understanding are... but for humans we use experience, consciousness, empathy and knowledge in a slap-dash mixture that we call "understanding".

Thanks!
 
  • #71
quantumcarl said:
"High empathy"? Please explain. Is there such thing as a "low empathy"?
I would argue that there are degrees of empathy. One person can show "more empathy" than another. Or do you believe that empathy is an "all or nothing affair"? Do you believe that it is simply black and white, either one has complete empathy, or one has none at all?
In the same way, there are degrees of understanding. For example, agent "A" could claim to understand something about quantum physics, but "A" might nevertheless acknowledge that "A" does not understand as much as agent "B", who is a quantum physics expert.
It would be wrong to conclude that both A and B had the same degree of understanding of quantum physics. It would also be wrong to conclude that A had no understanding and B had understanding.
quantumcarl said:
As far as I know, empathy is empathy. It is an ability to understand the circumstances influencing another human being as well as the ability to identify with objects and animals other than humans. It is a part of understanding and a powerful by-product of consciousness.
You believe that empathy comes in binary? Either one has complete empathy, or one has none at all? Nothing in between?
quantumcarl said:
You don't need to empathize with a Francophone to understand the French language?
Of course you do. Otherwise you wouldn't be learning French. As soon as the vowels and all those damn silent letters start forming in your mouth... and you have to twist an accent out of your tongue... you are on the path to empathizing with the French people... like it or not. You are assuming their role and method of communication. When you assume the role or... "walk in their shoes" (so to speak) you are truly standing under them... or... understanding the people and their language.
You seem to think that complete empathy is necessary for any kind of understanding.
You are entitled to your rather strange opinion, but I do not share it.
It can be argued that person X who understands the French language AND empathises strongly with the French people has a better understanding of the French language than person Y who also understands the French language but does not empathise strongly with the French people, but it would be wrong to conclude from this that Y does not understand the French language at all.
quantumcarl said:
Understanding describes a function in humans that is more complex than the simple ability to repeat words in a correct sequence so that communication in French or math or medicine is achieved. That is called comprehension and it is properly used by the Italians when they ask you if you "comprende?" as in "can you comprehend what I am saying?"
Perhaps you need to invent a new English word to encapsulate what you believe to be the case. As far as I am concerned, there are "degrees of understanding", it is not a "black and white" affair. Just as there are degrees of comprehension.
If "understanding of Z" was a black and white affair, then it should be possible to test a person's "understanding of Z" and always achieve either 0% or perfect 100% score (either they do understand Z, or they do not). The world does not work this way (even if you would want your ideal world to work like this, it doesn't).
quantumcarl said:
There is a reason there are different words to describe different functions... the differences between the meanings of words are slight... but they are there for a reason. Terminologies offer subtle shades that help to distinguish the speaker's or writer's references and descriptions.
Single words allow for subtle shades (which you seem to deny). If I say "it is snowing outside", that could mean anything from "there are a few snowflakes drifting about" to "there is a whiteout out there, you cannot see anything because of the blizzard".
An Eskimo might have different words for these two different types of snow, but in English "it is snowing outside" would be correct in both cases. "it is snowing outside" allows different shades in meaning. In the same way "X understands Y" allows for different shades in meaning - it might mean that X has a basic understanding of Y, it might mean that X is an expert in Y.
quantumcarl said:
That is why you see cell differentiation in the plant and animal kingdoms. Different cells function in different ways. They don't work in other organs or tissues. They must be used in the context they have evolved to serve. Much in the way languages develop specific terminology to describe specific functions.
I will pass on this, I cannot see the relevance.
quantumcarl said:
The alien term for understanding is different from the North American term "understanding". The alien term describes a completely different function... they may use telepathy... they may have greater experiences... they may hook up with parallel dimensions to ascertain the function of "ravlinz". For humans, and I'm not sure yet what the components of understanding are... but for humans we use experience, consciousness, empathy and knowledge in a slap-dash mixture that we call "understanding".
And I would still claim that BOTH the alien and the human understand, they just do it in different ways.
may your God go with you
MF
 
  • #72
Tournesol said:
It is a test based on an explanation; I am saying we have to solve the hard problem first, before we can have a genuine test.
In other words, in the absence of an explanation, it makes no sense to test for consciousness? There is thus no logical basis for Searle’s conclusion that the CR does not possess consciousness, correct?
Tournesol said:
If manipulating symbols is all there is to understanding, and if consciousness is part of understanding, then there should be a conscious awareness of Chinese in the room (or in Searle's head, in the internalised case).
That is a big “if”. For Searle’s objection to carry any weight, it first needs to be shown that consciousness is necessary for understanding. This has not been done (except by “circulus in demonstrando”, which results in a fallacious argument)
Tournesol said:
You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding... unless you claim that human understanding has nothing to do with consciousness either.
I am claiming that consciousness is not necessary for understanding in all possible agents. Consciousness may be necessary for understanding in humans, but it does not follow from this that this is the case in all possible agents.
To conclude from this that “understanding without consciousness is half baked” is an unsubstantiated anthropocentric (one might even say prejudiced?) opinion.
Tournesol said:
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.
Tournesol said:
If you understand something, you can report that you know it, explain how you know it, etc. That higher-level knowing-how-you-know is consciousness by definition.
I dispute that an agent needs to in detail “know how it knows” in order for it to possess an “understanding of subject X”.
“To know” is “to possess knowledge”. A computer can report that it “knows” X (in the sense that the knowledge X is contained in its memory and processes); it might (if it is sufficiently complex) also be able to explain how it came about that it possesses that knowledge. By your definition such a computer would then be conscious?
I think not. imho what you suggest may be necessary, but is not sufficient, for consciousness.
Allow me to speculate.
Consciousness also requires a certain level of internalised self-representation, such that the conscious entity internally manipulates (processes) symbols for “itself” which it can relate to other symbols for objects and processes in the “perceived outside world”; in doing this it creates an internalised representation of itself in juxtaposition to the perceived outside world, resulting in a self-sustaining internal model. This model can have an unlimited number of possible levels of self-reference, such that it is possible that “it knows that it knows”, “it knows that it knows that it knows” etc.
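(A toy sketch of the "reports that it knows, and how it knows" criterion under discussion; this is purely illustrative, and no claim is made that it amounts to consciousness:)

```python
class KnowledgeBase:
    """Stores facts together with a record of how each one was acquired."""
    def __init__(self):
        self.facts = {}  # maps fact -> provenance

    def learn(self, fact, source):
        self.facts[fact] = source

    def knows(self, fact):
        # First-order report: "I know that I know this."
        return fact in self.facts

    def explain(self, fact):
        # Second-order report: "this is how I came to know it."
        if fact in self.facts:
            return f"I know '{fact}'; I learned it from {self.facts[fact]}."
        return f"I have no knowledge of '{fact}'."

kb = KnowledgeBase()
kb.learn("water boils at 100 C at sea level", "a chemistry textbook")
print(kb.knows("water boils at 100 C at sea level"))  # True
print(kb.explain("water boils at 100 C at sea level"))
```

The point of the sketch is negative: self-report and knowing-how-you-know are trivially mechanisable, so if they sufficed for consciousness, this two-dozen-line program would qualify.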
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Tournesol said:
Consciousness.
moving finger said:
Is this merely your opinion, or can you provide any evidence that this is necessarily the case?
Tournesol said:
It is a matter of definition -- it is part of how we distinguish understanding from mere know-how.
I see. We first define understanding such that consciousness is necessary to understanding. And from our definition of understanding, we then conclude that understanding requires consciousness. Is that how it's done?
Tournesol said:
Write down a definition of "red" that a blind person would understand.
Are you suggesting that a blind person would not be able to understand a definition of “red”? Sense-experience (the ability to experience the sensation of red) is a particular kind of knowledge, and is not synonymous with “understanding the concept of red”. Compare with the infamous “What Mary Didn’t Know” thought experiment.
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
I disagree. I do not need to have the power of flight to understand aerodynamics. Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”). Experiential information may be used as an aid to understanding in some agents, but I dispute that experiential information is necessary for understanding in all agents.
moving finger said:
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
Tournesol said:
They don't fully lack it, they don't fully have it. But remember that a computer is much more restricted.
More restricted in what sense?
Tournesol said:
The latter is critical to the ordinary, linguistic understanding of "red".
I dispute that an agent which simply “experiences the sight of red” necessarily understands anything about the colour red. I also dispute that “experiencing the sight of red” is necessary to achieve an understanding of red (just as I do not need to be able to fly in order to understand aerodynamics).
Tournesol said:
If the "information processing" sense falls short of full human understanding, and I maintain it does, the argument for strong AI founders and Searle makes his case.
And I maintain it does not. I can converse intelligently with a blind person about the colour “red”, and that person can understand everything there is to know about red, without ever “experiencing the sight of red”. Your argument seems to be that “being able to see red” is necessary for an understanding of red, which is like saying “being able to fly” is necessary for an understanding of flight.
moving finger said:
If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
Tournesol said:
They are necessary to learn the meaning of sensory language ITFP.
They are aids to understanding in the context of some agents (eg human beings), because that is exactly how human beings acquire some of their information. It is not obvious to me that “the only possible way that any agent can learn is via sense-experience”, is it to you?
Tournesol said:
if you lack the requisite sense, you cannot attach meaning to sensory language ITFP. If you disagree, define "red" in such a way that a person blind from birth could understand it.
We’ve covered this one already.
May your God go with you
MF
 
  • #73
MF said:
Semantic understanding arises from symbol manipulation. I would claim that the CR could carry out such symbol manipulation.
Perhaps our definitions are a bit mixed here. Since we are talking about Searle's CR I would suggest using his definitions. According to the CR argument, symbol manipulation is a purely "syntactic" process (regarding only patterns of information), and this cannot yield a "semantic understanding" (semantic: regarding the meaning of the symbols, which is not emergent from the "syntax" [pattern] of the symbols).
The problem that I see with his reasoning, as I've stated on the other two threads regarding the CR, is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that, including this addendum, I agree with his definitions though obviously not his conclusions (that syntactic information cannot yield semantic understanding).
I would have to disagree with you, though, that "semantic" understanding arises from symbol manipulation in and of itself. I'd have to call it "syntactic" understanding, if any sort of understanding. Unless you are implying more in your definition that isn't explicitly stated. I would agree that "semantic" understanding can develop based off of "syntactic" understanding coupled with experience (or memory; I'm not sure which term would be best suited for my usage here, but experience seems to fit better for me personally).
MF said:
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possesses “sight” in order to understand Chinese?
In fact, are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
I think I should clarify what I mean by "experience". As I noted earlier, I'm not entirely sure if "experience" is a better word to use for what I mean than the word "memory".
What I mean by experience is not the instant of experience as it occurs but rather the accumulation of knowledge through experience (i.e. "Moving Finger has experience in debating"). I see memory as being static imprints of information and experience as an aggregate of memories cross-referenced.
Now "sense-experience". My example of something based on basic sensory information is only a matter of trying to simplify my point and use an "experience" common and easy to understand for us humans with sight. I focus on sensory information because, as far as we know, this is the only manner in which we humans can gather information with which to develop "experience" (with regard to my earlier definition). I do not preclude the possibility of some entity (even a human) gathering information in some other fashion with which to develop an experience and understanding of something. I only mean that the information must reach that entity in some fashion. In the case of a blind person, they have other senses by which to gather information and could possibly come to understand in some sense or another what "red" is, but they will not, without help, be able to understand the experience of "red" that those with sight possess. Here is where the problem comes in...
As I stated before, the purpose of language is to communicate the thoughts in a person's head. When a person says the word "red" they generally are not referring to the particular portion of the light spectrum which corresponds to the colour red but to their own personal experience of the colour red.
I think I'm tangenting a bit here. Consider this. A blind person has had described to them, in whatever manner possible, what the colour red is. That blind person then undergoes a procedure and is endowed with vision. Based solely on that formerly blind person's knowledge gained about the colour red while blind, will that person be able to identify "red" when he/she sees it?
So with regard to your last line. Our senses are, as far as we know, our only source for gaining information. Once we have that information we have it, and our senses are no longer necessary to have an understanding of that information (this in regard to the absurd argument of whether or not we still understand if put into a sensory deprivation chamber). No one has said that your eyes are the source for understanding of the colour red, nor has anyone stated that our particular sensing organs are a unique prerequisite for understanding. So please stop with this straw man.
MF said:
The CR possesses information inside itself; the CR communicates with words.
I never said that it doesn't possess information, did I?
MF said:
Your argument seems to be based on the suggestion that the tools in the CR “lack any meaning” because their only purpose “is to shuffle about words”.
I disagree. The purpose of the tools in the CR is “to understand Chinese”.
From what does “meaning” arise?
I would suggest that “meaning” arises simply from a process of symbol manipulation.
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
I never said that the tools lack meaning, in fact I said...
TheStatutoryApe said:
The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about and hence they lack any meaning aside from the process of shuffling them about as far as the CR is concerned.
There is "syntactic" meaning there just no "semantic". Also I am referring to Searle's argument here specifically and it's flaws. In Searle's argument, no matter the flaws, the CR possesses no understanding. It's built that way. The purpose of the CR is not to "understand chinese" it's to mimic the understanding of chinese.
"Meaning" would be a difficult word to pin down and I have not tried to nor will I attempt to at this juncture. I've never stated that there exists no "meaning" in the CR. I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using. Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read chinese.
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding. It would not be able to hold a coherent and indistinguishable conversation otherwise.
 
  • #74
moving finger said:
This may be true of simple pocket calculators, but there is no reason in principle why a "learning calculating machine" could not exist.
MF
You're missing my point. I never stated that they couldn't exist or that they couldn't possibly "understand". I'm not arguing the possibility of machines being able to "understand"; I'm asking questions about what QuantumCarl, or anyone else here for that matter, thinks are the required properties for "understanding".

Do you agree that a dynamic process is necessary for "understanding"?
Do you think that when a human works math problems there is a fundamental difference in the process between the human and a calculator (a normal calculator)? If so, what?
Do you think "learning" would be the significant dynamic property required for "understanding" or something else?
 
  • #75
quantumcarl said:
Biography:
John Searle is an American philosopher who is best known for his work on the human mind and human consciousness. According to Searle, the human mind and human consciousness cannot be reduced simply to physical events and brain states.

Searle is actually a physicalist. He claims that the human brain has unique causal powers that enable real understanding, thus going beyond the mere manipulation of input by formal rules.

And the thought experiment isn't called the China room, it's called the Chinese room.

Does the Chinese room possess understanding? It all depends on how you define understanding. In terms of a man understanding words, here is the definition I’ll be using:


  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.

So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means. It is interesting to note that this particular definition of understanding requires consciousness. The definition of consciousness I’ll be using goes as follows:

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:

  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Given this particular definition of understanding, it seems clear that the man in the Chinese room does not know a word of Chinese. What about the systems reply? That the Chinese room as a whole understands Chinese? Searle’s response works well here. Let the man internalize the room and become the system (e.g. he memorizes the rulebook). He may be able to simulate a Chinese conversation, but he still doesn’t understand the language.

Tournesol said:
But consciousness is a definitional quality of understanding, just as being
unmarried is a definitional quality of being a bachelor.

That’s what I’ve been telling moving finger. But he doesn’t seem to understand the situation.

Whether or not consciousness is a definitional quality of understanding depends on how you define understanding. In my definition, it certainly is the case (and I suspect the same is true for yours). In moving finger’s definition, that is (apparently) not the case.

moving finger said:
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.

Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious. Moving finger, please see post #210 regarding this criticism. Or to save you the trip, I can reproduce my response here.

My definition of understanding requires consciousness. Do we agree? Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that. Does your definition of understanding require consciousness? I'm not claiming that either. But understanding in the sense that I use it would seem to require consciousness. Do we agree? It seems that we do. So why have we been arguing about this?

You have claimed that “understanding requires consciousness” is “circulus in demonstrando”, a tautology and a fallacious argument. But please understand what’s going on here. Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrando"? Obviously not. Analytic statements are not fallacious.
 
  • #76
moving finger said:
I would argue that there are degrees of empathy. One person can show "more empathy" than another. Or do you believe that empathy is an "all or nothing affair"? Do you believe that it is simply black and white, either one has complete empathy, or one has none at all?

I'd say one is either able to empathize or not. Empathy is generally considered a human trait. However, there are humans who lack the ability through conditioning and genetics.

moving finger said:
In the same way, there are degrees of understanding. For example, agent "A" could claim to understand something about quantum physics, but "A" might nevertheless acknolwedge that "A" does not understand as much as agent "B", who is a quantum physics expert.
It would be wrong to conclude that both A and B had the same degree of understanding of quantum physics. It would also be wrong to conclude that A had no understanding and B had understanding.

You use the word understanding here like you know what it means. In my case I would be saying that "A" has more knowledge about a particular field of quantum physics than "B". Then I would say that there is, or can be, an understanding between "A" and "B" where they can help each other with certain areas of further study. This will boost their collective comprehension of the ideas of quantum physics.


moving finger said:
You believe that empathy comes in binary?


moving finger said:
You seem to think that complete empathy is necessary for any kind of understanding.

I have indicated nothing of the kind. I did write that I think empathy may be a component of understanding along with some other components.


moving finger said:
You are entitled to your rather strange opinion, but I do not share it.

When you read my opinion, you are sharing it. Like it or not.


moving finger said:
It can be argued that person X who understands the French language AND empathises strongly with the French people has a better understanding of the French language than person Y who also understands the French language but does not empathise strongly with the French people, but it would be wrong to conclude from this that Y does not understand the French language at all.

The act of learning French is an act of empathy with the culture that created the language. That's it.


moving finger said:
As far as I am concerned, there are "degrees of understanding", it is not a "black and white" affair. Just as there are degrees of comprehension.

If I see something that looks like a bear in the woods I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?
 
  • #77
quantumcarl said:
I'd say one is either able to empathize or not.
You are entitled to your opinion, and we will have to agree to disagree.
quantumcarl said:
I would be saying that "A" has more knowledge about a particular field of quantum physics than "B". Then I would say that there is or can be an understanding between "A" and "B" where they can help each other with certain areas of further study. This will boost their collective comprehension of the ideas of quantum physics.
Can there be, in your opinion, any "understanding" between "A" and "B" where "A" is a Vulcan and "B" is a human being?
quantumcarl said:
When you read my opinion, you are sharing it. Like it or not.
That depends on the intended meaning of "sharing" in the context in which the word was used.
quantumcarl said:
The act of learning French is an act of empathy with the culture that created the language. That's it.
Your opinion again. Curious.
quantumcarl said:
If I see something that looks like a bear in the woods I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?
Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
MF
 
  • #78
Consider this my edit page please:

moving finger said:
You believe that empathy comes in binary?

I don't know where you get that idea. You're grasping at straws that I haven't put out. Perhaps you're referring to another post by someone else... or yourself.

Originally Posted by quantumcarl
If I see something that looks like a bear in the woods I then have an "understanding" that there is a bear in the woods. Later on I notice it is not a bear but a brown jacket on a stump. What "degree of understanding" did I have when I thought it was a bear, and what "degree of understanding" did I have when I saw it was a jacket on a stump?

"Answer" posted by MF:

Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
MF

You seem unable to answer my question. I'm asking how you define degrees of understanding. You're the one proposing they exist, and yet their definition eludes you so far.

I can empathize with your "answer a question with a question" defence because it is a difficult question.

As it goes, in my part of the world, understanding is only understanding when the math or the medical info or the dialect is true information and properly learned. If it is Bulle Shiite and improperly assimilated then, even if the person understands the jumble of information in their own head, no one else will. And after some experiences with this perplexing situation, the person will realize they actually did not understand one bit of the information in their head. The person was taught misinformation and the information has led to a misunderstanding of the topic.

So my definition of understanding is beginning to include these elements:

QuantumCarl's guide to Understanding

Correct (true) information
Experience (of that information)
Empathy (of the information)
Consciousness (of all of the above)

Welcome Tisthammerw, you have raised some good points. I think the Chinese Room experiment has bitten off more than it can chew with regards to the definition of "understanding".

I agree that its definition belongs in the realm of relative semantics; however, this discussion has brought, and can continue to bring, in my view, the many uses of the word a little closer together. As I've always stated, terminology exists because professionals need terms that identify origin and function. Words that offer a clear picture of what they describe also offer sound progress and swift decision in the increasingly murky milieu of mankind. Thanks!
 
  • #79
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument symbol manipulation is a purely "syntactic" process (regarding only patterns of information) and that this can not yield a "semantic understanding" (semantic: regarding the meaning of the symbols which is not emergent from the "syntax"[pattern] of the symbols).
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding "by definition" then the entire CR argument becomes fallacious (circulus in demonstrando).
TheStatutoryApe said:
I would agree that "semantic" understanding can develope based off of "syntactic" understanding coupled with experience (or memory, I'm not sure which term would be best suited for my usage here but experience seems to fit better for me personally).
Therefore symbol manipulation (with the right information/knowledge base) CAN give rise to semantic understanding? Doesn’t this contradict what you said above?
I agree information and knowledge are also required – I assumed this as a given but can state it explicitly if it helps. “Memory” and “experience” are simply particular (anthropocentric) forms of information and knowledge.
TheStatutoryApe said:
I think I should clarify what I mean by "experience". As I noted earlier I'm not entirely sure if "experience" is better a word to use for what I mean than the word "memory".
What I mean by experience is not the instant of experience as it occurs but rather the accumulation of knowledge through experience (i.e. "Moving Finger has experience in debating"). I see memory as being static imprints of information and experience as an aggregate of memories cross-referenced.
In AI terms, simple memory might be equated with information, and experience with knowledge (knowledge = information plus rules of correlation between the information).
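MF's formula here (knowledge = information plus rules of correlation between the information) can be sketched as a toy program. This is only an illustration of the distinction as stated in the post, not anything from Searle or from AI practice, and every name in it is invented for the example:

```python
# Toy model of the distinction MF draws:
#   information = a bare store of facts
#   knowledge   = the facts PLUS rules that correlate them,
#                 allowing facts to be derived that were never stored.

information = {
    ("socrates", "is_a", "human"),
}

# A single correlation rule: anything that is_a human is_a mortal.
rules = [
    lambda facts: {(subj, "is_a", "mortal")
                   for (subj, rel, obj) in facts
                   if rel == "is_a" and obj == "human"},
]

def derive(facts, rules):
    """Apply each rule once and merge the derived facts into the store."""
    derived = set(facts)
    for rule in rules:
        derived |= rule(facts)
    return derived

knowledge = derive(information, rules)
# The derived fact ("socrates", "is_a", "mortal") appears in `knowledge`
# even though it was never part of `information`.
```

On this toy reading, the bare fact store never contains the correlations; only the combination of store plus rules does, which is one way of cashing out MF's information/knowledge distinction.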
TheStatutoryApe said:
In the case of a blind person they have other senses by which to gather information and could possibly come to understand in some sense or another what "red" is but they will not, without help, be able to understand the experience of "red" that those with sight possess. Here is where the problem comes in...
You use the curious phrase “understand the experience of red”. If I experience seeing red then all I have achieved is that I have experienced seeing red. The experience of seeing red does not in itself convey any understanding, therefore to say that one is able to “understand the experience of red” simply by seeing red is imho misleading.
I dispute that “experiencing seeing red” necessarily endows an “understanding” of red, or that an agent which cannot experience red cannot therefore understand red. Just as the experience of flying does not endow an understanding of flying, and an agent which cannot fly can nevertheless understand flight.
TheStatutoryApe said:
As I stated before the purpose of language is to communicate the thoughts in a persons head. When a person says the word "red" they generally are not referring to the particular portion of the light spectrum which coresponds to the colour red but their own personal experience of the colour red.
They can relate the word red to a particular subjective sense-experience, yes. But this in itself is not “understanding”.
If I instead say the word “X-ray” (another part of the electromagnetic spectrum), are you then saying that I do not understand what the word means because I have no sense-experience of seeing X-rays?
My understanding of red, and my understanding of X-ray, arise from the information and knowledge that I possess, which allow me to put these concepts into rational contextual relationships with other concepts to derive meaning – in other words semantics. I may be blind, but I can understand red just as much as I can understand X-ray.
TheStatutoryApe said:
A blind person has had the colour red described to them in every manner possible. That blind person then undergoes a procedure and is endowed with vision. Based solely on the knowledge about the colour red gained while blind, will that person be able to identify "red" when he/she sees it?
This is the infamous Mary argument (Mary knows everything there is to know about red, but has never experienced seeing red). The reason the argument is fallacious is that “experiencing red” is not synonymous with “knowing what red is”. Experiencing red is experiencing red, period. Does Mary know or understand any less about X-rays because she has never experienced seeing X-rays? Of course not.
TheStatutoryApe said:
So with regard to your last line. Our senses are, as far as we know, our only source for gaining information.
Yes, but this is a peculiar human limitation, and need not necessarily be the case in all possible agents. I can even speculate about a possible future where humans “acquire information” by direct transfer into the brain, bypassing all the sense organs. Would you say that such information is somehow invalid because it is not experiential information?
TheStatutoryApe said:
Once we have that information we have it and our senses are no longer necessary to have an understanding of that information
Senses do not convey or create understanding, they only act as conduits for information transfer.
Understanding is a process that takes place within the brain (or brain equivalent) when it processes information and knowledge in a particular way.
TheStatutoryApe said:
No one has said that your eyes are the source for understanding of the colour red,
Pardon? What did you just say above?
TheStatutoryApe said:
nor has anyone stated that our particular sensing organs are a unique prerequisite for understanding.
Excellent, so we agree that experiential information is but one possible “source of information” (not of understanding), and an agent does not necessarily need to experience seeing red (or X-rays) in order to understand what is meant by the term red (or X-rays)?
TheStatutoryApe said:
There is "syntactic" meaning there just no "semantic".
It has not been shown that there is no semantic meaning present, except possibly by definition (which as we have seen results in a fallacious argument)
TheStatutoryApe said:
The purpose of the CR is not to "understand chinese" it's to mimic the understanding of chinese.
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, please explain why a perfect simulation of a process necessarily differs in any way from the original process?
TheStatutoryApe said:
I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using.
Perhaps you have asserted this, but you have not shown it.
I can assert anything I wish, but in absence of rational and logical argument that is simply my opinion.
TheStatutoryApe said:
Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read Chinese.
Where has this been shown, and why should it matter anyway?
You and I do not share “perfect definitions of all the words we use” (we dispute some meanings of words in this thread), but that does not entitle either of us to accuse the other of not understanding English.
TheStatutoryApe said:
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding.
And how would you know whether or not it is really capable of understanding, and not just (as you suggest) simulating understanding? How would you tell the difference?
TheStatutoryApe said:
It would not be able to hold a coherent and indistinguishable conversation otherwise.
The CR can hold a coherent and intelligent conversation. (not sure what you mean by “indistinguishable conversation”?). What do we conclude from this?
May your God go with you
MF
 
  • #80
One more note:

I've noticed that no one, other than myself, has broken down the word understanding into its two roots

under



standing



Imagine who came up with this word and what it represented to them when they brought these two roots together.

Any thoughts?
 
  • #81
TheStatutoryApe said:
Do you agree that a dynamic process is necessary for "understanding"?
It is not that a “dynamic process is necessary for understanding”; understanding IS a dynamic process (which is why questions such as “can a pile of bricks understand” demonstrate a profound lack of understanding of the meaning of “understanding” on the part of the questioner).

TheStatutoryApe said:
Do you think that when a human works math problems there is a fundamental difference in the process between the human and a calculator(a normal calculator)? If so what?
There are many fundamental differences, yes. For example in the case of the human agent the process is likely to be ill-defined, rather erratic and irrational (depending on the complexity of the problem), ie will not necessarily follow a rational and easily reproducible algorithm, and will be accompanied by many auxiliary or associated side-processes. In the case of the simple machine agent the algorithm is likely to be (in comparison) very rational, straightforward and easily reproducible. Is this what you mean?

TheStatutoryApe said:
Do you think "learning" would be the significant dynamic property required for "understanding" or something else?
Please define what you mean by “learning”. It could mean “acquiring new knowledge”. Understanding requires a knowledge-base, therefore (following the above definition) an agent with the capacity to understand, but which does not understand because it lacks a suitable knowledge-base, might achieve understanding by learning. Is this what you mean?
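MF's definition of learning as "acquiring new knowledge" — where an agent with the capacity to understand might come to understand by learning — could be caricatured in code. A deliberately crude sketch; the class and its behaviour are invented for illustration and model nothing beyond the sentence above:

```python
# Crude sketch of MF's point: an agent has the *capacity* to answer
# questions, but cannot do so about a topic until it has acquired
# ("learned") a knowledge-base for that topic.

class Agent:
    def __init__(self):
        self.knowledge = {}        # starts with an empty knowledge-base

    def learn(self, topic, facts):
        """Acquire new knowledge about a topic."""
        self.knowledge[topic] = facts

    def answer(self, topic):
        """Capacity is always present; success depends on the knowledge-base."""
        return self.knowledge.get(topic)   # None if nothing learned yet

agent = Agent()
before = agent.answer("red")       # None: capacity without knowledge
agent.learn("red", "light of roughly 620-750 nm wavelength")
after = agent.answer("red")        # the learned fact is now available
```

The design point is only that the same machinery fails before learning and succeeds after it, which is the sense of "learning" MF's definition picks out.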

May your God go with you

MF
 
  • #82
moving finger said:
You believe that empathy comes in binary?
quantumcarl said:
I don't know where you get that idea.
Let me explain. Quantumcarl (QC) seems to believe an agent either has perfect empathy (1) or does not have any empathy at all (0). Nothing “in between” is possible? Yes or no?
moving finger said:
Your philosophy would seem to imply that the statement "quantumcarl understands" was either true or false, in each case. Did "quantumcarl understand" when he thought it was a bear? Did he "understand" when he thought it was a jacket?
Then what happens when he later discovers it was not a jacket after all, but a brown blanket? Does he now understand?
quantumcarl said:
You seem unable to answer my question.
You misunderstand (or jump to conclusions). I “chose” not to answer your question, just as it seems that you choose not to answer most of the questions in my last post.
quantumcarl said:
I'm asking how you define degrees of understanding.
Then why didn’t you just say so?
“degrees of understanding” means “two agents can understand subject X, yet one agent may have more understanding of X than the other”.
Consider the statement “Agent A possesses more understanding of subject X than does Agent B, and yet both agents still possess some understanding of subject X”. These are “degrees of understanding”. Quantumcarl’s philosophy would seem to be that the above statement is necessarily false (ie the situation described is impossible).
quantumcarl said:
it is a difficult question.
Perhaps for QC, not for MF. It’s a very easy question, and I just answered it. Now I’ll wait to see if you answer the questions I posed in my earlier post.
quantumcarl said:
understanding is only understanding when the math or the medical info or the dialect is true information and properly learned.
Classical error.
“True information”? What is that? Does QC possess true information, or does QC just think/believe that QC does? How would QC find out?
A) When QC saw the bear, did he possess true information?
B) When QC realized it was a jacket and not a bear, did he now possess true information?
C) When QC realized it was a blanket and not a jacket, did he now possess true information?
All we can ever possess is epistemic information. We may try to infer ontically from this, but we never have direct access to ontic information. Thus the best we can ever achieve is to “believe that we have true information”. In each case of A, B and C above, QC believed (at the time) that he possessed true information.
If QC insists that QC must possess true information in order to understand (as opposed to simply believing that QC possesses true information) then QC will never be able to demonstrate that QC understands, because QC will never be able to demonstrate unequivocally and objectively that QC possesses true information.
quantumcarl said:
If it is Bulle Shiite and improperly assimilated then, even if the person understands the jumble of information in their own head, no one else will.
Does QC claim to possess true information? How would QC test this?
quantumcarl said:
Correct (true) information
Does QC claim to possess true information? How would QC test this?
If QC is unable to prove that QC possesses true information, does it follow that QC does not understand anything?
quantumcarl said:
Experience (of that information)
Experience is a possible source of information. Experience provides information. But I dispute that an agent needs experience in order to understand.
May your God go with you
MF
 
  • #83
Tisthammerw said:
Does the Chinese room possess understanding? It all depends on how you define understanding.
Excellent start!
Now allow me to summarise the fundamental problem as I see it.
Take the conditional statement :
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?
How do we tackle this problem?
First, to construct an argument, we need to state our premises.
We might DEFINE UNDERSTANDING such that understanding requires consciousness. Since in this case we have not SHOWN that understanding requires consciousness, but instead we have DEFINED understanding this way, this definition then becomes one of our premises.
What this gives us is then :
1 Premise : We define understanding such that it requires consciousness
2 IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
3 Consciousness is necessary for understanding (from Premise 1)
4 Hence an agent which does not possess consciousness also does not possess understanding (from 2,3)
The above argument is an example of “circulus in demonstrando”, ie we have assumed what we wish to prove (that consciousness is necessary for understanding) in our premises, and (though the logic of the argument is perfect) it is a fallacious argument.
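The structure of MF's argument can be written out in first-order shorthand (my notation, not MF's: $U(x)$ for "x understands", $C(x)$ for "x is conscious"):

```latex
\begin{align*}
\text{Premise 1 (definition):} \quad & \forall x \,\bigl( U(x) \rightarrow C(x) \bigr) \\
\text{Conclusion:} \quad & \forall x \,\bigl( \neg C(x) \rightarrow \neg U(x) \bigr)
\end{align*}
```

The conclusion is simply the contrapositive of Premise 1, so the argument, while logically valid, adds nothing that was not already packed into the definition — which is the sense in which MF calls it circular.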
Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument and a statement and a premise?
Tisthammerw said:
My definition of understanding requires consciousness. Do we agree?
I agree that you have chosen to define understanding such that it requires consciousness.
Tisthammerw said:
Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that.
Excellent.
Tisthammerw said:
Does your definition of understanding require consciousness? I'm not claiming that either.
Excellent.
Tisthammerw said:
But understanding in the sense that I use it would seem to require consciousness. Do we agree?
You have defined it so.
Tisthammerw said:
It seems that we do. So why have we been arguing about this?
You misunderstand. We are NOT arguing about your premise “understanding requires consciousness”.
We seem to disagree on whether the following ARGUMENT is fallacious or not :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
This argument is a perfect example of “circulus in demonstrando”, ie the conclusion of the argument is already assumed in the premises, which is accepted in logic as being a fallacious argument.
Tisthammerw said:
You have claimed that “understanding requires consciousness” is “circulus in demonstrando”, a tautology and a fallacious argument.
Again, I have NOT claimed the statement “understanding requires consciousness” is “circulus in demonstrando” – you seem confused about the difference between a statement and an argument.
Let me repeat again :
The following argument is an example of “circulus in demonstrando”, and is fallacious :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
Tisthammerw said:
Is the tautology “all bachelors are unmarried” a fallacious argument and "circulus in demonstrando"?
Let’s look at it logically.
“all bachelors are unmarried” is not necessarily an argument. It could be a statement or a premise, or both.
To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.

Let's do this.
First one must define what one means by the terms “bachelor” and “unmarried”. (You may object "this is obvious", but that is beside the point: strictly speaking, all terms in an argument must be clearly defined and agreed.)
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”

Check it out yourself in any good book on logic, if you don’t believe me.
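The bachelor example can even be put formally. Here is a minimal sketch in Lean (the type and predicate names are illustrative, not from any source above), showing that once “bachelor” is *defined* as “unmarried male”, the “conclusion” is simply read back off the definition:

```lean
-- Illustrative sketch: the premise *is* the definition, so the
-- "conclusion" is extracted from it by a trivial projection.
variable (Person : Type) (Unmarried Male : Person → Prop)

-- Premise: "bachelor" is *defined* as "unmarried male".
def Bachelor (x : Person) : Prop := Unmarried x ∧ Male x

-- "All bachelors are unmarried": nothing to prove beyond unfolding
-- the definition and projecting out its left conjunct.
theorem all_bachelors_unmarried (x : Person)
    (h : Bachelor Person Unmarried Male x) : Unmarried x :=
  And.left h
```

The proof term is one projection, which is precisely the point: the argument adds nothing that was not already packed into the premise.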
May your God go with you
MF
 
Last edited:
  • #84
Tournesol said:
But consciousness is a defintional quality of understanding
Tisthammerw said:
That’s what I’ve been telling moving finger.
Tisthammerw said:
Whether or not consciousness is a definitional quality of understanding depends on how you define understanding. In my definition, it certainly is the case (and I suspect the same is true for yours). In moving finger’s definition, that is (apparently) not the case.
Let’s try to take this one step at a time, to see if we can make progress.

Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?

“whether or not consciousness is necessary for understanding is a matter of definition”

True or false?

MF

(ps MF says imho it is true)
 
  • #85
Hi quantumcarl

I am conscious that our debate often gets so convoluted that we maybe lose sight of exactly what the issues are that we are debating.

Just to be sure that I properly understand (or should that be comprehend?) your position, and to make sure that I am not attacking something that is simply in my imagination, could you please examine the following statement :

Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

Many thanks

MF
 
  • #86
quantumcarl said:
So my definition of understanding is beginning to include these elements:
QuantumCarl's guide to Understanding
Correct (true) information
Experience (of that information)
Empathy (of the information)
Consciousness (of all of the above)

Since we seem to be in "I'll show you mine if you show me yours" mode, then here is my quick shot at defining the verb "To Understand" :

To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")

To know and to comprehend implies in turn possessing, manipulating, and making rational use of syntactic and semantic information and knowledge, to translate one meaning or interpretation into another according to a set of rules or laws.

Note I have (deliberately) omitted any reference to consciousness, awareness, experience, truth, and empathy, because (imho) I do not see any of these as necessary elements of understanding.
(this is not to say that, for example, empathy might not emerge from understanding - indeed it might - but I do not see empathy as a necessary prerequisite for understanding)

I don't expect that QC, Tournesol or Tisthammerw will agree with the definition - but that's OK, because I don't agree with their definitions either :smile:

May your God go with you

MF
 
Last edited:
  • #87
moving finger said:
Hi quantumcarl

I am conscious that our debate often gets so convoluted that we maybe lose sight of exactly what the issues are that we are debating.

Just to be sure that I properly understand (or should that be comprehend?) your position, and to make sure that I am not attacking something that is simply in my imagination, could you please examine the following statement :

Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

Many thanks

MF

That's how I see it... as in... true.

Because, until a human or simulated human or partially simulated human or machine grasps a full understanding of a subject... there is no proper understanding of the subject.

Take the 5 blind men and the elephant as an example. Each man has his opinion and his set of data to describe the elephant. One thinks it's like a snake. One thinks it's like sandpaper. One thinks it's like a hairy rhino, and so on. None of them understands that this is an elephant. What they do understand is that they are attempting to discern what the animal or phenomenon is... (because each man has consciousness and an empathy toward the function of researching the subject).

When the 5 men get together and discuss what they have gathered from their experience, they form a body of knowledge constructed from each other's accounts. However, I doubt that they have a conscious understanding of the elephant... that it could squash them all in a second, that it eats 2 tonnes of leaves a day, and so on.

I believe there are steps toward understanding something and they include the evolution of the human brain. I believe there are steps toward learning empathy... and they include experiencing the ownership of a human brain.
 
  • #88
moving finger said:
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?

It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.


How do we tackle this problem?
First, to construct an argument, we need to state our premises.
We might DEFINE UNDERSTANDING such that understanding requires consciousness. Since in this case we have not SHOWN that understanding requires consciousness, but instead we have DEFINED understanding this way

Whether or not understanding requires consciousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism. After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.


Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument and a statement and a premise?

Yes, but I also understand that you have phrased my analytic statement in the form of an argument. This can be done to justify the analytic statement.


I agree that you have chosen to define understanding such that it requires consciousness.

Okay, so we agree that “understanding requires consciousness” (given the definitions I am using) is an analytic statement.

You misunderstand. We are NOT arguing about your premise “understanding requires consciousness”.
We seem to disagree on whether the following ARGUMENT is fallacious or not :
“we take as a premise that understanding requires consciousness, it follows that a non-conscious agent is unable to understand”
This argument is a perfect example of “circulus in demonstrando”, ie the conclusion of the argument is already assumed in the premises, which is accepted in logic as being a fallacious argument.

Well, in the context of my analytic statement “understanding requires consciousness” here is the “argument” I am using:

The first premise is the definition of understanding I'll be using (in terms of a man understanding words):

  • The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.

So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means.

The second premise is the definition of consciousness I’ll be using:

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

My conclusion: understanding requires consciousness.

To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:

  • Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms. You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.


Tisthammerw said:
Is the tautology “all bachelors are unmarried” a fallacious argument and “circulus in demonstrando”?

Let’s look at it logically.
“all bachelors are unmarried” is not necessarily an argument. It could be a statement or a premise, or both.

Nonetheless it can be phrased as an argument, just as you yourself have done with my analytic statement “understanding requires consciousness” and just as I have done above.


To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.

Let's do this.
First one must define what one means by the terms “bachelor”, and “unmarried”. (you may object "this is obvious", but that is beside the point. Strictly all terms in an argument must be clearly defined and agreed).
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”

You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.” I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence on phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true; (b) the argument is perfectly sound anyway, and (c) this analytic statement is itself a premise to a larger and more relevant argument that you seem to be avoiding: the one regarding the Chinese room thought experiment.

Let’s take my example above regarding the “understanding requires consciousness” argument above. The conclusion logically follows from the premises, and the premises are true. The argument is sound. Doesn’t it seem odd then to call the argument “fallacious”?

Additionally, what about the matter at hand? That of the Chinese room thought experiment? Note what I said earlier:

Tisthammerw said:
Given this particular definition of understanding, it seems clear that the man in the Chinese room does not know a word of Chinese. What about the systems reply? That the Chinese room as a whole understands Chinese? Searle’s response works well here. Let the man internalize the room and become the system (e.g. he memorizes the rulebook). He may be able to simulate a Chinese conversation, but he still doesn’t understand the language.

For the next post:

moving finger said:
Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?

“whether or not consciousness is necessary for understanding is a matter of definition”

True or false?

If I am understanding you correctly, then the answer is true: whether or not consciousness is necessary for understanding depends on how you define “consciousness” and “understanding.”

moving finger said:
here is my quick shot at defining the verb "To Understand" :

To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")

There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives, the entity possesses consciousness (using my definition of the word “consciousness”). BTW, I use “perceive” definitions 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?
 
Last edited:
  • #89
I don't mean to butt in.. but I am going to... (throws ass in) this is all turning into a clever game of "a play on words". I think you guys should try and work together a little bit better instead of always trying to prove one another wrong with seemingly aggressive stances. Might get somewhere. Just my 2 cents, take it or leave it. Only trying to do what's morally right... but then again... what's the meaning of moral correctness? lol
 
  • #90
*sigh*
MF, you seem not to be "understanding" quite a bit of what I am saying. In several instances you seem to think I am saying or implying almost the exact opposite of what I am saying. Hopefully I can clarify a few things here...

moving Finger said:
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument symbol manipulation is a purely "syntactic" process (regarding only patterns of information) and that this can not yield a "semantic understanding" (semantic: regarding the meaning of the symbols which is not emergent from the "syntax"[pattern] of the symbols).
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding *by definition” then the entire CR argument becomes fallacious (circulus in demonstrando).
I realize that I was referring to Searle's conclusions but I was referring to them along with his definitions (which I have underlined in the above quote this time around). Also I do not fully agree with his definitions and conclusions as I pointed out here...
The problem that I see with his reasoning as I've stated on the other two threads regarding the CR is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that including this addendum I agree with his definitions though obviously not his conclusions (that syntactic information can not yield semantic understanding).
But for one reason or another you do not seem to have taken notice of this or the fact that I am simply restating Searle's argument and not my own when you say this here...
Therefore symbol manipulation (with the right information/knowledge base) CAN give rise to semantic understanding? Doesn’t this contradict what you said above?
No it doesn't when you actually pay attention to what I am saying. It contradicts Searle's argument which I was pointing out I do not agree with and what about it I don't agree with.

MF said:
They can relate the word red to a particular subjective sense-experience, yes. But this in itself is not “understanding”.
If I instead say the word “X-ray” (another part of the electromagnetic spectrum), are you then saying that I do not understand what the word means because I have no sense-experience of seeing X-rays?
My understanding of red, and my understanding of X-ray, arise from the information and knowledge that I possess, which allows me to put these concepts into rational contextual relationships with other concepts to derive meaning – in other words semantics. I may be blind, but I can understand red just as much as I can understand X-ray.
___________________________________________________

Yes, but this is a peculiar human limitation, and need not necessarily be the case in all possible agents. I can even speculate about a possible future where humans “acquire information” by direct transfer into the brain, bypassing all the sense organs. Would you say that such information is somehow invalid because it is not experiential information?
Here you seem to very much ignore the definition I explicitly laid out for you on what I mean by experience. I gave a definition specifically to avoid this problem.
In response to your question above: my definition of "experience", which you alternately refer to as "knowledge", accepts this as valid, being a manner of acquiring and cross-referencing information.

This here is a misunderstanding I thought was funny...
MF said:
TheStatutoryApe said:
Once we have that information we have it and our senses are no longer necessary to have an understanding of that information
Senses do not convey or create understanding, they only act as conduits for information transfer.
Understanding is a process that takes place within the brain (or brain equivalent) when it processes information and knowledge in a particular way.
TheStatutoryApe said:
No one has said that your eyes are the source for understanding of the colour red,
Pardon? What did you just say above?
You are more or less saying what I had been stating, though you have taken the quotes a bit out of context. I was saying that the eyes are only necessary for receiving the information and not for understanding, and that once the information has been received the eyes are no longer necessary. Or, more exactly, that the acquisition of information is necessary to understanding but is not the process of understanding in and of itself. Elsewhere in my post I even stated that there are other manners by which a person can acquire information about "red" which are still experiential in nature (by my definition of experience).

Pretty much the majority of your last post has shown you seeing disagreements where there are none. One thing I can think of that may help in this would be if you were to treat each of my points as a whole rather than trying to dissect them line by line and out of context.
Now let's see if we can wade through the rest of your post...

MF said:
TheStatutoryApe said:
There is "syntactic" meaning there just no "semantic".
It has not been shown that there is no semantic meaning present, except possibly by definition (which as we have seen results in a fallacious argument)

TheStatutoryApe said:
The purpose of the CR is not to "understand chinese" it's to mimic the understanding of chinese.
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, please explain why a perfect simulation of a process necessarily differs in any way from the original process.

TheStatutoryApe said:
I asserted that the CR, as built by Searle, does not understand the meanings of the words it is using.
Perhaps you have asserted this, but you have not shown it.
I can assert anything I wish, but in the absence of rational and logical argument that is simply my opinion.

TheStatutoryApe said:
Perhaps a better way of stating this would be to say that the words don't mean to the CR what they mean to people who speak/read chinese.
Where has this been shown, and why should it matter anyway?
You and I do not share “perfect definitions of all the words we use” (we dispute some meanings of words in this thread), but that does not entitle either of us to accuse the other of not understanding English.

TheStatutoryApe said:
This is yet another problem with Searle's CR. It is not feasible to produce a computer that can be indistinguishable from a person who "understands" unless it really is capable of understanding.
And how would you know whether or not it is really capable of understanding, and not just (as you suggest) simulating understanding? How would you tell the difference?

TheStatutoryApe said:
It would not be able to hold a coherent and indistinguishable conversation otherwise.
The CR can hold a coherent and intelligent conversation. (not sure what you mean by “indistinguishable conversation”?). What do we conclude from this?
First off, I am not arguing my own position here; I am arguing that of Searle. His CR is built specifically to fail at understanding, hence it does not understand. While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possess understanding, I do not disagree with this. The only thing which I am discussing here is Searle's original unaltered Chinese Room.
Searle's original unaltered Chinese Room is built in such a way that it is solely reactionary and only spits out preformulated responses to predetermined questions. Searle creates a hypothetical manual which supposedly can contain enough preformulated responses and predetermined questions that the responses you get from it are indistinguishable from the responses you would get from a human.
The problem here is that this is patently impossible. The number of possible questions and answers is astronomical. The possible number would greatly exceed the possible number of positions on a chess board, and the mapping of a full chess game tree itself would take thousands of years. This is not even taking into account the time it would take to program which responses are the best responses to which questions, and how long it would take a computer to search such a database for each question and its proper response.
So in reality we would have to conclude that if a computer can speak indistinguishably from a human then it must be capable of "understanding", but this does not mean that Searle's CR actually possesses understanding. Searle's CR is simply a hypothetical model that could not actually be achieved in reality.
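The combinatorial point above can be made concrete with a rough back-of-envelope calculation. The alphabet size and question length below are illustrative assumptions, not figures from Searle or anyone in this thread; the chess numbers are order-of-magnitude estimates from Shannon's 1950 analysis:

```python
# Rough sketch of the combinatorial explosion behind a lookup-table
# Chinese Room. Alphabet size and question length are illustrative
# assumptions, not figures from Searle's paper.

ALPHABET = 3_000   # assumed: roughly the set of common Chinese characters
LENGTH = 20        # assumed: a short question of 20 characters

possible_questions = ALPHABET ** LENGTH   # 3000**20, about 3.5e69

# Order-of-magnitude chess estimates from Shannon's 1950 analysis:
chess_positions = 10 ** 43    # upper bound on distinct legal positions
chess_game_tree = 10 ** 120   # the "Shannon number" (game-tree size)

print(f"possible 20-char questions ~ 10^{len(str(possible_questions)) - 1}")
print(possible_questions > chess_positions)   # True
```

Even with these conservative assumptions the table of short questions alone dwarfs the number of chess positions, which is the sense in which a literal rulebook of preformulated responses is physically unrealisable.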
 
  • #91
moving finger said:
IF consciousness is necessary for understanding THEN it follows that an agent which does not possess consciousness also does not possess understanding.
I hope that everyone here agrees with this statement?
The question that remains to be answered is then : Is consciousness necessary for understanding?
Tisthammerw said:
It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.
And if one uses another definition of these terms then the answer could be no.
This (with respect) tells us nothing useful, except that the answer to the question depends on one’s definition of understanding. Period.
Tisthammerw said:
Whether or not understanding requires consciousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism.
What criticism is that? The point I am trying to make is that “the conclusion depends on the definition”. Tisthammerw can use his definition and conclude that understanding is impossible in a non-conscious agent, MF can use his definition and conclude understanding is possible in a non-conscious agent. Each conclusion is equally valid. This gets us nowhere.
Tisthammerw said:
After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.
First define your terms, then construct your argument. Then ask yourself whether or not it is a fallacious argument.
Tisthammerw said:
Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement, and analytic statements are not fallacious.
moving finger said:
With respect, I did not say the statement “understanding requires consciousness” is fallacious.
The statement “understanding requires consciousness” is also a premise in your argument.
I said the ARGUMENT is fallacious. Do you understand the difference between an argument and a statement and a premise?
Tisthammerw said:
Yes, but I also understand that you have phrased my analytic statement in the form of an argument. This can be done to justify the analytic statement.
I can make the statement “the moon is made of cheese”. Is that statement true or false? How would we know? The only way to show whether it is true or false is to construct an argument to show how I arrive at the statement “the moon is made of cheese”. If my argument is “the moon is made of cheese because I define cheese as the main ingredient of moons” then the argument is circular, and fallacious.
Can you construct a non-fallacious (ie non-circular) argument to show whether your statement “understanding requires consciousness” is true or false?
Tisthammerw said:
in the context of my analytic statement “understanding requires consciousness” here is the “argument” I am using:
The first premise is the definition of understanding I'll be using (in terms of a man understanding words):
* The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.
So in this definition, understanding is to be aware of the true meaning of what is communicated. For instance, a man understanding a Chinese word denotes that he is factually aware of what the word means.
The second premise is the definition of consciousness I’ll be using:
* Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
My conclusion: understanding requires consciousness.
The conclusion is contained in the premises, hence circular, hence the argument is fallacious.
Tisthammerw said:
Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms.
Tisthammerw “asserts” that the premises are true – MF disputes that the premises are true.
Regardless of whether the premises are true or not, the argument as it stands is still circular, hence still fallacious.
The argument “all bachelors are unmarried because a bachelor is defined as an unmarried man” does not necessarily contain false premises, but the argument is still circular, hence fallacious.
One CANNOT prove anything useful with a circular argument, because the conclusion is already contained in the premises. This is the whole reason why circular arguments are fallacious.
Tisthammerw said:
You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.
The argument is fallacious because it is circular, by definition.
The veracity of your premises is a matter of opinion. My opinion is different to yours.
moving finger said:
To construct an argument we first need to state our premises, then we draw inferences from those premises, then we make a conclusion from the inferences and premises.
Let's do this.
First one must define what one means by the terms “bachelor”, and “unmarried”. (you may object "this is obvious", but that is beside the point. Strictly all terms in an argument must be clearly defined and agreed).
These definitions then become part of the premises to the argument.
If the conclusion of the argument is already contained in the premises, then by definition the argument is fallacious, by “circulus in demonstrando”.
For example :
"we take as a premise that "bachelor" is defined as an "unmarried male", it follows that the statement "all bachelors are unmarried" is true"
The above argument is completely logical, but fallacious due to “circulus in demonstrando”
Tisthammerw said:
You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.”
To draw a conclusion from premises, one must make an argument. If you think it is “strange” to construct an argument in logic in order to draw conclusions, then I must ask where you learned your logic. How else would you draw a conclusion?
The statement “understanding requires consciousness” is just that – a statement. Tisthammerw asserts this statement is true. MF asserts that it is not necessarily true. How can we know who is right?
Tisthammerw said:
I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence of phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true
Where have I admitted that the stand-alone statement “understanding requires consciousness” is true?
Tisthammerw said:
(b) the argument is perfectly sound anyway
The argument is circular, hence by definition fallacious. Have you really studied circular arguments? They are generally accepted in logic as being fallacious. Perhaps you follow different rules of logic to the rest of us?
Tisthammerw said:
(c) this analytic statement is itself a premise to a larger and more relevant argument that you seem to be avoiding: the one regarding the Chinese room thought experiment.
I have lost count of the number of times that I have said “I disagree with your premise”. I am tired of repeating it.
Tisthammerw said:
Let’s take my example above regarding the “understanding requires consciousness” argument above. The conclusion logically follows from the premises, and the premises are true.
There you go again. What did I just say? What part of “I disagree with your premise” is unclear?
Tisthammerw said:
The argument is sound. Doesn’t it seem odd then to call the argument “fallacious”?
A circular argument is fallacious, by definition. You have agreed that your argument is circular.
moving finger said:
Do we all (MF, Tournesol and Tisthammerw) agree that the following statement is true?
“whether or not consciousness is necessary for understanding is a matter of definition”
True or false?
Tisthammerw said:
If I am understanding you correctly, then the answer is true: whether or not consciousness is necessary for understanding depends on how you define “consciousness” and “understanding.”
Thank you. We do agree on this. That is a step forward.
moving finger said:
here is my quick shot at defining the verb "To Understand" :
To Understand (definition)
To know (= to possess knowledge) and to comprehend the nature or meaning of something;
To perceive (an idea or situation) in terms of mental or informational representations/models;
To make sense of something (eg of a language);
To believe to be the case (as in "I understand it is getting late")
Tisthammerw said:
There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives, that entity possesses consciousness (using my definition of the word “consciousness”).
How about your definition of the word “perceive”.
BTW, I use “perceive” definitions 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?
Is that the online version of the dictionary you refer to? With respect, there are many more definitions of “to perceive” than are contained in this dictionary. If one consults much larger and more comprehensive dictionaries one will find a number of alternative definitions of the verb.

In the Webster dictionary to perceive is defined in a number of ways, one of them being :
To obtain knowledge of through the senses; to receive impressions from by means of the bodily organs; to take cognizance of the existence, character, or identity of, by means of the senses; to see, hear, or feel;

In another dictionary (The New Penguin English Dictionary 2000) I find the following :
To perceive : To become aware of something through the senses, esp to see or observe; to regard somebody or something as something specified (eg she is perceived as being intelligent).

In yet another (The Collins Dictionary) I find :
To perceive : To become aware of (something) through the senses, to recognise or observe
Perception : The process by which an organism detects and interprets information from the external world by means of sensing receptors

There are thus very clear and accepted meanings of “to perceive” and "perception" which do not imply conscious perception.

The word perceive actually derives from the Latin “percipere”, which means “to seize” or “to take”. Again, there is no requirement for consciousness contained in the roots of the word.

In psychology and the cognitive sciences, the word perception (= the act of perceiving) is defined as “the process of acquiring, interpreting, selecting, and organising (sensory) information”. It follows from this that “to perceive” is to acquire, interpret, select, and organise (sensory) information.

--------------------------------------------------------------

Actually, having cogitated on this issue for a little longer, I do not see that "perception" (ie the processing of data received from external sense-receptors) is a necessary part of understanding per se. I can imagine a completely self-contained agent which "understands Chinese", but has no sense-receptors at all - hence it could not "perceive", and yet could still claim to understand Chinese. Therefore on reflection I now delete the requirement "to perceive" from my list of "necessary items" for understanding. :smile:

(on the other hand, there are other possible meanings to "perceive", for example "to perceive the truth of something, such as a statement" - an agent which understands is able to "perceive the truth of" things, therefore it necessarily perceives in this sense of the word)

With respect, we can argue about this until the cows come home. In the end, Tournesol is right (I don’t find myself agreeing with him often, so this is wonderful) – there is no “right” or “wrong” definition of a word in language, there are only more or less accepted definitions. And there are perfectly acceptable definitions of “to perceive” which do not associate perception with consciousness.

with respect

MF
 
Last edited:
  • #92
dgoodpasture2005 said:
I don't mean to butt in.. but i am going to...(throws ass in) this is all turning into a clever game of "a play on words". I think you guys should try and work together a little bit better instead of always trying to prove one another wrong on seemingly aggressive stances. Might get somewhere. Just my 2 cents, take it or leave it.
dgoodpasture2005 is right.

What we have is the following :
Tisthammerw defines understanding such that consciousness is necessary for understanding
Quantumcarl defines understanding such that "being human" is necessary for understanding
MF defines understanding such that neither consciousness nor "being human" is necessary for understanding
We could call these TH-understanding, QC-understanding and MF-understanding respectively.
Only conscious agents can have TH-understanding.
Only human agents can have QC-understanding.
and the China Room may have MF-understanding.

May your God go with you

MF
 
  • #93
TheStatutoryApe said:
Hopefully I can clarify a few things here...
OK, let’s try to understand each other.
TheStatutoryApe said:
Since we are talking about Searle's CR I would suggest using his definitions.
According to the CR argument symbol manipulation is a purely "syntactic" process (regarding only patterns of information) and that this can not yield a "semantic understanding" (semantic: regarding the meaning of the symbols which is not emergent from the "syntax"[pattern] of the symbols).
moving finger said:
With respect, the above is not a “definition”, this is a conclusion (that symbol manipulation cannot give rise to semantic understanding).
If you are saying that the CR cannot have semantic understanding “by definition” then the entire CR argument becomes fallacious (circulus in demonstrando).
TheStatutoryApe said:
I realize that I was referring to Searle's conclusions but I was referring to them along with his definitions(which I have underlined in the above quote this time around). Also I do not fully agree with his definitions and conclusions as I pointed out here...
OK. Unfortunately I do not agree that the process of symbol manipulation (associated with the required information and knowledge) is necessarily a purely syntactic process, hence I disagree with his definition. Semantics is also all about symbol manipulation (just a different level or order of symbol manipulation to the syntactic level).
TheStatutoryApe said:
The problem that I see with his reasoning as I've stated on the other two threads regarding the CR is that Searle never really defines this "semantic" property. I think you would likely agree with me that this "semantic" understanding arises from complex orders of "syntactic" information (at least in humans if nothing else). I'd have to say that including this addendum I agree with his definitions though obviously not his conclusions (that syntactic information can not yield semantic understanding).
OK, we seem to agree on the conclusion, but possibly for slightly different reasons.
The problem I have is understanding the following : If you agree with his definition “symbol manipulation is PURELY syntactic”, how can you then conclude that symbol manipulation gives rise to semantic understanding?
TheStatutoryApe said:
I was saying that the eyes are only necessary for receiving the information and not for understanding, and that once the information has been received the eyes are no longer necessary. Or more exactly, that the acquisition of information is necessary to understanding
We seem to agree on most of this. However I would not say that the “acquisition of” information is necessary to understanding, rather I would say that the “possession of” information is necessary to understanding.
There are still areas where we seem to misunderstand each other, for example :
TheStatutoryApe said:
The purpose of the CR is not to "understand chinese" it's to mimic the understanding of chinese.
moving finger said:
Understanding is a process. In terms of what the process achieves, there is no difference between “a process” and “a perfect simulation of that process”. If you think there is, please explain why a perfect simulation of a process necessarily differs in any way from the original process?
May I ask some questions, to improve our understanding?
Do you believe the CR understands the meanings of the words it is using?
Do you believe that the words mean to the CR what they mean to people who speak/read Chinese?
Do you believe the CR can hold a coherent and intelligent conversation in Chinese?
TheStatutoryApe said:
First off I am not arguing my own position here I am arguing that of Searle.
Whichever position you argue, your argument must be consistent and rigorous. Perhaps (with respect) some of the confusion between us arises because you on the one hand “argue the position of Searle”, but at the same time when I disagree with this argument your response is often to refer me to “your own definitions” of terms rather than Searle’s? Am I debating with TheStatutoryApe here, or with Searle’s stand-in? :smile:
TheStatutoryApe said:
His CR is built specifically to fail at understanding hence it does not understand.
I agree – but would phrase it slightly differently. He chooses his definitions associated with understanding such that understanding requires conscious awareness – which ensures that any agent that is not consciously aware fails to understand. By definition.
TheStatutoryApe said:
While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possess understanding, I do not disagree with this.
Altered in what way?
TheStatutoryApe said:
The only thing which I am discussing here is Searle's original unaltered Chinese Room.
Searle's original unaltered Chinese Room is built in such a way that it is solely reactionary and only spits out preformulated responses to predetermined questions. Searle creates a hypothetical manual which supposedly can contain enough preformulated responses and predetermined questions that the responses you get from it are indistinguishable from the responses you would get from a human.
The problem here is that this is patently impossible. The number of possible questions and answers is astronomical. The possible number would greatly exceed the possible number of positions on a chess board, and the mapping of a full chess game tree itself would take thousands of years. This is not even taking into account the time it would take to program which responses are the best responses to which questions, and how long it would take a computer to search such a database for each question and its proper response.
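As an aside (my own illustration, not part of the original posts): the combinatorial claim above can be made concrete with a back-of-the-envelope calculation. The vocabulary size and maximum question length below are assumed toy numbers, chosen only to show the scale; the chess figure is Shannon's classic estimate of roughly 10^43 legal positions.

```python
# A crude sketch (assumed toy numbers) of why a lookup-table Chinese Room
# is impractical: even a tiny toy language dwarfs the chess state space.

vocabulary = 1_000          # assumed: distinct words the Room can accept
max_question_length = 20    # assumed: questions of up to 20 words

# Upper bound on distinct word sequences of length 1..20.
possible_questions = sum(vocabulary ** n for n in range(1, max_question_length + 1))

# Shannon's estimate of the number of legal chess positions (~10^43).
chess_positions = 10 ** 43

print(f"possible questions ~ 10^{len(str(possible_questions)) - 1}")
print(possible_questions > chess_positions)  # True: the table already exceeds chess
```

Even with this tiny vocabulary the table has on the order of 10^60 entries, many orders of magnitude beyond the chess comparison made above.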
The CR is supposed to be a thought-experiment, to argue matters of principle. Whether it is practical or even possible to build such a room “in reality” or not is beside the point.
TheStatutoryApe said:
So in reality we would have to conclude that if a computer can speak indistinguishably from a human then it must be capable of "understanding", but this does not mean that Searle's CR actually possesses understanding. Searle's CR is simply a hypothetical model that could not actually be achieved in reality.
My above remark again applies here.
With respect
MF
 
  • #94
moving finger said:
Statement : "It is the case that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey" whereby a human being might have a partial understanding of the subject X."

(subject X could be the French language, for example)

Would quantumcarl agree that the above statement (according to quantumcarl's definition of understanding) is true, or false?

quantumcarl said:
That's how I see it... as in... true.

OK.

Now let us try a thought-experiment, which leads to another question.

Let us accept QC’s definition of understanding, ie that a human being EITHER has complete understanding of the subject X, OR has no understanding of the subject X - there are NO "shades of grey".

Suppose Mary comes along and claims that she understands the French language. (Mary is human by the way). Would you agree that it is possible that the statement “Mary understands the French language” could be a true statement?

(I assume you will answer “yes” to the above).

Thus, from QC’s definition of understanding, we have :

Mary EITHER has understanding of the French language, OR has no understanding of the French language - there are NO "shades of grey"

In other words, the statement “Mary has understanding of the French language” is either true or false.

Now my question – how would QC propose to test whether the statement “Mary has understanding of the French language” is true or false?

With respect

MF
 
  • #95
Tentative Conclusion

Here is my tentative conclusion to our review of Searle's Chinese Room Thought experiment.

Searle has used incorrect terminology to describe the central function of the Chinese Room.

Therefore, Searle's thought experiment is invalid.

I will concede, however, that the Chinese Room is capable of translation. In fact, I believe that is the word they're looking for when referring to "understanding" Chinese... in the case of Searle's Chinese Room.

In order to translate a language or art form or calligraphy... etc... one does not, necessarily, have to understand what one is translating.

Take, for instance, the translation of code during wartime. A translator will translate the code into another code which is passed on to a higher authority who has an understanding of the translated but secondarily encoded code.



The word "understand" has been bastardized in recent times and does not belong, nor is it necessary to be used, as a description of comprehending a language or math or logistics etc...

--------------------------------------------------------------

To understand means to literally stand under a topic... as in to stand in its shoes (an empathic metaphor) and become a part of its origins, functions, malfunctions and so on. One uses consciousness, empathy, previous and present experiences and knowledge of others' experiences to attain understanding. Until a person has discovered the whole story and all those stories that have made the story whole, they will not understand the story. And, even then, they will only understand the story according to their relative point of view.

Once more, my initial attempt at a conclusion for this thread...

Searle's CR thought experiment - its conclusion and its critics - erred when they utilized a term ("understanding") that, contextually, did not belong as part of the experiment. The term "translation" or "translating" would have been sufficient.

Do you think the CR can translate Chinese?

(Try "Babel Fish" at "AltaVista" for further research.)
 
  • #96
quantumcarl said:
In order to translate a language or art form or calligraphy... etc... one does not, necessarily, have to understand what one is translating.
This is an interesting statement from you, QC.
I agree that understanding is not a prerequisite for translation, but I would suggest that to perform an accurate translation of one complex language into another it helps to understand both languages.
In the same way, I argue that empathy is not a prerequisite for understanding, but in order to properly and fully understand a language it helps to have empathy with the people using that language.
quantumcarl said:
Take, for instance, the translation of code during wartime. A translator will translate the code into another code which is passed on to a higher authority who has an understanding of the translated but secondarily encoded code.
imho this is exactly the purpose of each neuron or group of neurons in the brain - to accept incoming information (which it does not understand), to process it, then to pass information on to other neurons and groups of neurons. There is no microscopic part of the brain which "understands" what it is doing, just as none of the individual translators in your example "understands" what it is doing. Each component is simply following deterministic rules to generate output from input.
quantumcarl said:
One uses consciousness, empathy, previous and present experiences and knowledge of other's experiences to attain understanding. Until a person has discovered the whole story and all those stories that have made the story whole, they will not understand the story. And, even then, they will only understand the story according to their relative point of view.
imho consciousness and empathy are associated with understanding in humans, but it does not follow from this association that either consciousness or empathy are necessary to achieve understanding in all agents.
quantumcarl said:
Once more, my initial attempt at a conclusion for this thread...
Searle's CR thought experiment - its conclusion and its critics - erred when they utilized a term ("understanding") that, contextually, did not belong as part of the experiment. The term "translation" or "translating" would have been sufficient.
imho I disagree. When QC asks MF a question, and MF replies, one could argue that “all that happens is a process of translation – in the agent MF - from the question to the answer”, but though strictly correct, this would be too simplistic.
If one wishes to say that the CR is an example of no more than a translating machine, then I would assert that humans are also no more than a translating machine.
quantumcarl said:
Do you think the CR can translate Chinese?
"translate Chinese" into what?
By definition the CR understands only Chinese. It does not understand any other language.
MF
 
  • #97
moving finger said:
This is an interesting statement from you, QC.
I agree that understanding is not a prerequisite for translation, but I would suggest that to perform an accurate translation of one complex language into another it helps to understand both languages.

A person who uses Mongolian as a language can translate French into English using translation texts without ever possessing an understanding of either French or English and the origins of those two languages.


moving finger said:
In the same way, I argue that empathy is not a prerequisite for understanding, but in order to properly and fully understand a language it helps to have empathy with the people using that language.

What is the basis of your "argument"?
moving finger said:
imho
this is exactly the purpose of each neuron or group of neurons in the brain - to accept incoming information (which it does not understand), to process it, then to pass information on to other neurons and groups of neurons. There is no microscopic part of the brain which "understands" what it is doing, just as none of the individual translators in your example "understands" what it is doing. Each component is simply following deterministic rules to generate output from input.

In my opinion you are not a neuroscientist and have little or no knowledge or experience to back up this statement. I could be wrong, but even neuroscientists are unsure of how or why neurons behave in certain ways and are able to perform certain functions in addition to performing functions that were previously performed by other, specified neurons.

moving finger said:
imho consciousness and empathy are associated with understanding in humans, but it does not follow from this association that either consciousness or empathy are necessary to achieve understanding in all agents.

If you are saying that understanding is not guilty by the association of relying on empathy and consciousness to be defined, in my opinion, you are wrong.


moving finger said:
If one wishes to say that the CR is an example of no more than a translating machine, then I would assert that humans are also no more than a translating machine.

And that would be your opinion and your way of dealing with my statement. What I would add to your statement is that humans are organic translators with empathy and consciousness. And this is what distinguishes our method of translation from a machine's. We call it "understanding".

moving finger said:
"translate Chinese" into what?
By definition the CR understands only Chinese. It does not understand any other language.
MF

Thanks for reminding me.

The experiment is in error because the CR has a human (aka: conscious being) in the room... and it is also for that reason that I declare the Chinese Room Thought Experiment in error and defunct.

If the whole purpose of the experiment is to determine the difference between how a machine processes information and how a human processes information... the experiment lacks a control by using a human in the CR.
 
  • #98
It is a test based on an explanation; I am saying we have to solve the hard problem first, before we can have a genuine test.
In other words, in the absence of an explanation, it makes no sense to test for consciousness? There is thus no logical basis for Searle’s conclusion that the CR does not possess consciousness, correct?
In the absence of an objective explanation there is no objective way of
testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.
If manipulating symbols is all there is to understanding, and if consciousness is part of understanding, then there should be a conscious awareness of Chinese in the room (or in Searle's head, in the internalised case).
That is a big “if”. For Searle’s objection to carry any weight, it first needs to be shown that consciousness is necessary for understanding. This has not been done (except by “circulus in demonstrando”, which results in a fallacious argument)
That consciousness is part of understanding is established by the definitions of the words and the way language is used. Using words correctly is not fallaciously circular argumentation: "if there is a bachelor in the room, there is a man in the room" is a perfectly valid argument. So is "if there is a unicorn in the room, there is a horned animal in the room". The conceptual, definitional, logical correctness of an argument (validity) is a separate issue to its factual, empirical correctness (soundness).
If the conclusion to an argument were not in some way contained in its premisses, there would be no such things as logical arguments in the first place.
The problem with circular arguments is not that the premiss contains the conclusion; the problem is that it does so without being either analytically true (e.g. by definition) or synthetically true (factually, empirically).
You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding... unless you claim that human understanding has nothing to do with consciousness either.
I am claiming that consciousness is not necessary for understanding in all possible agents. Consciousness may be necessary for understanding in humans, but it does not follow from this that this is the case in all possible agents.
To conclude from this that “understanding without consciousness is half baked” is an unsubstantiated anthropocentric (one might even say prejudiced?) opinion.
As I have stated several times, the intelligence of an artificial intelligence needs to be pinned to human intelligence (albeit not in a way that makes it trivially impossible) in order to make the claim of "artificiality" intelligible. Otherwise, the computer is just doing something -- something that might as well be called information-processing, or symbol manipulation. No-one can doubt that computers can do those things, and Searle doesn't either. Detaching the intelligence of the CR from human intelligence does nothing to counteract the argument of the CR; in fact it is suicidal to the strong AI case.
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.
Is "bachelors are unmarried because bachelors are unmarried" viciously
circular too ? Or is it -- as every logician everywhere maintains -- a
necessary, analytical truth ?
Quote:
Originally Posted by Tournesol
If you understand something , you can report that you know it, explain how you know it. etc. That higher-level knowing-how-you-know is consciousness by definition.
I dispute that an agent needs to in detail “know how it knows” in order for it to possess an “understanding of subject X”.
“To know” is “to possess knowledge”. A computer can report that it “knows” X (in the sense that the knowledge X is contained in its memory and processes); it might (if it is sufficiently complex) also be able to explain how it came about that it possesses that knowledge. By your definition such a computer would then be conscious?
Maybe. The question is whether syntax is sufficient for semantics.
I think not. imho what you suggest may be necessary, but is not sufficient, for consciousness.
Allow me to speculate.
Consciousness also requires a certain level of internalised self-representation, such that the conscious entity internally manipulates (processes) symbols for “itself” which it can relate to other symbols for objects and processes in the “perceived outside world”; in doing this it creates an internalised representation of itself in juxtaposition to the perceived outside world, resulting in a self-sustaining internal model. This model can have an unlimited number of possible levels of self-reference, such that it is possible that “it knows that it knows”, “it knows that it knows that it knows” etc.
Not very relevant.
If it is a necessary but insufficient criterion for consciousness, and the CR doesn't have it, the CR doesn't have consciousness.
I see. We first define understanding such that consciousness is necessary to understanding. And from our definition of understanding, we then conclude that understanding requires consciousness. Is that how its done?
How else would you do it? Test for understanding without knowing what "understanding" means? Beg the question in the other direction by re-defining "understanding" to not require consciousness?
Write down a definition of "red" that a blind person would understand.
Are you suggesting that a blind person would not be able to understand a definition of “red”?
No, I am suggesting that no-one can write a definition that conveys the sensory, experiential quality. (Inasmuch as you can write down a theoretical, non-experiential definition, a blind person would be able to understand it.)
Thus the argument that all words can be defined in entirely symbolic terms fails, and thus the assumption that symbol-manipulation is sufficient for semantics fails.
Sense-experience (the ability to experience the sensation of red) is a particular kind of knowledge, and is not synonymous with “understanding the concept of red”. Compare with the infamous “What Mary Didn’t Know” thought experiment.
Well, quite. It is a particular kind of knowledge, and someone who lacks that particular kind of knowledge lacks full semantics. You seem to be saying that non-experiential knowledge ("red light has a wavelength of around 700nm") *is* understanding, and all there is to understanding, and that experience is something extraneous that does not belong to understanding at all (in contradiction to the conclusion of "What Mary Didn't Know").
Of course, that would be circular and question-begging.
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
I disagree. I do not need to have the power of flight to understand aerodynamics.
To theoretically understand it.
Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”).
How remarkably convenient. Tell me, is that true analytically, by definition, or is it an observed fact?
Experiential information may be used as an aid to understanding in some agents, but I dispute that experiential information is necessary for understanding in all agents.
How can it fail to be necessary for a semantic understanding of words that refer specifically to experiences?
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
They don't fully lack it, they don't fully have it. But remember that a computer is much more restricted.
More restricted in what sense?
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
The latter is critical to the ordinary, linguistic understanding of "red".
I dispute that an agent which simply “experiences the sight of red” necessarily understands anything about the colour red.
Well, that is just wrong; they understand just what Mary doesn't: what it
looks like. It may well be the case that they don't know any of the stuff that
Mary does know. However, I do not need to argue that non-experiential
knowledge is not knowledge.
I also dispute that “experiencing the sight of red” is necessary to achieve an understanding of red (just as I do not need to be able to fly in order to understand aerodynamics).
You don't need to fly in order to understand aerodynamics *theoretically*.
However, if you can do both you clearly have more understanding than someone
who can only do one or the other or neither.
(Would you want to fly in a plane piloted by someone who had never been in the
air before ?)
If the "information processing" sense falls short of full human understanding, and I maintain it does, the arguemnt for strong AI founders and Searle makes his case.
And I maintain it does not. I can converse intelligently with a blind person about the colour “red”, and that person can understand everything there is to know about red, without ever “experiencing the sight of red”.
No they can't. They don't know what Mary doesn't know.
If what you say is true, it would be impossible for anyone to ever learn from, or be surprised by, an experience. Having been informed that caviare is sturgeon eggs, they would not be surprised by the taste of caviare.
But "Sturgeon eggs" conveys almost nothing about the taste of caviare.
Your argument seems to be that “being able to see red” is necessary for an understanding of red, which is like saying “being able to fly” is necessary for an understanding of flight.
It is necessary for full understanding.
If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
They are necessary to learn the meaning of sensory language in the first place.
They are aids to understanding in the context of some agents (eg human beings), because that is exactly how human beings acquire some of their information. It is not obvious to me that “the only possible way that any agent can learn is via sense-experience”, is it to you?
I didn't claim sensory experience was the only way to learn, simpliciter.
I claim that experience is necessary for a *full* understanding of *sensory*
language, and that an entity without sensory exprience therefore lacks full
semantics.
If you are going to counter this claim as stated, you need to rise to the
challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red". (ie show that if Mary had access to the right books -- the
ones containing this magic definition -- she would have had nothing left to learn).
 
  • #99
moving finger said:
Tisthammerw said:
It all depends on how you define “understanding” and “consciousness.” If we use my definitions of those terms, then the answer is yes.

And if one uses another definition of these terms then the answer could be no.

Well, yes. I have said many times that the answer to the question depends on the definition of “understanding” and “consciousness” used.


Tisthammerw said:
Whether or not understanding requires consiousness is going to depend on how we define the terms anyway, so I don’t think this is a valid criticism.

What criticism is that?

Criticisms like “circulus in demonstrando” and that the argument is “fallacious.”

Tisthammerw said:
After all, if we use your logic here, we have not shown that all bachelors are unmarried even though that is an analytic statement.
First define your terms, then construct your argument. Then ask yourself whether or not it is a fallacious argument.

Very well. I define a bachelor to be “a male who is not married.” I define unmarried to be “not married.” Therefore, all bachelors are unmarried (this is an analytic statement). This argument is sound, so I don’t think it’s appropriate to call it fallacious.


I can make the statement “the moon is made of cheese”. Is that statement true or false? How would we know?

I’m not sure what relevance this has, but I would say the statement is false. We can “know” it is false by sending astronauts up there.

The only way to show whether it is true or false is to construct an argument to show how I arrive at the statement “the moon is made of cheese”. If my argument is “the moon is made of cheese because I define cheese as the main ingredient of moons” then the argument is circular, and fallacious.

I’m not really sure you can call it fallacious, because in this case “the moon is made of cheese” is an analytic statement due to the rather bizarre definition of “cheese” in this case. By your logic any justification for the analytic statement “all bachelors are unmarried” is fallacious.


Can you construct a non-fallacious (ie non-circular) argument to show whether your statement “understanding requires consciousness” is true or false?

Again, I don’t know of any other way to justify that the phrase “understanding requires consciousness” is an analytic statement in such a way that you wouldn’t consider the justification “fallacious.”


Tisthammerw said:
Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms.

Tisthammerw “asserts” that the premises are true – MF disputes that the premises are true.

Let me try this again. “This is what I mean by ‘understanding’…” is this premise true or false? Obviously it is true, because that is what I mean when I use the term. You yourself may use the word “understanding” in a different sense, but that has no bearing on the veracity of the premise, because what I mean when I use the term hasn’t changed.


Tisthammerw said:
You have a rather strange and confusing way of looking at analytic statements by phrasing them in the form of an argument and calling them “fallacious.”

To draw a conclusion from premises, one must make an argument. If you think it is “strange” to construct an argument in logic in order to draw conclusions then I must ask where did you learn your logic?

I’m not saying that drawing a conclusion from premises is strange, I’m saying it is strange to call logically sound arguments that demonstrate a statement to be analytic fallacious.


Tisthammerw said:
I am familiar with circular reasoning, but this objection doesn’t quite apply to analytic statements, and I don’t understand your insistence of phrasing my analytic statement “understanding requires consciousness” in the form of an argument when (a) you yourself admit that the analytic statement is true

Where have I admitted that the stand-alone statement “understanding requires consciousness” is true?

Remember, I was talking about the analytic statement using the terms as I have defined them. Note for instance in post #221 where you said

moving finger said:
Given your definition of understanding, it logically follows that a non-conscious agent is unable to understand.


Tisthammerw said:
(b) the argument is perfectly sound anyway
The argument is circular, hence by definition fallacious.

Again, I find it very strange that you call a logically sound argument fallacious.


Have you really studied circular arguments?

Yes.


Perhaps you follow different rules of logic to the rest of us?

Different from you perhaps.

But let’s trim the fat here regarding this particular sort of circularity claim. In terms of justifying that a statement is analytic (by showing that the statement necessarily follows from the definitions of the terms), I deny that it is fallacious. If it were, all justifications for analytical statements would fail (as would most of mathematics). And in any case this is beside the point, since we already agree that the statement “understanding requires consciousness” is analytic.

To reiterate, my “argument” when it comes to “understanding requires consciousness” is merely to show that the statement is analytical (using the terms as I mean them). You can call it “fallacious” if you want to, but the fact remains that it is perfectly sound. And since we already agree that “understanding requires consciousness” is analytical, I suggest we simply move on.

Usually, circular arguments are fallacious and I recognize that. So you don’t need to preach to the choir regarding that point.


Tisthammerw said:
Let’s take my example above regarding the “understanding requires consciousness” argument above. The conclusion logically follows from the premises, and the premises are true.
There you go again. What did I just say? What part of “I disagree with your premise” is unclear?

It’s unclear how that has any bearing to the matter at hand (which I have pointed out many times). I understand that you don’t agree with my definition of “understanding” in that you mean something different when you use the term. But that is irrelevant to the matter at hand. I’m not saying computers can’t understand in your definition of the term, I’m talking about mine. Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.

You misquoted me slightly, so I’ve replaced the quote with what the post actually contains:


Tisthammerw said:
There’s a problem here. If your definition of understanding does not require consciousness, it seems we are both using the word “perceive” quite differently, since if an entity perceives the entity possesses consciousness (using my definition of the word “consciousness”). BTW, I use “perceive” definition 1a and 2 in Merriam-Webster’s dictionary. It seems you are not using the conventional definition of the word “perceive” if you are trying to define understanding in such a way that it does not require consciousness. So what do you mean when you use the term “perceive”?


Is that the online version of the dictionary you refer to? With respect, there are many more definitions of “to perceive” than are contained in this dictionary.

[lists a number of examples]

I never claimed otherwise, but that still doesn’t answer my question. What do you mean when you use the term “perceive”?


And there are perfectly acceptable definitions of “to perceive” which do not associate perception with consciousness.

Perhaps, but what’s your definition?

But getting to the more relevant point at hand, let’s revisit my question:

Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.
 
  • #100
MF said:
OK, we seem to agree on the conclusion, but possibly for slightly different reasons.
The problem I have is understanding the following : If you agree with his definition “symbol manipulation is PURELY syntactic”, how can you then conclude that symbol manipulation gives rise to semantic understanding?
I believe that all information, at its root, is syntactic. To believe otherwise would require believing that information has some sort of platonic essence which endows it with meaning. I believe that semantic meaning emerges from the aggregate syntactic context of information. The smallest bit of information has no meaning in and of itself except in relation to other bits of information. That relationship, lacking semantic meaning on either side, is necessarily a syntactic pattern. Once an entity has acquired a large enough amount of syntactic information and stored it in context, it has developed experience, or a knowledge base. When the entity can abstract or learn from these patterns of information, it will find significance in them. That significance, as far as I can tell, equates to the "semantic meaning" of the information.
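The purely syntactic manipulation at issue in the Chinese Room can be sketched in a few lines. Everything below (the `RULE_BOOK` table, the `chinese_room` function, and the canned replies) is a hypothetical illustration, not any real system: the program pairs input symbols with output symbols by shape alone, and no step in it consults what any symbol means.

```python
# A minimal sketch of "purely syntactic" symbol manipulation, in the
# spirit of Searle's Chinese Room. The rule book maps input strings to
# output strings by pattern matching only; nothing in the program
# represents the meaning of the symbols it shuffles.

RULE_BOOK = {
    "你好吗": "我很好，谢谢",        # hypothetical rule: greeting -> canned reply
    "你叫什么名字": "我没有名字",    # hypothetical rule: name question -> canned reply
}

def chinese_room(symbols: str) -> str:
    """Return whatever output the rule book pairs with the input symbols.

    The lookup is defined entirely over symbol shapes (dictionary keys);
    the default reply is used for any shape the book does not list.
    """
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: "please say that again"

print(chinese_room("你好吗"))  # prints the paired reply, chosen by shape alone
```

The point of the sketch is that the conversation it produces, however fluent it looked with a large enough rule book, would be generated without any of the semantic grounding discussed above.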

MF said:
We seem to agree on most of this. However I would not say that the “acquisition of” information is necessary to understanding, rather I would say that the “possession of” information is necessary to understanding.
It seems that I just prefer more "active" words. :smile:
To me they just seem to fit better.

MF said:
May I ask some questions, to improve our understanding?
Do you believe the CR understands the meanings of the words it is using?
Do you believe that the words mean to the CR what they mean to people who speak/read chinese?
Do you believe the CR can hold a coherent and intelligent conversation in Chinese?
No.
No.
Yes, but only given the leeway of it being a hypothetical situation.
I'll come back to this as soon as I am done with the rest of your post.

MF said:
Whichever position you argue, your argument must be consistent and rigorous. Perhaps (with respect) some of the confusion between us arises because you on the one hand “argue the position of Searle”, but at the same time, when I disagree with this argument, your response is often to refer me to “your own definitions” of terms rather than Searle’s? Am I debating with TheStatutoryApe here, or with Searle’s stand-in?
I've been trying to cite Searle's argument and make my own argument at the same time. Perhaps I haven't maintained a proper division between what is his argument and what is mine, but it seems you often think I agree with Searle when I do not. I was under the impression that I had pointed out my disagreements with his arguments, but apparently I haven't done a good enough job of that.

MF said:
I agree – but would phrase it slightly differently. He chooses his definitions associated with understanding such that understanding requires conscious awareness – which ensures that any agent that is not consciously aware fails to understand. By definition.
Personally I don't think it is a matter of definitions. The only one of his definitions that is problematic, in my opinion, is his insistence that "syntax" cannot yield "semantics". This, though, follows from the conclusion of his argument. His definitions are coloured by the conclusions of his argument, but his argument isn't one of definitions. His argument is a logical hypothetical construct. The fact that his construct does not parallel reality as it claims is the real problem.

MF said:
TheStatutoryApe said:
While Searle and Tisthammerw here would disagree that it can be altered in such a way as to possess understanding, I do not disagree with this.
Altered in what way?
So far there have been several ideas, such as giving the man in the room access to sensory information via a camera, or allowing the man to be the entire system rather than just the processing chip. Every idea for altering the room, though, is constructed by Searle, or Tisthammerw, in a way that does not reflect reality properly and sets the man in the room up to fail. I know that you believe the CR in some sense possesses understanding, which I do not agree with, but I will have to come back to discuss that later.
 