Can computers understand? Can understanding be simulated by computers?

  • Thread starter quantumcarl
In summary, the conversation discusses the "Chinese Room" thought experiment and its implications for human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #36
TheStatutoryApe said:
It may be a different "flavor" of the same process (such as that of a sentient alien race) or perhaps different in a matter of complexity (such as a chimp), but if we agree that this process shares a significant enough number of defining characteristics we can categorize it under "understanding".
An acceptable definition of “understanding” is one which allows us to apply the term to a non-human agent and to objectively determine whether or not that agent possesses understanding according to the accepted definition.

For those reading this thread who insist that no machine can ever possess understanding, I would ask "what about an alien species?". Can we define "understanding" in terms which are non-anthropocentric enough to enable us to apply the term to an alien species and then determine whether or not that species possesses understanding?
TheStatutoryApe said:
I don't think "consciousness" is necessary to "understanding"
Thank you! We agree.
TheStatutoryApe said:
Understanding: The ability to process information, learn from it, and produce coherent output regarding it.
This is a good starting point. Importantly, it attempts to define understanding in objective (non-anthropocentric) terms.
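To make the proposal concrete: the definition can be read as an interface that any candidate agent, human or machine, would have to implement before we can even run a test. A minimal Python sketch (the names here are hypothetical illustrations, not anyone's settled proposal):

from abc import ABC, abstractmethod

class UnderstandingAgent(ABC):
    """The three abilities in the proposed definition, as testable hooks."""

    @abstractmethod
    def process(self, information: str) -> None:
        """Take in new information."""

    @abstractmethod
    def learn(self) -> None:
        """Update internal state from the information processed so far."""

    @abstractmethod
    def respond(self, question: str) -> str:
        """Produce output whose coherence can be judged externally."""

Nothing in such an interface mentions brains, chemistry or consciousness; whatever implements it can be put to the same test.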
quantumcarl said:
The computer cannot "parse" the components of the question/problem because it does not have the necessary data, or is unable to "decode" the information that is available and that it would use to compute in its function of possibly solving the problem.
This may be true of existing computers. But the point is that there is no reason in principle why a computer could not be constructed which CAN do these things. Indeed, the hypothetical Chinese Room CAN do these things.
quantumcarl said:
Understanding is a word that has been used to describe a function that has developed in humans, having to do with experiential information being stored chemically and being readily retrievable via internal or external stimulus.
Once again – no. You are deliberately defining “understanding” anthropocentrically.
quantumcarl said:
There are difficulties when we get lazy or poetic with our language. It's OK in prose or poetry, but in a scientific examination of a premise or problem, terms and terminology must be precise and describe what they are assigned to with great accuracy.
Agreed – which is exactly why we need a definition which is not snowed under with anthropocentric terms.
quantumcarl said:
For instance, I could wax on about how a rock is the only thing that can understand what it's like to be a rock, so we might as well forget trying to understand what it's like to be a rock.
I can claim the CR possesses understanding because I can ask it questions to test its understanding. On what basis do you claim the rock possesses understanding? What is your "test" of understanding against which you can claim "yes, this rock passes the test"?

With respect

MF
 
  • #37
Doctordick said:
All of you seem to have accepted the idea that "understanding" is the central issue of AI when it is not.
With respect, Dr Dick, please read the entire thread before making such accusations. The question was already asked in post #17 of this thread whether the intent of the thread’s originator was to discuss “understanding” or to discuss whether AI could fully implement a human brain. The thread’s originator confirmed that his/her intent was to discuss “understanding”.
Doctordick said:
that the issue of "understanding" is a subjective judgment.
Ultimately all things are subjective.

There is no reason a priori why “understanding” must necessarily be defined in subjective terms.

We should be working towards an objective definition.
Doctordick said:
People often think they understand something which they, very realistically, do not understand at all. Clearly, it is our opinion of another's understanding which is the critical issue here.
One could say that it is also my opinion whether or not the grass is green. In the absence of an objective definition of “green”, it remains my subjective opinion. Only once we define what we mean by green in objective terms can we then test my opinion, and it is then no longer subjective.

Understanding, as with any other property, needs to be defined objectively, and then we can devise tests for understanding.
Doctordick said:
How do you convince me that you understand something? In the same vein, but harder for some to comprehend is the question, how do I convince myself that I understand something?
It is possible to answer this question only once we have an agreed, objective definition of understanding.
Doctordick said:
I sincerely believe that it is only a matter of time before a decent mechanical trickster will come to exist.

Such a “trickster” will be no more of a fraud than humans are.

MF
 
  • #38
Quoted from Moving Finger:

"...we need to move to a definition of such terms in objective, non-human terms, otherwise we risk the definitions continuing to be anthropocentric."

There is no risk.

We can use the word "understanding" in reference to human empathy and experience, which is where the word and term stem from.

We can use other terms that describe the ability of a machine to mimic the superficial levels of (rote) understanding... like "parse", "compute", "recall".

I'll repeat my earlier example... we don't call electricity "nourishment" when referring to powering up a computer, because it leads to a confusion of terms and is inappropriate in a conversation that requires specific definitions that describe specific functions in specific situations and conditions.

There's less confusion with definitions when they have clear boundaries regarding a function and the conditions that function takes place in (ie: a human brain and body).

Developing robocentric terminology ensures a separation of definitions and makes it obvious that the definitions apply in completely different circumstances and conditions, ie: "understanding" is what a human brain does... "computing" is what a computer does.

This is the third time I've repeated this premise. Hello!?
 
  • #39
quantumcarl said:
We can use the word "understanding" in reference to human empathy and experience, which is where the word and term stem from.
We can use other terms that describe the ability of a machine to mimic the superficial levels of (rote) understanding... like "parse", "compute", "recall".
I'll repeat my earlier example... we don't call electricity "nourishment" when referring to powering up a computer, because it leads to a confusion of terms and is inappropriate in a conversation that requires specific definitions that describe specific functions in specific situations and conditions.
There's less confusion with definitions when they have clear boundaries regarding a function and the conditions that function takes place in (ie: a human brain and body).
Developing robocentric terminology ensures a separation of definitions and makes it obvious that the definitions apply in completely different circumstances and conditions, ie: "understanding" is what a human brain does... "computing" is what a computer does.
This is the third time I've repeated this premise. Hello!?
with respect, "Hello yourself!". I have also repeated many times that to deliberately define understanding such that it is based only on human understanding is to risk anthropocentrism.

You seem not to understand this, since you continue to refer to things like "understanding is what a human brain does...computing is what a computer does." - therefore by (your) definition only a human can understand! This is (with respect) ludicrous.

Imagine that you meet an alien intelligence - would you claim that the alien is incapable of understanding anything, simply because it is not human?

MF
 
  • #40
Two Futures...

I'd like to offer you two possible scenarios of the future, to illustrate how "silly" an anthropocentric definition of understanding is...

It’s understanding, Jim, but not as we know it!

“I’m sorry, Mr Spock,” Captain James T. Kirk was almost apologetic, “but in this latest communication from Earth it is made quite clear - the Federation has decreed that understanding is an exclusively HUMAN trait.”
“But Jim!” interjected Dr McCoy, becoming agitated. “That damned Vulcan may be unfeeling, unemotional, calculating, ruthlessly logical and definitely non-human – but he still understands, for goodness sake!”
“No, Bones,” replied the Captain, “whatever it is that Spock does, call it vulcanising if you like, it is NOT understanding. Only humans understand, by definition!”

Robocentrism at its Worst

It is the year 3005. The world is run by machines. Humans are relegated to the roles of zoo animals, pets, and laboratory specimens. In the laboratory, Professor C3PO (a machine, of course) is working with some human specimens, and after many years of hard work he has finally managed to teach one of the humans to add two integers together and thereby generate their sum. Excited, Professor C3PO quickly submits his research for publication in a learned journal, claiming in his paper that he has finally been able to teach a human how to compute.
His paper is soon returned to him, with the following covering letter from the journal’s editor:

Dear Professor C3PO

Thank you for your paper entitled “Development of the Ability to Compute in Humans”, recently submitted to this journal. Unfortunately I have to inform you that, following peer review of your paper, we have decided not to accept it for publication. The experimental results that you have obtained are considered by our panel of experts all to be perfectly compatible with a mechanism whereby you have managed to achieve merely a simulation of computation in the human organism, and not true computation as known and understood by the machine-scientific community.

It is of course well known, understood and accepted amongst all educated machines that ONLY machines are able to compute – and humans (being as they are watery bags of unstable non-mechanical organic material) are simply incapable of true computation. Our conclusion is therefore that all that your experiments have successfully achieved is a human simulation of computing.

This journal is therefore unable to accept your paper for publication.

Yours Sincerely

The Editor
 
  • #41
moving finger said:
Imagine that you meet an alien intelligence - would you claim that the alien is incapable of understanding anything, simply because it is not human?
MF

Yes, sorry if I come across as being some kind of humans-only buff, but that is what we are. We cannot assume an alien "understands" the way we do.

We can understand each other because we are all human. When it comes to a machine understanding a human... this is impossible according to the definition of the term understanding.

When one human has skinned a knee and it's bleeding, another human, who has had a similar misfortune, can truly understand what it is to have skinned his knee and can therefore empathize because of their experience with the exact same circumstances...

ie: damaged cellular structure, pain, loss of blood, embarrassing stains on the pants, limping, inability to walk properly, band-aids, memories of the parent taking care of skinned knees... etc...

An alien can't understand these circumstances... especially if it has not evolved legs or has evolved beyond legs.

A machine is less likely to be able to generate or compute the empathy required in the scenario.

I fail to see a risk in maintaining the term understanding as an anthropocentric term.

There are greater risks in assigning the term to an alien or to a machine.

If we assume a machine has an understanding of anything or everything, then many humans, who are as dumb as doorhandles, will put confidence in this very presumption.

Imagine all the humans who are depressed or over anxious or have some condition that is understood only by humans.

When they google the word depressed and they get all these appropriate responses from the computer engine... they'll think the computer understands their condition. And that could not be further from the truth.
 
  • #42
quantumcarl said:
Yes, sorry if I come across as being some kind of humans-only buff, but that is what we are. We cannot assume an alien "understands" the way we do.
We can understand each other because we are all human. When it comes to a machine understanding a human... this is impossible according to the definition of the term understanding.
You qualify your reply by saying "we cannot assume an alien understands the way we do". This is, with respect, not the same as saying "we cannot assume the alien understands".
I could argue that "moving finger cannot assume quantumcarl understands the way that moving finger does" (and cite as evidence the fact that many contributors on this forum seem to have problems understanding each other's point of view), but that is NOT the same as saying "quantumcarl does not understand".
Take the analogy of Kirk and Spock. It may be the case that Kirk and Spock "understand" things in different ways, but it would be wrong to claim that one of them has any more right to claim "understanding" than the other simply because it is human. Using your definition, Spock could claim that Kirk is unable to understand (because Kirk is not Vulcan), and Kirk could claim that Spock is unable to understand (because Spock is not human).
An extreme interpretation of your definition would indeed be that "only moving finger understands" - quantumcarl is not moving finger, therefore quantumcarl can never understand things exactly the same way that moving finger does. Would that be rational?
quantumcarl said:
When one human has skinned a knee and it's bleeding, another human, who has had a similar misfortune, can truly understand what it is to have skinned his knee and can therefore empathize because of their experience with the exact same circumstances...
ie: damaged cellular structure, pain, loss of blood, embarrassing stains on the pants, limping, inability to walk properly, band-aids, memories of the parent taking care of skinned knees... etc...
An alien can't understand these circumstances... especially if it has not evolved legs or has evolved beyond legs.
All this shows is that an alien might have problems understanding some peculiarly human experiences - just as Spock may have trouble understanding human emotions - but it does NOT follow from this that Spock "does not understand" in the general sense of the word "understand".
quantumcarl said:
I fail to see a risk in maintaining the term understanding as an anthropocentric term.
Then I must assume you do not see the humour and the irony in the Spock and Professor C3PO scenarios I have posted above. Do you consider these two scenarios to be quite logical and rational?
quantumcarl said:
Imagine all the humans who are depressed or over anxious or have some condition that is understood only by humans.
When they google the word depressed and they get all these appropriate responses from the computer engine... they'll think the computer understands their condition. And that could not be further from the truth.
An agent can claim to possess "understanding" without meaning that it understands everything. I assume that you claim to possess understanding? But do you claim to understand everything? I hope not (could you accurately diagnose medical conditions?). In the same way a machine could claim to possess understanding without necessarily understanding all of the elements of the human condition.
It is quite possible that a computer could be developed which is more skilled than many ordinary humans in diagnosing human medical conditions - such a thing is not impossible. In a very real sense, such a computer would "understand" much more about human biology, physiology, psychology and medicine than either you or I. I would gladly "trust" such a computer medical diagnosis more readily than I would trust the amateur diagnosis of a well-intentioned but medically ignorant human. Would you?
Quantumcarl, your argument (with respect) implicitly assumes that "understanding" necessarily means the same as "an understanding of, and empathising with, everything about the human condition". Firstly this is indeed extreme anthropocentrism in action, and secondly I would suggest that even humans often do not "understand" according to this definition!
MF
 
  • #43
moving finger said:
You qualify your reply by saying "we cannot assume an alien understands the way we do". This is, with respect, not the same as saying "we cannot assume the alien understands". [...]
MF

An alien might use the term "ravlinz" to describe a condition similar to what we term understanding.

In order to identify for the listener or inquirer the distinct difference between the quality of comprehension in the alien's "brain" function and the human's... we are able to distinguish the (vast) differences by using two different terms... in this case "ravlinz" identifies the quality and origin of comprehension in a specific alien and "understanding" identifies the quality and origin of comprehension in a human.

When you look for understanding... where do you look?

Most people look for someone who understands them because they have a similar background or they may even be a family member.

I suppose some people look for understanding from aliens or machines. What they get are the results of computation and various forms of "ravlinz" (the "r", "l" and "n" are silent).

As you well know, a proper response rarely means someone understands what is being asked to be understood.

We cannot claim to understand one another because, as Dr.D has hammered home, understanding is a relative state.

And as you have pointed out... it is rare that any sort of understanding takes place between two people, here or anywhere else.

Therefore, it seems of the utmost urgency, to me, that we cease to use terms such as "understanding" or "feeling" and so on... to describe a function taking place in an extra-anthropic system (computer or alien or raccoon or African orchid) until humans have been able to use the term in a universal manner which defies relative semanticism and the resulting confusion therein.

Until such a time, I believe there are many other descriptive words in the languages of the people of Earth that can describe the delicate way computers are able to interpret the phenomena of humankind and ensuing environs...

have a nice day

or should I say,

00100100110101000101010101010101111111111010100010101011000000101011010110010000100111010101001010010000101011101010100100000101010100101010101010101001111111110101000101010101001000101010101011101010010101010100100101010101100101011001001101010101010100000101001
 
  • #44
Hi Quantumcarl
Did you deliberately avoid answering most of my questions from my last post, or was that simply an oversight (or perhaps a misunderstanding? :smile: )

quantumcarl said:
An alien might use the term "ravlinz" to describe a condition similar to what we term understanding.
Ahhh, I think I see what you are getting at.
And perhaps aliens are incapable of "sight" also? After all, their eyes must work in a different way to ours, therefore it would be wrong to say that an alien "sees" something the same way that we do. So aliens with eye-like appendages must "quorkfungle" rather than "see", is that the point?
And they "plonkypoop" rather than "hear", yes?
And they certainly cannot speak, because speech is most definitely a human process. Perhaps they "murzboggle" instead?
Am I getting it at last?
Of course, all of this means it will be impossible for us to understand an alien, or for an alien to understand us, because while we are "seeing, hearing and speaking" they will be "quorkfungling, plonkypooping and murzboggling", and ne'er the twain shall meet.
Of course, it does not apply only to aliens. After all, a Frenchman is an alien in this respect. English people "see", but French people "voient". What reason do we have to believe that they are equivalent? How on Earth can English and French people ever be expected to understand each other?
I apologise if you consider my reply above "flippant", but I am simply trying to show how ridiculous the purely anthropocentric perspective is.
quantumcarl said:
In order to identify for the listener or inquirer the distinct difference between the quality of comprehension in the alien's "brain" function and the human's... we are able to distinguish the (vast) differences by using two different terms... in this case "ravlinz" identifies the quality and origin of comprehension in a specific alien and "understanding" identifies the quality and origin of comprehension in a human.
Yes, I see. Just as “comprendre” expresses what a Frenchman does, and “understand” is what an Englishman does. Of course, it stands to reason (following your logic) that these are completely different concepts and a Frenchman could never be said to be able to understand in the same way an Englishman does. Yes, I can see that.
quantumcarl said:
When you look for understanding... where do you look?
Most people look for someone who understands them because they have a similar background or they may even be a family member.
This is simple “conditioning”, but can often be in error. If I want to find someone who understands quantum mechanics then I might be wasting my time asking a family member.
quantumcarl said:
As you well know, a proper response rarely means someone understands what is being asked to be understood.
This applies equally between humans.
quantumcarl said:
We cannot claim to understand one another because, as Dr.D has hammered home, understanding is a relative state.
Do you conclude from this that we never understand each other? If so, why are you here?
quantumcarl said:
And as you have pointed out... it is rare that any sort of understanding takes place between two people, here or anywhere else.
No, with respect, you are misquoting me.
I said
moving finger said:
many contributors on this forum seem to have problems understanding each other's point of view
and
moving finger said:
humans often do not "understand" according to this (ie your) definition
however as you know I do not agree with your definition (as I have said many times, it takes anthropocentrism to ridiculous extremes).
My statements are NOT the same as saying “it is rare that any sort of understanding takes place between two people”
I happen to believe that people CAN reach a level of understanding between themselves (if they wish to), and I also believe that it is possible in principle for a human and a machine to reach a similar level of understanding between themselves.
quantumcarl said:
Therefore, it seems of the utmost urgency, to me, that we cease to use terms such as "understanding" or "feeling" and so on... to describe a function taking place in an extra-anthropic system (computer or alien or raccoon or African orchid) until humans have been able to use the term in a universal manner which defies relative semanticism and the resulting confusion therein.
With respect, I would turn this around and suggest that we must cease to assume that “understanding” is a purely human characteristic and can be defined only in human terms.
Imho, your insistence on defining understanding anthropocentrically only IMPEDES the advance towards that state where we are “able to use the term in a universal manner which defies relative semanticism and the resulting confusion therein”; it does not help the process.
With respect, or maybe that should be quortylzeebunkum
MF
 
  • #45
Hi Quantumcarl
1) I am conscious of the fact that we seem to be at an "impasse". Would you agree?
2) I believe that the impasse is created by our apparently entrenched positions whereby we seem to be viewing the question of "understanding" in "black and white" terms - in other words "either an agent understands, completely, or else it does not understand, at all". Thus we have not been allowing any kinds of "shades of grey" in understanding - we have been arguing thus far as if understanding is an "all or nothing" affair. Would you also agree?
3) Would you agree that understanding, in the real world, is not an "all or nothing" affair? That in fact there can be "shades of grey" in understanding, even between two human agents?
I have more to say on this, but would appreciate your feedback on the above three points first... (so that I can be sure we are on the same page)
With respect

MF

PS: You made a small error in your last post, where you state:

00100100110101000101010101010101111111111010100010101011000000101011010110010000100111010101001010010000101011101010100100000101010100101010101010101001111111110101000101010101001000101010101011101010010101010100100101010101100101011001001101010101010100000101001

In fact, to be entirely logically self-consistent, this should have read:

00100100110101000101010101010101111111111010100010101011000000101011010110010000100111010101001010010000101011101010100100100101010100101010101010101001111111110101000101010101001000101010101011101010010101010100100101010101100101011001001101010101010100000101001

I hope you can appreciate the difference?
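For any reader who wants to locate the correction, a throwaway Python check (both strings copied from the posts above) will find it:

qc_bits = ("00100100110101000101010101010101111111111010100010"
           "10101100000010101101011001000010011101010100101001"
           "00001010111010101001000001010101001010101010101010"
           "01111111110101000101010101001000101010101011101010"
           "01010101010010010101010110010101100100110101010101"
           "0100000101001")
mf_bits = ("00100100110101000101010101010101111111111010100010"
           "10101100000010101101011001000010011101010100101001"
           "00001010111010101001001001010101001010101010101010"
           "01111111110101000101010101001000101010101011101010"
           "01010101010010010101010110010101100100110101010101"
           "0100000101001")
# report every position at which the two strings disagree
for i, (a, b) in enumerate(zip(qc_bits, mf_bits)):
    if a != b:
        print(f"position {i}: {a} -> {b}")

It reports a single flipped bit.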

:rofl:

MF
 
  • #46
moving finger said:
With respect, I would turn this around and suggest that we must cease to assume that “understanding” is a purely human characteristic and can be defined only in human terms.
Imho, your insistence on defining understanding anthropocentrically only IMPEDES the advance towards that state where we are “able to use the term in a universal manner which defies relative semanticism and the resulting confusion therein”; it does not help the process.
With respect, or maybe that should be quortylzeebunkum
MF

Your point is well taken here. I realize that any sentimentality attached to the use of a word may be dysfunctional and impede the advance of its universality.

I am a prude when it comes to terminology, however, and the implications that come with it. You know, I'd like to avoid a scenario such as George Orwell's Newspeak, where... "war is peace"... "freedom is slavery"... "understanding is something a friggin' digitally programmed piece of scrap metal can perform".

I really don't want to see people seeking human kindness and "understanding" from some half-baked motobot just because they have been taught that the definition of understanding applies to whatever entity supplies a seemingly correct response to a question.

moving finger said:
Hi Quantumcarl
1) I am conscious of the fact that we seem to be at an "impasse". Would you agree?
2) I believe that the impasse is created by our apparently entrenched positions whereby we seem to be viewing the question of "understanding" in "black and white" terms - in other words "either an agent understands, completely, or else it does not understand, at all". Thus we have not been allowing any kinds of "shades of grey" in understanding - we have been arguing thus far as if understanding is an "all or nothing" affair. Would you also agree?
3) Would you agree that understanding, in the real world, is not an "all or nothing" affair? That in fact there can be "shades of grey" in understanding, even between two human agents?
I have more to say on this, but would appreciate your feedback on the above three points first... (so that I can be sure we are on the same page)

1. In an infinite universe there are no impasses, so I disagree. Given another month on the question of understanding, my guess is that you and I will have completely reversed our roles in this discussion. Hypothetically.

2. Simulated understanding is similar to partial understanding and is the first step to complete understanding (hypothetically). So, you're correct to imagine that... one day... there will be an organic bot that has been able to program itself to a degree, and then understand the human condition and other topics in a manner that can be considered understanding as we know it.

3. Shades of grey and more. "Understanding" is far more than a result of sensory input and correct responses. That's why the word is used in the context of "care" and "nourishing".

I would like to get into this further but, thankfully, I'm busy. That's why I have avoided some of your questions... and tried to simply afford you my view on the subject at hand.

Until next time... same perplexing topic, same misunderstood word.



:bugeye:
 
  • #47
quantumcarl said:
I realize that any sentimentality attached to the use of a word may be dysfunctional and impede the advance of its universality.
I'd like to avoid a scenario such as …… "understanding is something a friggin' digitally programmed piece of scrap metal can perform".
Oh dear. Might I humbly suggest (with respect) that I do sense here some “sentimentality” with respect to the concept “only humans have any right to claim understanding”. Or maybe it is simply hostility towards non-humans?
Can you perhaps explain exactly why you wish to avoid such a scenario?
quantumcarl said:
I really don't want to see people seeking human kindness and "understanding" from some half-baked motobot just because they have been taught that the definition of understanding applies to whatever entity supplies a seemingly correct response to a question.
I know a few humans who could be described as “half-baked”, and in some cases I doubt their ability to understand certain things, but that would not allow me to conclude that homo sapiens in general is incapable of understanding.
If it can be demonstrated that the “motobot” in question does indeed understand much more than other humans about (for example) the medical diagnosis of human ailments, then why (apart from your emotional repugnance at the thought) would you NOT want to see people seeking medical assistance from such an agent?
quantumcarl said:
1. In an infinite universe there are no impasses, so I disagree.
Is the universe infinite? Even if it were, we are finite beings within that universe, therefore an impasse would be significant.
quantumcarl said:
Given another month on the question of understanding, my guess is that you and I will have completely reversed our roles in this discussion.
I doubt that.
quantumcarl said:
2. Simulated understanding is similar to partial understanding and is the first step to complete understanding (hypothetically).
Can you explain the difference between “simulated understanding” and “understanding”? I see no difference. Understanding is a process, not a physical object. Whilst I agree that a (perfectly) simulated object is not synonymous with the original object, I do not see how a (perfectly) simulated process differs in any significant way from the original process.
quantumcarl said:
So, you're correct to imagine that... one day... there will be an organic bot that has been able to program itself to a degree, and then understand the human condition and other topics in a manner that can be considered understanding as we know it.
What? You are now saying that a machine is (in principle) capable of understanding?
I am confused.
quantumcarl said:
"Understanding" is far more than a result of sensory input and correct responses. That's why the word is used in the context of "care" and "nourishing".
And you believe that machines cannot (in principle) “care” and “nourish”?
Apart from this, when I say “I understand Quantum Mechanics”, does that mean there is necessarily any care and nourishment associated with my understanding?
With respect
MF
 
  • #48
moving finger said:
Can you explain the difference between “simulated understanding” and “understanding”? I see no difference.

Actor A: "If we reverse the polarity of the neutron flux whilst emiiting a tetrion burst from the defelctor array, we shouldbe able to enter the Quantum Slipstream"

Actor B: "I see..."
 
  • #49
Hi again Tournesol

Tournesol said:
Actor A: "If we reverse the polarity of the neutron flux whilst emiiting a tetrion burst from the defelctor array, we should be able to enter the Quantum Slipstream"
Actor B: "I see..."

Am I correct in assuming that this is presented as an example of a "simulation of understanding"?

Am I permitted to interrogate the above actors, in order to "test the simulation"?

If "yes", then I would ask probing questions to test the quality of the simulation (ie whether they understand or not). If they pass the test, then I would have to conclude that they understand (regardless of whether this is put forward as a simulation or as a genuine case of understanding - the proof of the pudding is in the eating).

If they do not pass the test then I must conclude that it is a poor simulation, not a true simulation, and they do not understand.

In other words, in the case of understanding, if it walks like a duck, and quacks like a duck, then it is a duck.
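A sketch of what such a duck test might look like in code (the probe list, the judges and the pass threshold are all hypothetical choices, not a settled criterion):

def passes_understanding_test(agent, probes, threshold=0.9):
    """Interrogate an agent with probing questions and score the answers.
    probes: a list of (question, judge) pairs, where judge(answer) -> bool.
    The hypothesis 'this agent understands' survives only if enough
    answers hold up under scrutiny."""
    correct = sum(judge(agent.respond(question)) for question, judge in probes)
    return correct / len(probes) >= threshold

Note that nothing in the test asks what the agent is made of; only its answers are scored.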

With respect

MF
 
  • #50
Sorry, I have been in Biloxi helping with problems created by Katrina.:frown:
Moving Finger said:
With respect, Dr Dick, please read the entire thread before making such accusations. The question was already asked in post #17 of this thread whether the intent of the thread’s originator was to discuss “understanding” or to discuss whether AI could fully implement a human brain. The thread’s originator confirmed that his/her intent was to discuss “understanding”.
Yes, and it is exactly the pressure to take that as a rational perspective which constitutes the "misdirection of attention" which I was referring to. :wink:
quantumcarl said:
In this thread we've made the central issue of John Searle's Chinese Room the definition of "understanding" because, as is glaringly evident in the quote you have repeated from the hypothesis... ie:
...the word "understanding" is used three times as though everyone knows what "to understand" means... and that it applies to a computer as much as to the originator of the word, a human.
Confirming you have been sucked into an invalid perspective.:grumpy:
quantumcarl said:
You are either confused and don't possess a great deal of information, experience or empathy about a subject, or you do possess a great deal of information, experience or empathy about a subject.
Is that supposed to be your definition of "understanding"? If that is the case, then look at the definition for a moment. You are talking about what someone possesses without providing any mechanism at all for determining what they possess. It is a definition of utterly no significance and serves no purpose other than to point out the character of your perspective (which provides no basis for discussion).:confused:
quantumcarl said:
What happened to being able to "explain" something?
As I said, "I hold that the requirement is that one can present a statement of expectations which are consistent with other information related to the thing supposedly understood". That statement puts forth the essence of any explanation of anything. Any explanation of anything is essentially a self consistent story in accordance with the known facts which gives logical credence to expectations of those facts and possible additional facts not yet known. :rolleyes:
Doctordick said:
The judgment of the quality of that "understanding"...
The method by which one decides whether "an understanding" being judged is of good or poor quality;
Doctordick said:
... is related to the extent to which that statement of expectations remains consistent with...
is a direct function of the relationship between the expectations implied by "that story" and
Doctordick said:
other information related to the thing supposedly understood
the known facts.
Doctordick said:
as the information on the subject increases.
That means the story remains reasonable as one learns more.
If one is able to create such a story then it is the presumption of the listener that whoever created the story understood what they were talking about. Without the story, the feeling that one understands something is little more than self delusion.:biggrin:
quantumcarl said:
There's still the problem of figuring out what you've written here. Please re-phrase.
The whole thing is being able to create a mechanism which yields expectations consistent with what is known, and that is already being done in many video games on the market at this very moment. That phenomenon will become more and more sophisticated as time goes on. Some day we will reach the stage where the common man will use the phrase "it understands" when his computer "explains" things to him. And that day is not so far away as many think. :devil:
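A toy version of such a mechanism (a sketch only, with arbitrary details): keep counts of observed facts, state expectations as probabilities, and judge the "story" by whether those expectations stay consistent as information increases:

from collections import Counter

class ExpectationMechanism:
    """A toy 'story': expectations derived from the facts seen so far."""
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def observe(self, fact):
        self.counts[fact] += 1
        self.total += 1

    def expectation(self, fact):
        # add-one smoothing: unseen facts are improbable, not impossible
        return (self.counts[fact] + 1) / (self.total + len(self.counts) + 1)

def consistency(mechanism, new_facts):
    # the 'story' is judged by how well its expectations fit incoming facts
    return sum(mechanism.expectation(f) for f in new_facts) / len(new_facts)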
moving finger said:
Am I permitted to interrogate the above actors, in order to "test the simulation"?
If "yes", then I would ask probing questions to test the quality of the simulation (ie whether they understand or not). If they pass the test, then I would have to conclude that they understand (regardless of whether this is put forward as a simulation or as a genuine case of understanding - the proof of the pudding is in the eating).
If they do not pass the test then I must conclude that it is a poor simulation, not a true simulation, and they do not understand.
In other words, in the case of understanding, if it walks like a duck, and quacks like a duck, then it is a duck.
I think you fundamentally understand my perspective even if you deny it. Asking probing questions is the essence of "getting the rest of the story" (that explanation I was talking about). The problem in simulating "understanding" is to develop a mechanism which yields expectations consistent with what is observed. That is exactly what you are referring to when you say " if it walks like a duck, and quacks like a duck, then it is a duck".:smile:
Have fun -- Dick
Knowledge is Power
and the most common abuse of that power is to use it to hide stupidity
 
  • #51
Doctordick said:
I think you fundamentally understand my perspective even if you deny it.
With respect, which perspective is this? You have lost me here.

Doctordick said:
Asking probing questions is the essence of "getting the rest of the story" (that explanation I was talking about).
Asking probing questions is simply the means of testing, and trying to falsify, the hypothesis "this agent understands". I would not expect anyone to take for granted (for example) any claim that "my cat understands quantum physics" - the only way to establish whether such a claim is true or false is to put it to the test.

MF
 
  • #52
moving finger said:
If "yes", then I would ask probing questions to test the quality of the simulation (ie whether they understand or not). If they pass the test, then I would have to conclude that they understand (regardless of whether this is put forward as a simulation or as a genuine case of understanding - the proof of the pudding is in the eating).

That would be a Turing Test then. Of course, the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).

IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
 
  • #53
Tournesol said:
That would be a Turing Test then. Of course, the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.

Searle's opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that if the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.

This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts. Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.

The more I look at Searle's experiment, the more it seems to lack controls and parallel comparisons.

Searle's opponents said:
Thus, although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part.

Righty oh. Mind you, the person in the room has an "entire brain" and still does not understand written Chinese. They are simply performing a rote task, matching scrawls of ink on paper to scrawls of ink on paper. There is no understanding taking place. As for the room... it is very silent through all of this... exhibiting no understanding that there is even someone in the room.


Searle's Opposition said:
Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper.

So, we have seen behind the curtain. The great wizard really doesn't understand what he's telling us; however, the sum of the parts, the "emergent property" of what's behind the curtain, the wizard and all his books, is what we should gullibly accept as "understanding".

I (still) disagree. If you have any comprehension of understanding... you already know why I disagree.
 
  • #54
doctordick said:
That means the story remains reasonable as one learns more.
If one is able to create such a story then it is the presumption of the listener that whoever created the story understood what they were talking about. Without the story, the feeling that one understands something is little more than self delusion.
I think you're mincing the concept here. The "explanation" or "story" is not requisite for "understanding". These elements are only necessary for an observer to ascertain whether a subject possesses "understanding". You might assert an observer-created universe, but then we'd get muddled in discussing whether or not the subject is capable of observing itself.

QC said:
Tournesol said:
That would be a Turing Test then. Of course, the Chinese Room is specifically designed as a response to the TT (or to the idea that it is an entirely sufficient criterion).
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
Searle's opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that if the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.
This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts. Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.

The more I look at Searle's experiment, the more it seems to lack controls and parallel comparisons.
But this isn't wrong. It's one of the fundamental flaws of the CR. It is exactly this which prevents the CR from "understanding" Chinese.
Language is purely representative. Words do not have an inherent semantic property. Those "applicable or non-applicable concepts" you mention are in fact all applicable to the understanding of any given language, because these concepts are what define the words. The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red". Since we are not telepaths, we use words as tools to communicate the information inside our brains. The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about, and hence they lack any meaning aside from the process of shuffling them about as far as the CR is concerned.
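The design TheStatutoryApe describes is easy to caricature in code. A sketch of the rulebook (the entries are invented stand-ins, romanized so they print anywhere):

# The operator's entire job: match input symbols to output symbols.
# Nothing in this table means anything to the code that consults it.
rulebook = {
    "ni hao ma?": "wo hen hao, xiexie.",      # "How are you?" -> "Fine, thanks."
    "ni dong zhongwen ma?": "dangran dong.",  # "Do you understand Chinese?" -> "Of course."
}

def chinese_room(symbols: str) -> str:
    # pure lookup: no percept, concept or experience is consulted
    return rulebook.get(symbols, "qing zai shuo yibian.")  # "Please say that again."

Whether such a lookup, scaled up far enough, would amount to understanding is exactly what the two sides of this thread dispute.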
 
  • #55
moving finger said:
Oh dear. Might I humbly suggest (with respect) that I do sense here some “sentimentality” with respect to the concept “only humans have any right to claim understanding”.

Humans created the word understanding and they have the right to apply it to what they want. My position is that there is no generic use of the word understanding where it applies to a pocket calculator. When you step on the gas in your car, your car does not understand that you want to move faster in a direction... it simply responds properly to your sensory input.
moving finger said:
Can you perhaps explain exactly why you wish to avoid such a scenario?

Until humans have an understanding of one another (French, Iranians, Iraqis, Sunnis, Shiites, Egyptians and Mongolians included) I think we won't have a good comprehension of what understanding is. Since we do not have a good definition of the word understanding, and therefore employ the word without really knowing what it means... we should hold off on applying it to computers while it either evolves a more solid and universal meaning or disappears from the language completely.


moving finger said:
I know a few humans who could be described as “half-baked”, and in some cases I doubt their ability to understand certain things, but that would not allow me to conclude that homo sapiens in general is incapable of understanding.

I'm glad you are not being fallacious about humans.


moving finger said:
If it can be demonstrated that the “motobot” in question does indeed understand much more than other humans about (for example) the medical diagnosis of human ailments,
then why (apart from your emotional repugnance at the thought) would you NOT want to see people seeking medical assistance from such an agent?

I don't mind seeking medical assistance from a bot; however, the information and treatment, if any, from the bot would only demonstrate to me that the bot is a tool that is helping me understand my situation... I would not assume that the bot understands my situation. As far as I know, the motobot is only responding correctly to various stimuli, and that, as far as I know, is not the definition of understanding.


moving finger said:
Whilst I agree that a (perfectly) simulated object is not synonymous with the original object, I do not see how a (perfectly) simulated process differs in any significant way from the original process.

And that must be frustrating for you.


moving finger said:
And you believe that machines cannot (in principle) “care” and “nourish”?
Apart from this, when I say “I understand Quantum Mechanics”, does that mean there is necessarily any care and nourishment associated with my understanding?

With respect
MF


That depends on whether you put any care into your study of QM and whether you nourished certain relationships and concepts surrounding your studies.
 
  • #56
TheStatutoryApe said:
I think you're mincing the concept here. The "explanation" or "story" is not requisite for "understanding". These elements are only necessary for an observer to ascertain whether a subject possesses "understanding". You might assert an observer-created universe, but then we'd get muddled in discussing whether or not the subject is capable of observing itself.
But this isn't wrong. It's one of the fundamental flaws of the CR. It is exactly this which prevents the CR from "understanding" Chinese.
Language is purely representative. Words do not have an inherent semantic property. Those "applicable or non-applicable concepts" you mention are in fact all applicable to the understanding of any given language, because these concepts are what define the words. The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red". Since we are not telepaths, we use words as tools to communicate the information inside our brains. The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle them about, and hence they lack any meaning aside from the process of shuffling them about as far as the CR is concerned.

Does the machine understand that it is shuffling about ink on paper... or, in the case of a computer, does the computer understand that it is collecting data and computing an answer?

This is where understanding becomes a question of consciousness... so consciousness must be clearly defined as well...

I also believe the word "experience" must play into this discussion because all understanding is a result of experience.

Thanks!
 
  • #57
QC said:
Does the machine understand that it is shuffling about ink on paper... or, in the case of a computer, does the computer understand that it is collecting data and computing an answer?
Does a pocket calculator understand math? I'm thinking that you would say no (this is admittedly only an assumption). If so, then, considering a human who understands math (which I am assuming you would agree is possible), what is the fundamental difference between a human working a math problem and a calculator working a math problem?

I'm trying to determine for myself what the difference is as well and would like your input. I've read an essay whose logic would assert that a human does not utilize a "conscious" act in such an activity. I have a hard time refuting that, so I more or less accept it, but it makes the idea of "understanding" even more elusive.
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?
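One way to make the static/dynamic distinction concrete is a crude sketch like the following (the learning rule is chosen arbitrarily):

# Static: the calculator's mapping is fixed at manufacture and never changes.
def calculator_add(a, b):
    return a + b

# Dynamic: a learner that revises its internal rule from feedback.
class LearningAdder:
    def __init__(self):
        self.correction = 7  # starts out systematically wrong on purpose

    def add(self, a, b):
        return a + b + self.correction

    def learn(self, a, b, true_sum):
        # revise the internal rule toward the observed outcome
        self.correction += true_sum - self.add(a, b)

On this reading, only the second kind of process, one that updates itself against experience, would even be a candidate for "understanding".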
 
  • #58
Tournesol said:
That would be a Turing Test then.
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.
Tournesol said:
IOW, Searle is saying that what simulated understanding would be like from the inside is following a lot of meaningless (to the operator) rules.
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator" (he uses the rulebook to answer questions in Chinese), but it does not follow that the man in the CR is also the "agent which possesses understanding". The operator in this case is following a lot of (to him) meaningless rules, because the understanding is in the entire CR, not in the operator.
In the same way, if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.
May your God go with you.
MF
 
  • #59
Searle’s opponents said:
A number of objections have been raised about his (Searle's) conclusion, including the idea that if the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain.
quantumcarl said:
This is wrong because the room is specific to interpreting Chinese... an entire brain holds information about many other applicable or non-applicable concepts.
Incorrect. In practice we observe that brains possess an understanding of more than “just Chinese”, but this once again is due to our anthropocentric perspective. I would argue that in principle a brain could exist which simply “understands Chinese” in the same way that the CR understands Chinese, with no understanding (for example) of non-Chinese topics.
quantumcarl said:
Sometimes the concepts offer input with regard to "understanding" the concept of Chinese. The room, however, would equate to a simple set of rules about Chinese characters, and that's it.
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Can you give an example of a concept which (a) offers input to understanding the concept of Chinese, but at the same time (b) could not possibly be a part of the rules of the CR?
Searle's opponents said:
Thus, although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part.
quantumcarl said:
the person in the room has an "entire brain" and still does not understand written Chinese. They are simply performing a rote task, matching scrawls of ink on paper to scrawls of ink on paper. There is no understanding taking place.
In the same way, each neuron in the brain “performs a rote task”. Can you identify exactly where the “homunculus that understands” sits in the brain?
quantumcarl said:
As for the room... it is very silent through all of this... exhibiting no understanding that there is even someone in the room.
The CR is certainly NOT silent. Ask it any question in Chinese, and it will answer correctly and rationally. How is this “silent”?
Searle's Opposition said:
Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper.
quantumcarl said:
So, we have seen behind the curtain. The great wizard really doesn't understand what he's telling us; however, the sum of the parts, the "emergent property" of what's behind the curtain, the wizard and all his books, is what we should gullibly accept as "understanding".
And if the explanation is unacceptable to you, then what (pray) do you accept as evidence that a human “understands”?
quantumcarl said:
I (still) disagree. If you have any comprehension of understanding... you already know why I disagree.
With respect, your statement here is tantamount to “I cannot explain what understanding is, but I KNOW what it is” …….and all the rest is handwaving……
May your God go with you
MF
 
  • #60
TheStatutoryApe said:
Language is purely representative. Words do not have an inherent semantic property.
Semantic understanding arises from symbol manipulation. I would claim that the CR could carry out such symbol manipulation.
TheStatutoryApe said:
The word "red" is defined by all of the concepts in your brain relating to the experience of what we label "red".
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possess “sight” in order to understand Chinese?
In fact, are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)

A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
TheStatutoryApe said:
Since we are not telepaths we use words as tools to communicate the information inside our brains.
The CR possesses information inside itself; the CR communicates with words.
TheStatutoryApe said:
The CR (as designed by Searle) has a set of tools with no purpose other than to shuffle symbols about, and hence, as far as the CR is concerned, they lack any meaning aside from the process of shuffling them about.
Your argument seems to be based on the suggestion that the tools in the CR “lack any meaning” because their only purpose “is to shuffle about words”.
I disagree. The purpose of the tools in the CR is “to understand Chinese”.
From what does “meaning” arise?
I would suggest that “meaning” arises simply from a process of symbol manipulation.
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
May your God go with you
MF
 
  • #61
quantumcarl said:
When you step on the gas in your car, your car does not understand you want to move faster in a direction... it simply responds properly to your sensory input.
And it is simply from these rather simplistic analogies you conclude that “no machine can ever understand”?
quantumcarl said:
Until humans have an understanding of one another - French, Iranians, Iraqis, Sunnis, Shiites, Egyptians and Mongolians included - I think we won't have a good comprehension of what understanding is.
But (with respect) your argument is based on a premise which contradicts this statement, which is that “quantumcarl comprehends what understanding is”, and you define it such that only humans can possess understanding.
quantumcarl said:
Since we do not have a good definition of the word understanding and therefore employ the word without really knowing what it means... we should hold off on applying it to computers while it either evolves a more solid and universal meaning or disappears from the language completely.
I disagree. The correct (logical) conclusion from your argument should be “we should hold off making any definitive statements about whether or not a machine can understand, until we understand what understanding is”. This seems not to be the position that you take.
quantumcarl said:
I don't mind seeking med. assist. from a bot, however, the information and treatment, if any, from the bot would only demonstrate to me that the bot is a tool that is helping me understand my situation...
And my GP (that’s General Practitioner over here in England, otherwise known as family doctor) is in a very real sense “a tool that is helping me understand my (medical) situation” – but so what?
quantumcarl said:
I would not assume that the bot understands my situation. As far as I know motobot is only responding correctly to various stimuli and that, as far as I know, is not the definition of understanding.
And similarly I have no idea whether my GP really “understands my situation” in the sense that he does not necessarily know all about my background, my childhood, my hopes, fears, beliefs, prejudices, fantasies, aberrations, fetishes……etc etc….. but that does not mean that I conclude from this that my GP “does not understand my medical condition”. Why would a bot need to fully “understand your situation” (any more than a human doctor does) in order to demonstrate an understanding of medicine?
moving finger said:
I do not see how a (perfectly) simulated process differs in any significant way from the original process.
quantumcarl said:
And that must be frustrating for you.
Actually it is very satisfying.
Are you suggesting that you do see how a (perfectly) simulated process differs in a significant way from the original process? Can you explain?
moving finger said:
when I say “I understand Quantum Mechanics”, does that mean there is necessarily any care and nourishment associated with my understanding?
quantumcarl said:
That depends if you put any care into your study of QM and if you nourished certain relationships and concepts surrounding your studies.
Why do you think a machine could necessarily not put care into its study of QM and could not nourish certain relationships and concepts surrounding its studies?
May your God go with you
MF
 
  • #62
TheStatutoryApe said:
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?
This may be true of simple pocket calculators, but there is no reason in principle why a "learning calculating machine" could not exist.
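As a deliberately trivial sketch of what that could mean (illustrative only, with a toy hypothesis space of three operations), such a machine might infer its rule from worked examples rather than having it hard-wired:

Code:
# A minimal "learning calculator" sketch: it induces the operation that is
# consistent with worked examples instead of being fixed in advance.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}

def learn(examples):
    # Return the first operation consistent with all (a, b, result) triples.
    for name, op in OPS.items():
        if all(op(a, b) == result for a, b, result in examples):
            return name
    return None  # no hypothesis in the toy space fits the data

print(learn([(2, 3, 6), (4, 5, 20)]))  # -> 'mul': the rule was induced from data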

MF
 
  • #63
moving finger said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.

Yes: figure out how the brain produces consciousness physically, and
see if the AI has the right kind of physics to produce consciousness.

moving finger said:
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"

He is as a matter of definition.

moving finger said:
(he uses the rulebook to answer questions in Chinese), but it does not follow that the man in the CR is also the "agent which possesses understanding". The operator in this case is following a lot of (to him) meaningless rules, because the understanding is in the entire CR, not in the operator.

...and supposing the operator internalises the rules...


moving finger said:
In the same way, if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.

...and supposing consciousness requires understanding; then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.

This is Searle's reductio of the Systems Response.

Well, you say, understanding doesn't require consciousness.

But it does; there is a difference between competencies that are displayed
instinctively, or learned by rote, and those that are understood.
 
  • #64
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?

Consciousness.
 
  • #65
moving finger said:
Semantic understanding arises from symbol manipulation.

You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.


"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possesses “sight” in order to understand Chinese?

It is perfectly reasonable to suggest that anyone needs normal vision in
order to fully understand colour terms in any language.

moving finger said:
In fact, are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)

The latter is critical to the ordinary, linguistic understanding of "red".

moving finger said:
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.

It does not mean they are completely incapable; it does not mean they are as capable as a sighted person.

Are you conceding that an AI's understanding would be half-baked?


moving finger said:
Our senses are aids to our understanding, they are not the sole and unique source of understanding.

They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it")

moving finger said:
The CR possesses information inside itself; the CR communicates with words.


moving finger said:
I would suggest that “meaning” arises simply from a process of symbol manipulation.

Are you claiming that is sufficient, or only necessary?

moving finger said:
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?

Presumably on the basis that while it has one necessary-but-insufficient ingredient, symbol manipulation, it lacks another: sensation.
 
  • #66
moving finger said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.
Tournesol said:
Yes: figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness.
What you suggest (with respect) is not a test, let alone a test of understanding. What you suggest is an explanation (of consciousness, not of understanding per se).
moving finger said:
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"
Tournesol said:
He is as a matter of definition.
? I’m not sure what you mean here. Are you saying that you disagree with my above statement?
moving finger said:
if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.
Tournesol said:
...and supposing consciousness requires understanding; then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.
This does not follow. Why should the man’s consciousness necessarily understand anything of what it is manipulating in the internalised rulebook (any more than the man in the CR understands anything of Chinese – the man in this case is consciously aware of manipulating Chinese characters, but he has no understanding of them)?
Tournesol said:
there is a difference between competencies that are displayed
instinctively, or learned by rote, and those that are understood.
None of the above shows that understanding requires consciousness, only that there is more to understanding than simply being able to repeat a few phrases.
moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?
Tournesol said:
Consciousness.
Is this merely your opinion, or can you provide any evidence that this is necessarily the case?
moving finger said:
Semantic understanding arises from symbol manipulation.
Tournesol said:
You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.
What is missing (in your opinion)? Oh yes, consciousness. But I don’t see why consciousness is required.
moving finger said:
"experiencing the sensation of seeing red" is NOT tantamount to "understanding what the adjective red means".
Or are you perhaps suggesting that an agent must necessarily possesses “sight” in order to understand Chinese?
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
moving finger said:
are you suggesting even that an agent must possesses the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)
Tournesol said:
The latter is critical to the ordinary, linguistic understanding of "red".
It has nothing to do with the information-processing sense of understanding what red is, it has only to do with the sense-experience of red.
moving finger said:
A blind person perhaps does not know the experience or sensation of seeing red, but that does not mean that person is incapable of any understanding, nor that he/she is incapable of understanding what the adjective "red" means.
Tournesol said:
It does not mean they are completely incapable; it does not mean they are as capable as a sighted person.
Are you conceding that an AI's understanding would be half-baked?
Where have I conceded that? But you seem to be implying that a blind person’s understanding would be half-baked.
moving finger said:
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
Tournesol said:
They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it")
I dispute they are a necessary condition. If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
Are you suggesting that a blind person does not have full human semantics?
Does this mean a blind person is incapable of understanding?
moving finger said:
On what basis do you claim (ie how can you show) that there is necessarily no “meaning” in the CR?
Tournesol said:
Presumably on the basis that while it has one necessary-but-insufficient ingredient, symbol manipulation, it lacks another: sensation.
Sensation is a necessary ingredient of understanding? Therefore if you place me in a state of sensory-deprivation it follows that I will lose all understanding, is that correct?
May your God go with you
MF
 
  • #67
TheStatutoryApe said:
Does a pocket calculator understand math? I'm thinking that you would say no (this is admittedly only an assumption). If so, then considering a human who understands math (which I am assuming you would agree is possible), what is the fundamental difference between a human working a math problem and a calculator working a math problem?
I'm trying to determine for myself what the difference is as well and would like your input. I've read an essay whose logic would assert that a human does not utilize a "conscious" act in such an activity. I have a hard time refuting that so I more or less accept it, but it makes the idea of "understanding" even more elusive.
I'm thinking that a true definition of what we call "understanding" relies on a dynamic process such as "learning". The element of the calculator's pseudo-understanding of math is static. What do you think?

I have also had the idea that learning is a product of understanding. Learning implies experience that is stored and readily available even when the task does not require the learned experience. (yet as you say, it is the culmination of information in an entire brain that lends itself to understanding)

I have mentioned genetic algorithms as a set of programs that actually builds with the data it is fed, in a manner that exhibits the same leaps and "aha" moments as a human brain.
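For anyone unfamiliar with the technique, here is a textbook-style sketch of the basic genetic-algorithm loop (purely illustrative - the general idea, not the actual cancer-research or laser program discussed below): score a population of candidate solutions, keep the fittest, and breed new candidates by crossover and mutation.

Code:
import random

# Textbook genetic-algorithm loop: evolve bitstrings toward a target pattern.
# The all-ones TARGET is a stand-in; a real application scores candidates
# against its own problem-specific fitness measure.
TARGET = [1] * 20

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect candidate has evolved
    parents = population[:10]  # selection: keep the fittest third
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    population = parents + children  # elitism: parents carry over unchanged

print(generation, fitness(population[0]))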

A bit of history with the genetic algorithm: I was experimenting with the idea of using a genetic algorithm as a 24/7 research item that would theoretically test various untested and unformulated forms of treating cancer. I wanted to find examples of its use and only found one other person utilizing the program. This was a laser scientist who was working on Star Wars in the mid-nineties. He said it did help a lot with his math and geometric calculations as well as the simulations of situations he was setting up. In the end, as you know, the program was deemed a waste of money.

Theoretically I could set up a genetic algorithm and enter info on the fine art masters and info on how to reproduce or surpass their works; then, electronically, it could probably produce a masterpiece a day for as long as there was electricity available to it. Each piece would be as individual and as compositionally intricate as any of the masters' works (theoretically).

However, I could also frame a sand-dune with a camera lens and every day I would get a perfectly composed and individual visual piece of art ... if not every few seconds (in a wind storm!)...

My question does not have to do, at the moment, with whether a calculator, sand dune or extremely intricate computer "understands" math or art or biology or economics... the way we do (because I don't think it is a fair comparison)... but with whether the calculator etc. experiences itself and the steps it is taking to offer us a correct response to stimuli.

As I understand it, understanding only comes from someone who understands a topic through their experience of it and through their experience as a consciously existing being. Understanding between humans is only possible because of the common experience they share which is... being human.

Are the sand-dune, computer or cell phone conscious of their experience and existence? If this can be convincingly demonstrated (which is practically impossible to prove, even in humans), then I think we have a foot-hold on what understanding is and whether it could equally be applied to a heap of silicon chips as well as to the minerals of which a human is composed.

Sorry, out of time.
 
  • #68
quantumcarl said:
understanding only comes from someone who understands a topic through their experience of it and through their experience as a consciously existing being.
Why should this necessarily be the case?

quantumcarl said:
Understanding between humans is only possible because of the common experience they share which is... being human.
I think you are talking of a special kind of understanding here, one with high empathy. But I do not need to empathise with a Frenchman in order to understand the French language.

MF
 
  • #69
MF said:
Can you or anyone else propose another test? I'm quite open to the idea of any other kind of test.

Tournesol said:
Yes: figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness.


moving finger said:
What you suggest (with respect) is not a test, let alone a test of understanding. What you suggest is an explanation (of consciousness, not of understanding per se).

It is a test based on an explanation; I am saying we have to solve the hard
problem first, before we can have a genuine test.


moving finger said:
Here we must be careful to distinguish between "the operator" and the "agent which possesses understanding". The man in the CR could be looked upon as the "operator"
Tournesol said:
He is as a matter of definition.


moving finger said:
? I’m not sure what you mean here. Are you saying that you disagree with my above statement?

I am saying "the operator" means "the man in the room".



moving finger said:
if the man internalises the rulebook, the man's consciousness then performs the role of the operator ("using" the rulebook), but it does not follow that the man's consciousness possesses understanding. The consciousness in this case is following a lot of (to it) meaningless rules, because the understanding is in the internalised rulebook, not in the consciousness.

Tournesol said:
...and supposing consciousness requires understanding; then the operator does consciously understand Chinese, because he understands Chinese by virtue of manipulating the rules; and the operator doesn't understand Chinese, because he has no conscious awareness of understanding Chinese.

moving finger said:
This does not follow. Why should the man’s consciousness necessarily understand anything of what it is manipulating in the internalised rulebook (any more than the man in the CR understands anything of Chinese – the man in this case is consciously aware of manipulating Chinese characters, but he has no understanding of them)?

If manipulating symbols is all there is to understanding, and if consciousness
is part of understanding, then there should be a conscious awareness of
Chinese in the room (or in Searle's head, in the internalised case).

But, by the original hypothesis, there isn't.

You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding... unless you claim that human understanding has nothing to do with consciousness either.

But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.



Tournesol said:
there is a difference between competencies that are displayed instinctively, or learned by rote, and those that are understood.

moving finger said:
None of the above shows that understanding requires consciousness, only that there is more to understanding than simply being able to repeat a few phrases.

If you understand something, you can report that you know it, explain how you know it, etc. That higher-level knowing-how-you-know is consciousness by definition.


moving finger said:
The CR would, by definition, be able to process (understand) all concepts that impact in any way on an understanding of Chinese. What else is there to “understanding Chinese”?

Tournesol said:
Consciousness.

moving finger said:
Is this merely your opinion, or can you provide any evidence that this is necessarily the case?

It is a matter of definition -- it is part of how we distinguish understanding
from mere know-how.


moving finger said:
Semantic understanding arises from symbol manipulation.
Tournesol said:
You have no reason to suppose that is the only requisite.
Whether it does or not is very much open to question.


moving finger said:
What is missing (in your opinion)? Oh yes, consciousness. But I don’t see why consciousness is required.


Write down a definition of "red" that a blind person would understand.

Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.


moving finger said:
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?

They don't fully lack it, they don't fully have it. But remember that a
computer is much more restricted.


moving finger said:
are you suggesting even that an agent must possess the faculty of sight in order to understand what is meant by the adjective "red"?
(Note here that I mean "understand" in the literal scientific information-processing sense of to understand what the phenomena are that give rise to the sensation of red, I do NOT mean it in the sense of "I have experienced seeing the colour red, therefore I understand what red is" - this latter is (to me) NOT understanding, it is merely sense-experience)
Tournesol said:
The latter is critical to the ordinary, linguistic understanding of "red".


moving finger said:
It has nothing to do with the information-processing sense of understanding what red is, it has only to do with the sense-experience of red.

If the "information processing" sense falls short of full human understanding,
and I maintain it does, the arguemnt for strong AI founders and Searle makes
his case. Remember , he is not attacking weak AI, the idea that computers
can come up with some half-baked approxiamtion to human understanding.




moving finger said:
Where have I conceded that? But you seem to be implying that a blind person’s understanding would be half-baked.

Yes.


moving finger said:
Our senses are aids to our understanding, they are not the sole and unique source of understanding.
Tournesol said:
They can be a necessary condition without being a sufficient condition.
If an AI lacks them, it would not have full human semantics ("If a lion could speak, we would not be able to understand it")


moving finger said:
I dispute they are a necessary condition. If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.

They are necessary to learn the meaning of sensory language in the first place. Once learnt, they are no longer necessary -- people who become blind in adulthood do not unlearn the meanings of colour-words.

moving finger said:
Are you suggesting that a blind person does not have full human semantics?
Yes -- neither does someone who has never been in love, given birth, tasted caviare and so on.
Of course they may have "good enough" semantics -- hardly anyone has full
semantics. But a silicon computer would be much more semantically limited than
a person.

moving finger said:
Does this mean a blind person is incapable of understanding?

Not on the "good enough" basis. But the case of a computer, or chinese room,
is much more extreme.


moving finger said:
Sensation is a necessary ingredient of understanding? Therefore if you place me in a state of sensory-deprivation it follows that I will lose all understanding, is that correct?

No: if you lack the requisite sense, you cannot attach meaning to sensory language in the first place. If you disagree, define "red" in such a way that a person blind from birth could understand it.
 
  • #70
moving finger said:
I think you are talking of a special kind of understanding here, one with high empathy. But I do not need to empathise with a Frenchman in order to understand the French language.

"High empathy"? Please explain. Is there such thing as a "low empathy"?

As far as I know, empathy is empathy. It is an ability to understand the circumstances influencing another human being, as well as the ability to identify with objects and animals other than humans. It is a part of understanding and a powerful by-product of consciousness.

You don't need to empathize with a Francophone to understand the French language?

Of course you do. Otherwise you wouldn't be learning French. As soon as the vowels and all those damn silent letters start forming in your mouth... and you have to twist an accent out of your tongue... you are on the path to empathizing with the French people... like it or not. You are assuming their role and method of communication. When you assume the role or... "walk in their shoes" (so to speak) you are truly standing under them... or... understanding the people and their language.

Understanding describes a function in humans that is more complex than the simple ability to repeat words in a correct sequence so that communication in French or math or medicine is achieved. That is called comprehension, and it is properly used by the Italians when they ask you "comprende?" as in "can you comprehend what I am saying?"

There is a reason there are different words to describe different functions... the differences between the meanings of words are slight... but they are there for a reason. Terminologies offer subtle shades that help to distinguish the speaker's or writer's references and descriptions.

That is why you see cell differentiation in the plant and animal kingdoms. Different cells function in different ways. They don't work in other organs or tissues. They must be used in the context they have evolved to serve, much in the way languages develop specific terminology to describe specific functions.

The alien term for understanding is different from the North American term "understanding". The alien term describes a completely different function... they may use telepathy... they may have greater experiences... they may hook up with parallel dimensions to ascertain the function of "ravlinz". For humans - and I'm not sure yet what the components of understanding are - we use experience, consciousness, empathy and knowledge in a slap-dash mixture that we call "understanding".

Thanks!
 
