Can computers understand? Can understanding be simulated by computers?

  • Thread starter: quantumcarl
  • Tags: China
In summary, the conversation discusses the concept of the "Chinese Room" thought experiment and its implications on human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #1
quantumcarl
Here is a good puzzle for the Artificial Intelligence buffs in the crowd. The premise is known as the China Room and doesn't have Jane Fonda or Jack Lemmon in the cast... in fact it only has some Chinese characters that hold any of the attention during this little drama.

Here is the situation explained with a bit of background on John Searle.

Name:
John Searle

Dates:
Born: 1932 in Denver, Colorado
Died: n/a

Biography:
John Searle is an American philosopher who is best known for his work on the human mind and human consciousness. According to Searle, the human mind and human consciousness cannot be reduced simply to physical events and brain states.

Searle is particularly well known for developing a thought experiment called the "Chinese Room" argument. With this, he thought he could demonstrate that no computer could ever be made which could really "think" in the way we do - specifically, that it could never acquire an "understanding" of events and processes.

Imagine sitting alone in the room with a huge book full of Chinese characters. Every so often, someone pushes a piece of paper under the door. You take this paper and find that it has Chinese characters on it. Your job is to match up the characters on the paper with the same characters in the book - in doing so, you fill out a new piece of paper with different Chinese characters on it. You don't understand any Chinese, but you know how to fill out the piece of paper by simply taking the appropriate characters from the book.

This, according to Searle, models the behavior of a computer - taking input, putting it through a set of formal rules, and thereby producing new output. Because you don't understand Chinese, you have no idea that the incoming pieces of paper have questions on them and the book is providing you with answers to those questions. As a matter of fact, people on the outside find the answers to be especially insightful and, at times, witty. As far as they are concerned, the room contains a person who understands Chinese.

According to Searle, however, there is no understanding of Chinese present - just a set of formal rules which, if complex enough, can mimic the appearance of genuine understanding. Searle thus concludes that this is all computers will ever be able to accomplish: mimicry which can fool us into thinking that they understand things.
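To make the analogy concrete, here is a deliberately trivial sketch of the rule-following Searle describes. It is purely illustrative - the "rule book" below is invented, and a real Room would need vastly more rules:

```python
# The "book" is just a lookup table pairing incoming Chinese characters
# with outgoing ones. The clerk applies it without knowing what either
# side means. (The entries below are invented for illustration.)
RULE_BOOK = {
    "你好吗?": "我很好。",       # a hypothetical question/answer pair
    "你是谁?": "我是一个房间。",
}

def chinese_room(slip_of_paper: str) -> str:
    """Match the incoming characters in the book and copy out the
    paired characters. Meaning is never consulted at any point."""
    return RULE_BOOK.get(slip_of_paper, "请再说一遍。")  # stock fallback reply

print(chinese_room("你好吗?"))  # the room "answers" with no understanding
```

Whether anything in that loop deserves to be called "understanding" is precisely what is in dispute below.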

A number of objections have been raised against his conclusion, including the idea that if the room is supposed to be analogous to a computer, then the room should also be analogous to the entire brain. Thus, although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part. Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper.

If these ideas spur any thoughts, please feel free to share them in this thread on the differences between human thoughts and physical events such as brain states... if any.
 
  • #2
Searle's Chinese Room argument is "infamous". It is already the subject of, or discussed in, several other threads here, such as:

https://www.physicsforums.com/showthread.php?t=89713

https://www.physicsforums.com/showthread.php?t=91002

I agree with the final paragraph in the first post of this thread, which states:

"although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part. Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper."

MF
 
  • #3
The systems reply says the thinker (in the scenario) isn't Searle, it's the whole Searle-in-the-room system. Searle responds by imagining himself to "internalize all the elements of the system" by memorizing the instructions, etc.: "all the same," he intuits, he "understands nothing of the Chinese" and "neither does the system" (p. 419).
 
  • #4
Tournesol said:
Searle responds by imagining himself to "internalize all the elements of the system" by memorizing the instructions, etc.: "all the same," he intuits, he "understands nothing of the Chinese" and "neither does the system" (p. 419).
I dispute Searle's conclusion. In actuality, Searle's mind would not be CONSCIOUS of any understanding of Chinese, but having internalised all the elements of the system the physical entity called Searle would understand Chinese nevertheless.

Searle's argument that the Chinese Room does not truly understand Chinese only works if one accepts the implicit assumption that understanding necessarily means conscious understanding, but this is an anthropocentric perspective. I do not agree that conscious understanding is necessary in order to achieve understanding.

MF
 
  • #5
moving finger said:
I dispute Searle's conclusion. In actuality, Searle's mind would not be CONSCIOUS of any understanding of Chinese, but having internalised all the elements of the system the physical entity called Searle would understand Chinese nevertheless.

Searle's argument that the Chinese Room does not truly understand Chinese only works if one accepts the implicit assumption that understanding necessarily means conscious understanding, but this is an anthropocentric perspective. I do not agree that conscious understanding is necessary in order to achieve understanding.

MF

A cure for cancer - and/or the answer to many seemingly unanswerable questions - lies waiting in the use of what are termed "genetic algorithms". Genetic algorithms are an artificial intelligence technique able to draw on the resources of multiple sources... i.e., one networks all the computing power of idle computers available and begins to process all the data in the main server of a clinic/research facility.

The extra computing power of the combined, previously idle computers is used to cross-reference all... and I mean all of the patient and disease data gathered by research in all areas, statistics in all areas and other misc. data from all areas of the clinic and research facilities...

Genetic algorithms simulate the "free associative" function of our brain but have 800% more access to 1000 percent more information (exaggeration). The result is a much broader "understanding" of the problem, unfettered by ego, competition, money concerns, bribes, wives, car problems and other human concerns that hinder the proper examination of problems and their solutions.
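For readers unfamiliar with the term, here is a minimal sketch of the standard genetic-algorithm loop - selection, crossover, mutation. The bit-string target and fitness function are toy stand-ins for a real research problem, and the networking of idle computers described above is left out:

```python
import random

# Keep a population of candidate solutions, score them with a fitness
# function, and breed the fittest via crossover and mutation.
TARGET = [1] * 20  # toy "ideal" solution: an all-ones bit string

def fitness(candidate):
    # Count positions where the candidate matches the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(candidate, rate=0.05):
    # Flip each bit with a small probability.
    return [bit ^ (random.random() < rate) for bit in candidate]

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break  # a perfect candidate has evolved
    parents = population[:10]  # selection: keep the fittest third
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

population.sort(key=fitness, reverse=True)
print("generation", generation, "best", population[0])
```

Nothing in the loop knows what the bits mean; it only climbs a score, which is part of what the questions below are probing.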

Does a genetic algorithm "understand" what it's studying any better than a human? Does a positive result from the use of the genetic algorithm (i.e., a cure for cancer... or... coming up with a strategy to best convert to an alternative to an oil-based economy)... does this positive result demonstrate a "better understanding" of a problem than the human understanding that has no ability to solve the problem?
 
  • #6
quantumcarl said:
Does a genetic algorithm "understand" what it's studying any better than a human? Does a positive result from the use of the genetic algorithm (i.e., a cure for cancer... or... coming up with a strategy to best convert to an alternative to an oil-based economy)... does this positive result demonstrate a "better understanding" of a problem than the human understanding that has no ability to solve the problem?
Interesting question.

I suggest Searle's response (forgive me my presumptiveness) would be "No, there is no way such an algorithm could in principle understand anything, unless it possessed consciousness". Anyone disagree?

My response would be: Let us see. I am prepared to believe (in principle) that such an algorithm, if it were sufficiently complex, could literally understand the problem (without being conscious), and I am also prepared to believe (again in principle) that such an algorithm could demonstrate a better understanding of the problem than any human has done or could do.

I look forward to the day this happens!

MF
 
  • #7
MF said:
I dispute Searle's conclusion. In actuality, Searle's mind would not be CONSCIOUS of any understanding of Chinese, but having internalised all the elements of the system the physical entity called Searle would understand Chinese nevertheless.

Searle's argument that the Chinese Room does not truly understand Chinese only works if one accepts the implicit assumption that understanding necessarily means conscious understanding, but this is an anthropocentric perspective. I do not agree that conscious understanding is necessary in order to achieve understanding.
I would agree with you in a sense, but only in regard to a self-referential language such as math. I would say that a computer can understand math because it takes no further information than simply the structure and rules of the language to understand it.
Languages like English, Chinese, or what have you require more information to understand than simply the language itself. The purpose of such a language is to transmit information which the words represent. Without access to the information being represented there will be no "understanding". A vast majority of words require experiential information to understand, and most of the rest are just connecting "syntactic" words. A computer would require an understanding of the information being represented in order to formulate proper coherent responses. Lacking this understanding it would only be capable of spitting out stock responses, which would be rather limited.

MF said:
My response would be : Let us see. I am prepared to believe (in principle) that such an algorythm (sic), if it were sufficiently complex, could literally understand the problem (without being conscious), and I am also prepared to believe (again in principle) that such an algorythm (sic) could demonstrate a better understanding of the problem than any human has done or could do.
It would have an understanding of the process it is using and the math involved. It doesn't necessarily possess an understanding of the fact that it is helping process information about cancer, or of what cancer is. The computer cannot even determine whether or not the process it is using is successful, except that it has successfully executed the algorithm it was asked to execute. The computer's paradigm doesn't actually extend outside of its program and network unless you can give it the means to accomplish that.
 
  • #8
As usual, for any good discussion to survive, the terminology at the centre of the discussion must be clearly and somewhat universally defined, so that there is a mutually accepted understanding of the subject matter of the discussion.

And I completely forgot to include some definitions for "understanding" and "conscious".

It is futile to continue to discuss a subject without agreed-upon definitions of the words we (mostly me, so far!) spout. So I've found this definition for consciousness, which the writers claim is impossible to define, while offering this:

Consciousness
From Wikipedia, the free encyclopedia.


Consciousness is a quality of the mind generally regarded to comprise qualities such as subjectivity, self-awareness, sentience, sapience, and the ability to perceive the relationship between oneself and one's environment. Philosophers divide consciousness into phenomenal consciousness which is experience itself and access consciousness which is the processing of the things in experience (Block 2004).

If you have a bona fide book definition of "understanding", please feel free to put it up in this thread. Thank you.
 
  • #9
So what do you think? Is it possible or likely that computers possess either 'Phenomenal Consciousness' or 'Access Consciousness', or both, in some form or another?
I'm not so sure I agree that consciousness requires a sense of self. I think this is more or less a byproduct of the sort of consciousness that humans experience.


Here's a basic dictionary definition of "understanding"...
To perceive and comprehend the nature and significance of; grasp. See Synonyms at apprehend.
To know thoroughly by close contact or long experience with: That teacher understands children.

To grasp or comprehend the meaning intended or expressed by (another): They have trouble with English, but I can understand them.
To comprehend the language, sounds, form, or symbols of.
To know and be tolerant or sympathetic toward: I can understand your point of view even though I disagree with it.
To learn indirectly, as by hearsay: I understand his departure was unexpected.
To infer: Am I to understand you are staying the night?
To accept (something) as an agreed fact: It is understood that the fee will be 50 dollars.
To supply or add (words or a meaning, for example) mentally.
 
  • #10
TheStatutoryApe said:
I would agree with you in a sense, but only in regard to a self-referential language such as math. I would say that a computer can understand math because it takes no further information than simply the structure and rules of the language to understand it.
Languages like English, Chinese, or what have you require more information to understand than simply the language itself. The purpose of such a language is to transmit information which the words represent. Without access to the information being represented there will be no "understanding".
The "information being represented" can be encoded into a database. I understand English, if you cut me off from the outside world I no longer have direct access to the information sources from the outside world, but I still continue to understand English, because the information I need to be able to understand English is encoded into my database. The only significant difference between me and a computer that understands English is the fact that I am conscious of the fact that I understand English whereas a computer need not necessarily be conscious. In what sense do you think the computer (or the CR) would not have access to the information being represented?

TheStatutoryApe said:
A vast majority of words require experiential information to understand, and most of the rest are just connecting "syntactic" words.
Experiential information is information nevertheless, and all information can be encoded into a database. Take the noun "chair". I guess you would say that it means something to you and me because we have experienced a chair - what it looks like, what it feels like, what purpose it serves in everyday life, etc etc - but the computer or CR could not experience these things and therefore would not understand the noun chair, is that it? I disagree with this logic. The "experiential information" of what a chair looks like, feels like, etc etc can all be encoded into the computer or CR as part of its database, just as the same experiential information is encoded into my brain, so that I continue to have an image of what a chair looks like even when I am cut off from all sensory experience.
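A toy illustration of what "encoding experiential information into a database" might look like - not a claim about real cognition, and every field and value below is invented:

```python
# An invented "experiential" record for the concept "chair": the sort of
# stored description MF argues could stand in for direct experience.
chair = {
    "looks_like": "four legs, a flat seat, an upright back",
    "feels_like": "rigid; holds your weight when you sit",
    "used_for": "sitting at rest or at a table",
    "typical_seat_height_cm": 45,
}

def recall(concept: dict) -> str:
    """Answer questions about the concept from stored data alone,
    with no live sensory input."""
    return "; ".join(f"{key}: {value}" for key, value in concept.items())

print(recall(chair))  # a description produced without ever "seeing" a chair
```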

TheStatutoryApe said:
A computer would require an understanding of the information being represented in order to formulate proper coherent responses. Lacking this understanding it would only be capable of spitting out stock responses, which would be rather limited.
The computer could understand in just the same way as you or I, because it could be encoded with the same experiential information as you or I. My eyes are simply a way of gathering visual information; it is quite possible in principle for the same visual information to be encoded into my brain directly, bypassing my eyes, so that I could understand what a chair looks like without ever having seen a chair. In a similar way, a computer could have access to the data which allows it to understand what a chair looks like.

TheStatutoryApe said:
It would have an understanding of the process it is using and the math involved. It doesn't necessarily possess an understanding of the fact that it is helping process information about cancer, or of what cancer is.
"doesn't necessarily" - but it COULD be programmed with that understanding. There is no reason why a computer working on a cure for cancer should not be programmed to understand what cancer is, how it affects people, why it is working on a cure, what the implications are, etc etc etc. There is in principle no limit to the amount of understanding that it could be programmed with.

TheStatutoryApe said:
The computer cannot even determine whether or not the process it is using is successful, except that it has successfully executed the algorithm it was asked to execute. The computer's paradigm doesn't actually extend outside of its program and network unless you can give it the means to accomplish that.
The means to accomplish that are (in the case of the CR) the scribbled notes passed back and forth - this is the computer's access to the outside world.

My paradigm does not extend outside of my program and network unless you give me the means to accomplish that - but if you lock me away in a room and deprive me of all sensory information, does that mean I suddenly cease to understand English? No, of course not. And the same would be true of the computer.

With respect

MF
 
  • #11
TheStatutoryApe said:
So what do you think? Is it possible or likely that computers possess either 'Phenomenal Consciousness' or 'Access Consciousness', or both, in some form or another?
With respect, the question that needs to be addressed in this thread is NOT whether computers possess any kind of consciousness, but whether consciousness is a necessary prerequisite for understanding.

MF
 
  • #12
TheStatutoryApe said:
Here's a basic dictionary definition of "understanding"...

To perceive and comprehend the nature and significance of; grasp. See Synonyms at apprehend.
To know thoroughly by close contact or long experience with: That teacher understands children.

To grasp or comprehend the meaning intended or expressed by (another): They have trouble with English, but I can understand them.
To comprehend the language, sounds, form, or symbols of.
To know and be tolerant or sympathetic toward: I can understand your point of view even though I disagree with it.
To learn indirectly, as by hearsay: I understand his departure was unexpected.
To infer: Am I to understand you are staying the night?
To accept (something) as an agreed fact: It is understood that the fee will be 50 dollars.
To supply or add (words or a meaning, for example) mentally.
I am not saying that I accept this as a correct or full definition, but let us work with it for the time being.

Note that nowhere in this definition does it specify or imply that consciousness is a prerequisite for understanding.

Now - Which part of this do you think a computer could not in principle accomplish, and why?

With Respect

MF
 
  • #13
moving finger said:
I am not saying that I accept this as a correct or full definition, but let us work with it for the time being.

Note that nowhere in this definition does it specify or imply that consciousness is a prerequisite for understanding.

Now - Which part of this do you think a computer could not in principle accomplish, and why?

With Respect

MF

We are comparing human thought processes to those of a computer here. So far we haven't specified what type of computer... for instance, say, one that is housed in sensory devices that feed the system data from its environment. So far all we've noted is that humans feed the computer data.

Here's a good definition of "understanding", in addition to what Statutory Ape has offered and to which MF has found reason to create a rebuttal in (timely) response.

Understanding
From Wikipedia, the free encyclopedia.


Understanding is a psychological state in relation to an object or person whereby one is able to think about it and use concepts to deal adequately with that object.


Examples

1. A person understands the weather if he/she is able to predict and to give an explanation of some of its features.
2. A psychiatrist understands another person if he knows his anxieties and their causes and can give him useful advice on how to minimise the anxiety.
3. A person understands a command if he/she knows who gave it, what is expected by the issuer, and whether the command is legitimate.
4. One understands a reasoning, an argument, or a language if one can consciously reproduce the information content conveyed by the message (see the Chinese room).



Is understanding definable?

It is difficult to define understanding. If we use the term concept as above, the question then arises as to what is a concept? Is it an abstract thing? Is it a brain pattern or a rule? Whatever definition is proposed, we can still ask how it is that we understand the thing that is featured in the definition: we can never satisfactorily define a concept, still less use it to explain understanding.

It is more convenient to use an operational or behavioural definition: we can say that somebody who reacts appropriately to X understands X. For example, I understand Swahili if I correctly obey commands given in that language.

This approach, however, may not provide an adequate definition. A computer can easily be programmed to react appropriately to simple commands. For most people, it would be stretching the notion of understanding to claim that, under an operational definition, such a computer understands the language.

I think we can see the bias in this definition. However, personally and briefly, my definition is a little less formal in that I believe understanding means more than being able to match symbols and concepts to physical objects and events. I think understanding requires a compassion or an empathy that can be utilized in "grasping" (thank you, apeman) any given subject.

There are several "strings" or tangents that come along with understanding a subject, culture, language etc... that can only be used to understand a subject. If the enquirer has spent time chemically reacting with a similar environment or set of sets that the subject has also experienced the state of understanding the subject is easier and much quicker to reach than the traditional method of gathering "data" on a subject in order to understand it.

For instance there was the example of understanding a chair that was mentioned, and the one thing a computer - or a computer with sensory devices housing it - does not have is a chemically comprised body wrapped in an organ called the dermis. When this chemical bag we call a body sits on a chair, we get an understanding of the chair as it applies to a human body. Not as it applies to mathematics or physics or even ergonomics... it's really about how our musculature, skeletal and cellular systems etc... make contact with the chair, and how our rate of fall is caught, and whether we feel "secure" in the chair... etc.

What I'm saying, I just noticed, is that "understanding" is relative to the conditions of the enquirer... i.e., the conditions of its physical state as a physical phenomenon on the planet (with gravity, water, sun, blah blah)...

I hope I've made some progress in describing understanding here... although it also seems that it was a process of 2 steps forward, 2 steps back.

Any further comments are appreciated.
 
  • #14
My apologies to the StatApe for practically duplicating his thread here in Philosophy. It's interesting that it's taking a turn toward defining understanding and consciousness in this forum rather than leaning heavily on the computing aspect of AI. Can you dig the Asimov dude? Right on! His trilogy that ended with ROBOTS OF DAWN is gnarly!
 
  • #15
moving finger said:
I dispute Searle's conclusion. In actuality, Searle's mind would not be CONSCIOUS of any understanding of Chinese, but having internalised all the elements of the system the physical entity called Searle would understand Chinese nevertheless.

Searle's argument that the Chinese Room does not truly understand Chinese only works if one accepts the implicit assumption that understanding necessarily means conscious understanding, but this is an anthropocentric perspective. I do not agree that conscious understanding is necessary in order to achieve understanding.

MF

The claim of Strong AI is that computers can fully implement - not just imitate or emulate - a human mind. That being the case, the "anthropocentric" attitude to understanding is appropriate - a fully-featured human mind is indeed the benchmark.

To argue that there are features of human consciousness which can reasonably be left out of AI is to argue for weak AI, and Searle is not even attempting to argue against weak AI.
 
  • #16
quantumcarl said:
Here's a good definition of "understanding", in addition to what Statutory Ape has offered and to which MF has found reason to create a rebuttal in (timely) response.
For the record, I did not rebut the previous definition offered by Statutory Ape - I said let's work with it for the time being.

quantumcarl said:
There are several "strings" or tangents that come along with understanding a subject, culture, language etc... that can only be used to understand a subject. If the enquirer has spent time chemically reacting with a similar environment or set of sets that the subject has also experienced the state of understanding the subject is easier and much quicker to reach than the traditional method of gathering "data" on a subject in order to understand it.
A computer could also (in principle) be constructed which "chemically reacts" (quaint terminology) with its environment. But the "reaction with the environment" is simply a way of gathering data (input). It is well known and accepted that I can get a better appreciation for and understanding of French by going to live and work in France, but nevertheless it is still possible for me to learn (and understand) French from a teacher and textbook in a classroom in London, cut off from direct French experience. In principle, this knowledge and understanding of French could also be programmed directly into a brain or computer, bypassing textbook and teacher.

quantumcarl said:
For instance there was the example of understanding a chair that was mentioned, and the one thing a computer - or a computer with sensory devices housing it - does not have is a chemically comprised body wrapped in an organ called the dermis.
There is no reason in principle why we should not be able to construct a computer with a "dermis substitute" through which it could acquire sensory data equivalent to our feeling/touching. But once again, as with the "learning French" example above, there is also no reason why the sensory information could not be directly programmed into the computer's database, so that it acquires the experiential data of touching/feeling a chair even though it has never sat in a chair.

quantumcarl said:
When this chemical bag we call a body sits on a chair, we get an understanding of the chair as it applies to a human body. Not as it applies to mathematics or physics or even ergonomics... it's really about how our musculature, skeletal and cellular systems etc... make contact with the chair, and how our rate of fall is caught, and whether we feel "secure" in the chair... etc.
And similarly a computer could in principle be constructed which could appreciate, know and understand how a chair feels to the computer body.

quantumcarl said:
What I'm saying, I just noticed, is that "understanding" is relative to the conditions of the enquirer... i.e., the conditions of its physical state as a physical phenomenon on the planet (with gravity, water, sun, blah blah)...
More correctly, experiential knowledge is relative to the conditions of the enquirer.

How does any of this show that a computer cannot in principle understand anything?

MF
 
  • #17
Tournesol said:
The claim of Strong AI is that computers can fully implement -- not just
imitate or emulate -- a human mind. That being the case, the "anthropocentric" attitude to undertstanding is appropriate -- a fully-featured human mind is indeed the benchmark.
With respect, the question implicit at the beginning of this thread is whether or not a computer could rightfully claim to possess understanding, NOT whether a computer can “fully implement a human mind”.

quantumcarl said:
According to Searle, however, there is no understanding of Chinese present - just a set of formal rules which, if complex enough, can mimic the appearance of genuine understanding. Searle thus concludes that this is all computers will ever be able to accomplish: mimicry which can fool us into thinking that they understand things.

I believe the subject under discussion here is therefore “whether or not a computer could ever (in principle) rightfully claim to possess understanding”, and not “whether or not a computer could ever (in principle) fully implement a human mind”. The two questions are very different.

Any guidance from quantumcarl about which question we should be debating here, before I continue?

With Respect

MF
 
  • #18
moving finger said:
With respect, the question implicit at the beginning of this thread is whether or not a computer could rightfully claim to possess understanding, NOT whether a computer can “fully implement a human mind”.



I believe the subject under discussion here is therefore “whether or not a computer could ever (in principle) rightfully claim to possess understanding”, and not “whether or not a computer could ever (in principle) fully implement a human mind”. The two questions are very different.

Any guidance from quantumcarl about which question we should be debating here, before I continue?

With Respect

MF

I would interject here and try to guide the discussion toward a better understanding of the state of "understanding" for the moment. I believe it's of value to do so before we can "carte blanche" say that understanding is a singularity so easily "understood".

There have been two categories put forth that we can relate to "understanding": 'Phenomenal Consciousness' and 'Access Consciousness'. Let's term them "phenomenal understanding" and "access understanding".

Here we can demonstrate "phenomenal understanding" when we learn French in France, with all the phenomena of the culture, the wine, coffee houses, various collections of genetic material and dialects bombarding our senses and demanding that we learn French or starve etc...

Demonstrating "access understanding" we'd simply sit in a classroom and watch videos of the Eiffel Tower, striped shirts and flauncy skirts and learn the linguistic's grammatical interpretations and the vocabulary hoops and rollercoasters. We could import some of the culture of france to the classroom in the form of bread, cheese and even a guest speaker... but, are we experiencing the phenomenon of France and the origins and conditions of its language? Not by a long shot.

I would also say that there are more than two types of understanding. I would say there is a type of understanding for every stage of a human's development as a biological unit. There is a type of understanding for every different animal in the mammalian group, and so on and so forth.

When MF suggests "we" could build a computer with a skin that senses the chair, and that "we" could build a computer that uses chemical interaction to assess its environment, that is simply MF's opinion based on a fictitious fancy that could or could not come true. There's no proof that we could do these things, and I doubt we could, judging from the lack of advances in robotics over the last 20 years. Besides, the economics involved in building such a "bot" would be ludicrous when you realize we have "ready-built" humans from which to draw "understanding" of any number of topics... including chairs.

The closest I can get to a definition of "understanding" is that it is a word that humans use to describe a type of a state of awareness... in a human.

The key here is that "understanding" is a human description that probably does not apply to any other organism or otherwise. When we try to assign a human term to a computer or even to a chimpanzee... we are simply anthropomorphizing something that is not human... and that is contradictory in nature and probably a fruitless endeavour.
 
  • #19
quantumcarl said:
I would interject here and try to guide the discussion toward a better understanding of the state of "understanding" for the moment. I believe it's of value to do so before we can "carte blanche" say that understanding is a singularity so easily "understood".
Thank you.
quantumcarl said:
There have been two categories put forth that we can relate to "understanding": 'Phenomenal Consciousness' and 'Access Consciousness'. Let's term them "phenomenal understanding" and "access understanding".
Here we can demonstrate "phenomenal understanding" when we learn French in France, with all the phenomena of the culture, the wine, coffee houses, various collections of genetic material and dialects bombarding our senses and demanding that we learn French or starve etc...
Demonstrating "access understanding", we'd simply sit in a classroom and watch videos of the Eiffel Tower, striped shirts and flouncy skirts, and learn the language's grammatical interpretations and the vocabulary hoops and rollercoasters. We could import some of the culture of France to the classroom in the form of bread, cheese and even a guest speaker... but are we experiencing the phenomenon of France and the origins and conditions of its language? Not by a long shot.
I am not saying that I agree with the logic here, but let’s go with it for the time being. Both forms of understanding described above qualify as understanding in my book. In both cases, the net result (the educated pupil) is a person who understands French. Their appreciation for French culture etc, and the quality of their understanding of French, may be slightly better in one case than the other, but in both cases there is an understanding of French.
quantumcarl said:
I would also say that there are more than two types of understanding. I would say there is a type of understanding for every stage of a human's development as a biological unit. There is a type of understanding for every different animal in the mammalian group, and so on and so forth.
Possibly.
quantumcarl said:
When MF suggests "we" could build a computer with a skin that senses the chair, and that "we" could build a computer that uses chemical interaction to assess its environment, that is simply MF's opinion based on a fictitious fancy that could or could not come true. There's no proof that we could do these things, and I doubt we could, judging from the lack of advances in robotics over the last 20 years.
I beg to differ, but do we really want to get into a long detailed description of how machines could be equipped with various forms of sensory input? Does anyone dispute that a machine could be constructed which is able to acquire and to process visual image inputs, or audio inputs? Why should it be so difficult to conceive of a machine which could acquire and process tactile inputs? If anyone thinks this is impossible in principle then, with all due respect, I suggest they do some more homework.
quantumcarl said:
Besides, the economics involved in building such a "bot" would be ludicrous when you realize we have "ready-built" humans from which to draw "understanding" of any number of topics... including chairs.
With all due respect, quantumcarl, what on Earth do the “economics” have to do with any of this? We are (I thought) discussing whether things are possible in principle, and not whether it makes economic sense to carry them out.
quantumcarl said:
The closest I can get to a definition of "understanding" is that it is a word that humans use to describe a type of a state of awareness... in a human.
The key here is that "understanding" is a human description that probably does not apply to any other organism or otherwise.
With respect, this is an extreme version of anthropocentrism. Imagine we meet an intelligent alien at some future date – are you suggesting that we would necessarily conclude the alien has no ability to understand ANYTHING simply because it is not human?
With respect, I suggest the definition of “understanding” that you offer here is totally inadequate.
(An analogy: It is like saying that “intelligence” is a “human description that probably does not apply to any other organism or otherwise” - does this mean that it is impossible for any other species or agent (including aliens) to possess intelligence?)
quantumcarl said:
When we try to assign a human term to a computer or even to a chimpanzee... we are simply anthropomorphizing something that is not human... and that is contradictory in nature and probably a fruitless endeavour.
I suggest that the problem in fact resides in your extremely anthropocentric definition of understanding. I would ask whether anyone else reading this thread agrees with your definition, which implies that only homo sapiens could ever be said to understand anything.

With respect

MF
 
  • #20
moving finger said:
With respect, the question implicit at the beginning of this thread is whether or not a computer could rightfully claim to possess understanding, NOT whether a computer can “fully implement a human mind”.
I believe the subject under discussion here is therefore “whether or not a computer could ever (in principle) rightfully claim to possess understanding”, and not “whether or not a computer could ever (in principle) fully implement a human mind”. The two questions are very different.

What is implicit is in the eye of the beholder. The thread is explicitly about the "China Room".
 
  • #21
Originally Posted by quantumcarl
The closest I can get to a definition of "understanding" is that it is a word that humans use to describe a type of a state of awareness... in a human.
The key here is that "understanding" is a human description that probably does not apply to any other organism or otherwise.

MF said:
With respect, this is an extreme version of anthropocentrism.

Indeed, it is futile to define understanding in this way, because it means that no AI can (a priori) ever understand, simply by virtue of being artificial.

However, it is trivial to define "understanding" in such a way that anything the computer fails to do is deemed unnecessary (thus guaranteeing, a priori, success rather than failure). AI is an artificial duplication of something natural: intelligence. Since human intelligence is the best example we have to go on, it should set the standard, and our definitions should be anthropocentric to that extent.
 
  • #22
Tournesol said:
What is implicit is in the eye of the beholder.
which is why I asked the originator of the thread what he/she actually intended.

MF
 
  • #23
Tournesol said:
However, it is trivial to define "understanding" in such a way that anything the computer fails to do is deemed unnecessary (thus guaranteeing, a priori, success rather than failure). AI is an artificial duplication of something natural: intelligence. Since human intelligence is the best example we have to go on, it should set the standard, and our definitions should be anthropocentric to that extent.
imho, for the definition to be acceptable, we need to define "understanding" as far as possible in non-human terms (otherwise we risk being deliberately anthropocentric).
But we can argue about an acceptable definition of understanding for a very long time. What I would ask is: Has anyone come up with what they consider to be a TEST of understanding? If we can agree on such a test, we can then apply that test to both human and machine subjects, and see which ones pass and which do not.
If the response is that it is not possible to devise a test for understanding, then I would ask why not? Is it because it is impossible to define? But then we don't really understand understanding, do we?
MF
 
  • #24
moving finger said:
Thank you.

I am not saying that I agree with the logic here, but let’s go with it for the time being. Both forms of understanding described above qualify as understanding in my book. In both cases, the net result (the educated pupil) is a person who understands French. Their appreciation for French culture etc, and the quality of their understanding of French, may be slightly better in one case than the other, but in both cases there is an understanding of French.

Yes, here I am simply demonstrating the various qualities, quantities and "depths" there are to understanding, and that there are "better" modes of understanding as compared to "worse" or "less" understanding. For instance, one understands Michelangelo's sculpture, David, when one sees a photograph of it... yet when one has walked around the real thing (or a replica of it)... one holds a better and superior understanding of the sculpture.


moving finger said:
I beg to differ, but do we really want to get into a long detailed description of how machines could be equipped with various forms of sensory input? Does anyone dispute that a machine could be constructed which is able to acquire and to process visual image inputs, or audio inputs? Why should it be so difficult to conceive of a machine which could acquire and process tactile inputs? If anyone thinks this is impossible in principle then, with all due respect, I suggest they do some more homework.

Principle and reality are very different from one another. In fact, "understanding" a topic is best done through the use of data derived from its "reality" rather than from the principles someone has theoretically constructed or imagined about that reality. What we project to be possible is not always going to happen, so it's best to stick with what we know rather than what we think could happen.


moving finger said:
With all due respect, quantumcarl, what on Earth do the “economics” have to do with any of this? We are (I thought) discussing whether things are possible in principle, and not whether it makes economic sense to carry them out.

I'll leave the practical aspect of building a replica of a human out of this... however, I believe several companies are attempting to genetically engineer humans as we speak... notwithstanding, similar companies are somewhat successfully attempting to engineer a programmed society to suit their economic needs.



moving finger said:
With respect, this is an extreme version of anthropocentrism. Imagine we meet an intelligent alien at some future date – are you suggesting that we would necessarily conclude the alien has no ability to understand ANYTHING simply because it is not human?
With respect, I suggest the definition of “understanding” that you offer here is totally inadequate.
(An analogy : It is like saying that “intelligence” is a “human description that probably does not apply to any other organism or otherwise” – does this mean that it is impossible for any other species or agent (including aliens) to possesses intelligence?

When did anyone equate intelligence with understanding? I think the dilemma is that humans have a (large) variety of languages that describe their way of thinking. This array of linguistic descriptions is malleable in that it can be applied to systems other than the biological human. I think the word "understanding" has a deep root in the human consciousness that humans really cannot adequately transfer to other biological or strictly mechanical systems. (Especially when you consider that an encyclopedia has a problem with defining "understanding".)

moving finger said:
I suggest that the problem in fact resides in your extremely anthropocentric definition of understanding. I would ask whether anyone else reading this thread agrees with your definition, which implies that only homo sapiens could ever be said to understand anything.

Consider another analogy in the form of a specified term such as " nourishment". Can we effectively call electricity the nourishment of a computer? Or is it less confusing to use the term "power" or "powersource" or even "electricity"... to be more succinct.

There is a confusion that ensues from the undisciplined use of language that's "pre-computer", and it doesn't have to happen. We have the ability to come up with specific terms that apply to specific conditions to avoid this confusion. Why don't we use that ability?

Cool discussion dude!
 
  • #25
quantumcarl said:
moving finger said:
I beg to differ, but do we really want to get into a long detailed description of how machines could be equipped with various forms of sensory input? Does anyone dispute that a machine could be constructed which is able to acquire and to process visual image inputs, or audio inputs? Why should it be so difficult to conceive of a machine which could acquire and process tactile inputs? If anyone thinks this is impossible in principle then, with all due respect, I suggest they do some more homework.
Principle and reality are very different from one another. In fact, "understanding" a topic is best done through the use of data derived from its "reality" rather than from the principles someone has theoretically constructed or imagined about that reality. What we project to be possible is not always going to happen, so it's best to stick with what we know rather than what we think could happen.
Firstly, I do not agree that we should be talking only about what has actually been achieved today, and not be talking about what could be achieved. If the Wright brothers only considered what had already been achieved then we would never have developed flying machines.
Besides this, we already know that machines can be equipped with various sensors that allow them to take inputs based on optical (visual), auditory, and tactile stimuli (amongst others), and to process these data. This is reality, not just principle. Can you explain exactly what you think it is that cannot yet be done in reality?
quantumcarl said:
I'll leave the practical aspect of building a replica of a human out of this... however, I believe several companies are attempting to genetically engineer humans as we speak... notwithstanding, similar companies are somewhat successfully attempting to engineer a programmed society to suit their economic needs.
I was not aware that we were talking of making a replica human. I thought we were only talking of making a machine which could “sit” on a chair and be able to process the sensory input data associated with sitting (i.e., feel what it is like to sit). This does not require a complete replica human.
quantumcarl said:
When did anyone equate intelligence with understanding?
I offered the example of intelligence as an “analogy”. I did not imply intelligence was equated with understanding (though in fact it may be linked).
quantumcarl said:
There is a confusion that ensues from the undisciplined use of language that's "pre-computer", and it doesn't have to happen. We have the ability to come up with specific terms that apply to specific conditions to avoid this confusion. Why don't we use that ability?
Agreed. So let’s define understanding in non-anthropocentric terms.


MF
 
  • #26
moving finger said:
imho, for the definition to be acceptable, we need to define "understanding" as far as possible in non-human terms (otherwise we risk being deliberately anthropocentric).

I have given an argument to the effect that it should be defined anthropocentrically (up to a point - not to QC's extent). Can you refute the argument?

What I would ask is: Has anyone come up with what they consider to be a TEST of understanding?

How can you, in the absence of a definition?
 
  • #27
Tournesol said:
I have given an argument to the effect that it should be defined anthropocentrically (up to a point - not to QC's extent). Can you refute the argument?

If we decide to designate the ability of a computer to compute as an act that is characteristically the same as a human's ability to understand, then the term "understanding" would apply to a rock which, relatively speaking, responds appropriately to the stimulus in its environment and would therefore, by this definition, "understand" its condition at any given time.

When a frying pan heats up because it's on a lit burner, does that mean it understands that it is being heated and, as a result (or not), it turns red and begins to fry?

I'm sure there are a billion examples that provide the evidence I'm looking for, where it is plain to see that it can be inconsistent, unconstructive and very time-consuming to use terminology outside the parameters set by its original intention of use... i.e., the word understanding has always applied to a brain state in a human. When the word is used to describe another state in another system, that is projected anthropomorphism. It is the application of a term that describes a human attribute and nothing more.

I believe through exploring the meaning of "understanding" we are also exploring Searle's China Room and the implications his hypothetical, experimental model would reveal.
 
  • #28
MF said:
The "information being represented" can be encoded into a database. I understand English, if you cut me off from the outside world I no longer have direct access to the information sources from the outside world, but I still continue to understand English, because the information I need to be able to understand English is encoded into my database. The only significant difference between me and a computer that understands English is the fact that I am conscious of the fact that I understand English whereas a computer need not necessarily be conscious. In what sense do you think the computer (or the CR) would not have access to the information being represented?
-----------------------------------
Experiencial information is information nevertheless, and all information can be encoded into a database. Take the noun "chair". I guess you would say that means something to you and me because we have experienced a chair, what it looks like, what it feels like, what purpose it serves in everyday life, etc etc, but the computer or CR could not experience these things therefore would not understand the noun chair, is that it? I disagree with this logic. The "experiencial information" of what a chair looks like, feels like, etc etc can all be encoded into the computer or CR as part of its database, just as the same experiencal information is encoded into my brain, so that I continue to have an image of what a chair looks like etc even when I am cut off from all sensory experience.
I agree with most of this. The point of my statements that you were responding to here was that experiential information is needed. Most computers currently, though, have none, and those that do are limited as far as I know.
There are two points that I am not sure I agree with. I don't think that "consciousness" is separate from understanding. I don't see consciousness as being an anthropocentric term either. Perhaps our definitions of consciousness are just different.
The other point: you offer the idea of creating a program with a database of experiential information. This may be possible, but I believe there would be issues concerning the computer's ability to understand the experiential information. The information would have to be meaningful to the computer. The computer perceives the world from a particular vantage more or less unique to its situation, and the same with humans. Much of the information of an experiential nature that a human might give to a computer may not have much value or coherence to a computer. There is also a matter of context. Experiential information gains the majority of its meaning through context ('memory' is probably a good word for this, but I think 'context' is less anthropocentric). If possible, it would be quite an undertaking to devise a database of the nature you propose. I'm thinking it would be far easier to give the computer eyes and ears with which to develop its own database. After one is developed it can be copied.
MF said:
The computer could understand in just the same way as you or I, because it could be encoded with the same experiential information as you or I. My eyes are simply a way of gathering visual information; it is quite possible in principle for the same visual information to be encoded into my brain directly, bypassing my eyes, so that I could understand what a chair looks like without ever having seen a chair. In a similar way, a computer could have access to the data which allows it to understand what a chair looks like.
There isn't much difference. You're just substituting eyes for some other manner of gathering information. Unless you mean a single static imprint embedded in your brain, in which case we have the problem with lack of context that I mentioned earlier. You could say that you can embed the context as well, but it must be context that is meaningful to you, and just how much context do you think is necessary for a decent level of understanding? In any event, it's probably easier to just use your eyes, wouldn't you agree?
MF said:
"doesn't necessarily" - but it COULD be programmed with that understanding. There is no reason why a computer working on a cure for cancer should not be programmed to understand what cancer is, how it affects people, why it is working on a cure, what the implications are, etc etc etc. There is in principle no limit to the amount of understanding that it could be programmed with.
I was referring solely to the example of the computer using an algorithm to process information and yield data for cancer research. In the example it was stated that such a computer possibly understands the situation better than the scientists, and I was simply pointing out that it doesn't necessarily understand anything about the situation in performing the described function. True, perhaps it is possible to imbue the computer with understanding by giving it the right program and information. But the algorithm's yielding results doesn't imply any sort of understanding.
MF said:
The means to accomplish that are (in the case of the CR) the scribbled notes passed back and forth - this is the computer's access to the outside world.
This doesn't necessarily give it access to the outside world. To it, the messages appearing could be akin to God making inscribed tablets appear out of nothingness.
A four-dimensional (spatial dimensions, not including a temporal dimension) entity could pierce this plane and leave us a message, then retrieve any response we make, but this does not give us access to the fourth dimension. The entity could even go so far as to give us "experiential" information about this dimension that we could use to make a model that will give us some idea of what this dimension is like, but without being given or creating the proper tools to pierce that dimension we still have no actual access to it.
Note that I am not disputing any level of understanding, just illustrating a computer's lack of access without proper tools at its disposal. I would, though, argue that this lack of access can easily result in difficulty in regards to understanding.
MF said:
My paradigm does not extend outside of my program and network unless you give me the means to accomplish that - but if you lock me away in a room and deprive me of all sensory information, does that mean I suddenly cease to understand English? No, of course not. And the same would be true of the computer.
I am only arguing regarding something lacking prior understanding. You already know English and have developed your understanding. If a four-dimensional entity became trapped in a three-dimensional world, it would not lack any understanding of the four-dimensional world, because it already possesses experiential knowledge.
MF said:
With respect, the question that needs to be addressed in this thread is NOT whether computers possess any kind of consciousness, but whether consciousness is a necessary prerequisite for understanding.
As stated above I think our definitions of consciousness may differ. I do not think something can "understand" without being "conscious" but I do not think it is impossible for a computer to be "conscious" by my own definitions for these words.
Edited. Change of position. See later post.
MF said:
I am not saying that I accept this as a correct or full definition, but let us work with it for the time being.
Note that nowhere in this definition does it specify or imply that consciousness is a prerequisite for understanding.
Now - Which part of this do you think a computer could not in principle accomplish, and why?
I do not think it unaccomplishable. We just don't seem to share the same concept of how this might be accomplished.
 
  • #29
quantumcarl said:
If we decide to designate the ability of a computer to compute as an act that is characteristically the same as a human's ability to understand, then the term "understanding" would apply to a rock which, relatively speaking, responds appropriately to the stimulus in its environment and would therefore, by this definition, "understand" its condition at any given time.
When a frying pan heats up because it's on a lit burner does that mean it understands that it is being heated and, as a result, or not, it turns red and begins to fry?
I'm sure there are a billion examples that provide the evidence I'm looking for, where it is plain to see that it can be inconsistent, unconstructive and very time-consuming to use terminology outside the parameters set by its original intended use... i.e., the word understanding has always applied to a brain state in a human. When the word is used to describe another state in another system, that is projected anthropomorphism. It is the application of a term that describes a human attribute and nothing more.
I believe through exploring the meaning of "understanding" we are also exploring Searle's China Room and the implications his hypothetical, experimental model would reveal.
I'd have to agree with Tournesol. Words and their usage change to be more appropriate for the emerging paradigm. If we discover or postulate a process which we determine to parallel what we refer to as "understanding" then it is very possible for us to change our lexicon to include such a process under the definition of "understanding". It may be a different "flavor" of the same process (such as that of a sentient alien race) or perhaps different in a matter of complexity (such as a chimp) but if we agree that this process shares a significant enough number of defining characteristics we can categorize it under "understanding".

So the question here is whether or not a computer's processes can, or could be capable of, paralleling what we call "understanding". No need necessarily for a new word. If you don't think "understanding" is appropriate, is there a term that you would like to suggest? Perhaps you could explain for us how what a computer does is better defined by this other word, and what the characteristics are that separate them. I apologize if you already have and I missed it.
 
  • #30
TheStatutoryApe said:
As stated above I think our definitions of consciousness may differ. I do not think something can "understand" without being "conscious" but I do not think it is impossible for a computer to be "conscious" by my own definitions for these words.
I think that I have to go back on this now that I have thought about it.
I don't think "consciousness" is necessary to "understanding", and considering certain possibilities I can't even say that I think "understanding" is necessary to "consciousness".
I... all of us, really, should explain what our definitions of these words are, or at least where our definitions diverge from the common definition.

Understanding: The ability to process information, learn from it, and produce coherent output regarding it.

Consciousness: An autonomous, complex, and dynamic information-processing matrix with the capacity for decision making.

These probably aren't the best but they're about as good as I can come up with at the moment. It seems to me that my definition of "consciousness" is probably lacking an element but I can't figure out exactly what.
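For what it's worth, the definition of "understanding" offered above (process information, learn from it, produce coherent output) is concrete enough to sketch in code. The toy below is purely illustrative, with invented names, and is not anyone's proposal in this thread; whether anything this simple deserves the word "understanding" is exactly what is in dispute.

```python
# A purely illustrative toy (invented names): it processes information,
# "learns" from it, and produces output regarding it, per the definition
# of "understanding" given above.

from collections import Counter, defaultdict

class ToyUnderstander:
    def __init__(self):
        # question -> frequency count of answers seen so far
        self.memory = defaultdict(Counter)

    def learn(self, question, answer):
        # Process information and learn from it.
        self.memory[question][answer] += 1

    def respond(self, question):
        # Produce coherent output regarding it: the most common answer seen.
        answers = self.memory.get(question)
        if not answers:
            return "no basis for an answer"
        return answers.most_common(1)[0][0]

t = ToyUnderstander()
t.learn("colour of sky", "blue")
t.learn("colour of sky", "blue")
t.learn("colour of sky", "grey")
print(t.respond("colour of sky"))   # -> blue
print(t.respond("shape of earth"))  # -> no basis for an answer
```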
 
  • #31
TheStatutoryApe said:
I'd have to agree with Tournesol. Words and their usage change to be more appropriate for the emerging paradigm. If we discover or postulate a process which we determine to parallel what we refer to as "understanding" then it is very possible for us to change our lexicon to include such a process under the definition of "understanding". It may be a different "flavor" of the same process (such as that of a sentient alien race) or perhaps different in a matter of complexity (such as a chimp) but if we agree that this process shares a significant enough number of defining characteristics we can categorize it under "understanding".
So the question here is whether or not a computer's processes can, or could be capable of, paralleling what we call "understanding". No need necessarily for a new word. If you don't think "understanding" is appropriate, is there a term that you would like to suggest? Perhaps you could explain for us how what a computer does is better defined by this other word, and what the characteristics are that separate them. I apologize if you already have and I missed it.

When a computer is unable to process a question or problem, it is unable to compute the data. It is not unable to "understand" the data. It is unable to find, parse or compute the data.

As I've already emphasized, there is an unnecessary confusion and waste of time that ensues when humans are so lazy as to assign words that describe a strictly human function to non-human systems. It is anthropomorphism at its worst.

The computer cannot "parse" the components of the question/problem because it does not have the necessary data, or is unable to "decode" the information that is available and that it would use to compute in its function of possibly solving the problem.

This description of digital calculation (whether yielding results or not) is, in my limited understanding of computer sciences, a more concise way to define the function of computing.
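A concrete instance of this distinction, using Python's standard json module (the malformed input is invented for illustration): when the program fails, the failure is fully describable as an inability to parse, with no need to invoke "understanding" at all.

```python
# The malformed input below is invented for illustration. The failure mode
# is precisely an inability to parse the data; nothing about the program's
# "understanding" needs to be mentioned to describe it completely.

import json

raw = '{"question": "what is cancer"'  # malformed: missing closing brace

try:
    data = json.loads(raw)
except json.JSONDecodeError as err:
    print(f"cannot parse the data: {err}")
```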

Understanding is a word that has been used to describe a function that has developed in humans, one that has to do with experiential information being stored chemically and being readily retrievable in response to internal or external stimulus.

There are difficulties when we get lazy or poetic with our language. It's fine in prose or poetry, but in a scientific examination of a premise or problem, terms and terminology must be precise and describe what they are assigned to with great accuracy.

For instance, I could wax on about how a rock is the only thing that can understand what it's like to be a rock, so we might as well forget trying to understand what it's like to be a rock.

Or... David Bowie is the only person in the world who understands what it's like to be David Bowie, and no other person, computer, rock or rock star could ever come close to this understanding.

Logically, one would ask: "A rock understands what being a rock is like?" And the writer of prose or poetry could legitimately say, "That's right."

This type of anthropomorphizing of the rock, assigning it an "understanding" and a "self", is exactly what I'm emphasizing with regard to the disciplined and categorical use of specific terminologies.

Thanks for all the stimulus!
 
  • #32
Please forgive any illegibility or lack of proper spelling and/or grammatical arrangement in my posts. For some reason I have no access to the "edit" function that is available to most of you on PF.

Please feel free to mentally edit my posts for me as you read them... if you do.

Muchas gracias hermanos and hermanas!
 
  • #33
The China Room thought experiment is a perfect example of the power of misdirection of attention. All of you seem to have accepted the idea that "understanding" is the central issue of AI when it is not. It is, instead, the ability to generate "explanations" which is the central problem of achieving AI. The ability to generate an explanation is the primary evidence of what one usually refers to as understanding. :wink:
China Room hypothesis said:
According to Searle, however, there is no understanding of Chinese present - just a set of formal rules which, if complex enough, can mimic the appearance of genuine understanding. Searle thus concludes that this is all computers will ever be able to accomplish: mimicry which can fool us into thinking that they understand things.
It should be clear to all that the issue of "understanding" is a subjective judgment. People often think they understand something which they, very realistically, do not understand at all. Clearly, it is our opinion of another's understanding which is the critical issue here. Thus it is that the difficulty of fooling us into thinking that they understand things is indeed the issue of real significance. One should ask oneself what kind of information is used to uncover the true state of affairs. How do you convince me that you understand something? In the same vein, but harder for some to comprehend, is the question: how do I convince myself that I understand something? :rolleyes:
I hold that the requirement is that one can present a statement of expectations which are consistent with other information related to the thing supposedly understood. The judgment of the quality of that "understanding" is related to the extent to which that statement of expectations remains consistent with other information related to the thing supposedly understood as the information on the subject increases. Read that sentence carefully as it expresses the fundamental crux of AI. The whole thing is the creation of rational expectations consistent with what is known. Do that and AI is a solved problem. If you can do it, decision making machines can do it. I sincerely believe that it is only a matter of time before a decent mechanical trickster will come to exist. :smile:
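One toy rendering of this, my own gloss rather than anything proposed in the thread: form a "statement of expectations" from the information so far, then judge the supposed understanding by whether those expectations remain consistent as information on the subject increases. All numbers are invented.

```python
# My own gloss, with invented numbers: a "statement of expectations"
# fitted to the known information, judged by whether it stays consistent
# as information on the subject increases.

def expectation(known):
    # The statement of expectations: assume a constant step between values.
    step = (known[-1] - known[0]) / (len(known) - 1)
    return lambda n: known[0] + step * n

known = [2, 4, 6, 8]        # the information we already have
expect = expectation(known)

new_info = [10, 12, 15]     # the information on the subject increases
for i, actual in enumerate(new_info, start=len(known)):
    predicted = expect(i)
    consistent = abs(predicted - actual) < 1e-9
    print(f"expected {predicted}, saw {actual}, consistent: {consistent}")
# The final point breaks the expectation, which is the cue to revise the
# supposed "understanding".
```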
Have fun -- Dick
Knowledge is Power
and the most common abuse of that power is to use it to hide stupidity
 
  • #34
Doctordick said:
The China Room thought experiment is a perfect example of the power of misdirection of attention. All of you seem to have accepted the idea that "understanding" is the central issue of AI when it is not. It is, instead, the ability to generate "explanations" which is the central problem of achieving AI. The ability to generate an explanation is the primary evidence of what one usually refers to as understanding. :wink:

In this thread we've made the definition of "understanding" the central issue of John Searle's China Room because, as is glaringly evident in the quote you have repeated from the hypothesis... i.e.:

China Room hypothesis said:
According to Searle, however, there is no understanding of Chinese present - just a set of formal rules which, if complex enough, can mimic the appearance of genuine understanding. Searle thus concludes that this is all computers will ever be able to accomplish: mimicry which can fool us into thinking that they understand things.


...the word "understanding" is used three times as though everyone knows what "to understand" means... and as though it applies to a computer as much as to the originator of the word, a human.



Doctordick said:
It should be clear to all that the issue of "understanding" is a subjective judgment.

That point has already been made in this thread in that "understanding is relative" to the person doing the understanding. Thank you for reaffirming it.

Doctordick said:
Clearly, it is our opinion of another's understanding which is the critical issue here.

Thank you for repeating that understanding is a relative function.

Doctordick said:
Thus it is that the difficulty of fooling us into thinking that they understand things is indeed the issue of real significance.

Thus John Searle offers the China Room Hypothesis to stimulate discussion about understanding.

Doctordick said:
how do I convince myself that I understand something?
You are either confused and don't possess a great deal of information, experience or empathy about a subject, or you do possess a great deal of information, experience or empathy about a subject.

Doctordick said:
I hold that the requirement is that one can present a statement of expectations which are consistent with other information related to the thing supposedly understood.

What happened to being able to "explain" something?

Doctordick said:
The judgment of the quality of that "understanding" is related to the extent to which that statement of expectations remains consistent with other information related to the thing supposedly understood as the information on the subject increases. Read that sentence carefully

If I read that "sentence" carefully, it still doesn't make sense. Please re-phrase what you're trying to say here if you'd like a return comment.

Doctordick said:
The whole thing is the creation of rational expectations consistent with what is known. Do that and AI is a solved problem.

There's still the problem of figuring out what you've written here. Please re-phrase.
 
  • #35
My apologies, I’ve been away a couple of days and am behind on responding to some posts.
moving finger said:
imho, for the definition to be acceptable, we need to define "understanding" as far as possible in non-human terms (otherwise we risk being deliberately anthropocentric).
tournesol said:
I have given an argument to the effect that it should be defined anthropocentrically (up to a point -- not to QC's extent). Can you refute the argument?
tournesol said:
However, it is trivial to define "understanding" in such a way that anything the computer fails to do is deemed unnecessary (thus guaranteeing, a priori, success rather than failure). AI is an artificial duplication of something natural: intelligence. Since human intelligence is the best example we have to go on, it should set the standard, and our definitions should be anthropocentric to that extent.
With respect, this is not a rigorous argument. There is nothing wrong with using human intelligence to “set the standard” (in the form of a benchmark against which other candidates can be measured), but the definition nevertheless needs to be expressed as far as possible in non-human terms if we are to avoid anthropocentrism (otherwise we are in danger of reducing the definition to something like “if you are human you are intelligent; if not, you are not; by definition”).
moving finger said:
What I would ask is : Has anyone come up with what they consider to be a TEST of understanding?
tournesol said:
How can you , in the absence of a definition ?
I am not suggesting that one can. A test for “X” assumes an implicit definition of “X”.
quantumcarl said:
If we decide to designate the ability of a computer to compute as an act that is characteristically the same as a human's ability to understand, then the term "understanding" would apply to a rock which, relatively speaking, responds appropriately to the stimulus in its environment and would therefore, by this definition, "understand" its condition at any given time.
I did not suggest that “the ability to compute” is equivalent to “the ability to understand”. Apart from this, I don’t see what this has to do with a static rock.
quantumcarl said:
When a frying pan heats up because it's on a lit burner does that mean it understands that it is being heated and, as a result, or not, it turns red and begins to fry?
Are you suggesting that a frying pan “understands”? On what basis do you claim this?
quantumcarl said:
I'm sure there are a billion examples that provide the evidence I'm looking for, where it is plain to see that it can be inconsistent, unconstructive and very time-consuming to use terminology outside the parameters set by its original intended use... i.e., the word understanding has always applied to a brain state in a human.
Which is exactly why we need to move to a definition of such terms in objective, non-human terms, otherwise we risk the definitions continuing to be anthropocentric.
The Statutory Ape said:
The point of my statements that you were responding to here was that experiential information is needed. Most computers currently have none, though, and those that do are limited, as far as I know.
But we are not discussing “most computers”. We are discussing “what is in principle possible”. I am not suggesting that any present-day computer “understands” anything. What I am suggesting is that there is no reason in principle why a machine (provided it is constructed in the right way) should not possess understanding.
The Statutory Ape said:
There are two points that I am not sure I agree with. I don't think that "consciousness" is separate from understanding. I don't see consciousness as being an anthropocentric term either. Perhaps our definitions of consciousness are just different.
I see no evidence (apart from anthropocentric evidence) that consciousness is necessarily required for understanding.
The Statutory Ape said:
The other point: you offer the idea of creating a program with a database of experiential information. This may be possible, but I believe there would be issues concerning the computer's ability to understand the experiential information. The information would have to be meaningful to the computer. The computer perceives the world from a particular vantage more or less unique to its situation, and the same goes for humans. Much of the information a human might give to a computer of an experiential nature may not have much value or coherence to a computer.
There is no reason in principle why the experiential information must be anthropocentric in nature (unless we choose to make it so). The approach to “experience” must be the same as the approach to “understanding” – we must be careful to avoid an anthropocentric position.
The Statutory Ape said:
There is also a matter of context. Experiential information gains the majority of its meaning through context ('memory' is probably a good word for this, but I think 'context' is less anthropocentric). Even if possible, it would be quite an undertaking to devise a database of the nature you propose.
“Quite an undertaking” is not the same as “impossible”.
The Statutory Ape said:
I'm thinking it would be far easier to give the computer eyes and ears with which to develop its own database. After one is developed, it can be copied.
Agreed one method may be easier than the other. But both means are possible.
The Statutory Ape said:
In any event it's probably easier to just use your eyes wouldn't you agree?
Of course it is “easier” to use one’s eyes, because they are already there and function in this way. But once again we are discussing matters of principle and not “what is easier”.
The Statutory Ape said:
True perhaps it is possible to imbue the computer with understanding by giving it the right program and information.
Thank you. It seems we agree that a computer can (in principle) possess understanding.
The Statutory Ape said:
This doesn't necessarily give it access to the outside world. To it the messages appearing could be akin to god making inscribed tablets appear out of nothingness.
And the same could be true of humans. How do you “know” that you are “seeing reality”? You may be living in a vat, and all of your supposed experiential information may be supplied to you directly by a “god”. But this proves nothing (except that we necessarily make assumptions in our interpretation of experiential information).
The Statutory Ape said:
I do not think it unaccomplishable.
Thank you, we agree.
The Statutory Ape said:
We just don't seem to share the same concept of how this might be accomplished.
Strange, because I don’t think I have specified how this might be accomplished.
With respect
MF
 
