Can computers understand? Can understanding be simulated by computers?

  • Thread starter quantumcarl
  • Tags
    China
In summary, the conversation discusses the concept of the "Chinese Room" thought experiment and its implications for human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #141
quantumcarl said:
Tisthammerw said:
Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.


I will still enter the contaminating effect of the pictorial nature of the Chinese language as a potential error in the experiment. The very fact that "subliminal influence" is a real influence in the bio-organic (human etc.) learning process suggests that what remains of the "pictorial nature" of Chinese calligraphy in modern Chinese calligraphy also presents a contamination to the experiment.

That's still a little too speculative for me. If we ran some actual experiments showing people picking up the Chinese language merely by reading the text, then you might have something.


Tisthammerw said:
And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what’s in dispute is not whether humans can learn, but whether computers can.

Yes... I see. Computers are programmed. They don't really have a choice if they are programmed or not. Humans learn. Some have the motivation to do so... some do not. There are many, many factors behind the function of learning. I realize that at some fictitious point a computer might have the power to gather its own data and program itself...

And that's kind of what the story of program X talks about. Let program X stand for any program (with all the learning algorithms you could ever want) that if run would produce understanding. Yet when the program is run, there is still no understanding (at least, not as how I've defined the term).
 
  • #142
Tisthammerw said:
That's still a little too speculative for me. If we ran some actual experiments showing people picking up the Chinese language merely by reading the text, then you might have something.

OK. I agree. And if we can rule out my speculation about the subconscious understanding the man in the CR is experiencing... we can rule out the speculation that involves undeveloped and imagined "bots" or "programs" (which do not exist, at present) that specifically simulate human understanding.


Tisthammerw said:
And that's kind of what the story of program X talks about. Let program X stand for any program (with all the learning algorithms you could ever want) that if run would produce understanding. Yet when the program is run, there is still no understanding (at least, not as how I've defined the term).

Yes, Program X would be a collection of 6 billion mail rooms sending bits (of paper) to wherever the zip code has determined the bits go. The only understanding that would develop out of this program is the understanding it helped to foster in a bio-organic system such as a human's... and a conscious one at that!
 
  • #143
moving finger said:
To make some inroads into “understanding what is going on in understanding” I suggest that we need to distinguish between different forms or qualities of understanding. I suggest that we can categorise examples of understanding as either passive or creative.

Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other. In the case of our example of Pythagoras’s theorem, it would be perfectly possible for an agent to “learn by rote” the detailed explanation of Pythagoras’s theorem, and therefore to “fool” an interrogator into thinking that it understands when in fact all it is doing is repeating (regurgitating) pre-existing information. To passively understand (or to simulate understanding of) subject X, an agent needs only a static database and static program (in other words, an agent needs neither a dynamic database nor a dynamic program in order to exhibit passive understanding).
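For illustration, here is a toy sketch (in Python, with invented question-and-answer pairs) of such a "passive understander": a static database plus a static program, nothing more.

Code:
# A "passive understander": a static database and a static lookup program.
# The canned answers below are invented purely for illustration.
STATIC_DB = {
    "what does pythagoras's theorem say?":
        "In a right triangle, the square on the hypotenuse equals "
        "the sum of the squares on the other two sides.",
    "can you prove it?":
        "Four copies of the triangle tile a square of side a+b, "
        "leaving a^2 + b^2 = c^2.",
}

def respond(question: str) -> str:
    # Pure rote regurgitation: no interpretation, no learning.
    return STATIC_DB.get(question.lower().strip(), "I cannot answer that.")

print(respond("What does Pythagoras's theorem say?"))

An interrogator whose questions all happen to be in the database cannot distinguish this rote lookup from genuine passive understanding.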

Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition therefore the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: Would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?
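By contrast, here is a toy sketch (Python again, with invented facts) of the dynamic database that creative understanding minimally requires: the agent stores facts learned during the dialogue and derives a conclusion it was never explicitly told.

Code:
# A toy "dynamic database": facts accumulate at run time, and the agent
# derives conclusions it was never explicitly given (here, by transitivity).
class Agent:
    def __init__(self):
        self.larger_than = {}            # fact base, grows during the dialogue

    def learn(self, bigger: str, smaller: str) -> None:
        self.larger_than.setdefault(bigger, set()).add(smaller)

    def is_larger(self, a: str, b: str) -> bool:
        seen, stack = set(), [a]         # depth-first search of learned facts
        while stack:
            x = stack.pop()
            if b in self.larger_than.get(x, set()):
                return True
            seen.add(x)
            stack.extend(self.larger_than.get(x, set()) - seen)
        return False

agent = Agent()
agent.learn("whale", "dog")
agent.learn("dog", "ant")
print(agent.is_larger("whale", "ant"))   # True -- never stated directly

Whether such derivation amounts to new understanding, or is just more sophisticated symbol-shuffling, is of course exactly the question at issue; the sketch shows only the static/dynamic distinction.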

I suggest that the classic Turing test is usually aimed at testing only passive, and not creative, understanding – the Turing test interrogator asks the agent questions in order to test the agent’s existing understanding of already “known” subjects. I suggest that the CR as defined by Searle is likewise aimed at testing only passive, and not creative, understanding (in his description of the CR, Searle makes no reference to any ability of the room to learn any new information, knowledge or understanding).

But we have seen that it is possible, at least in principle, for an agent to “simulate” passive understanding, and thereby to “fool” both the Turing test interrogator and the CR interrogator.

It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding. It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding. The interrogator must allow the agent to demonstrate that it is able to develop new understanding through its own processing of new information and knowledge, putting this new information and knowledge into correct context and association with pre-existing information and knowledge, and not simply “learn new knowledge by rote”.

IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

Constructive criticism please?


MF
On the surface it makes plenty of sense. I myself at the beginning of this discussion may have attributed some level of understanding to a pocket calculator which would fit with the description here of "passive understanding". The problem I see with this is that if I were to write out an equation on a piece of paper we could similarly refer to that piece of paper along with the equation as possessing "passive understanding".
Of course a calculator works off of an active process, so there is a difference at least in this, though I'm not sure if I can bring myself, at least not any longer, to think this makes that much of a difference when it comes to whether or not it possesses any sort of understanding. An abacus or a slide rule present an active process but I still wouldn't consider them to possess any more understanding for it. I think that the idea I presented previously that such devices only "reflect the understanding of the designer" is probably much more logical than to state that they actually possess any kind of understanding in and of themselves.
I'm still not quite settled on it.
Actually I'm not even sure exactly how a calculator's program works. If I did I might still attribute some sort of understanding to it.
 
  • #144
Tisthammerw said:
Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.
HA! NOW YOU KNOW HOW IT FEELS!
Tisthammerw, you are just being very silly here. I NEVER claimed that your definitions were wrong, simply that I did not agree with them. You are entitled to your opinion, and I to mine.
Doesn't bother me that you are being silly. Does it bother you?
Will reply to rest of your post later.
MF
 
  • #145
Tisthammerw said:
I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?
Yes, and it follows from this that the statement “understanding requires consciousness” is synthetic (because, as you have agreed, it is NOT clear that all possible forms of understanding DO require consciousness) Do you understand this?

moving finger said:
Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
That’s what I’m asking you.
With respect, this is NOT what you are asking me. You are asking me whether the computer you have in mind “perceives”. I don’t know what computer you are referring to. This is YOUR example of a computer, not mine. How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
A computer which acquires, interprets, selects, and organises (sensory) information is perceiving in my definition of the word perceiving.

Tisthammerw said:
My definition is found in Merriam-Webster’s dictionary remember?
Do you want to play that silly “I use the X dictionary so I must be right” game again?

Tisthammerw said:
You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.
You are taking an anthropocentric position again. A human agent requires consciousness in order to be able to report that it perceives anything. One cannot conclude from this that all possible agents require consciousness in order to be able to perceive.

The same applies to understanding. A human agent requires consciousness in order to be able to report that it understands anything. One cannot conclude from this that all possible agents require consciousness in order to be able to understand.

MF
 
  • #146
Tisthammerw said:
At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.
To be correct, we have established that Bob’s consciousness does not understand, and that Bob’s consciousness is a “component of program X”; it is not “running program X”.

Tisthammerw said:
Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese?
No, I never said that the “agent that understands” is necessarily conscious. My definition of understanding does not require consciousness, remember?

Tisthammerw said:
Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.
If you are asking “can a non-conscious agent possess TH-Understanding?” then I agree, it cannot. That is clear by definition of TH-Understanding. Is that what you have shown?

That’s pretty impressive I suppose. “A non-conscious agent cannot possess TH-Understanding, because TH-Understanding is defined as requiring consciousness”. Quite a philosophical insight! :smile:

Can we move on now to discussing something that is NOT tautological?

MF
 
  • #147
moving finger said:
The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other.
Tisthammerw said:
That was kind of Searle's point. Just because an entity (as a computer program) can simulate understanding doesn't mean the computer actually has it.
You’ve got it. This is why I am looking at ways of possibly improving the basic Turing test, and the CR experiment, to enable us to distinguish between an agent which “understands” and one which is “simulating understanding”.
moving finger said:
Would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?
Tisthammerw said:
Variants of the Chinese room include learning a person's name etc. by the man in the room having many sheets of paper to store data and having the appropriate rules (think Turing machine).
But “learning a person’s name” is NOT necessarily new understanding, it is simply memorising. This is exactly why I say later in my post:
moving finger said:
It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding.
Tisthammerw said:
The Chinese room can be modified as such, and yet there is still no (TH) understanding.
There never will be TH-Understanding as long as there is no consciousness. If you insist that the agent must possess TH-Understanding then it must by definition be conscious. I am not interested in getting into tautological time-wasting again :rolleyes:
MF
 
  • #148
TheStatutoryApe said:
On the surface it makes plenty of sense.
Thank you

TheStatutoryApe said:
I myself at the beginning of this discussion may have attributed some level of understanding to a pocket calculator which would fit with the description here of "passive understanding". The problem I see with this is that if I were to write out an equation on a piece of paper we could similarly refer to that piece of paper along with the equation as possessing "passive understanding".
Hmmmm. But to me it seems that understanding is a "process". I do not enact a process by writing an equation on a piece of paper (any more than I "run a computer program" by printing out the program). I don't see how we could claim that any static entity (such as an equation, or pile of bricks, or switched-off computer) has any type of understanding (passive or otherwise), because there are no processes going on.

TheStatutoryApe said:
Of course a calculator works off of an active process, so there is a difference at least in this, though I'm not sure if I can bring myself, at least not any longer, to think this makes that much of a difference when it comes to whether or not it possesses any sort of understanding. An abacus or a slide rule present an active process but I still wouldn't consider them to possess any more understanding for it.
To say that "understanding is a process" is not the same as saying that "all processes possess understanding". It simply means that "being a process" is necessary, but not sufficient, for understanding. This would explain your "an abacus does not understand" position.

TheStatutoryApe said:
I think that the idea I presented previously that such devices only "reflect the understanding of the designer" is probably much more logical than to state that they actually possess any kind of understanding in and of themselves.
I agree that designed agents can and do reflect to some extent the understanding of the designer. But is this the same as saying that "no designed agent can be said to understand"? I think not.

MF
 
  • #149
In the interest of space, I'll combine posts.

moving finger said:
Tisthammerw said:
Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.

HA! NOW YOU KNOW HOW IT FEELS!

Tisthammerw, you are just being very silly here. I NEVER claimed that your definitions were wrong, simply that I did not agree with them.

You implied it when you said that you do not agree that my statement was an analytic statement and did not agree with my definitions. In these circumstances, “disagree” usually means believing that the statement in question is false. In any case, I admit the above was an exaggeration, but not by much. You claimed a lot of what I said was wrong because you blatantly misunderstood what I was saying (e.g. ignoring the fact that I was not referring to everybody's definition when I said that “understanding requires consciousness” was an analytic statement).

moving finger said:
Tisthammerw said:
I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?

Yes, and it follows from this that the statement “understanding requires consciousness” is synthetic (because, as you have agreed, it is NOT clear that all possible forms of understanding DO require consciousness) Do you understand this?

No I do not. Why does the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic? The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.

moving finger said:
Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?


That’s what I’m asking you.

With respect, this is NOT what you are asking me.

Yes I am. I am asking whether or not this scenario meets your definitions of those characteristics.


You are asking me whether the computer you have in mind “perceives”. I don’t know what computer you are referring to.

I am asking if the computer I described “perceives” using your definition of the term. But since you seem to have forgotten the scenario (even though I described it in the very post you responded to), let me refresh your memory:

Tisthammerw said:
Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.

There. Now please answer my questions.
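(For concreteness, such a pipeline might look like the following sketch, assuming the OpenCV library; the colour thresholds and file names are invented. The point is that every step is mechanical.)

Code:
# Sketch of the described pipeline: acquire an image, select "blue" pixels,
# find roughly square blue regions, circle them, store the result.
# Assumes OpenCV >= 4 and NumPy; thresholds and file names are invented.
import cv2
import numpy as np

frame = cv2.imread("camera_frame.png")        # acquire (stand-in for a camera)
lower = np.array([90, 0, 0], dtype=np.uint8)  # crude BGR "blue" band
upper = np.array([255, 120, 120], dtype=np.uint8)
mask = cv2.inRange(frame, lower, upper)       # select blue pixels
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    poly = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
    if len(poly) == 4:                        # four corners: call it a square
        x, y, w, h = cv2.boundingRect(poly)
        cv2.circle(frame, (x + w // 2, y + h // 2), max(w, h), (0, 0, 255), 2)

cv2.imwrite("circled.png", frame)             # store the processed image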


How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?

Because I have described it to you.


Tisthammerw said:
I think some of your definitions are unusual but you are of course entitled to your opinion.

My definition is found in Merriam-Webster’s dictionary remember? If anything, it’s your definition that is unconventional (an entity perceiving without possessing consciousness).

Do you want to play that silly “I use the X dictionary so I must be right” game again?

No, I am merely pointing out that my definitions are not the ones that are unconventional. Please read what I say in context.


Tisthammerw said:
You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.

You are taking an anthropocentric position again. A human agent requires consciousness in order to be able to report that it perceives anything. One cannot conclude from this that all possible agents require consciousness in order to be able to perceive.

Fine, but that still doesn’t answer my question:

Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).

moving finger said:
Tisthammerw said:
At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.

To be correct, we have established that Bob’s consciousness does not understand, and Bob’s consciousness is a “component of program X”, it is not “running program X”.

I think you may have misunderstood the situation. Programs are nothing more than a set of instructions (albeit often a complex set of instructions). Bob is not an instruction, he is the processor of program X.


Tisthammerw said:
Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese?
No, I never said that the “agent that understands” is necessarily conscious. My definition of understanding does not require consciousness, remember?

As I have said before, this story is referring to TH-understanding, remember? I am not, repeat NOT, using your definition here. Can a computer that runs program X have TH-understanding? I say no, and the whole point of this argument is to justify that claim.


Tisthammerw said:
Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.

If you are asking “can a non-conscious agent possess TH-Understanding?” then I agree, it cannot.

That is not what I am asking here in regards to this argument. Here is what I am asking: can a computer (the model of a computer being one that manipulates input via a complex set of instructions etc.) have TH-understanding? This is what the argument is about. Now how about addressing its relevant questions?


Tisthammerw said:
Variants of the Chinese room include learning a person's name etc. by the man in the room having many sheets of paper to store data and having the appropriate rules (think Turing machine).

But “learning a person’s name” is NOT necessarily new understanding, it is simply memorising.

That would depend on how you define “understanding.” If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
 
  • #150
More definitions of the concept "Understanding"

understanding

A. noun

1. reason, understanding, intellect
the capacity for rational thought or inference or discrimination; "we are told that man is endowed with reason and capable of distinguishing good from evil"
Category tree:
psychological_feature
→ cognition; knowledge; noesis
→ ability; power
→ faculty; mental_faculty; module
→ reason, understanding, intellect

2. understanding, apprehension, discernment, savvy
the cognitive condition of someone who understands; "he has virtually no understanding of social cause and effect"
Category tree:
psychological_feature
→ cognition; knowledge; noesis
→ process; cognitive_process; mental_process; operation; cognitive_operation
→ higher_cognitive_process
→ knowing
→ understanding, apprehension, discernment, savvy
→ realization; realisation; recognition
→ insight; brainstorm; brainwave
→ hindsight
→ grasping
→ appreciation; grasp; hold
→ smattering
→ self-knowledge
→ comprehension

3. sympathy, understanding
an inclination to support or be loyal to or to agree with an opinion; "his sympathies were always with the underdog"; "I knew I could count on his understanding"
Category tree:
psychological_feature
→ cognition; knowledge; noesis
→ attitude; mental_attitude
→ inclination; disposition; tendency
→ sympathy, understanding

4. agreement, understanding
the statement (oral or written) of an exchange of promises; "they had an agreement that they would not interfere in each other's business"; "there was an understanding between management and the workers"
Category tree:
abstraction
→ relation
→ social_relation
→ communication
→ message; content; subject_matter; substance
→ statement
→ agreement, understanding
→ oral_contract
→ entente; entente_cordiale
→ submission
→ written_agreement
→ gentlemen's_agreement
→ working_agreement
→ bargain; deal
→ sale; sales_agreement
→ unilateral_contract
→ covenant
→ conspiracy; confederacy

B. adjective

1. understanding
characterized by understanding based on comprehension and discernment and empathy; "an understanding friend"
 
  • #151
Please excuse the lack of links for each of the descriptors associated with "understanding" in my post.

Somehow I have lost the ability to edit and parse links. I, personally, don't parse why this is happening and I've had no time to utilize my program X to find out.

May the electricity be with you at a moderate price!

Spooky All Hallow's Eve!
 
  • #152
MF said:
To say that "understanding is a process" is not the same as saying that "all processes possess understanding". It simply means that "being a process" is necessary, but not sufficient, for understanding. This would explain your "an abacus does not understand" position.
I agree completely. It is exactly this that makes me reconsider the idea that a calculator could potentially be considered to possess understanding. The simple fact that it utilizes a process does not mean it understands. Again, I'm not sure exactly how a calculator's program works, but if it is really just a more complex version of what a slide rule or abacus does then I would say that it does not understand.

MF said:
I agree that designed agents can and do reflect to some extent the understanding of the designer. But is this the same as saying that "no designed agent can be said to understand"? I think not.
Again I agree. I would not argue that any designed agent can only reflect understanding. I only argue that this "passive understanding" may better be described as "reflected understanding". The designer or teacher has done the work of "actively understanding" and deriving information based on this. The designer or teacher then passes the information on to a device or pupil. The device or pupil may be capable of conveying the information on to yet another agent, but the information is not necessarily "understood" by any of the successive agents that may memorize and transmit it. In this case I would prefer to consider that the agents reflect the understanding of the information's progenitor rather than possess any sort of understanding themselves. The logic here does not preclude the ability of a device or pupil to "actively understand"; it only changes the concept of "passive understanding" to what I personally think is a more logical conception of what is occurring.
 
  • #153
Tisthammerw said:
You implied it when you said that you do not agree that my statement was analytic statement and did not agree with my definitions.
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.
There are such things as “matters of opinion”. You and I may have different opinions on some issues (such as the definition of understanding), which means that “we do not agree on the definition of understanding”, but does NOT mean that one of us is necessarily “wrong”.
Tisthammerw said:
In these circumstances, “disagree” usually means believing that the statement in question is false.
I disagree. I thought that we already established that definitions of words are normally not things that can be “true” or “false”... or are you now changing your mind?
Tisthammerw said:
Why does the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic?
The answer is given already in my previous post. Because
moving finger said:
it is NOT clear that all possible forms of understanding DO require consciousness
As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?
Tisthammerw said:
The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.
To my mind, whether or not “understanding requires consciousness” is something that needs to be determined by observation. Thus the statement is indeed synthetic.
moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
Because I have described it to you.
In fact you did NOT specify that the computer you have in mind is interpreting the data/information. Interpreting is part of perceiving.
Tisthammerw said:
I am merely pointing out that my definitions are not the ones that are unconventional.
My definitions can also be found in dictionaries, scientific textbooks, encyclopaedias and reference works. Just because I do not use the same dictionary as you does not mean my definitions are unconventional. In psychology and the cognitive sciences, the word perception (= the act of perceiving) is usually defined as “the process of acquiring, interpreting, selecting, and organising (sensory) information”. It follows from this that “to perceive” is to acquire, interpret, select, and organise (sensory) information.
Tisthammerw said:
Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses?
I have already answered this in a previous post, thus :
moving finger said:
I am not saying this, where did you get this idea?
Tisthammerw said:
Programs are nothing more than a set of instructions (albeit often a complex set of instructions). Bob is not an instruction, he is the processor of program X.
What makes you think that a person is anything more than a “complex set of instructions”?
Tisthammerw said:
Can a computer (the model of a computer being manipulating input via a complex set of instructions etc.) have TH-understanding?
If the computer in question is conscious, then yes, it can (in principle) possess TH-Understanding.
Can you show that no possible computer can possesses consciousness?
Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
This does not follow at all. How do you arrive at this conclusion?
It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR any ability to acquire new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).
I am simply saying that memorising is not synonymous with understanding. This applies to all agents, humans as well as computers.
MF
 
  • #154
MovingFinger said:
How does MF know whether the agent Tournesol understands English? The only way MF has of determining whether the agent Tournesol understands English or not is to put it to the test - in fact the Turing test - to ask it various questions designed to test its understanding of English. If the agent Tournesol passes the test then I conclude that the agent understands English.
Why should it be any different for a machine?
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc. Silicon is not currently known to -- it is not known
not to, it is just not known to.
I am not suggesting that Turing's test is definitive. But in the absence of any other test it is the best we have (and certainly imho better than defining our way out of the problem). I am sure we would all love to see a better test, if you can suggest one.
If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?
The TT is more doubtful in the case of a machine than that of a human.
You are missing another point as well: the point is whether syntax is
sufficient for semantics. What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and consciousness.
However, even if that is true, it does not mean the computer's
possession of semantics is solely due to syntactic abilities,
and Searle's point is still true.
Whether it is a valid analytical argument depends on whether the definitions it relies on are conventional or eccentric.
Conventional by whose definition? Tournesol's?
In rational debate we use words as tools - so long as we clearly define what we mean by the tools we use, then we may use whatever tools we wish.
Redefine "fanny" to mean "dick" and your auntie is your uncle.
Yes, by definition. That is the difference between understanding and instinct, intuition, etc. A beaver can build dams, but it cannot give lectures on civil engineering.
I cannot report that I know anything if my means of reporting has been removed.
A beaver might in principle understand civil engineering, but it can't give lectures if it cannot speak.
Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil engineering is its lack of a voicebox?
You don't seem to have an alternative.
Consciousness imho is the internal representation and manipulation of a self model within an information-processing agent, such that the agent can ask rational questions of itself, for example "what do I know?", "how do I know?", "do I know that I know?", etc. The ability of an agent to do this is NOT necessary for understanding per se,
That depends on what you mean by "understanding".
The question is whether syntax is sufficient for semantics.
I'm glad that you brought us back to the Searle CR argument again. Because I see no evidence that the CR does not understand semantics.
Well, I have already given you a specific reason; there are words in human languages which refer specifically to sensory experiences.
Why do you consider this is evidence that the CR does not understand semantics? Sensory experiences are merely conduits for information transfer, they do not endow understanding per se, much less semantic understanding.
There are terms in language with specifically sensory meanings, such as "colour", "taste", etc.
Given the set of sentences:
"all floogles are blints"
"all blints are zimmoids"
"some zimmoids are not blints"

[ ... etc ... ]
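In fact the point can be checked mechanically. A short sketch (Python, using the invented predicate names above) enumerates every interpretation of the three predicates as subsets of a three-element domain and counts how many distinct "semantic models" satisfy the very same three sentences:

Code:
# Enumerate interpretations of floogle/blint/zimmoid over a 3-element domain
# and count the models satisfying all three sentences above.
from itertools import product

SUBSETS = [frozenset(s) for s in
           (set(), {0}, {1}, {2}, {0, 1}, {0, 2}, {1, 2}, {0, 1, 2})]

def satisfies(floogles, blints, zimmoids):
    return (floogles <= blints              # "all floogles are blints"
            and blints <= zimmoids          # "all blints are zimmoids"
            and bool(zimmoids - blints))    # "some zimmoids are not blints"

models = [m for m in product(SUBSETS, repeat=3) if satisfies(*m)]
print(len(models))  # prints 37: many distinct models fit the same syntax

The syntax alone does not pin down which of these 37 models is "the" meaning.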
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context.
That's the point! There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.
Now, the point you should be arguing is that the CR does in fact have semantics; you need to show
that it is an *exception* to the general rule.
Agreeing that its specific inability to grasp semantics through syntax
is an instance of a general rule is not mounting an argument
against the CR -- it is tantamount to accepting the CR.
This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantics of two different agents.
If there is a difference between two agents, each is failing to grasp the semantics of the other, although they agree on syntax. So why should the CR specifically be able to avoid that problem?
Human agents are likely to do so using their embeddedness in the world -- "*this* [points] is what I mean by zimmoid" -- but the CR does not have that capacity.
i.e. more than one semantic model can be consistently given to the same symbols.
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantic understanding of two different agents.
So... are you saying that the CR has the wrong semantics, and that having the wrong semantics counts as "understanding" in a way that having no semantics does not? And how does a TT distinguish between having the wrong semantics and having no semantics (since the symbol-manipulation is the same in each case)? And *how* does the CR have the wrong semantics, since it does not get them from syntax alone?
Now the strong AI-er could object that the examples are too simple to be realistic, and that if you threw in enough symbols, you would be able to resolve all ambiguities successfully.
See above.
Why? You haven't said what it is that allows the CR to plug semantic gaps. Your observation that there is nothing special about the CR's inability to derive semantics from syntax is far short of a demonstration that Searle is wrong and that it actually *can* derive semantics from syntax.
(To put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
Who has suggested that semantics is redundant?
You have, in effect. If semantics can be derived from syntax, it is informationally redundant.
Using your logic, one might equally ask why we have the separate concepts of programmable computer and pocket calculator - both are in fact calculating machines, therefore why not just call them both calculating machines and be done with it?
I am not saying that semantics is redundant as a term because it can be subsumed under some more
general term, in the way that "computer" can be subsumed under "calculating machine";
people who directly counter Searle's argument are effectively saying that semantic information
is redundant, since it can be derived from syntactic information.
I have been consistently suggesting that establishing definitions is completely different to establishing facts. Defining a word in a certain way does not demonstrate that anything corresponding to it actually exists.
Excellent! Therefore we can finally dispense with this stupid idea that understanding requires consciousness because it is defined that way.
Dreadful! Truths-by-definition may not amount to empirical truths, but that does not mean they are empirical falsehoods -- or do you think there are no unmarried bachelors?
How can you establish a fact without definitions?
Ask yourself: what are the essential qualities of understanding that allow me to say "this agent understands"? Avoid prejudicial definitions and avoid anthropocentrism.
You haven't shown how to do all that without any pre-existing definitions.
I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.
Experiential qualities are agent-dependent (ie subjective). Tournesol's experiential quality of seeing red is peculiar to Tournesol (subjective) - it is meaningless to any other agent.
Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.
And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism. Need I point
out the eccentricity of appealing to anti-physicalism to support AI?
This does not mean that writing the definition is impossible, it just means that it is a subjective definition, hence not easily accessible to other agents.
The semantics of a phrase like "the taste of caviare" are easily supplied by non-syntactic means -- one just tastes caviare. Are you saying we should reject the easy option, and stick to the hard-to-impossible one just to keep the semantics-is-really-syntax flag flying?
I have denied that experiential knowledge is synonymous with understanding; I have NOT denied that experiential knowledge is knowledge.
I have said that experiential semantics is part of semantics, not all of it.
As to your distinction between knowledge and understanding, I don't think it is sustainable. To know what a word means is to understand it.
Your assertion assumes that vision is required for understanding. Vision provides experiential information, not understanding. I understand the terms X-ray and ultra-violet and infrared and microwave even though I possess no experiential information associated with these terms. What makes you think I need experiential information to understand the terms red and green? The onus is on you to show why the experiential information is indeed necessary for understanding red and green, but not for x-rays or ultra-violet rays.
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
You don't need experience to grasp the semantics of "infra-red" and "ultra violet" as well as other people, because
they don't have appropriate experiences to base their semantics on.
You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
Information and knowledge are required for understanding, not senses.
You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge is required for understanding. Since experience is needed for experiential knowledge, that means experience is required for understanding.
However, I do not need to argue that non-experiential knowledge is not knowledge.
Why not - is this perhaps yet another analytic statement?
It doesn't affect my conclusion.
It affects whether your conclusion is simply your opinion or not
"Not-experiential knowledge is not knowledge" is not something
I need to assume, not something I am claiming, and not my opinion
(not that that matters).
Sigh... that is a very anthropocentric viewpoint.
AI needs to be anthropocentric... up to a point.
Humans acquire most of their information from their senses in the form of reading, listening etc. - the same information could be programmed directly into a machine.
We could copy the data across -- as in standard, non-AI computing -- but would that be sufficient for meaning and understanding? If a system has language, you can use that to convey
3rd-person non-experiential knowledge. But how do you bootstrap that process -- arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment. As the floogle/blint/zimmoid argument shows, you cannot safely
conclude that you have the right semantics just because you have the right syntax.
An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
The fact that humans are so dependent on sense-receptors for their information gathering does not lead to the conclusion that understanding is impossible in the absence of sense-receptors in all possible agents.
And that argument does not show that syntax is sufficient for human-type semantics. Would a putative AI have quite a different form of language to a human (the Lion problem)? Then Searle has made his case. Would it have the same understanding, but not achieved solely by virtue of syntax? Again, Searle has made his case.
It makes the point that the ability to fly a plane is not synonymous with understanding flight.
The ability is part of an understanding which is more than merely theoretical understanding.
What red looks like is not understanding; it is simply subjective experiential information.
And you don't live in a house, you live in a building made of bricks with doors and windows.
What red looks like to Mary is not necessarily the same as what red looks like to Tournesol
Naturalistically, it should be.
The full meaning of the word "red" (remember, this is ultimately about semantics).
Your argument continues to betray a peculiar anthropocentric perspective.
Strong AI is about duplicating human intelligence -- it should be anthropocentric.
What makes you think that the experiential quality of red is the same to you as it is to me?
Physicalism. Same cause, same effect. Why do you think it isn't?
If the experiential qualities are not the same between two agents, then why should it matter (in terms of understanding semantics) if the experiential quality of red is in fact totally absent in one of the agents? How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever to draw on?
I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand semantically just what is meant by the term x-rays without ever experiencing seeing x-rays.
They are not analogous, as I have shown. You don't need experience to understand X-ray as well as anyone can understand it because no-one has experience OF THAT PARTICULAR PHENOMENON, not because experience in general never contributes to semantics.
I claim that experience is necessary for a *full* understanding of *sensory* language, and that an entity without sensory experience therefore lacks full semantics.
There is no reason why all of the information required to understand red, or to understand a concept, cannot be encoded directly into the computer (or CR) as part of its initial program.
Yes there is. If no-one can write down a definition of the experiential nature of "red", no-one can encode it into a programme.
Now (with respect) you are being silly. Nobody can write down a universal definition of the experiential nature of "red" because it is a purely subjective experience.
It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.
There is a pattern of information in Tournesol's brain which corresponds to Tournesol seeing red, but that pattern means absolutely nothing to any other agent.
So you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain -- because it would no longer make sense in a different context.
Note that data is not information for precisely that reason -- information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred, if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.
Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)
We can ask questions about how things look (about the "carriers" of information as opposed to information itself), and a system with full semantics needs to understand those questions.
In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.
Yes. No-one knows how to encode all the information. You don't.
Oh really, Tournesol. Whether MF knows how to do it or not is irrelevant.
Whether anyone else does is highly relevant: "No-one knows how to encode all the information".
You still haven't shown, specifically, how to encode experiential information.
I don't see why you seem to think it's such a problem.
Information is information.
If that were true, you could write a definition of the experiential nature of "red". You can't, so it isn't.
The interesting aspect of experiential information is that it has meaning only to the agent to which it relates. In other words the information contained in the experiential state of Tournesol seeing red only means something to the agent Tournesol, the same information means nothing (indeed does not exist) to any other agent.
Wrong on several scores. I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.
 
  • #155
When we remain staunchly steadfast in our separate and personal definitions of "understanding" we do not allow for any progress in the refinement of a "universal" and terminologically correct use of the concept or the word "understanding" and the phenomenon it refers to.

It may be more constructive if we were to find, amongst ourselves, commonalities in our definitions that help us "reach an understanding" between all parties with regard to the meaning, definition and concept of the descriptor, "understanding".

1.) I propose that "understanding" is a result of a series of processes... not a process in itself. Understanding is a description of a plateau one reaches and from which one is able to continue in pursuit of other plateaus of understanding. (If you agree, please indicate by repeating the number of this proposal with an "agree" or "disagree" beside it. If in disagreement, please offer an explanation.)

2.) Understanding is a result of cognitive processes. (agree or disagree)

Here are some secondary descriptors of the primary (in this case) descriptor "cognitive":

of, relating to, or being conscious intellectual activity (as thinking, reasoning, remembering, imagining, or learning words)
www.prostate-cancer.org/resource/gloss_c.html

* Awareness with perception, reasoning and judgement, intuition, and memory; The mental process by which knowledge is acquired.
www.finr.com/glossary.html

* Refers to the ability to think, learn and remember.
www.handsandvoices.org/resource_guide/19_definitions.html

* brain functions related to sense perception or understanding.
altweb.jhsph.edu/education/glossary.htm

* Relating mental awareness and judgment.
science.education.nih.gov/supplements/nih3/alcohol/other/glossary.htm

* Pertaining to the mental processes of perceiving, thinking, and remembering; used loosely to refer to intellectual functions as opposed to physical functions.
professionals.epilepsy.com/page/glossary.html

* Refers to a mental process of reasoning, memory, judgement and comprehension - as contrasted with emotional and volitional processes.
www.into.ie/downloads/gloss1.htm

* Pertaining to cognition, the process of being aware, knowing, thinking, learning and judging.
www.memorydisorder.org/glossaryterms.htm

* thought processes

www.macalester.edu/~psych/whathap/UBNRP/synesthesia/terms.html
* relating to or involving the act or process of knowing, including both awareness and judgement. Cognition is characterized by the following: attention, language/symbols, judgement, reasoning, memory, problem-solving.
www.inspection.gc.ca/english/corpaffr/publications/riscomm/riscomm_appe.shtml

* Pertaining to functions of the brain such as thinking, learning, and processing information.
www.azspinabifida.org/gloss.html

* Thinking, getting, evaluating and synthesizing information.
oaks.nvg.org/wm6ra3.html

* in cognitive psychology this is the component of attitude that involves perceptual responses and beliefs about something (knowledge and assumption)
www.oup.com/uk/booksites/content/0199274894/student/glossary/glossary.htm

* an adjective referring to the processes of thinking, learning, perception, awareness, and judgment.
www.nutrabio.com/Definitions/definitions_c.htm

* Most neuroimaging studies in mood disorders are largely restricted to patients with MDD. Magnetic resonance imaging (MRI) studies demonstrate that patients with late-life MDD have smaller focal brain volumes and larger high-intensity lesion volumes in the neocortical and subcortical regions than control subjects. 102,103 The focal reductions in brain volume have been identified in the prefrontal region, hippocampus, and the caudate nucleus. ...
ajgp.psychiatryonline.org/cgi/content/full/10/3/239

* Having to do with a person's thoughts, beliefs, and mental processes including intelligence.
access.autistics.org/resources/glossary/main.html

* function, all the normal processes associated with our thoughts and mental processes.
www.srht.nhs.uk/sah/Glossary/Glossary.htm

* Teacher takes all the responsibility for the process and the result The input-output during a lesson get more and more complex Learners get involved in individual, pair and group activities A lesson is notable for a variety of diverse activities Learners' output is either a monologue or a dialogue learned by heart Communicative message is in focus Learners are positively dependent on each other Lexis and grammar are in focus Listening and reading are in focus Activation of thought processes ...
tsu.tmb.ru/millrood1/interact/modern_e/mel10.htm

* Mental ability to gain knowledge, including perception, and reason.
cgi.www.care.org.uk/student/abortion/fs/fs11.htm

* of or being or relating to or involving cognition; "cognitive psychology"; "cognitive style"
wordnet.princeton.edu/perl/webwn

* The term cognition is used in several different loosely related ways. In psychology it is used to refer to the mental processes of an individual, with particular relation to a view that argues that the mind has internal mental states (such as beliefs, desires and intentions) and can be understood in terms of information processing, especially when a lot of abstraction or concretization is involved, or processes such as involving knowledge, expertise or learning for example are at work. ...
en.wikipedia.org/wiki/Cognitive

We may note that 99.9% of the above descriptors describe "mental" processes: beliefs, reasoning, thought/mental processes (this brings up the previously noted idea that the physics of the brain and the physics of a computer are vastly different and perhaps require vastly different terminology to distinguish the two different processes), awareness, judgement, thinking and intuition.

What I'm still trying to demonstrate is that terminology serves the purpose of distinguishing processes and states that belong in certain categories.

To use the word "understanding" to describe a set of data being stored in a computer (on or off) is like describing an "organelle" in an animal cell as an "organ". It is clearly an example of incorrect use of terminology.

If you want to write prose or poetry about a computer... you are more than welcome to use the word "understanding" to describe what a computer does. However, in the world of professionals, I believe there are more appropriate terms that apply to the digital domain.

Thanks.
 
  • #156
moving finger said:
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.

But normally that is what is implied when it comes to philosophical discussions e.g. “I do not agree with ethical relativism.”



I thought that we already established that definitions of words are normally not things that can be “true” or “false”

Yes we did, but at the time I was not aware you knew this.


Tisthammerw said:
Why does the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic?

The answer is given already in my previous post. Because

it is NOT clear that all possible forms of understanding DO require consciousness

It is not clear why it logically follows from this that “understanding requires consciousness” is a synthetic statement. “Synthetic” in this context means “something that can be determined by observation.” As I said before:

Tisthammerw said:
The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.

To which you reply:


moving finger said:
To my mind, whether or not “understanding requires consciousness” needs to be determined by observation.

Which really doesn’t answer my question why you believe it is synthetic. So far your argument is this:

  • People mean different things when they use the term “understanding.”

Therefore: whether understanding requires consciousness can be determined by observation.

It is terribly unclear why this is a valid argument, except for the sense that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.


As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?

Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?


Tisthammerw said:
moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?

Because I have described it to you.

In fact you did NOT specify that the computer you have in mind is interpreting the data/information.

In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.



Tisthammerw said:
I am merely pointing out that my definitions are not the ones that are unconventional.

My definitions can also be found in dictionaries, scientific textbooks, encyclopaedias and reference works.

Can they? You yourself use terms whose intended meanings are unclear, in part because you have refused to answer my questions of clarification.

Tisthammerw said:
Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses?

I have already answered this in a previous post

Yes and no. Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?


What makes you think that a person is anything more than a “complex set of instructions”?

Because people possess understanding (using my definition of the term, what you have called TH-understanding), and I have shown repeatedly that a complex set of instructions is insufficient for TH-understanding to exist.


Tisthammerw said:
Can a computer (the model of a computer being manipulating input via a complex set of instructions etc.) have TH-understanding?

If the computer in question is conscious, then yes, it can (in principle) possess TH-understanding.
Can you show that no possible computer can possess consciousness?

Given the computer model in question, I think I can with my program X argument (since program X stands for any computer program that would allegedly produce TH-understanding, and yet no TH-understanding is produced when program X is run).


Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”

This does not follow at all. How do you arrive at this conclusion?

As I said earlier, because the man in the Chinese room models a computer program. I can also refer to my program X argument, in which case there is no “new understanding” in this case either.


It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR to have any ability of acquiring new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).

Again, the variant I put forth does use a dynamic database and a dynamic program. We can do the same thing for program X.
 
  • #157
Part One of my reply :

Tournesol said:
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc.
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

I know that MF’s human brain “implements semantics”, but I have no evidence that any other brain, human or otherwise, does – apart from the evidence afforded by interrogating the owners of the brains – asking them questions to test their knowledge and understanding of semantics.

moving finger said:
If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?
Tournesol said:
The TT is more doubtful in the case of a machine than that of a human.
The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

Tournesol said:
You are missing another point as well: the point is whether syntax is
sufficient for semantics.
I’m not missing that point at all. Searle assumes that a computer would not be able to understand semantics – I disagree with him. It is not a question of “syntax being sufficient for semantics”, I have never asserted that “syntax is sufficient for semantics” or that “syntax somehow gives rise to semantics”.
That the AI argument is necessarily based on the premise “syntax gives rise to semantics” is a fallacy that Searle has promulgated, and which you have swallowed.

Tournesol said:
What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and consciousness.
Once again – place me in a state of sensory deprivation, and I still understand semantics. The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

Tournesol said:
However, even if that is true, it does not mean the computer's
possession of semantics is solely due to syntactic abilities,
and Searle's point is still true.
I am not arguing that “syntax gives rise to semantics”, which seems to be Searle’s objection. I am arguing that a computer can understand both syntax and semantics.

Tournesol said:
Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil
engineering is its lack of a voicebox ?
I am saying that “possession of understanding alone” is not sufficient to be able to also “report understanding” – to report understanding the agent also needs to be able “to report”.

moving finger said:
Consciousness imho is the internal representation and manipulation of a self model within an information-processing agent, such that the agent can ask rational questions of itself, for example what do I know?, how do I know, do I know that I know?, etc etc. The ability of an agent to do this is NOT necessary for understanding per se,
Tournesol said:
That depends on what you mean by "understanding".
It goes without saying that we disagree on some definitions.

Tournesol said:
There are terms in language with specifically sensory meanings, such as "colour", "taste", etc.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Tournesol said:
There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.
But semantics is not derived from syntax. Why do you think it needs to be? Because Searle wrongly accuses the AI argument of assuming this?

Tournesol said:
Although they agree on syntax. So why should the CR specifically be able to avoid that problem ?
Again: semantics is not derived from syntax.

Tournesol said:
Human agents are likely to do so using their embeddedness in the world -- "*this* [points] is what I mean by zimmoid"--
but the CR does not have that capacity.
There are many other ways of conveying and learning both meaning and knowledge apart from “pointing and showing”.


moving finger said:
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantic understanding of two different agents.
Tournesol said:
So... are you saying that the CR has the wrong semantics, and that having the wrong semantics
counts as "understanding" in a way that having no semantics does not ?
No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

Tournesol said:
And how does a TT
distinguish between having the wrong semantics and having no semantics (since the
symbol-manipulation is the same in each case ?) And *how* does the CR have the
wrong semantics, since it does not get them from syntax alone ?
The rules of semantics, along with the rules of syntax, are learned (or programmed). Any agent, including humans, can incorporate errors in learning (or programming) – making one error in syntax or semantics, or disagreeing on the particular semantics in one instance, does not show that the agent “does not understand semantics”.

Tournesol said:
Your observation
that there is nothing special about the CR's inability to derive semantics from syntax is
far short of a demonstration that Searle is wrong, and it actually *can* derive semantics from syntax ?
Once again - I have never said that semantics is derived from syntax – you have said this!
Both syntax and semantics follow rules, which can be learned, but it does not follow that “one is derived from the other”. Syntax and semantics are quite different concepts: a “programmable computer” and a “pocket calculator” are both calculating machines, but one cannot necessarily construct a programmable computer by connecting together multiple pocket calculators.

Tournesol said:
(Too put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
moving finger said:
Who has suggested that semantics is redundant?
Tournesol said:
You have, in effect. If semantics can be derived from syntax, it is informationally redundant.
There you go again! I have never said that semantics is derivable from syntax, you have!

Tournesol said:
Truths-by-definition may not amount to empirical truths, but that does not mean they
are empirical falsehoods -- or do you think there are no unmarried bachelors ?
We may agree on the definitions of some words, but it does not follow that we agree on the definitions of all words.

Tournesol said:
How can you establish a fact without definitions?
moving finger said:
Ask yourself what are the essential qualities of understanding that allow me to say “this agent understands”; avoid prejudicial definitions, and avoid anthropocentrism
Tournesol said:
You haven't shown how to do all that without any pre-existing definitions.
There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities, and from there work towards discovering whether it is possible for a non-conscious agent to possess those qualities of understanding. I am not saying your definition is wrong, just that I do not agree with it.

moving finger said:
Experiential qualities are agent-dependent (ie subjective). Tournesol's experiential quality of seeing red is peculiar to Tournesol (ie subjective) - it is meaningless to any other agent.
Tournesol said:
Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.
I never said that “red has no meaning”. But the “experiential knowledge of red” is purely subjective.
Tournesol – Really, if you wish to continue misquoting me there is not much point in continuing this discussion.

Tournesol said:
And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism.
Why should it be the case that the precise “data content” of MF seeing red should necessarily be the same as the “data content” of “Tournesol seeing red”? Both are subjective states, there is no a priori reason why they should be identical.

Tournesol said:
Need I point
out the eccentricity of appealing to anti-physicalism to support AI?
Need I point out that I have never appealed to such a thing?
Again you are either “making things up to suit your arguments”, or you are misquoting me.

Tournesol said:
Are you saying we should reject the easy option , and stick to the hard-to-impossible one just to keep the semantics-is-really-syntax flag flying ?
One more time : I never said that “semantics is syntax”! You really have swallowed Searle’s propaganda hook, line and sinker!

Tournesol said:
As to your distinction between knowledge and understanding , I don't think it is
sustainable. To know what a word means is to understand it.
“experiential knowledge of red” has nothing to do with “knowing what the word red means”. I know what the word “x-ray” means yet I have no experiential knowledge of x-rays.

Tournesol said:
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Tournesol said:
You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.
Only “partial understanding”?
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

Tournesol said:
You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge
is required for understanding. Since experience is needed for
experiential knowledge, that means experience is required for
understanding.
I have said that knowledge is necessary for understanding, but it does not follow from this that all knowledge conveys understanding.
Experiential knowledge is 100% subjective, it does not convey any understanding at all.

Tournesol said:
We could copy the data across -- as in standard, non-AI computing-- but would that be sufficient for
meaning and understanding ? If a system has language, you can use that to convey
3rd-person non-experiential knowledge. But how do you bootstrap that process -- arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment.
And once learned, all that data and knowledge is contained within the brain – there is no need for continued interaction with the environment in order for the agent to continue understanding. Thus interaction with the environment is simply one possible way of “programming” the data and knowledge that is required for understanding. It does not follow that this is the only way to program data and knowledge.

(see part 2)

MF
 
  • #158
Part 2 :

Tournesol said:
you cannot safely
conclude that you have the right semantics just because you have the right syntax.
Wrong assumption again. I have never suggested that syntax gives rise to semantics.

Tournesol said:
An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
In a poorly constructed Turing Test it might have, yes. Just as any human also might have. That is why we need to look at (a) trying to get a better understanding of just what understanding is (the qualities of understanding) without resorting to “defining our way” out of the problem, and (b) improving the Turing Test.

moving finger said:
The fact that humans are so dependent on sense-receptors for their information gathering does not lead to the conclusion that understanding is impossible in the absence of sense-receptors in all possible agents.
Tournesol said:
And that argument does not show that syntax is sufficient for human-type semantics.
There you go again! I have never said that “syntax is sufficient for semantics” – this is a false assumption attributed to AI which is promulgated by Searle and his disciples.

Tournesol said:
Would a putative AI have
quite a different form of language to a human (the Lion problem) ? Then Searle
has made his case.
Do you mean spoken language? Why would it necessarily be any different to an existing human language? Why would it necessarily be the same? What bearing does this have on the agent’s ability to understand?

Tournesol said:
Would it have the same understanding , but not achieved solely
by virtue of syntax ? Again, Searle has made his case.
Again, Searle’s case and yours is based on a false assumption – that AI posits syntax gives rise to semantics!

moving finger said:
It makes the point that ability to fly a plane is not synonymous with understanding flight
Tournesol said:
The ability is part of an understanding which is more than merely theoretical understanding.
Ability is not necessarily anything to do with understanding. I can learn something “by rote”, and reproduce it perfectly – it does not follow that I understand what I am doing

moving finger said:
What red looks like is not understanding it is simply subjective experiential information.
Tournesol said:
And you don't live in a house, you live in a building made of bricks with doors and windows.
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is

moving finger said:
What red looks like to Mary is not necessarily the same as what red looks like to Tournesol
Tournesol said:
Naturalistically , it should be.
You have no way of knowing whether it is or not

moving finger said:
Your argument continues to betray a peculiar anthropocentic perspective.
Tournesol said:
Strong AI is about duplicating human intelligence -- it should be anthropocentric.
Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.

moving finger said:
What makes you think that the experiential quality of red is the same to you as it is to me?
Tournesol said:
Physicalism. Same cause, same effect.
There is reason to doubt, because you have no way of knowing if the effect is indeed the same (you have no way of knowing what red looks like to Mary).

Tournesol said:
Why do you think it isn't ?
I said “What red looks like to Mary is not necessarily the same as what red looks like to Tournesol”.

Tournesol said:
How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever
to draw on ?
If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand semantically just what is meant by the term x-rays without ever experiencing seeing x-rays.

Tournesol said:
They are not analogous, as I have shown. You don't need experience to understand X-ray as well
as anyone can understand it because no-one has experience OF THAT PARTICULAR PHENOMENON, not
because experience in general never contributes to semantics.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
Tournesol said:
If no-one can write down a definition of the experiential nature of "red" no-one can encode it into a programme.
Maybe so – but as you recall, experiential knowledge is not part of understanding so it doesn’t really matter

Tournesol said:
It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.
I have simply cautioned you against the opposite unwarranted assumption – that the precise data connected with your experience of red is necessarily the same as the precise data connected with Mary’s experience of red.

Tournesol said:
So you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain -- because it would no longer make sense in a different context.
Nor would you need to in order to impart understanding to the computer. Experiential knowledge has nothing to do with understanding, remember?

Tournesol said:
Note that data is not information for precisely that reason -- information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred, if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.
Good point. You could in principle transfer the precise data corresponding to “Tournesol sees red” into Mary’s brain, but it does not follow that Mary’s brain will be able to make any sense of that data. But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway

moving finger said:
Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)
Tournesol said:
We can ask questions about how things look (about the "carriers" of information as opposed to
information itself), and a system with full semantics needs to understand those questions.
What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

Once again, experiential knowledge has nothing to do with understanding

Tournesol said:
No-one knows how to encode all the information. You don't.
moving finger said:
Oh really Tournesol. Whether MF knows how to do it or not is irrelevant.
Tournesol said:
Whether anyone else does is highly relevant:"No-one knows how to encode all the information".

With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Tournesol said:
If that were true, you could write a definition of the
experiential nature of "red". You can't, so it isn't.
Again very churlish, Tournesol.
Just because “MF cannot do it” does not lead to the conclusion “it cannot be done”

The subjective experiential nature of red is different for each agent. There IS no universal definition – the experience is subjective. Do I need to explain what subjective means?

And besides all this, experiential knowledge is not part of understanding so it doesn’t matter (in the context of producing a machine with understanding) if nobody ever succeeds in writing it down anyway.

Tournesol said:
I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.
It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Neither MF nor anyone else can accurately predict the weather, but there is no doubt in my mind that it is a deterministically chaotic process which is entirely physical.
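
As an added illustration (not MF's own) of what "deterministically chaotic" means, the sketch below iterates the logistic map, the textbook example: every step is fully determined, yet two starting points differing by one part in a billion soon bear no resemblance to each other.

```python
# Minimal sketch of deterministic chaos: the logistic map x -> r*x*(1-x).
# Every step is fully determined, yet nearby trajectories diverge exponentially,
# which is why a process can be entirely physical and still unpredictable.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000000)  # one starting condition
b = logistic_trajectory(0.400000001)  # differs by one part in ~10^9

for step in (0, 10, 20, 30, 40, 50):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# By around step 30 the two trajectories are completely different --
# determinism without practical predictability.
```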

MF
 
  • #159
moving finger said:
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is

If you do not see a house or red (or a red house), you will never attain a complete understanding of red, house or red house. Experiencing the visual stimulus that is caused by a house or the colour red is part of completely understanding the colour or the structure.

In fact, humans are able to experience red without seeing the colour. This is because humans are comprised of cells and every one of these cells reacts to colours in a photosensitive manner that releases hormones in the human body. Experiencing this hormonal release results in part of what I'd call the experiential understanding of the colour red. Similarly, the hormonal reaction to seeing a house would entail about 2 million years of instinctual, hormonal and basic human reactions to the concept of attaining shelter.

It is by way of these processes that humans are able to experience and understand red in a way that is thoroughly separate and distinguished from the programmed or auto-programmed data-storage and physics of a computer.

As I have already pointed out, when professional computer scientists describe digital processes they are, or should be, bound by terminological protocol to distinguish these processes and results from biological processes and results, to aggressively avoid confusion and the misdirected trust of the lay-public.



moving finger said:
Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.

Intelligence was defined by humans in the first place.

When it comes to digital computing we call it "artificial intelligence".


moving finger said:
(you have no way of knowing what red looks like to Mary).

Yes, isn't it amazing? However, we can tell what red looks like to any computer because we built the things and we set up their parameters of definitions. If we had built Mary, we'd know what red looked like to her. She does, however, have the ability to describe "red" to us using her unique "understanding" of the colour.


moving finger said:
If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand sematically just what is meant by the term x-rays without ever experiencing seeing x-rays.

I maintain that an incomplete "understanding" is not "understanding" per se. An incomplete understanding demonstrates a process that is working toward understanding. It may never be reached and there is no law that says it will be reached.




moving finger said:
Maybe so – but as you recall, experiential knowledge is not part of understanding so it doesn’t really matter.

The way I see it, all knowledge is experiential. I can't say this is true for computers because they do not "experience" as far as I know. I would maintain that understanding is fully dependent upon experience (which requires a form of consciousness).

moving finger said:
But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway

Why do you keep saying this?


moving finger said:
What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

See what I've written about the effect of red on hormones... (in biologically active agents)

moving finger said:
Once again, experiential knowledge has nothing to do with understanding

Repeating a false statement does not make it correct.



moving finger said:
With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Actually, the idea of a programmable computer stems from the mechanisms involved in "programming" a loom for weaving, a process dating from the early 1700s. Once a "card with holes in it" was introduced to facilitate speedy programming of the loom, the path was clear for IBM to intervene... some 100 years later.


moving finger said:
Again very churlish, Tournesol.
Just because “MF cannot do it” does not lead to the conclusion “it cannot be done”

The subjective experiential nature of red is different for each agent. There IS no universal definition – the experience is subjective. Do I need to explain what subjective means?

And besides all this, experiential knowledge is not part of understanding so it doesn’t matter (in the context of producing a machine with understanding) if nobody ever succeeds in writing it down anyway.


It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Neither MF nor anyone else can accurately predict the weather, but there is no doubt in my mind that it is a deterministically chaotic process which is entirey physical.

MF

These are weak arguments. The very fact that Mary and T are biological agents is enough to warrant the use of the word Understanding to describe their personal experiences of phenomena.

The only thing that warrants the consideration of using human terms and terminology to describe a machine's functions such as those in a computer is the fact that humans created computers. When we create something, we use our own impression of how we function to serve as a blueprint in our machines. However, this does not warrant confusing hordes of people with words that apply to subtle human interactions by applying them to the machines that a very small number of people have built.
 
  • #160
MF said:
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is.
"See" I believe has just been used as an arbitrary way in which to experience something but not the sole way in which to experience it as QC has pointed out that there are other fashions by which a person could gain experiencial knowledge either directly or indirectly with which to help them understand a concept. There are other senses other than sight.
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to it. No others whatsoever. How would you go about teaching this person what the colour "red" is?
 
  • #161
quantumcarl said:
If you do not see a house or red (or a red house), you will never attain a complete understanding of red, house or red house. Experiencing the visual stimulus that is caused by a house or the colour red is part of completely understanding the colour or the structure.

What do you “understand” about the “colour” of red simply by experiencing seeing red? You “know what red looks like for quantumcarl”, yes – but “knowing what red looks like for quantumcarl” is NOT “semantic understanding of red”. And “knowing what red looks like” tells you nothing about the “structure” of red (whatever that might mean).

“A picture paints a thousand words” – that is indeed a common expression in English.
All (ALL) of the “information” contained in any picture (visual image) can be reduced to a string of binary digits. A house is “defined” by the relational aspects of components such as door, windows, roof, etc. “What a house looks like” can be reduced to words, and also to mathematical language.
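
A small added sketch of that reduction claim (the 2x2 "image" below is invented purely for illustration, and is no one's actual example): every colour channel of every pixel is a number, and every number is a string of bits.

```python
# Sketch of the claim that all the information in a picture is a bit string.
# The tiny 2x2 "image" is invented for illustration.

image = [
    [(255, 0, 0), (0, 255, 0)],      # row 0: a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # row 1: a blue pixel, a white pixel
]

# Flatten every channel of every pixel into one binary string.
bits = "".join(
    format(channel, "08b")  # each 0-255 channel value becomes 8 bits
    for row in image
    for pixel in row
    for channel in pixel
)

print(len(bits), "bits:", bits[:24], "...")
# Given the dimensions, the image is fully recoverable from this string --
# nothing objective about the picture is left over.
```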

Granted, my 6-year-old son has a “picture dictionary” with nice images of houses inside. Why? Because a young child “takes in more information, and more easily” through pictures rather than through words. This is clearly the case with children.

But my own dictionary does not use any pictures or visual images in its definitions of words. Why? Because images (though sometimes useful, especially for young people) are not essential to convey the meaning of words (ie to convey semantic understanding).

Suppose that Mary claims to possess semantic understanding of the term “house”, but she has never seen a house. Presumably quantumcarl would say there is something missing from Mary’s semantic understanding of “house”.

What exactly is missing? What is it that Mary necessarily CANNOT understand about the term “house”, which she WOULD understand if only she could see a house? Would you care to tell us?

quantumcarl said:
In fact, humans are able to experience red without seeing the colour. This is because humans are comprised of cells and every one of these cells reacts to colours in a photosensitive manner that releases hormones in the human body. Experiencing this hormonal release results in part of what I'd call the experiential understanding of the colour red.
Does this “hormonal release” convey any semantic understanding to the human?
No.

quantumcarl said:
Similarly, the hormonal reaction to seeing a house would entail about 2 million years of instinctual, hormonal and basic human reactions to the concept of attaining shelter.
“Hormonal reaction” is not “semantic understanding”

quantumcarl said:
It is by way of these processes that humans are able to experience and understand red in a way that is thoroughly separate and distinguished from the programmed or auto-programmed data-storage and physics of a computer.
“Hormonal reaction” is not “semantic understanding”
quantumcarl said:
when professional computer scientists describe digital processes they are, or should be, bound by terminological protocol to distinguish these processes and results from biological processes and results, to aggressively avoid confusion and the misdirected trust of the lay-public.
With respect, I suggest it is the “lay-public” confusion between “knowing what a colour looks like” (ie subjective “hormonal reaction” to use your phrase) and “knowing what a colour IS” (ie semantic understanding of the term) which is responsible for your own confusion here. These are two very different types of “knowing”.

In science, we have a duty to avoid lay-person-type confusion and to distinguish very carefully between the subjective “experiential knowledge of X” (which has nothing to do with semantic understanding of X) and the objective “definitional understanding of X”, which has everything to do with semantic understanding of X.

quantumcarl said:
Intelligence was defined by humans in the first place.
All human words are defined by humans. It does not follow that all words must be defined anthropocentrically (unless we deliberately wish to create an anthropocentric bias in everything).

quantumcarl said:
we can tell what red looks like to any computer because we built the things and we set up their parameters of definitions.
With respect, if we create a computer which is able to “consciously and subjectively perceive the colour red” then we will have no way of knowing what that subjective experience is like for the computer, whether we “set up the parameters” or not.

Even if I know everything there is to know (objectively) about a bat, I can NEVER know “what it feels like” to be a bat, because “what it feels like” is purely subjective.

quantumcarl said:
If we had built Mary, we'd know what red looked like to her.
Why should this follow?
“Mary knowing what red looks like” is a subjective experience that is peculiar to Mary, nobody on the outside of Mary “knows what this is like for Mary”, only Mary does.

quantumcarl said:
She does, however, have the ability to describe "red" to us using her unique "understanding" of the colour.
What does red look like? Can you describe “what red looks like” to someone who can only see shades of grey?
No.
Why? Because your subjective experience of “red” is peculiar to you, it has no objective basis in the outside world, it cannot be described in objective terms which another person can understand.

quantumcarl said:
I maintain that an incomplete "understanding" is not "understanding" per se. An incomplete understanding demonstrates a process that is working toward understanding. It may never be reached and there is no law that says it will be reached.
Have you shown that Mary has incomplete understanding of the term “red” simply because she has never experienced seeing red?

What Mary does not understand about red
Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
Quantumcarl (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can quantumcarl provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
quantumcarl said:
The way I see it, all knowledge is experiential.
This simply betrays your anthropocentric bias again.
Objective knowledge is derived from information and the relational rules of that information (which in turn is also information). All information can be encoded into binary digits and “programmed” into an agent. Humans cannot be programmed (yet), therefore the only way they can acquire knowledge of or from the outside world is through their senses. But nevertheless I can still acquire a complete semantic understanding of the term “red” without ever seeing red.

quantumcarl said:
I can't say this is true for computers because they do not "experience" as far as I know. I would maintain that understanding is fully dependent upon experience (which requires a form of consciousness).
As I said, this simply shows your anthropocentric bias.

moving finger said:
But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway
quantumcarl said:
Why do you keep saying this?
Because it is true!
I have suggested a thought experiment (“What Mary does not understand about red” above) which would allow you to show (if you can) that experiential knowledge is a necessary part of understanding – can you do so?

quantumcarl said:
See what I've written about the effect of red on hormones... (in biologically active agents)
“Hormonal reaction” is not “semantic understanding”

quantumcarl said:
Repeating a false statement does not make it correct.
Can you show that experiential knowledge is necessary for understanding (rather than simply saying that it is)? Can you answer the “What Mary does not understand about red” example above?

quantumcarl said:
These are weak arguments.
With respect, a weak argument is better than none at all.
My position is that I can “semantically understand” all there is to know about red without ever seeing red, and there has been no rational counter-argument provided!
All quantumcarl and Tournesol have been able to do is to assert the equivalent of “experiential knowledge is required for semantic understanding” – but this has NOT been shown to be the case! Where is your evidence that this is the case? Can you answer the “What Mary does not understand about red” example above?

quantumcarl said:
In the absence of any evidence, the very fact that Mary and T are biological agents is enough to warrant the use of the word Understanding to describe their personal experiences of phenomena.
This again shows a lay-person’s confused use of the word “know”. To “know what red looks like” is subjective experiential knowledge, it has nothing to do with semantically understanding what is meant by the term red.
If you genuinely believe that Mary needs to see red in order to understand what is meant by red, then please reply to the “what Mary does not understand about red” argument above
quantumcarl said:
The only thing that warrants the consideration of using human terms and terminology to describe a machine's functions such as those in a computer is the fact that humans created computers. When we create something, we use our own impression of how we function to serve as a blueprint in our machines. However, this does not warrant confusing hordes of people with words that apply to subtle human interactions by applying them to the machines that a very small number of people have built.
A more responsible and objective approach to scientific understanding of “understanding” would be to avoid anthropocentric bias at all costs.

MF
 
  • #162
TheStatutoryApe said:
"See" I believe has just been used as an arbitrary way in which to experience something but not the sole way in which to experience it as QC has pointed out that there are other fashions by which a person could gain experiencial knowledge either directly or indirectly with which to help them understand a concept. There are other senses other than sight.
Agreed. The role of the senses as far as human semantic understanding is concerned is to convey information and knowledge about the world – the senses are merely conduits for information and knowledge transfer to the brain. If you like, they are the means by which we “program” our brains. But once the brain is programmed and we “understand” by virtue of the information and knowledge that we possess, then we do not need the senses in order to “continue understanding”.
TheStatutoryApe said:
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to it. No others whatsoever. How would you go about teaching this person what the colour "red" is?
That is a very good question – and it gets right to the heart of the matter, hence I will answer it fully so that we all might understand what is going on.

The confusion between “experiential knowledge” and “semantic understanding” arises because there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”

One meaning (based on subjective experiential knowledge of red) would be better expressed “what does the colour red look like?”. Let us call this question A.

The other meaning (the objective semantic meaning of red) would be better expressed as “what is the semantic meaning of the term red?”. Let us call this question B.

Now, TheStatutoryApe, which question have you asked above? Is it A or B? I will answer both.

A - “what does the colour red look like?”
What the colour red looks like is a purely subjective experiential brain state. I have no idea what the colour red looks like for TheStatutoryApe, I only know what it looks like for MF. I cannot describe in objective terms what this colour looks like. Can you? Can anyone? The best I can do is to point to a red object and to say “there, if you look at that object then you will see what the colour red looks like”, but that STILL does not mean that the colour red “looks the same” for TheStatutoryApe as it does for MF. And seeing the colour red is NOT necessary in order to convey any semantic understanding of the term “red”.

B - “what is the semantic meaning of the term red?”
The semantic meaning of the term red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. I do not need to be able to see red in order to understand from this definition that “this is what red is”.
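
An added sketch of this "question B" sense of meaning (the wavelength bands below are rough textbook figures, not something MF specified): a purely definitional rule for colour terms that an agent with no visual apparatus whatsoever could still apply correctly.

```python
# Sketch of "question B": a purely definitional grasp of colour terms.
# The wavelength bands are rough textbook values, used only for illustration.

COLOUR_BANDS_NM = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def colour_term(wavelength_nm):
    """Name visible light by definition alone -- no 'seeing' is involved."""
    for low, high, name in COLOUR_BANDS_NM:
        if low <= wavelength_nm < high:
            return name
    return "outside the visible range"

print(colour_term(650))  # -> "red": the definition applied correctly
print(colour_term(1))    # -> "outside the visible range" (an x-ray wavelength)
```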

Thus, it all depends on what your question is.

If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

MF
 
  • #163
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc.

Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

That is a silly objection. Anyone I can actually speak to obviously has a
functioning brain. I am not going on their external behaviour alone; I have
an insight into how their behaviour is implemented, which is missing in the CR
and the TT.

If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?

The TT is more doubtful in the case of a machine than that of a human.

The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

I have already suggested a better test. You were not very receptive.


You are missing another point as well: the point is whether syntax is
sufficient for semantics.
I’m not missing that point at all. Searle assumes that a computer would not be able to understand semantics – I disagree with him. It is not a question of “syntax being sufficient for semantics”, I have never asserted that “syntax is sufficient for semantics” or that “syntax somehow gives rise to semantics”.
That the AI argument is necessarily based on the premise “syntax gives rise to semantics” is a fallacy that Searle has promulgated, and which you have swallowed.


It would help if you spelt out what, IYO, the (strong) AI argument does say.

Throughout this debate you seem to be assuming that there is some set of
rules that are sufficient for semantics. Above you reject the idea that
they are the same rules as syntax. Very well: let us call the thesis
you are promoting
"The Symbol Manipulation According to Rules Technique is sufficient for semantics thesis"
or
"The SMART is sufficient for semantics thesis"
Where SMART is any kind of Symbol Manipulation According to Rules.

Note that this distinction makes no real difference to the CR.
Searle uses "syntax" and "symbol manipulation" interchangeably because it
does not strike him that semantics is or could be entirely rule-based. In fact,
it has never struck anybody except yourself, since there are so many objections
to it.
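
As an added aside (the rule book below is invented, and this is no one's actual model), a Chinese-Room-style lookup table is SMART in its purest form: symbols in, symbols out, according to rules, with no grounding anywhere in the process.

```python
# A toy SMART system: Symbol Manipulation According to Rules, and nothing else.
# The rule book is invented for illustration; whoever (or whatever) applies it
# need not attach any meaning to the symbols on either side.

RULE_BOOK = {
    "你好": "你好！",        # greeting in -> greeting out
    "你好吗？": "我很好。",  # "How are you?" in -> "I am fine." out
}

def chinese_room(input_symbols):
    """Look the input up in the rule book and return the prescribed output."""
    return RULE_BOOK.get(input_symbols, "对不起，我不明白。")

print(chinese_room("你好吗？"))  # fluent-looking output, produced by pure lookup
```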

What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and consciousness.
Once again – place me in a state of sensory deprivation, and I still understand semantics.
Once again, BECAUSE YOU HAVE ALREADY ACQUIRED SEMANTICS.

The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

How is that relevant to the CR ? Are you saying that the CR can *acquire*
semantics despite its lack of interaction and sensory contact with an
environment ? Are you saying you can "download" the relevant information
from a human -- although you have already conceded that information may
fail to make sense when transplanted from one context to another ?


However, even if that is true, it does not mean the computer's
possession of semantics is solely due to syntactic abilities,
and Searle's point is still true.

I am not arguing that “syntax gives rise to semantics”, which seems to be Searle’s objection. I am arguing that a computer can understand both syntax and semantics.

By virtue of SMART ?


Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil
engineering is its lack of a voicebox ?
I am saying that “possession of understanding alone” is not sufficient to be able to also “report understanding” – to report understanding the agent also needs to be able “to report”.

By standard semantics, possession of understanding is *necessary* to report.


Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"


There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.

But semantics is not derived from syntax. Why do you think it needs to be? Because Searle wrongly accuses the AI argument of assuming this?

To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics. You have also yet to explain what
you consider the "correct" AI argument to be.


Human agents are likely to do so using their embeddedness in the world -- "*this* [points] is what I mean by zimmoid"--
but the CR does not have that capacity.

There are many other ways of conveying and learning both meaning and knowledge apart from “pointing and showing”.

*Some* direct demonstrations can be deferred in *some* cases. It is not clear
whether they can all be removed completely. The alternative would be a system
of essentially circular definitions -- like
"Gift: present"
"Present: gift"
but more complex.
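
That circularity worry can be made concrete with an added sketch (the toy glossary below is invented): in a dictionary that defines words only in terms of other words, every definition chain eventually loops back on itself -- the grounding problem in miniature.

```python
# Tournesol's point as code: a purely word-to-word dictionary is circular.
# Every chain of definitions eventually revisits a word. The glossary is a
# toy invented for illustration.

GLOSSARY = {
    "gift": "present",
    "present": "gift",
    "red": "scarlet",
    "scarlet": "crimson",
    "crimson": "red",
}

def definition_chain(word):
    """Follow definitions until a word repeats; the repeat closes the loop."""
    seen, chain = set(), []
    while word not in seen:
        seen.add(word)
        chain.append(word)
        word = GLOSSARY[word]
    chain.append(word)
    return chain

print(" -> ".join(definition_chain("gift")))  # gift -> present -> gift
print(" -> ".join(definition_chain("red")))   # red -> scarlet -> crimson -> red
```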



No, I’m saying that any two agents may differ in their semantic understanding, inlcuding human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at all
?


The rules of semantics, along with the rules of syntax, are learned (or programmed). Any agent, including humans, can incorporate errors in learning (or programming) – making one error in syntax or semantics, or disagreeing on the particular semantics in one instance, does not show that the agent “does not understand semantics”.

The fact that errors in semantics may be undetectable to a TT implies that the absence of
semantics may be undetectable to a TT.


Once again - I have never said that semantics is derived from syntax – you have said this!
Both syntax and semantics follow rules, which can be learned, but it does not follow that “one is derived from the other”. Syntax and semantics are quite different concepts. A “programmable computer” and a “pocket calculator” are both calculating machines, but one cannot necessarily construct a programmable computer by connecting together multiple pocket calculators.

The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.


Truths-by-definition may not amount to empirical truths, but that does not mean they
are empirical falsehoods -- or do you think there are no unmarried bachelors?

We may agree on the definitions of some words, but it does not follow that we agree on the definitions of all words.

If most people agree on the definitions of the words in a sentence, what
would stop that sentence being an analytic truth, if it is analytic?


How can you establish a fact without definitions?

Ask yourself: what are the essential qualities of understanding that allow me to say “this agent understands”? Avoid prejudicial definitions, and avoid anthropocentrism.

You haven't shown how to do all that without any pre-existing definitions.

There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities.

How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?


Experiential qualities are agent-dependent (ie subjective). Tournesol’s experiential quality of seeing red is peculiar to Tournesol (subjective) - it is meaningless to any other agent.

Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.

I never said that “red has no meaning”. But the “experiential knowledge of red” is purely subjective.
Tournesol – Really, if you wish to continue misquoting me there is not much point in continuing this discussion.


It is not a question of misquoting you, it is a question of guessing how your
comments relate to the CR. Why should it matter that "the “experiential knowledge of red” is purely subjective"?
Are you supposing that subjective knowledge doesn't matter for semantics?

And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism.

Why should it be the case that the precise “data content” of MF seeing red should necessarily be the same as the “data content” of “Tournesol seeing red”? Both are subjective states, there is no a priori reason why they should be identical.

They should be broadly similar if our brains are broadly similar.
They should be precisely similar if our brains are precisely similar.
They should *not* be radically different if our brains are similar -- that
would be a violation of the physicalist "same cause, same effect" principle.


Need I point
out the eccentricity of appealing to anti-physicalism to support AI?

Need I point out that I have never appealed to such a thing?

The idea that similar brains can have radically different qualia is
non-physicalism in my and most people's book.

Again you are either “making things up to suit your arguments”, or you are misquoting me.

Or you are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.
 
  • #164
As to your distinction between knowledge and understanding, I don't think it is
sustainable. To know what a word means is to understand it.
“experiential knowledge of red” has nothing to do with “knowing what the word red means”. I know what the word “x-ray” means yet I have no experiential knowledge of x-rays.

You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.

What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Aagh! Firstly that is a classically anti-physicalist argument.
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.


You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.

Only “partial understanding”?
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

What they look like, experientially.


You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge
is required for understanding. Since experience is needed for
experiential knowledge, that means experience is required for
understanding.

I have said that knowledge is necessary for understanding, but it does not follow from this that all knowledge conveys understanding.
Experiential knowledge is 100% subjective, it does not convey any understanding at all.

That does not follow. Clearly experiential semantics conveys understanding of
experience.


We could copy the data across -- as in standard, non-AI computing-- but would that be sufficient for
meaning and understanding ? If a system has language, you can use that to convey
3rd-person non-experiential knowledge. But how do you bootstrap that process -- arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment.

And once learned, all that data and knowledge is contained within the brain – there is no need for continued interaction with the environment in order for the agent to continue understanding. Thus interaction with the environment is simply one possible way of “programming” the data and knowledge that is required for understanding. It does not follow that this is the only way to program data and knowledge.


What is the alternative? We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?


you cannot safely
conclude that you have the right semantics just because you have the right syntax.
Wrong assumption again. I have never suggested that syntax gives rise to semantics.

You haven't supplied any other way the CR can acquire semantics.


An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
In a poorly constructed Turing Test it might have, yes. Just as any human also might have. That is why we need to look at (a) trying to get a better understanding of just what understanding is (the qualities of understanding) without resorting to “defining our way” out of the problem and (b) improving the Turing Test.

I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP. But that does not show
Searle is wrong; solving the HP is showing how mind emerges
from physics; it might tell you that artificially intelligent
agents need certain material substrates (or that certain semantics --
the semantics of feelings and experiences -- needs a certain physical
embeddedness). The resulting AI would therefore not have
its intelligence/consciousness/semantics purely by virtue
of SMART.


Would a putative AI have
quite a different form of language to a human (the Lion problem) ? Then Searle
has made his case.

Do you mean spoken language? Why would it necessarily be any different to an existing human language?

Because human languages contain vocabulary relating to human senses.


Why would it necessarily be the same? What bearing does this have on the agent’s ability to understand?

If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?


Would it have the same understanding, but not achieved solely
by virtue of syntax? Again, Searle has made his case.

Again, Searle’s case and yours is based on a false assumption – that AI posits syntax gives rise to semantics!

"Achieved solely by SMART" has the same problems as "achieved solely by syntax".



What red looks like to Mary is not necessarily the same as what red looks like to Tournesol

Naturalistically , it should be.

You have no way of knowing whether it is or not

Do I have a way of knowing whether physicalism is true?


Your argument continues to betray a peculiar anthropocentric perspective.

Strong AI is about duplicating human intelligence -- it should be anthropocentric.

Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.


It does if we are to avoid a situation where "is this a computer" is a matter
of idiosyncratic definition. We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.

What makes you think that the experiential quality of red is the same to you as it is to me?

Physicalism. Same cause, same effect.

There is reason to doubt, because you have no way of knowing if the effect is indeed the same (you have no way of knowing what red looks like to Mary).

Well, that's the anti-physicalist's argument.


How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever
to draw on?

If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

The question was the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X-ray" on analogy with "light". How
can you infer "qualia" without any analogies?


If no-one can write down a definition of the experiential nature of "red" no-one can encode it into a programme.

Maybe so – but as you recall, experiential knowledge is not part of understanding so it doesn’t really matter

Tu quoque.

It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.

I have simply cautioned you against the opposite unwarranted assumption – that the precise data connected with your experience of red is necessarily the same as the precise data connected with Mary’s experience of red.

It is not necessarily the same, it is naturalistically the same. For all your
adherence to the central dogma of anti-physicalism, inverted spectra, you
claim to be a physicalist.


So you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain -- because it would no longer make sense in a different context.

Nor would you need to in order to impart understanding to the computer. Experiential knowledge has nothing to do with understanding, remember?

Why should I "remember" something that relates to a definition of
"understanding" which *I* don't accept, as *you* point out.

Anyway, experience has to do with the semantics of experiential language.

Note that data is not information for precisely that reason -- information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred, if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.

Good point. You could in principle transfer the precise data corresponding to “Tournesol sees red” into Mary’s brain, but it does not follow that Mary’s brain will be able to make any sense of that data. But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway

It is not part of your definition of understanding -- how remarkably
convenient.

Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)

We can ask questions about how things look (about the "carriers" of information as opposed to
information itself), and a system with full semantics needs to understand those questions.

What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

Ask a blind person what red looks like.

No-one knows how to encode all the information. You don't.

Oh really, Tournesol. Whether MF knows how to do it or not is irrelevant.

Whether anyone else does is highly relevant: "No-one knows how to encode all the information".

With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Is that any easier than solving the Hard problem, or is it part of the Hard
problem?


I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.

It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Yes there is: all brains are broadly similar anatomically. If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker.)

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Summary:-

You might be able to give the CR full semantics by solving the HP; but that
leads to a version of AI that Searle does not disagree with.

You might wriggle off the hook of full semantics by stipulating that
experience has nothing to do with understanding; but this is a style of
argument you dislike when others use it.

You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
 
  • #165
moving finger said:
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.
Tournesol said:
That is a silly objection.
It is not an objection, it is an observation. Do you dispute it?
Tournesol said:
Anyone I can actually speak to obviously has a
functioning brain.
Tournesol, your arguments are becoming very sloppy.
I can (if I wish) “speak to” my table – does that mean my table has a functioning brain?
Tournesol said:
I am not going on their external behaviour alone; I have
an insight into how their behaviour is implemented, which is missing in the CR
and the TT.
What “insight” do you have which is somehow independent of observing their behaviour?
How would you know “by insight” that a person in a coma cannot understand you, unless you put it to the test?
How would you know “by insight” that a 3-year old child cannot understand you, unless you put it to the test?
moving finger said:
The solution to this problem is to try and develop a better test, not to “define our way out of the problem”
Tournesol said:
I have already suggested a better test. You were not very receptive.
Sorry, I missed that one. Where was it?
Tournesol said:
It would help if you spelt out what, IYO, the (strong) AI argument does say.
I am not here to defend the AI argument, strong or otherwise.
I am here to support my own position, which is that machines are in principle capable of possessing understanding, both syntactic and semantic.
Tournesol said:
you seem to be assuming that there is some set of
rules that are sufficient for semantics.
Agreed
Tournesol said:
let us call the thesis
you are promoting
"The Symbol Manipulation According to Rules Technique is sufficient for semantics thesis"
or
"The SMART is sufficient for semantics thesis"
Where SMART is any kind of Symbol Manipulation According to Rules.
Not “any kind” of symbol manipulation – a particular symbol manipulation
Tournesol said:
Note, that this distinction makes no real difference to the CR.
Can you show this, or are you simply asserting it?
Tournesol said:
Searle uses "syntax" and "symbol manipulation" interchangeably because it
does not strike him that semantics is or could be entirely rule-based.
That’s his opinion. I do not agree
Tournesol said:
In fact,
it has never struck anybody except yourself, since there are so many objections
to it.
I do not think this is true. Even if it were true, what relevance does this have to the argument?
moving finger said:
The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.
Tournesol said:
How is that relevant to the CR? Are you saying that the CR can *acquire*
semantics despite its lack of interaction and sensory contact with an
environment?
I am saying that the information and knowledge to understand semantics can be encoded into the CR, and once encoded it does not need continued contact with the outside world in order to understand
Tournesol said:
Are you saying you can "download" the relevant information
from a human -- although you have already conceded that information may
fail to make sense when transplanted from one context to another?
Where did I say that the information needs to be downloaded from a human?
Are you perhaps suggesting that semantic understanding can only be transferred from a human?
The only “information” which I claim would fail to make sense when transplanted from one agent to another is subjective experiential information – which as you know by now is not necessary for semantic understanding.
Tournesol said:
By virtue of SMART?
By virtue of the fact that semantic understanding is rule-based
Tournesol said:
By standard semantics, possession of understanding is *necessary* to report.
The question is not whether “the ability to report requires understanding” but whether “understanding requires the ability to report”
If you place me in situation where I can no longer report what I am thinking (ie remove my ability to speak and write etc), does it follow that I suddenly cease to understand? Of course not.
moving finger said:
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
Tournesol said:
1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"
Mary can semantically understand the statement “what red looks like” without knowing what red looks like. The statement means literally “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “what red looks like”.
Mary can semantically understand the statement “the experiential qualities of red which cannot be written down” without knowing the experiential qualities of red. The statement means literally “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “the experiential qualities of red which cannot be written down”.
Thus I have shown that Mary can indeed semantically understand both your examples.
Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?
What red looks like is nothing to do with semantic understanding of the term red – it is simply “what red looks like”. What red looks like to Tournesol may be very different to what red looks like to MF, but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of what red looks like.
The experiential qualities of red are nothing to do with semantic understanding of the term red – these are simply “the experiential qualities of red”. The experiential qualities of red for Tournesol may be very different to The experiential qualities of red for MF, but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of the experiential qualities of red.
The confusion between “experiential qualities” and “semantic understanding” arises because there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”
One meaning (based on subjective experiential knowledge of red) would be expressed “what does the colour red look like?”.
The other meaning (the objective semantic meaning of red) would be expressed as “what is the semantic meaning of the term red?”.
This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.
Tournesol said:
To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.
You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.
Tournesol said:
You have also yet to explain what
you consider the "correct" AI argument to be.
Answered above
moving finger said:
No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.
Tournesol said:
What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at all?
Very relevant. If the CR passes most of the Turing test, but fails to understand one or two words because those words are simply defined differently between the CR and the human interrogator, that in itself is not sufficient to conclude “the CR does not understand”
Tournesol said:
The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.
“Grounded” in what in your opinion? Experiential knowledge?
What experiential knowledge do I necessarily need to have in order to have semantic understanding of the term “house”?
Tournesol said:
If most people agree on the definitions of the words in a sentence, what
would stop that sentence being an analytic truth, if it is analytic?
If X and Y agree on the definitions of words in a statement then they may also agree it is analytic. What relevance does this have?
Tournesol said:
There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities.
Tournesol said:
How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?
You are not reading my replies, are you? I never said there should be no definitions, I said there is a balance to be struck. Once again you seem to be making things up to suit your argument.
Tournesol said:
Why should it matter that "the “experiential knowledge of red” is purely subjective"?
Are you supposing that subjective knowledge doesn't matter for semantics?
I am suggesting that subjective experiential knowledge is not necessary for semantic understanding. How many times do you want me to repeat that?
Tournesol said:
They should be broadly similar if our brains are broadly similar.
“Broadly similar” is not “identical”.
A horse is broadly similar to a donkey, but they are not the same animal.
Tournesol said:
you are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.
You are perhaps trying to read things into my arguments that are not there, to support your own unsupported argument. When I say “there is no a priori reason why they should be identical” this means exactly what it says. With respect if we are to continue a meaningful discussion I suggest you start reading what I am writing, instead of making up what you would prefer me to write.
Tournesol said:
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
I certainly do not. Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know? Whether or not I have known the experiential quality of seeing red makes absolutely no difference to this semantic understanding.
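
MF's definitional reading of red can be made concrete with a minimal Python sketch; the wavelength bands below are rough illustrative values, not an authoritative standard. An agent with no colour experience at all can still apply the rule:

[code]
# Rough, illustrative wavelength bands in nanometres (not an official standard).
colour_bands = [
    (620, 750, "red"),
    (495, 570, "green"),
    (450, 495, "blue"),
]

def colour_term(wavelength_nm):
    """Apply the definitional rule; no visual experience is required."""
    for low, high, term in colour_bands:
        if low <= wavelength_nm < high:
            return term
    return "unknown"

print(colour_term(650))  # 'red' -- by definition, around 650 nm
[/code]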
moving finger said:
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.
Tournesol said:
that is a classically anti-physicalist argument.
It may be a true argument, but it is not necessarily anti-physicalist.
Tournesol said:
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.
The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.
moving finger said:
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?
Tournesol said:
What they look like, experientially.
What they look like is an experiential quality, it is not semantic understanding.
Perhaps you would claim that I also do not have a full understanding of red because I have not tasted red? And what about smelling red?
Tournesol said:
Clearly experiential semantics conveys understanding of
experience.
I can semantically understand what is meant by the term “experience” without actually “having” that experience.
Tournesol said:
We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
The CR already contains information in the form of the rulebook
Tournesol said:
You haven't supplied any other way the CR can acquire semantics.
Semantics is rule-based; why should the CR not possess the rules for semantic understanding?
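
As an illustration of the position being defended here, a minimal Python sketch with an invented two-entry rulebook; whether any amount of such rule-following amounts to semantics is exactly what is in dispute:

[code]
# A hypothetical Chinese Room rulebook: input symbols map to output
# symbols by rule alone; the operator need not know what any symbol means.
rulebook = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你是谁": "我是一个房间",  # "Who are you?" -> "I am a room"
}

def chinese_room(symbols):
    # Pure Symbol Manipulation According to Rules (SMART).
    return rulebook.get(symbols, "请再说一遍")  # fallback: "Please say that again"

print(chinese_room("你好吗"))  # prints 我很好
[/code]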
Tournesol said:
I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
Please define the Hard Problem.
Tournesol said:
Because human languages contain vocabulary relating to human senses.
And I can have complete semantic understanding of the term red, without ever seeing red.
Tournesol said:
If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?
By reasoning and experimental test.
Tournesol said:
Achieved solely by SMART has the same problems as achieved solely by syntax.
You have not shown that semantic understanding requires anything other than a knowledge and an understanding of the relevant semantic rules
Tournesol said:
Do I have a way of knowing whether physicalism is true?
You don’t. And I understand that many people do not believe it is true.
Tournesol said:
We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.
And my position is that I believe arbitrary definitions such as “understanding requires consciousness” and “understanding requires experiential knowledge” are too anthropocentrically biased and cannot be defended rationally
Tournesol said:
Well, that's the anti-physicalist's argument.
It’s my argument. I’m not into labelling people or putting them into boxes.
I see no reason why X’s subjective experience of seeing red should be the same as Y’s
Tournesol said:
The question was the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X-ray" on analogy with "light". How
can you infer "qualia" without any analogies?
By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding of the experiential quality of seeing red?
Tournesol said:
you
claim to be a physicalist.
To my knowledge I have made no such claim in this thread
Tournesol said:
Anyway, experience has to do with the semantics of experiential language.
Semantic understanding has nothing necessarily to do with experiential qualities, as I have shown several times above
Tournesol said:
It is not part of your definition of understanding -- how remarkably
convenient.
And remarkably convenient that it is part of yours?
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary thought experiment, for instance.
Tournesol said:
Ask a blind person what red looks like.
He/she has no idea what red looks like, but it does not follow from this that he/she does not have semantic understanding of the term red, which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. Experiential knowledge is not part of this semantic understanding.
Tournesol said:
Is that any easier than solving the Hard problem, or is it part of the Hard
problem?
Please define what you understand to be the Hard Problem
Tournesol said:
Yes there is: all brains are broadly similar anatomically
As before, “broadly similar” is not synonymous with “identical”.
Tournesol said:
If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker.)
Genetically identical twins may behave similarly, but not necessarily identically. Genetic makeup is only one factor in neurophysiology.
Tournesol said:
You might be able to give the CR full semantics by solving the HP; but that
leads to a version of AI that Searle does not disagree with.
Please define what you mean by the Hard Problem
Tournesol said:
this is a style of
argument you dislike when others use it.
It is not a question of “disliking”.
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”) then it is worthy of discussion. I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.
Tournesol said:
You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
What “explanatory gap” is this?
MF
 
Last edited:
  • #166
moving finger said:
What do you “understand” about the “colour” of red simply by experiencing seeing red? You “know what red looks like for quantumcarl”, yes – but “knowing what red looks like for quantumcarl” is NOT “semantic understanding of red”. And “knowing what red looks like” tells you nothing about the “structure” of red (whatever that might mean).

“A picture paints a thousand words” – that is indeed a common expression in English.
All (ALL) of the “information” contained in any picture (visual image) can be reduced to a string of binary digits. A house is “defined” by the relational aspects of components such as door, windows, roof, etc. “What a house looks like” can be reduced to words, and also to mathematical language.
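
A minimal Python sketch of that reduction, using an invented 2x2 greyscale "image": the whole picture survives as a single string of binary digits.

[code]
# A tiny 2x2 greyscale image: each pixel is an intensity from 0 to 255.
image = [[0, 255],
         [128, 64]]

# Flatten every pixel into 8 binary digits; nothing is lost.
bits = "".join(format(pixel, "08b") for row in image for pixel in row)
print(bits)       # 00000000111111111000000001000000
print(len(bits))  # 32 bits encode the whole picture
[/code]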

Granted that my 6-year old son has a “picture dictionary” with nice images of houses inside. Why? Because a young child “takes in more information, and more easily” through pictures rather than through words. This is clearly the case with children.

But my own dictionary does not use any pictures or visual images in its definitions of words. Why? Because images (though sometimes useful, especially for young people) are not essential to convey the meaning of words (ie to convey semantic understanding).

Suppose that Mary claims to possess semantic understanding of the term “house”, but she has never seen a house. Presumably quantumcarl would say there is something missing from Mary’s semantic understanding of “house”.

What exactly is missing? What is it that Mary necessarily CANNOT understand about the term “house”, which she WOULD understand if only she could see a house? Would you care to tell us?

An incomplete understanding of a house means not experiencing the house as a whole. It means not receiving the entire gamut of information with regard to a house. Mary does not see, smell or understand the plumbing of the house... therefore, her understanding of a house is incomplete... and this does not constitute an understanding of a house. Mary has not seen the blueprints and has not inspected the foundations. Mary has not walked into the house and inspected or had an inspection done for insects or pests. These are things that a house can contain. She perhaps doesn't understand that and so, her understanding is incomplete and still in the process of being formed. Mary does not understand these implications with regard to the term "house".

moving finger said:
Does this “hormonal release” convey any semantic understanding to the human?
No.

Yes. The hormonal release demonstrates how red makes the human feel in the presence of red. This is true in every human although not widely known. Note: eg. red light districts. Note: eg. (the popular saying in advertising) "red sells".


moving finger said:
“Hormonal reaction” is not “semantic understanding”

Hormonal reaction is an experience and therefore constitutes knowledge. As noted above it is understood by many professionals as "common knowledge" or a "semantic understanding".


moving finger said:
“Hormonal reaction” is not “semantic understanding”

You're repeating yourself again.



moving finger said:
With respect, I suggest it is the “lay-public” confusion between “knowing what a colour looks like” (ie subjective “hormonal reaction” to use your phrase) and “knowing what a colour IS” (ie semantic understanding of the term) which is responsible for your own confusion here. These are two very different types of “knowing”.

With continued respect, part of knowing what a colour is includes experiencing its effects. When a colour stimulates the cones of the retina, this experience helps one toward an understanding of the physics of colour. When you read an equation that explains the physical properties of colour... the experience helps one toward an understanding of colour.

moving finger said:
In science, we have a duty to avoid lay-person-type confusion and to distinguish very carefully between the subjective “experiential knowledge of X” (which has nothing to do with semantic understanding of X) and the objective “definitional understanding of X”, which has everything to do with semantic understanding of X.


All human words are defined by humans. It does not follow that all words must be defined anthropocentrically (unless we deliberately wish to create an anthropocentric bias in everything).

We don't have to "create" the anthropocentric bias. As humans we cannot escape the bias. However, we can avoid using terms that only apply to human physiology, function and process, such as the word "understanding". If you can build a computer that "loves" and "understands" you... I'd really like to see that.


moving finger said:
With respect, if we create a computer which is able to “consciously and subjectively perceive the colour red” then we will have no way of knowing what that subjective experience is like for the computer, whether we “set up the parameters” or not.

I agree. In fact, there are more things that we don't know or understand than there are things we do know or understand. That's why caution is imperative in every endeavour, especially in the sciences that directly affect humankind, such as computer sciences.

moving finger said:
Even if I know everything there is to know (objectively) about a bat, I can NEVER know “what it feels like” to be a bat, because “what it feels like” is purely subjective.

That's right. That's why you will never have a complete understanding of a bat (and therefore will not understand a bat). That's why when you asked me if the individual components of the brain, ie. neurons, "understand" the tasks they perform and their implications... I said... I don't know, I'm not a neuron.


moving finger said:
“Mary knowing what red looks like” is a subjective experience that is peculiar to Mary, nobody on the outside of Mary “knows what this is like for Mary”, only Mary does.

Yes, that's what I'm getting at. That is one of the many determiners of understanding. It is individual, relative and dependent upon the person understanding. I have given examples of "reaching an understanding" between more than one person, however.


moving finger said:
What does red look like? Can you describe “what red looks” like to someone who can only see shades of grey?
No.
Why? Because your subjective experience of “red” is peculiar to you, it has no objective basis in the outside world, it cannot be described in objective terms which another person can understand.

Yes it can... but it's not a scientific language people use... it's also not binary language. One uses words to describe red like "warm", "to the yellow", "a little blue", "makes me horny"... "makes me want to buy"... and so on... There are some studies that have yielded standard results with regard to the qualities of red... but... you'll just have to believe me because we're almost out of time.


moving finger said:
Have you shown that Mary has incomplete understanding of the term “red” simply because she has never experienced seeing red?

What Mary does not understand about red
Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
Quantumcarl (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can quantumcarl provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Mary said sitting in the red room made her feel sexy, horny and warm all over. This room also triggered a memory of her father, who happened to be Satan, and it reminded her of the places that glowed deep like the reddened coals of a fire where her dad would take her on Christmas Eve.



moving finger said:
This simply betrays your anthropocentric bias again.
Objective knowledge is derived from information and the relational rules of that information (which in turn is also information). All information can be encoded into binary digits and “programmed” into an agent. Humans cannot be programmed (yet), therefore the only way they can acquire knowledge of or from the outside world is through their senses. But nevertheless I can still acquire a complete semantic understanding of the term “red” without ever seeing red.

I disagree. Please see my above statements.


moving finger said:
As I said, this simply shows your anthropocentric bias.

Oh my god, is it showing!... yes I admit it... I'm a f@cking human!


moving finger said:
I have suggested a thought experiment (“What Mary does not understand about red” above) which would allow you to show (if you can) that experiential knowledge is a necessary part of understanding – can you do so?

Done and done(r). All knowledge is experiential. It must be experienced before it can be stored as knowledge.

Let me tell you how things are shaping up in my mind with regard to the terminology of the CR experiment.

Computers store data.

Humans Understand.

-----


Computers are programmed.

Humans experience.

That's my take on it.

The rest of your argument seems to repeat most of the above points. Got to run. This is most enlightening because the more you try to discredit the state of understanding as a specifically human trait... the more you expose how it really is. Thank you.


moving finger said:
With respect, a weak argument is better than none at all.

Yes, and I'm sorry I threw that in. Just getting cocky I suppose. My respect. Cheers.
MF
 
  • #167
quantumcarl said:
An incomplete understanding of a house means not experiencing the house as a whole. It means not receiving the entire gamut of information with regard to a house. Mary does not see, smell or understand the plumbing of the house... therefore, her understanding of a house is incomplete... and this does not constitute an understanding of a house.

Ahhhh, I see now! Perhaps Mary actually needs to “be” the house to really understand it? Mary cannot really understand what a house is unless she is part of the house. Yes, I see what you mean…… :smile:

quantumcarl said:
Mary has not seen the blueprints and has not inspected the foundations. Mary has not walked into the house and inspected or had an inspection done for insects or pests. These are things that a house can contain. She perhaps doesn't understand that and so, her understanding is incomplete and still in the process of being formed. Mary does not understand these implications with regard to the term "house".

Yes. And it follows that Mary can never truly understand what a house “IS” unless she herself is part of the house….. built into the foundations…. Cemented into the brickwork….. why didn’t I see that before? :biggrin:
moving finger said:
Does this “hormonal release” convey any semantic understanding to the human?
quantumcarl said:
Yes. The hormonal release demonstrates how red makes the human feel in the presence of red. This is true in every human although not widely known. Note: eg. red light districts. Note: eg. (the popular saying in advertising) "red sells".
“how red makes the human feel” – this is semantic understanding to you?
Or is it perhaps an emotional response to a stimulus?

With respect - I think you and I are on different planets.

Bye!

MF
 
Last edited:
  • #168
moving finger said:
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.
Tisthammerw said:
But normally that is what is implied when it comes to philosophical discussions e.g. “I do not agree with ethical relativism.”
If X does not agree with Y’s opinion it does not follow that X thinks Y is wrong. They may simply have different opinions. X can have different opinions to Y and still respect Y’s right to hold his/her opinion. Simple as that.
I can see that perhaps some may not respect the right of others to hold different opinions, but I'm not one of them.

I here shall trim most of the parts on “analytic vs synthetic” because I consider we have done that to death, we are simply repeating things over and over again.
moving finger said:
As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?
Tisthammerw said:
Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?
If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried. I cannot tell you since I do not know what Tisthammerw’s “different definition of bachelor” actually is.

Whether a statement is “analytic or not” depends on the definitions of the words used in the statement.
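
A minimal sketch of that dependence, with invented toy definitions: the same sentence comes out analytic for one agent and not for the other, purely because their definitions of the subject term differ.

[code]
# Two agents with different toy definitions of "bachelor" (both invented).
defs_x = {"bachelor": {"unmarried", "man"}}
defs_y = {"bachelor": {"degree-holder"}}

def is_analytic(defs, subject, predicate):
    """'All <subject>s are <predicate>' is analytic (true by definition)
    iff the predicate is contained in the agent's definition of the subject."""
    return predicate in defs.get(subject, set())

print(is_analytic(defs_x, "bachelor", "unmarried"))  # True for agent X
print(is_analytic(defs_y, "bachelor", "unmarried"))  # False for agent Y
[/code]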

moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
Because I have described it to you.
moving finger said:
In fact you did NOT specify that the computer you have in mind is interpreting the data/information.
Tisthammerw said:
In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.
With respect, what part of “you did NOT specify that the computer you have in mind is interpreting the data/information” do you not understand?
If I do not know whether the computer YOU have in mind is interpreting the data/information then I have no idea whether it is perceiving or not. So tell me – is the computer you have in mind interpreting the data/information?

I also trim the parts on “my definition is better than yours” since I consider this rather puerile.

Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?
There is more than one meaning of “to perceive”. There is the “introspective-perception” meaning, which does NOT require any sense receptors. There is the “sense-perception” meaning, which does require sense receptors.
For an entity to “sense-perceive” a bright light it must possess suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?

moving finger said:
What makes you think that a person is anything more than a “complex set of instructions”?
Tisthammerw said:
Because people possess understanding (using my definition of the term, what you have called TH-understanding), and I have shown repeatedly that a complex set of instructions is insufficient for TH-understanding to exist.
And I have shown repeatedly that you have “shown” no such thing – your argument is not necessarily sound because the premise "Bob's consciousness is the only consciousness in the system" is not necessarily true - (see post #256 in the “can artificial intelligence……” thread)

moving finger said:
Can you show that no possible computer can possess consciousness?
Tisthammerw said:
Given the computer model in question, I think I can with my program X argument (since program X stands for any computer program that would allegedly produce TH-understanding, and yet no TH-understanding is produced when program X is run).
see above

Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
moving finger said:
This does not follow at all. How do you arrive at this conclusion?
Tisthammerw said:
As I said earlier, because the man in the Chinese room models a computer program. I can also refer to my program X argument, in which case there is no “new understanding” in this case either.
see above

moving finger said:
It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR to have any ability of acquiring new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).
Tisthammerw said:
Again, the variant I put forth does use a dynamic database and a dynamic program. We can do the same thing for program X.
see above

Suggestion : If you wish to continue discussing the Program X argument can we please do that in just one thread (let’s say the AI thread and not this one)? That way we do not have to keep repeating ourselves and cross-referencing.

MF
 
Last edited:
  • #169
moving finger said:
Agreed. The role of the senses as far as human semantic understanding is concerned is to convey information and knowledge about the world – the senses are merely conduits for information and knowledge transfer to the brain. If you like, they are the means by which we “program” our brains. But once the brain is programmed and we “understand” by virtue of the information and knowledge that we possess, then we do not need the senses in order to “continue understanding”.
This last part again is a straw man. No one here has argued that continued sensory information is necessary for understanding. I myself have said this a number of times, so continually responding with it gets us nowhere.
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary, as opposed to "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
When I say "experience" I am referring to the acquisition and correlation of information in one fashion or another. I agree that continuous acquisition (a steady feed) of information is not necessary.

MF said:
That is a very good question – and it gets right to the heart of the matter, hence I will answer it fully so that we all might understand what is going on.

The confusion between “experiential knowledge” and “semantic understanding” arises because there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”

One meaning (based on subjective experiential knowledge of red) would be better expressed “what does the colour red look like?”. Let us call this question A.

The other meaning (the objective semantic meaning of red) would be better expressed as “what is the semantic meaning of the term red?”. Let us call this question B.

Now, TheStatutoryApe, which question have you asked above? Is it A or B? I will answer both.

A - “what does the colour red look like?”
What the colour red looks like is a purely subjective experiential brain state. I have no idea what the colour red looks like for TheStatutoryApe, I only know what it looks like for MF. I cannot describe in objective terms what this colour looks like. Can you? Can anyone? The best I can do is to point to a red object and to say “there, if you look at that object then you will see what the colour red looks like”, but that STILL does not mean that the colour red “looks the same” for TheStatutoryApe as it does for MF. And seeing the colour red is NOT necessary in order to convey any semantic understanding of the term “red”.

B - “what is the semantic meaning of the term red?”
The semantic meaning of the term red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. I do not need to be able to see red in order to understand from this definition that “this is what red is”.

Thus, it all depends on what your question is.

If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

MF
Unfortunately this does not answer my question but I'll respond to it before I go back to what my original question was.
For one, I would say that unless there is some sort of difference between our "software" and "hardware", you and I do in fact see the same or very nearly the same thing when we look at red. Considering that the software and hardware are nearly identical there is no reason to believe otherwise, and after we have correlated our experiences side by side with samples of colours I'd say that we will find we can deduce that we see the same thing.
Next, why is the sensory experience of "red" insufficient information for understanding? As far as I see it this is one of multiple viable manners by which to acquire information for the purpose of understanding.
Would you say that your average kindergartener has no understanding of what the word "red" means because they have never had the scientific definition of "red" explained to them? If so then we'd probably have to say that the majority of the people in the world have no idea what the word "red" (or its equivalent in their own language) means. We'd further probably have to conclude that the persons who came up with the word themselves had no idea what the word meant. I wonder what the word meant to the people who came up with it?
At any rate, I'd personally say that the two manners of acquiring the information about red you have defined above would probably best be described as "direct" and "indirect" understanding. Or rather understanding by virtue of direct experience or understanding by virtue of parallel experience. Let me get into the "parallel experience" a bit more.
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this. With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
 
  • #170
TheStatutoryApe said:
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary as opposed to the "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
In the sense that “acquisition of data” must be followed by “interpretation of data” before the agent can make sensible use of that data, then yes, I agree that the precise form of the acquisition of the data may “colour” the interpretation of the data. But it does not follow from this that a particular sense-experience is necessary for semantic understanding – only that all understanding may be coloured by the manner in which data is acquired and interpreted. A Chinese person’s understanding of the meaning of the word “house” (or its Chinese equivalent) may not be precisely the same as an English person’s understanding of the meaning of the same word – but this does not mean that “one of them understands and the other does not” – it means simply that they attach slightly different meanings to the same word.

moving finger said:
If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).
TheStatutoryApe said:
Unfortunately this does not answer my question
With respect, I have indeed answered your question(s), but perhaps it was not the answer you wanted.

TheStatutoryApe said:
but I'll respond to it before I go back to what my original question was.
For one, I would say that unless there is some sort of difference between our "software" and "hardware", you and I do in fact see the same or very nearly the same thing when we look at red.
In the case of human beings, I agree our experiential data may be similar (but not necessarily identical). But what happens if you confront an intelligent alien (with visual ability)? The alien can look at a red object, and it will have an experiential quality associated with seeing that red object, but it may be completely different to the experiential quality that you have when you look at the same object. Yet the alien can learn English, and it can then refer to the colour it sees as “red”. The key point is that the meaning of the term red is the same for both you and the alien, even though the experiential data may be vastly different – because the definition of the term red (the meaning of the word) is not determined by any particular experiential quality.

TheStatutoryApe said:
Considering that the software and hardware are nearly identical, there is no reason to believe otherwise, and after we have correlated our experiences side by side with samples of colours, I'd say that we will find we can deduce that we see the same thing.
I will agree that you and I would probably see similar things – but not that we necessarily see exactly the same.
However, in the above example of the alien intelligence, you cannot deduce that you and the alien see even similar things. Yet the alien can still have a semantic understanding of the word red which may be identical to your understanding – because semantics is determined by the definitions of and relationships between words, not necessarily by any associated sense-experience.

TheStatutoryApe said:
why is the sensory experience of "red" insufficient information for understanding? As far as I see it, this is one of multiple viable manners by which to acquire information for the purpose of understanding.
Would you say that your average kindergartener has no understanding of what the word "red" means because they have never had the scientific definition of "red" explained to them?
A young child may have a semantic understanding of red, but it would not necessarily be the same as my own semantic understanding of red. Presumably quantumcarl (based on his recent post regarding "understanding a house") would say that a young child does not understand anything.

As to the question "is experiential knowledge alone sufficient for semantic understanding?" - Would you say that an agent can claim to understand the meaning of any word before the agent has developed a reasonably basic vocabulary? A newly-born baby might start visually sensing objects which have the colour “red” at a very early age, but that sensory experience alone is not sufficient for it to claim that it understands the meaning of the word red. The infant needs to develop a basic vocabulary before it can do that. It follows that sensory experience of "red" alone is insufficient for understanding the meaning of the term “red”.

TheStatutoryApe said:
If so then we'd probably have to say that the majority of the people in the world have no idea what the word "red" (or its equivalent in their own language) means.
To take the position that “everyone must mean exactly what I mean when I refer to the word red” would be unreasonable arrogance, akin to the anthropocentric arrogance displayed by those who insist on defining terms with a human bias. Of course it goes without saying that the meaning of a word may be different to different agents, and the meaning of a word to an agent may change over time as the agent acquires more knowledge and understanding of the world. None of this makes one agent “right” and the other “wrong”.

Mary may start to grasp the understanding of a word like “consciousness” at a very early age – let's say as a young teenager. If Mary grows to become a neurophysiologist, her understanding of the word will likely change dramatically as she learns more about the subject. This does not mean that the teenager Mary “did not understand the meaning of the word consciousness”, just that the word consciousness had a different meaning to the teenager Mary compared to what it does to the adult Mary. But at the same time I think it is reasonable to assume that the adult Mary’s understanding of the meaning of the word is much more highly developed, more complex, and more subtle, and shows a much greater “depth” of understanding, than the teenager Mary’s understanding. Wouldn’t you agree?

TheStatutoryApe said:
We'd further probably have to conclude that the persons who came up with the word themselves had no idea what the word meant. I wonder what the word meant to the people who came up with it?
Why would we conclude this? The meanings of words change over time. As we have seen above, the meanings of words can even change over an agent’s lifetime. A caveman might attach a meaning to the word “red”, but he certainly knows nothing of “wavelengths of electromagnetic radiation”. He would be totally confused by such words. From his position of a very limited and shallow understanding of the world about him, he might even deny that there IS any other meaning to the word “red” apart from his experiential knowledge of red (shock, horror!) - simply because his semantics is more basic, more crude and less developed than the semantics of modern man. From our vantage point we can see that the caveman’s semantics are not the only possible semantics; we can also see that it is not the case that one meaning is “wrong” and the other “right”. There can be enormous differences in the complexity, subtlety and depth of understanding of meaning between two agents. The important thing is that (unlike the caveman) we know that this depth of understanding is possible without experiential knowledge.

TheStatutoryApe said:
I'd personally say that the two manners of acquiring the information about red you have defined above would probably best be described as "direct" and "indirect" understanding. Or rather, understanding by virtue of direct experience or understanding by virtue of parallel experience.
That is your opinion and of course you are entitled to it. I would personally say that experiential knowledge is not synonymous with, and is not necessary for, semantic understanding.

TheStatutoryApe said:
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this.
With respect, remember my answer? I pointed out that your question is ambiguous. I pointed out the two different possible meanings, and I asked you which one you actually meant. You never actually clarified what you meant. In fact I did provide answers to both possible meanings of your question. Perhaps my answers were not the answers you wanted, but that is beside the point. Please go check again.

TheStatutoryApe said:
With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
Again, your question is ambiguous, and I must ask again the same clarifying question that you still have not answered. Your question is (if I understand correctly) “is his experience sufficient for conveying the concept of red?” – yes?

This question MIGHT mean :

A : “is his experience sufficient for conveying his experiential quality of seeing red?”

Or it MIGHT mean :

B : “is his experience sufficient for conveying his semantic understanding of red?”

These are two very different questions. Only the latter question (B) explicitly refers to semantic understanding.

As before, I must ask : Which question are you asking?

Let me know which one, and I will then answer it.

Let me repeat once again what I said in my previous reply, in case you missed it - this is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Now I have a question for you.

Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
TheStatutoryApe (presumably) would argue that experiential knowledge is necessary for semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can TheStatutoryApe provide an example of any sentence in the English language which includes the term “red” which Mary must necessarily and demonstrably fail to semantically understand by virtue of her lack of experiential knowledge of red?

Thanks

MF
 
  • #171
moving finger said:
Ahhhh, I see now! Perhaps Mary actually needs to “be” the house to really understand it? Mary cannot really understand what a house is unless she is part of the house. Yes, I see what you mean…… :smile:
Mary must experience every aspect of a house in order to understand the implications of the word house. If all Mary possessed in order to understand a house was binary language... all Mary would understand would be the binary interpretation of a house.
moving finger said:
Yes. And it follows that Mary can never truly understand what a house “IS” unless she herself is part of the house….. built into the foundations…. Cemented into the brickwork….. why didn’t I see that before? :biggrin:
That's your understanding of what I have written?
moving finger said:
“how red makes the human feel” – this is semantic understanding to you?
Or is it perhaps an emotional response to a stimulus?
Yes. Evaluating the emotional response to a phenomenon is a part of understanding the effects of a phenomenon and thus the phenomenon itself.
moving finger said:
With respect - I think you and I are on different planets.
This really sums things up (not). Let's hope it's true.
TheStatutoryApe said:
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary as opposed to the "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
When I say "experience" I am referring to the acquisition and correlation of information in one fashion or another. I agree that continuous acquisition (a steady feed) of information is not necessary.
Hi. Once information is stored in a human brain, the stimulus of retrieving it is a connection and an experience that is a building block of understanding.
Drawing on an earlier understanding is an experience that leads to further understanding... or can... but not necessarily in every case... in fact it is a rare occurrence by my observations.

Let's say Larry got hit by lightning. Now, no one understands the phrase "hit by lightning" as well or as correctly as Larry and the 1400 other people who have been hit by lightning and lived.

Let's say Tim, who has never seen anything and relies on his hearing to experience the world, has several different coloured lights shone on his skin. One is red. Now Tim has a tactile understanding of red... and can use that to build on his total understanding of the colour red.

Let's say Hal the computer can only interpret the world through its use of binary language. He can't experience anything because he is not as complex as an organic organism like a human.

The data Hal recovers from its environment is immediately translated into 0s and 1s and stored in a specific area of its storage capacity. When it either stimulates itself to build a database on what it has analyzed, or is asked to regurgitate what it has compiled with regard to its environment, it does so unknowing of the purpose of the task, or even of how it feels to be following the orders of an independent operator or of a program that was installed to provoke this process.
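
A minimal sketch of that picture (the class and method names are invented for illustration; this is not a claim about any real machine): observations go in, become bits, and come back out on demand, with nothing anywhere representing the purpose of the task.

```python
# Sketch of the Hal described above: it encodes, stores and regurgitates
# observations as 0s and 1s, with no representation of purpose or feeling.

class Hal:
    def __init__(self):
        self.storage = []  # a "specific area of its storage capacity"

    def sense(self, observation: str) -> None:
        # Immediately translate the observation into 0s and 1s and file it away.
        bits = "".join(format(byte, "08b") for byte in observation.encode("utf-8"))
        self.storage.append(bits)

    def regurgitate(self) -> list:
        # Hand back the stored bits on request; nothing here models why the
        # data was asked for or what the bits are "about".
        return list(self.storage)

hal = Hal()
hal.sense("the light is red")
print(hal.regurgitate()[0][:24] + "...")  # raw bits, no meaning attached
```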

Understanding is a rare commodity. It doesn't come fast, cheap or easy. It requires an empathetic and gallant attempt by someone willing to put themselves in the actual circumstances they need to understand. That's how I understand it.
 
  • #172
What is red?

Suppose Houston makes radio contact with an alien intelligence on the far side of our galaxy. Over a long period of time the Houston scientists are teaching the aliens, via radio, the meanings of our words and language - in other words our semantics.

It has been established already in previous communications that the aliens have visual sense-perception organs similar in function to our "eyes", and they can sense-perceive electromagnetic radiation with wavelengths in the range 400nm to 700nm.

Here we listen in on one of the radio conversations between Houston and the Aliens…..

Alien: You have taught me that to understand your language, to grasp your semantics, I must understand the meanings of your words, which in turn means I must understand the defining properties and characteristics of your words as used within your language. Is this not so?
Houston: Yes, that’s exactly right
Alien: Then I have a question please, to help me understand
Houston: OK, let’s have it
Alien: What is “red”?
Houston: Ummmm, well, red is a colour
Alien: What is “colour”?
Houston: “Colour” is the set of things such as red, yellow, green
Alien: Thus, red is characterised by being a member of the set of colours, and a colour is the set of things one member of which is red. Is this supposed to help me truly understand what is meant either by “red” or by “colour”?
Houston: Ummmm, well no I guess not
Alien: When you were teaching me what is “horse”, you did not simply tell me “horse is an animal”. This tells me only that horse is a member of the set of animals, it does not tell me what is horse. Likewise, telling me that “red” is a member of the set of “colours” does not tell me what is red.
Houston: Yes, I guess you are right
Alien: For me to understand what is horse, I needed to know the defining properties and characteristics of horse, was it not so?
Houston: Yes, that’s true
Alien: Thus – for me to understand what is red, I need to know the defining properties and characteristics of red, is it not so?
Houston: Yes, that’s correct
Alien: Thus – what are the defining properties and characteristics of red?
Houston: (after a long pause) Ummmmm, redness?
Alien: Redness? If I ask what are the defining properties and characteristics of horse, and you reply “horsiness”, would you expect me to then understand from this reply what is horse?
Houston: Ummm, well no of course not
Alien: Then let us stop wasting time. Let me ask again - what are the defining properties and characteristics of red?
Houston: OK, we’ll try a bit more detail. You aliens have visual sense-receptors, right? Red is the experiential quality that you have when you sense-perceive a red object with those visual sense-receptors
Alien: Human, do I need to explain the circularity of your explanation? “Red is the experiential quality when one perceives a red object” – this is much like saying “horse is the entity which is characterised by being a horse object”. How can I understand from this what is horse if I do not first know what a horse object looks like? How can I understand what is red from your explanation, if I have no idea what is a red object?
Houston: (embarrassed silence)
Alien: With respect, human, for me to understand what is red, you need to explain the defining properties and characteristics of red without tautologically defining these characteristics in terms of redness.
Houston: (still more silence)
Alien: Let me help you. It is clear from what you have told me so far that “red” is an experiential quality associated with visual sense-receptors. As you know, we aliens have visual sense-receptors similar in function to your human “eyes”, and like you humans we aliens can sense-perceive electromagnetic radiation in the wavelength range 400nm to 700nm.
Houston: Yes, that’s correct
Alien: Then is it possible to define red in terms of something we both understand - sense-perceiving electromagnetic radiation of a particular wavelength?
Houston: (lots of applause and cheers) Yes – of course! That’s it!
Alien: Well?
Houston: OK, here goes…. Red is the experiential quality that you have when you sense-perceive electromagnetic radiation with wavelengths of the order of 650nm
Alien: Ahhhhh, NOW I see! We call that experiential quality “qrkzmnthlog” – thus “red” is equivalent to qrkzmnthlog. NOW I understand. Thank you!
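
The closing move of the dialogue can be pictured in a few lines of code (a sketch; the band values and everything except the story's own words are invented for illustration): once both agents key their colour terms to the same wavelength band, the definitional question gets the same answer on both ends, whatever each agent's inner experience is like.

```python
# Sketch: two lexicons attach different labels to the same wavelength band,
# so the translation "red" == "qrkzmnthlog" is fixed by shared definitions.

HUMAN_LEXICON = {"red": (620.0, 700.0)}          # term -> band in nm
ALIEN_LEXICON = {"qrkzmnthlog": (620.0, 700.0)}  # same band, different label

def denotes(lexicon: dict, term: str, wavelength_nm: float) -> bool:
    low, high = lexicon[term]
    return low <= wavelength_nm <= high

# Both agents classify 650nm light the same way, so they share the meaning
# of "red" even if their experiential qualities differ completely.
print(denotes(HUMAN_LEXICON, "red", 650))          # True
print(denotes(ALIEN_LEXICON, "qrkzmnthlog", 650))  # True
```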

MF
 
  • #173
moving finger said:
Suppose Houston makes radio contact with an alien intelligence on the far side of our galaxy. Over a long period of time the Houston scientists are teaching the aliens, via radio, the meanings of our words and language - in other words our semantics.
...
Alien: Ahhhhh, NOW I see! We call that experiential quality “qrkzmnthlog” – thus “red” is equivalent to qrkzmnthlog. NOW I understand. Thank you!
MF
Shared knowledge about a colour or an animal doesn't constitute a shared understanding of the colour or the animal.
It is the shared experience of a horse or red that can allow for an understanding to take place between Houston and Alien.
If the Alien has its visual receptors in its armpits, his understanding of red will be a different understanding of red from a human's, whose eyes are in his head. In fact it may be that aliens perceive green to be what we call red. Please note "colour blindness". This isn't necessarily a condition of "blindness" but perhaps a different way of seeing and experiencing colour. In fact, if you look at red lights and green lights and yellow lights on traffic controllers... you will notice that the red is shifted more to the yellow... the green to the blue and the yellow to the red, to accommodate the percentage of the population that has difficulty distinguishing a pure red etc...
Similarly, if the alien has never been on a horse, brushed down the horse, shoveled the fresh dung of the horse or fed a horse, the alien does not possess a complete understanding of a horse.
The stimulus that is the shared knowledge or data of a phenomenon is insufficient to use in arriving at what I am terming an understanding of the said phenomenon.
 
  • #174
quantumcarl said:
Shared knowledge about a colour or an animal doesn't constitute a shared understanding of the colour or the animal.
Neither does "shared sense perception".
Understanding is not an absolute property like "the speed of light". No two agents (even two human "genetically identical" twins) will have precisely the same understanding of the world. Nevertheless, two agents can still reach a "shared understanding" about a word or concept, even about the world in general, even if their individual understandings of some ideas and concepts are not identical. If you are demanding that agent A must have a completely identical understanding to agent B in order for them to share understanding, then no two agents share understanding in your sense of the word.

And just because there may be some differences in understanding between agent A and agent B does not necessarily give agent A the right to claim that agent B "does not understand".

quantumcarl said:
It is the shared experience of a horse or red that can allow for an understanding to take place between Houston and Alien.
It is the shared information and knowledge that can allow for an understanding to take place between Houston and Alien.

quantumcarl said:
If the Alien has its visual receptors in its armpits, his understanding of red will be a different understanding of red from a human's, whose eyes are in his head.
I disagree. If I could transplant your eyes from your head to your armpits, your semantic understanding of red could remain exactly the same - what red "is" to you does not necessarily change just because your eyes have changed location.

quantumcarl said:
In fact it may be that aliens perceive green to be what we call red.
In fact it may be that I perceive green to be what you call red (and red to be green - i.e. just a colour-swap). How would we ever find out? We could not. Would it make any difference at all to the understanding of red and green of either of us, or between us? It would not.
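
The undetectability of such a colour-swap can be sketched in code (invented names; a toy model, not a theory of perception): if an agent's inner labels are permuted but its colour words were learned against those same permuted labels, every outward report is unchanged.

```python
# Toy model of the "inverted spectrum": a permutation of inner labels that is
# compensated by the agent's learned word use leaves all behaviour identical.

SWAP = {"red": "green", "green": "red"}  # the hypothetical colour-swap

def report(stimulus: str, swapped: bool) -> str:
    # The inner state may be permuted relative to another agent's...
    inner = SWAP.get(stimulus, stimulus) if swapped else stimulus
    # ...but the agent learned its colour words against its own inner states,
    # so the same permutation is folded into how it applies the words.
    return SWAP.get(inner, inner) if swapped else inner

for stimulus in ("red", "green", "blue"):
    assert report(stimulus, swapped=False) == report(stimulus, swapped=True)
print("All reports agree - the swap makes no outward difference.")
```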

quantumcarl said:
Please note "colour blindness". This isn't necessarily a condition of "blindness" but perhaps a different way of seeing and experiencing colour.
Colour blindness is an inability to correctly distinguish between two or more different colours.

quantumcarl said:
In fact, if you look at red lights and green lights and yellow lights on traffic controllers... you will notice that the red is shifted more to the yellow... the green to the blue and the yellow to the red, to accommodate the percentage of the population that has difficulty distinguishing a pure red etc...
If red looked to me the same as green looks to you, and green looks to me the same as red looks to you, this would not change anything about either my understanding of traffic lights, or about your understanding of traffic lights, or about the understanding between us.

quantumcarl said:
Similarly, if the alien has never been on a horse, brushed down the horse, shoveled the fresh dung of the horse or fed a horse, the alien does not possess a complete understanding of a horse.
In that case neither do I.
What is "complete understanding"?
I could argue that no agent possesses "complete understanding" of Neddy the horse except possibly for Neddy himself (but arguably not even Neddy "completely understands" Neddy). Thus no agent possesses complete understanding of anything. Is this a very useful concept? Of course not. There is no absolute in understanding; different agents understand differently. This does not necessarily give one agent the right to claim its own understanding is "right" and all others wrong.

quantumcarl said:
The stimulus that is the shared knowledge or data of a phenomenon is insufficient to use in arriving at what I am terming an understanding of the said phenomenon.
You are entitled to your rather unusual definition of understanding - I do not share it. Information and knowledge are required for semantic understanding; experiential data are not.

MF
 
  • #175
MF said:
As before, I must ask : Which question are you asking?

Let me know which one, and I will then answer it.
I have already agreed with you, several times, that one can arrive at an understanding without direct experiential knowledge. Why you continue to treat my statements as if I don't agree, I have no idea.
I think I have also been quite clear that by "experiential knowledge/information" I mean any sort of information that is acquired and correlated, without any restrictions on the apparatus used for this purpose, while you seem to continue to regard my statements as if I mean a person must have eyes like I have.
So from now on, please remember that I have agreed a person can understand the concept of red without ever having actually seen "red". And also please remember that when I state "experience" I am not referring to "seeing" with one's eyes or acquiring information in the same exact manner that I do, but only the acquisition and correlation of information in whatever form it may take.

Now my question was: How would you teach a person with 'hearing' as their sole means of acquiring information what 'red' is, so that the person understands?
I would think the question of whether I mean the direct visual experience or not is rather moot, given the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance, which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?

MF said:
Let me repeat once again what I said in my previous reply, in case you missed it - this is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Now I have a question for you.

Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
TheStatutoryApe (presumably) would argue that experiential knowledge is necessary for semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can TheStatutoryApe provide an example of any sentence in the English language which includes the term “red” which Mary must necessarily and demonstrably fail to semantically understand by virtue of her lack of experiential knowledge of red?
I have never asserted that Mary must have experienced red to understand what it is, or that she lacks semantic understanding. I have asserted that her definition will be different from others', and I have never stated that this makes her definition any less viable or usable. The only other thing that I have asserted is that she does in fact require some sort of "experiential knowledge" to understand what red is (at this point please refer to what I have stated is my definition of "experience" or "experiential knowledge"). I believe that Mary is capable of understanding through indirect experience. That is to say that her experiential knowledge contains information that she can parallel with the information she is attempting to understand, and by virtue of this parallel she can come to understand this new information. I believe I more or less already stated this here...
With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again, note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim English?", "How would we go about this?", "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
 
