Can computers understand? Can understanding be simulated by computers?

  • Thread starter quantumcarl
In summary, the conversation discusses the "Chinese Room" thought experiment and its implications for human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #176
moving finger said:
Neither does "shared sense perception".
Understanding is not an absolute property like "the speed of light". No two agents (even two "genetically identical" human twins) will have precisely the same understanding of the world. Nevertheless, two agents can still reach a "shared understanding" about a word or concept, even the world in general, even if their individual understandings of some ideas and concepts are not identical. If you are demanding that agent A must have a completely identical understanding to agent B in order for them to share understanding, then no two agents share understanding in your sense of the word.
And just because there may be some differences in understanding between agent A and agent B does not necessarily give agent A the right to claim that agent B "does not understand".
It is the shared information and knowledge that can allow for an understanding to take place between Houston and Alien
I disagree. If I could transplant your eyes from your head to your armpits, your semantic understanding of red could remain exactly the same - what red "is" to you does not necessarily change just because your eyes have changed location.
In fact it may be that I perceive green to be what you call red (and red to be green, i.e. just a colour-swap). How would we ever find out? We could not. Would it make any difference at all to the understanding of red and green of either of us, or between us? It would not.
Colour blindness is an inability to correctly distinguish between two or more different colours.
If red looked to me the same as green looks to you, and green looks to me the same as red looks to you, this would not change anything about either my understanding of traffic lights, or about your understanding of traffic lights, or about the understanding between us.
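The colour-swap point above can be made concrete with a toy sketch (purely illustrative; the agents, labels, and rules here are invented for this note, not drawn from the thread). A permutation of internal colour labels, compensated by a matching permutation of the behavioural rules, leaves outward behaviour identical, so no external test can detect the swap:

```python
# Hypothetical sketch: two agents whose internal colour labels are related by a
# swap (red <-> green). All names and rules are illustrative assumptions.

SWAP = {"red": "green", "green": "red", "amber": "amber"}

def agent_a(light_colour):
    # Agent A's internal label matches the public label.
    internal = light_colour
    return "stop" if internal == "red" else ("go" if internal == "green" else "slow")

def agent_b(light_colour):
    # Agent B's internal label is colour-swapped, but its rules are swapped too,
    # so its outward behaviour is identical to Agent A's.
    internal = SWAP[light_colour]
    return "stop" if internal == "green" else ("go" if internal == "red" else "slow")

# No behavioural test distinguishes the two agents:
assert all(agent_a(c) == agent_b(c) for c in ("red", "amber", "green"))
```

The swap is invisible from outside precisely because behaviour depends only on the relations between labels and responses, not on which internal label is "really" red.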
In that case neither do I.
What is "complete understanding"?
I could argue that no agent possesses "complete understanding" of Neddy the horse except possibly for Neddy himself (but arguably not even Neddy "completely understands" Neddy). Thus no agent possesses complete understanding of anything. Is this a very useful concept? Of course not. There is no absolute in understanding; different agents understand differently. This does not necessarily give one agent the right to claim its own understanding is "right" and all others wrong.
You are entitled to your rather unusual definition of understanding - I do not share it. Information and knowledge are required for semantic understanding, experiential data are not necessary.
MF
Yes I got your drift upon reading one of your first posts.

My "unusual (on your planet) definition" of "understanding" stems from the original meaning of the word, which is described in a number of dictionaries. The origin is Middle English, and it describes standing under something.

Standing under something is a circumstance one attains by going to meet a thing and experiencing it in order to further oneself toward an understanding of it.

When you speak of "understanding" a world or "understanding" a traffic light without ever having seen one or having experienced the effects of its emfs etc... what you mean... by my standard of English and use of certain terminologies... is knowledge, as in having knowledge of a world or a traffic light. This is not what I would term an understanding of a world or of a traffic light.


You can share knowledge about a world or you can share knowledge about a traffic light... but, by your own admission, you cannot share understanding without both parties having experienced being in the circumstances created by the subject.


So, when we program a computer are we getting it to brush the horse and shovel the horse hockeys? No.

We are sharing the knowledge we have of a horse with the computer through the use of binary language.


By this process, and by many scholars' definitions of "understanding", does the computer understand what a horse is? Or does the computer only hold a repository set of data that defines, for its records, a horse? I choose the latter.


(Don't forget to buy our Flammable Safety Cabinets, they burn like hell!) another example of poor English... what what?
 
Last edited:
  • #177
quantumcarl said:
My "unusual (on your planet) definition" of "understanding" stems from the original meaning of the word, which is described in a number of dictionaries. The origin is Middle English, and it describes standing under something.
And this definition implies to you that to have semantic understanding of the term "horse" an agent must necessarily have groomed a horse and shovelled the horse's dung? With respect, that is plain silly. But if this is what you choose to believe then that's OK with me.

quantumcarl said:
Standing under something is a circumstance one attains by going to meet a thing and experiencing it in order to further oneself toward an understanding of it.
"Standing under something" in this context does not mean literally "physically placing your body underneath that thing" - or perhaps you do not understand what a metaphor is?

quantumcarl said:
When you speak of "understanding" a world or "understanding" a traffic light without ever having seen one or having experienced the effects of its emfs etc... what you mean... by my standard of English and use of certain terminologies... is knowledge, as in having knowledge of a world or a traffic light. This is not what I would term an understanding of a world or of a traffic light.
I am talking here about semantic understanding - which is the basis of Searle's argument. Semantic understanding means understanding the meanings of words as used in a language - it does not mean (as you seem to think) making some kind of "intimate physical or spiritual connection" with the objects that those words represent. An agent can semantically understand what is meant by the term "horse" without ever having seen a horse, let alone mucked out the horse's dung.

quantumcarl said:
You can share knowledge about a world or you can share knowledge about a traffic light... but, by your own admission, you cannot share understanding without both parties having experienced being in the circumstances created by the subject.
No, I have admitted no such thing. You are mistaken here.

quantumcarl said:
So, when we program a computer are we getting it to brush the horse and shovel the horse hockeys? No.
And by your definition, I and at least 95% of the human race do not semantically understand what is meant by the term "horse". Ridiculous.

quantumcarl said:
We are sharing the knowledge we have of a horse with the computer through the use of binary language.
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?

quantumcarl said:
By this process, and by many scholars' definitions of "understanding", does the computer understand what a horse is? Or does the computer only hold a repository set of data that defines, for its records, a horse? I choose the latter.
You choose to define understanding such that only an agent who has shared an intimate physical connection with a horse (maybe one needs to have spent the night sleeping with the horse as well?) can semantically understand the term "horse".

I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything. There are many different levels of understanding, no two agents ever have the same understanding of everything, none of these levels of understanding can ever be said to be "complete", and no agent can simply assert that "my understanding is right and yours is wrong".

As I have said many times already, you are entitled to your rather eccentric definition of understanding, but I don't share it.

Bye

MF
 
  • #178
TheStatutoryApe said:
I have already agreed with you several times that one can arrive at an understanding without direct experiential knowledge. Why you continue to treat my statements as if I don't agree I have no idea.
Perhaps you will understand later in this post why your position on “experience is not necessary for understanding” seems rather confusing and contradictory to me…

TheStatutoryApe said:
Now my question was: How would you teach a person with 'hearing' as their sole means for acquiring information what 'red' is so that the person understands?
Actually this was NOT your original question. Never mind.

TheStatutoryApe said:
I would think the question of whether I mean the direct visual experience or not is rather moot by the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?
It is always good to seek confirmation of meaning where any ambiguity of meaning is even remotely possible – this is all part of better understanding. Wouldn’t you agree?

Thus – we now agree that the meaning of your original question was NOT “is his experience sufficient for conveying his experiential quality of seeing red?”, but in fact it was “is his experience sufficient for conveying his semantic understanding of red?”

And my answer to this is very clearly “YES”.

Tim’s semantic understanding of “red” is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. It makes no difference whether or not Tim actually has the physical ability to “perceive electromagnetic radiation with wavelengths of the order of 650nm”, his blindness does not change the semantic understanding of red that he has. Even though blind himself, Tim can introspectively perceive the possibility of an agent which does have such sense receptors, and which can then “perceive electromagnetic radiation with wavelengths of the order of 650nm”. Similarly, Tim does not need to have seen, heard, touched, smelled, or mucked out a horse to semantically understand the term “horse”.

Thus, when you ask “is his experience sufficient for conveying his semantic understanding of red?” the answer is Yes – because Tim can simply state that “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” IS his understanding of the term red. By the way, it’s also my understanding of the term red. And it is the only understanding of the term red which allows any mutual understanding of the concept red to take place between Houston and the aliens in my earlier example.

TheStatutoryApe said:
I have never asserted that Mary must have experienced red to understand what it is or that she lacks semantic understanding. I have asserted that her definition will be different from others' and have never stated that this makes her definition any less viable or usable.
If one probes closely enough, I think one will find that most agents do not agree on all aspects of semantic understanding – there are differences in semantic understanding between most humans (witness my discussion with quantumcarl on the semantic understanding of the term “semantic understanding” in this very thread). And it seems we agree that this does not necessarily give one human agent the right to claim that the other human agent “does not understand”. It simply means that all agents, human or otherwise, might understand some terms or concepts in different ways.

In my example of Houston and the Aliens, could one say that they had the same understanding of the term red? On one (objective) level they did have the same understanding – because both Houston and the Aliens finally agreed that red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. But on another (subjective) level they did not, because the data corresponding to the sense-experience of red to a human agent means nothing to an alien agent (in fact the precise data corresponding to the sense-experience of red to TheStatutoryApe does not necessarily mean anything to MF). The ONLY common factor between all agents is the objective definition of red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. It seems to me that such an objective semantic understanding, which can then be understood by all agents, is a much greater and deeper level of understanding than one based solely on a subjective experientially-based semantic understanding.

TheStatutoryApe said:
The only other thing that I have asserted is that she does in fact require some sort of "experiential knowledge" to understand what red is (At this point please refer to what I have stated is my definition of "experience" or "experiential knowledge").
I’m sorry, but there is that ambiguous phrase again – “what red is”. Please explain exactly what you mean by this phrase, because (as I have shown several times now) this phrase has at least two very different possible meanings. It might mean “what red looks like”, or it might mean “the semantic meaning of the term red”.
Remember - you have already agreed quite pointedly that experiential knowledge is not necessary for the latter. Now - Which meaning of “what red is” do you actually mean here?

TheStatutoryApe said:
I believe that Mary is capable of understanding through indirect experience. That is to say that her experiential knowledge contains information that she can parallel with the information she is attempting to understand, and by virtue of this parallel she can come to understand this new information. I believe I more or less already stated this here...
You have in fact stated something rather stronger than this - that experience is not necessary for understanding. I agree with this.

TheStatutoryApe said:
With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is?
That ambiguous phrase again!
Please clarify.
If you mean “how is he to have any semantic understanding of the term red” then I have already shown how – and you have agreed that he needs no experience to understand.
If you mean “how is he to know what red looks like” then I think we both agree this is a meaningless question in Tim’s case – but this is not important because we already agree that an agent does not “need to know what X looks like” to have semantic understanding of X.

TheStatutoryApe said:
It would somehow have to be based on that which he experiences otherwise he will not be capable of comprehending.
This statement contradicts your earlier very strong assertion that experience is not necessary for semantic understanding. Are you now saying that experience IS necessary for semantic understanding?

TheStatutoryApe said:
There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red do you think?
By “the concept of red” do you mean some universal, objective concept of red? The only possible universal concept of red that means the same to you as to me, and the same to Houston and the Aliens, is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”.

TheStatutoryApe said:
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim English?"
Of course we can. Blind people can learn English, or are you suggesting otherwise? The additional lack of the perceptions of touch-feeling, smell and taste I do not see as any greater impediments to understanding than the lack of sight.

TheStatutoryApe said:
"How would we go about this?" "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
His precise understanding may differ in some ways from yours and mine. But my understanding differs from yours anyway. There is no “absolute” in understanding. ALL agents understand differently to a greater or lesser extent. We have already agreed this does not mean that one agent understands and the other does not. And arguably a semantic understanding of a concept like “red” in objective terms which can be translated between all agents is a much greater and deeper understanding of red than a simple subjective experiential understanding.

MF
 
  • #179
I'm not sure where to start and what to quote so I will try to just recreate your points that we are having issues with...

Is experience required for understanding?
I never stated that experience is not required for understanding. I stated that direct experience is not required for understanding, that indirect experience could suffice.
Remember again my definition of experience... acquisition and correlation of information in whatever manner this may take shape. If a person cannot acquire and correlate information about something then they cannot understand it. Again you say that possession and correlation of the information is enough, but all you are doing is skipping the step of acquiring the information in the first place. In my mind it is necessary to be considered because I believe that the actual act of gathering the information, aside from being necessary to possessing it, is important to the process of understanding in and of itself. I'll expand more on this later if you would like me to.
So once again I have not stated that experience is not necessary but that direct experience is not necessary.

MF said:
That ambiguous phrase again!
Please clarify.
If you mean “how is he to have any semantic understanding of the term red” then I have already shown how – and you have agreed that he needs no experience to understand.
If you mean “how is he to know what red looks like” then I think we both agree this is a meaningless question in Tim’s case – but this is not important because we already agree that an agent does not “need to know what X looks like” to have semantic understanding of X.
Yes I have already agreed that "seeing" isn't necessary and stated I do not mean "seeing" since this would be moot in Tim's case because we already know he can not "see" red.
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is and asserted a definition that he would be theoretically capable of understanding. You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW, as in the manner, method, or means by which.

MF said:
Of course we can. Blind people can learn English, or are you suggesting otherwise? The additional lack of the perceptions of touch-feeling, smell and taste I do not see as any greater impediments to understanding than the lack of sight.
It's so obvious is it? Then explain how.
I do not suggest that blind people are unable to learn English. Yet another strawman. It would help us greatly in our discussion if you would drop this tactic.
How it is that you do not see there being any stronger impediment to understanding for Tim than for any other blind person is really quite beyond me.
Your average human possesses five avenues by which to gather information. A blind person only has four, which will naturally hinder the blind person's ability to gather information by which to understand, relative to the person with a total of five. Helen Keller, both blind and deaf, had only three and had great difficulty in learning to understand things. Tim has only one avenue by which to gather information. Considering this extreme lack of ability to gather information in comparison to your average human, how do you justify your idea that it should pose no more of an impediment to be in Tim's shoes than it does to be in those of a blind person?
Can you accurately define the location of an object in three-dimensional space with only one coordinate? Do you see the parallel between this and the difficulty that an agent with only one avenue for information gathering may have in understanding the world around it, let alone the words that are supposed to describe it?
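The one-coordinate analogy can be sketched in a few lines (a toy illustration; the candidate points are invented): knowing only x leaves the point underdetermined, just as a single information channel may leave the world it reports on underdetermined.

```python
# Illustrative sketch of the analogy: one coordinate does not fix a point in 3D.
# Many distinct points share the same x, so x alone underdetermines the location.

def points_sharing_x(x, candidates):
    """Return every candidate point whose first coordinate matches x."""
    return [p for p in candidates if p[0] == x]

# Hypothetical candidate points, chosen only to illustrate the ambiguity:
candidates = [(1, 0, 0), (1, 5, -2), (1, 9, 9), (2, 0, 0)]
matches = points_sharing_x(1, candidates)

# Three different points remain consistent with the single coordinate x = 1:
assert len(matches) == 3
```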
 
  • #180
TheStatutoryApe said:
I'm not sure where to start and what to quote so I will try to just recreate your points that we are having issues with...

Is experience required for understanding?
I never stated that experience is not required for understanding. I stated that direct experience is not required for understanding, that indirect experience could suffice.

Remember again my definition of experience... acquisition and correlation of information in whatever manner this may take shape. If a person cannot acquire and correlate information about something then they cannot understand it.
Given your definition I agree with this conclusion, though I do find your definition rather strange. Because of this strange definition we must be very careful to distinguish between “purely informational experience” on the one hand (which does not involve any “sensory experiential quality or data”), and “sensory experience” on the other hand (which is directly associated with “sensory experiential quality or data”). Sensory experience may be classed as a subset of informational experience, but purely informational experience involves no sensory experience. Would you agree?

TheStatutoryApe said:
Again you say that possession and correlation of the information is enough, but all you are doing is skipping the step of acquiring the information in the first place. In my mind it is necessary to be considered because I believe that the actual act of gathering the information, aside from being necessary to possessing it, is important to the process of understanding in and of itself. I'll expand more on this later if you would like me to.
I understand what you are saying. And I think you will find that I have already, in a previous post, agreed that the precise form in which the data and information are acquired may “colour” the interpretation of that information and hence may also “colour” any subsequent understanding that the agent may derive from that data and information. This is also at the root of my argument that all agents understand things differently to a greater or lesser extent, because our understanding is based on our experience (your definition of experience), and our experiences are all different. Thus it seems we agree that agents may understand some things (more or less) differently because their experiences are different, yes?

Getting back to the subject of this thread - the important point (I suggest) is “whether a machine can in principle semantically understand at all”, and not whether “all agents understand everything in exactly the same way”.

TheStatutoryApe said:
So once again I have not stated that experience is not necessary but that direct experience is not necessary.
Understood. I apologise, because I always have in mind a slightly different definition of experience, which is “sensory experience”, rather than “informational experience”. It is now clear to me what you mean by “indirect experience”. I suggest to avoid future misunderstanding that we explicitly refer to indirect experience whenever we mean indirect experience (and not simply to experience, which may be misinterpreted).

TheStatutoryApe said:
Yes I have already agreed that "seeing" isn't necessary and stated I do not mean "seeing" since this would be moot in Tim's case because we already know he can not "see" red.
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is and asserted a definition that he would be theoretically capable of understanding.
I do not understand what you mean by “theoretically” here.
Do I only “theoretically” semantically understand what a “horse” is if I have never seen, heard, smelled or touched a horse?
To my mind, Tim “semantically understands” the definition of red as given. Period. There is nothing "theoretical" about it. Semantic understanding is semantic understanding.

If you can explain your distinction between “theoretical semantic understanding” and “practical semantic understanding” then I may be able to grasp what you are trying to say here.

BTW – I have indeed answered the question that you asked.

Your original question was :
TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

My answer was :

moving finger said:
Tim can simply state that “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” IS his understanding of the term red.

Once again, you may not “like” this answer, and it may not answer the question that YOU think you asked, but it certainly answered the question as I understood it. I'm sorry, but I cannot help it if the meanings of your questions are ambiguous.

TheStatutoryApe said:
You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW, as in the manner, method, or means by which.

You now seem to be asking “how is this definition of red imparted to Tim?”. With respect, this was not your original question (at least it was not my understanding of your question - again perhaps because the way you phrased the question was ambiguous).

Tim has the sense of hearing, yes? In Tim’s case he can learn his semantic understanding of English, including his semantic understanding of the term “red”, via his sense of hearing. Is this the answer you are looking for?

TheStatutoryApe said:
I do not suggest that blind people are unable to learn english. Yet another strawman. It would help us greatly in our discussion if you would drop this tactic.
With the greatest respect, TheStatutoryApe, it would help us greatly in our discussion if you would state your argument clearly at the outset, instead of making one ambiguous statement and question after another, which forces me to guess at your meaning and to ask for clarifications, after which you then frequently (it seems to me) change the sense of your questions.

When I ask a question in an attempt to clarify what I see as an ambiguity or an uncertainty in a post, that is just what it is - a question to seek clarification. You may call it a "strawman" if that makes you feel any better - but I'm afraid that as long as your statements and questions remain ambiguous then I must continue to ask questions to seek clarification on your meaning.

Please understand that I’m not guessing at the meaning of your questions because I want to – with the greatest respect, I am forced to guess at your meanings, and to offer up a question in the form of what you term a "strawman", because your questions are unclear, ambiguous, or keep changing each time you re-state them.

TheStatutoryApe said:
Your average human possesses five avenues by which to gather information. A blind person only has four which will naturally hinder the blind persons ability to gather information by which to understand relative to the person with a total of five. Helen Keller, both blind and deaf, had only three and had great difficulty in learning to understand things. Tim has only one avenue by which to gather information. Considering this extreme lack of ability to gather information in comparison to your average human how do you justify your idea that it should pose no more of an impediment to be in Tim's shoes than it does to be in those of a blind person?
Here, with respect, is the error in your reasoning at this point: it is not the “number” of senses which are available to an agent which is important – it is the information content imparted via those senses. In most human agents the data and information required for semantic understanding of a language are imparted mainly via the two senses of hearing and sight – thus in humans these two senses are much more critical in learning semantics than the other three senses. An agent which possesses neither hearing nor sight must try to acquire almost all of the external data and information about a language via the sense of touch (the senses of taste and smell would not be very efficient conduits for most semantically useful information transfer). In other words, most of the agent's learning about language would be via braille. This would indeed be a massive problem for the agent - but it STILL would not necessarily mean that the agent would have NO semantic understanding, which is the point of the CR argument.
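A minimal numeric sketch of this point, using made-up illustrative bit rates (assumptions for the sake of the example, not real measurements of sensory bandwidth): the total information an agent receives depends on per-channel capacity, not on how many channels it has.

```python
# Hedged sketch: channel count vs. information content. All rates are invented
# illustrative figures, chosen only to show that one rich channel can outweigh
# several poor ones.

channels_a = {"sight": 0, "hearing": 1000}            # bits/s: one rich channel
channels_b = {"touch": 10, "smell": 5, "taste": 5}    # bits/s: several poor ones

total_a = sum(channels_a.values())
total_b = sum(channels_b.values())

# Agent A has fewer working senses but receives far more information:
assert total_a > total_b
```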

TheStatutoryApe said:
Can you accurately define the location of an object in three dimensional space with only one coordinate? Do you see the parallel between this and the difficulty that an agent with only one avenue for information gathering may have in understanding the world around it let alone the words that are supposed to describe it?
No, I don’t see the parallel. Why does having only the sense of hearing necessarily limit an agent to “one coordinate space”? Or is this a bad metaphor?

I am not suggesting that Tim will not have problems learning. Of course he will. He needs to learn everything about the outside world via his sense of hearing alone. Of course this will be a problem. But your analogy with Helen Keller is poor – as I have stated already, arguably the two most important senses for human language learning are sight and hearing, both of which Helen lacked and one of which Tim has. Thus I could argue that Tim’s problem will in fact be less severe than Helen’s.

But I have no interest in going off at a tangent just to satisfy your curiosity about Tim’s unusual predicament in your pet thought experiment. What relevance does any of this have to the subject of this thread, and what is the point you are trying to make? With respect I am tired of continually guessing your meanings.

The CR argument is not based on the problems that a severely disabled human agent will have in “learning about the world”. The CR argument is based on the premise that a machine “cannot in principle semantically understand a language”. If you can show the relevance of Tim’s predicament and your thought experiment to the subject of this thread then I’ll be happy to continue this line of reasoning.

MF
 
  • #181
MF said:
Thus it seems we agree that agents may understand some things (more or less) differently because their experiences are different, yes?
Yes. Though I would add that the understanding between agents (e.g. communication by means of language) is contingent upon significant similarity in "informational experience". A gap is expected, but a significant gap will result in the breakdown of communicative understanding.

MF said:
I do not understand what you mean by “theoretically” here.
Do I only “theoretically” semantically understand what is “horse” if I have never seen, heard, smelled or touched a horse?
To my mind, Tim “semantically understands” the definition of red as given. Period. There is nothing "theoretical" about it. Semantic understanding is semantic understanding.

If you can explain your distinction between “theoretical semantic understanding” and “practical semantic understanding” then I may be able to grasp what you are trying to say here.
I say "theoretical" because it is your theory that, based on only one avenue of information gathering, Tim will be able to gain a semantic understanding of what the word "red" means. However, you have yet to explain how Tim will accomplish this. I have set up a scenario where Tim does not understand what "red" means and asked you how you would teach him what red means. You have replied by saying that he can understand and that "X" is the definition that he will be capable of understanding. You have continually failed to address the "how" portion of my question. How will you teach him? How will he come to understand? How do you theorize the process occurring in his mind would unfold?

MF said:
You now seem to be asking “how is this definition of red imparted to Tim?”. With respect, this was not your original question (at least it was not my understanding of your question - again perhaps because the way you phrased the question was ambiguous).
With respect, every version of the question I have asked has included the word "How"...
post 160 said:
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to it. None others whatsoever. How would you go about teaching this person what the colour "red" is?
post 169 said:
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this. With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
post 175 said:
Now my question was: How would you teach a person with 'hearing' as their sole means for acquiring information what 'red' is so that the person understands?
I would think the question of whether I mean the direct visual experience or not is rather moot by the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?
post 175 said:
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim english?" "How would we go about this?" "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
post 179 said:
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is and asserted a definition that he would theoretically be capable of understanding. You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW, as in the manner, method, or means by which.
I have very plainly asked how from the very beginning. I have reworded my questions and tossed in a couple of extra questions along the way because I am trying to help you understand what I am asking of you and you obviously aren't getting it. Only now do you seem to begin to understand with this...
MF said:
Tim has the sense of hearing, yes? In Tim’s case he can learn his semantic understanding of English, including his semantic understanding of the term “red”, via his sense of hearing. Is this the answer you are looking for?
But you still don't seem to understand what I mean by "how". I did not mean to ask "with what?". His ears/hearing/auditory sense is obviously what he will be using to gather information by which to understand, this is very plain by the set up of the scenario.
Hopefully in the previous part of this particular post I have cleared up what I mean by "how" and perhaps you will take a stab at answering the question.

MF said:
With the greatest respect, TheStatutoryApe, it would help us greatly in our discussion if you would state your argument clearly at the outset, instead of making one ambiguous statement and question after another, which forces me to guess at your meaning and to ask for clarifications, after which you then frequently (it seems to me) change the sense of your questions.

When I ask a question in an attempt to clarify what I see as an ambiguity or an uncertainty in a post, that is just what it is - a question to seek clarification. You may call it a "strawman" if that makes you feel any better - but I'm afraid that as long as your statements and questions remain ambiguous then I must continue to ask questions to seek clarification on your meaning.

Please understand that I’m not guessing at the meaning of your questions because I want to – with the greatest respect, I am forced to guess at your meanings, and to offer up a question in the form of what you term a "strawman", because your questions are unclear, ambiguous, or keep changing each time you re-state them.
With respect, I must again point out that every single one of my posts requested that you answer "How". It's been the one thing that has not changed at all whatsoever. The other words may have changed in order to try adapting to the manner in which you are misinterpreting what I mean by "How", and I may have asked other questions in conjunction with the one main question in order to flesh out my meaning, but the word "How" has been quite consistent throughout, and you seem to have glossed over it every time.
Please, in the future, if you do not understand a question simply ask me to clarify it. Do not make assumptions because we all know what happens when we "assume" right?

MF said:
Here, with respect, is the error in your reasoning at this point: It is not the “number” of senses which are available to an agent that is important – it is the information content imparted via those senses. In most human agents the data and information required for semantic understanding of a language are imparted mainly via the two senses of hearing and sight – thus in humans these two senses are much more critical in learning semantics than the other three senses. An agent which possesses neither hearing nor sight must try to acquire almost all of the external data and information about a language via the sense of touch (the senses of taste and smell would not be very efficient conduits for most semantically useful information transfer). In other words, most of the agent's learning about language would be via braille. This would indeed be a massive problem for the agent - but it STILL would not necessarily mean that the agent would have NO semantic understanding, which is the point of the CR argument.
____________________________________________________

I am not suggesting that Tim will not have problems learning. Of course he will. He needs to learn everything about the outside world via his sense of hearing alone. Of course this will be a problem. But your analogy with Helen Keller is poor – as I have already stated, arguably the two most important senses for human language learning are sight and hearing, both of which Helen lacked, and one of which Tim has. Thus I could argue that Tim’s problem will in fact be less severe than Helen’s.
Have you tried imagining yourself in Tim's or Helen's shoes?
A human being takes advantage of all five senses to learn and understand. If you watch a child you will see it looking constantly at everything and reacting to just about every noise. You will also see it grab for and touch anything it can get its hands on. When it does get its hands on things, they go straight to its mouth and nose. One of the issues here is that we take for granted so much of our sensory input that we don't realize just how important those senses are.

MF said:
No, I don’t see the parallel. Why does having only the sense of hearing necessarily limit an agent to “one coordinate space”? Or is this a bad metaphor?
Perhaps it is a weak metaphor, but its purpose is to point out the importance of multiple senses. When you look to establish an object's location you use multiple coordinates. When you look to establish its size you use multiple dimensions. When you look to establish its composition you run multiple tests. The correlation of the data from multiple sources is always used to determine the validity of information and to understand that information. Without multiple sources, or rather with only one source, you are stuck regarding only a single aspect of anything and are largely unable to substantiate much in the way of logical conclusions.
Helen Keller had three sources of information to compare and draw conclusions with. She relied mainly on her tactile sense, which is actually a very important sense, though you seem to regard it as lesser than vision and hearing.
Bats rely on hearing quite a bit. The problem, though, is that bats have very specialized hearing mechanisms and possibly even an instinctual program for interpreting the information. Even so, a bat's hearing ability is not terribly reliable and is easily thrown off. Only a couple of species of bat are actually blind and rely heavily on their hearing ability, but they always augment this with their other senses, probably most notably their sense of smell.
But really have you imagined what it must be like for Tim?
How do you determine what noises are, or where they come from? You can hear when a human speaks, but how do you know what a human is, or that you are a human, or that the noises you hear are words? How do you know that there are such things as "tangible objects"? You could maybe tell when you are moving because you can hear the "air" passing your "ears", but wait... what if there is a wind and that is why air is passing your ears? How do you tell the difference? You can't "taste" and you can't "feel", but do you eat? Do you realize when you are eating? If so, how? Does someone else feed you? Can you tell that someone else is feeding you? Can you figure this out because they tell you? How do you know what they mean when they tell you, if you have no idea what it is they are doing to you, because you can't feel, taste, see, or smell the food or their hands or the spoon or the plate or anything of the like? All you know is what you hear. Can you even tell the difference between being awake and being asleep?


I am asking this because I am trying to establish, regardless of the answers, that your personal "direct experience" is vital to your ability to understand even those things that are "indirect experiences". Note that I have not said that Tim is unable to possess semantic understanding. I do believe Tim can possess semantic understanding, but that it would likely be severely limited by his situation.
Also I am asking this question because it is a step in my process, but I wish to establish where we stand in this matter before continuing to the next step. I believe that by imagining Tim's situation we come closer to imagining the situation of a computer without sensory input attempting to understand. The CR computer at least has access to only one information input. After we discuss the parallel, if the discussion is even necessary after we determine where we stand in regards to Tim, I would like to discuss the question I previously posed in regards to "justification" of knowledge.
This is how I am linking my questions in regards to Tim with the CR and its situation. Right now I just want to focus on what we can or cannot agree about Tim and then run with whatever information I glean from that.
 
  • #182
TheStatutoryApe said:
I say "theoretical" because it is your theory that, based on only one avenue of information gathering, Tim will be able to gain a semantic understanding of what the word "red" means. However, you have yet to explain how Tim will accomplish this. I have set up a scenario where Tim does not understand what "red" means and asked you how you would teach him what red means.
You have replied by saying that he can understand and that "X" is the definition that he will be capable of understanding. You have continually failed to address the "how" portion of my question. How will you teach him? How will he come to understand? How do you theorize the process occurring in his mind would unfold?

It is my position that Tim can “possess” understanding. It is also my position that an agent with NO senses at all can “possess” understanding – but obviously the question of how the agent is to acquire that understanding in the first place is a separate problem. In the case of an arbitrary agent I have many more possibilities than the five human senses. But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
The problem of how Tim “acquires” that understanding in the first place is a separate issue. Tim is able to communicate – he can speak and he can hear. Given a means of communication, it is possible to transfer information. I am not suggesting that teaching Tim will be easy, but it will not be impossible. My concern here is only that the transfer of information can take place – you have certainly not shown that transfer of information cannot take place. I have no interest in going into the details of how Tim’s complete education would be accomplished in practice, so if you need to know these details then please go ask someone else.
This is a thought-experiment, not a detailed lesson in “how to teach Tim”. The relevance for the CR argument lies in the idea that Tim can in principle semantically understand “red”, just as the CR can in principle semantically understand “red”. When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?” – because that is a practical problem which does not change the in principle nature of the argument. Everyone KNOWS that “writing the rulebook” is one hell of a practical problem, and nobody has attempted to show how it could be done in practice, but this does not invalidate the thought experiment.

TheStatutoryApe said:
With respect, every version of the question I have asked has included the word "How"...
With respect, this is simply untrue. Your original question (which I even quoted in my last post, but you obviously failed to read) was

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

Which I interpreted to mean “can Tim convey his concept of red to others?”. And that original interpretation has stuck in my mind as the question you are asking, until you quite pointedly stated that you mean something completely different.
Do you see now how the confusion is caused by your ambiguity?

TheStatutoryApe said:
A human being takes advantage of all five senses to learn and understand. If you watch a child you will see it looking constantly at everything and reacting to just about every noise. You will also see it grab for and touch anything it can get its hands on. When it does get its hands on things, they go straight to its mouth and nose. One of the issues here is that we take for granted so much of our sensory input that we don't realize just how important those senses are.
And your point is?
All intelligent agents will utilise whatever information sources are available to them. The more sources of information then (in general) the easier it will be for the agent to gain knowledge of the world around them. I have never said otherwise.

TheStatutoryApe said:
When you look to establish its size you use multiple dimensions. When you look to establish its composition you run multiple tests. The correlation of the data from multiple sources is always used to determine the validity of information and to understand that information. Without multiple sources, or rather with only one source, you are stuck regarding only a single aspect of anything and are largely unable to substantiate much in the way of logical conclusions.
Tim is stuck with his one sense of hearing. It’s still stereo hearing and he can still develop spatial awareness and an understanding of coordinate systems, distances, motion etc based on this. The fact that he only has one sense is going to make it very tough for Tim to learn. But it does NOT mean that Tim cannot develop a semantic understanding of any particular concept or language. It simply means it will not be as easy for him as it would be for an agent with multiple other senses. But this is not relevant to the CR argument anyway.
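The claim that stereo hearing alone carries spatial information can be made concrete. Here is a minimal, hypothetical sketch (the constants and function below are illustrative assumptions, not anything stated in the thread): the interaural time difference (ITD) between a sound's arrival at the two ears constrains the direction of the source, via sin(azimuth) ≈ c·Δt/d, where c is the speed of sound and d the separation of the ears.

```python
import math

# Assumed, illustrative constants:
SPEED_OF_SOUND = 343.0   # m/s, in air at roughly 20 °C
EAR_SEPARATION = 0.20    # m, rough head width

def azimuth_from_itd(delta_t: float) -> float:
    """Estimate the azimuth (degrees) of a sound source from the
    interaural time difference delta_t in seconds
    (positive = sound reaches one ear before the other)."""
    s = SPEED_OF_SOUND * delta_t / EAR_SEPARATION
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.degrees(math.asin(s))

print(round(azimuth_from_itd(0.0)))     # -> 0  (straight ahead)
print(round(azimuth_from_itd(0.0006)))  # -> 90 (fully to one side, clamped)
```

So even a single sense, sampled from two points, yields usable coordinates; the sketch simply makes explicit what "stereo hearing gives spatial awareness" means.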

TheStatutoryApe said:
But really have you imagined what it must be like for Tim?
I do not need to, because I still do not see the relevance of your argument to the CR question.

TheStatutoryApe said:
I am asking this because I am trying to establish, regardless of the answers, that your personal "direct experience" is vital to your ability to understand even those things that are "indirect experiences".
You have so far not shown that “direct experience” is necessary for semantic understanding.
If this is now your position then you are contradicting your earlier position, which was that “indirect” and not “direct” experience is necessary for semantic understanding.
In the case of Tim, the agent has NO other means of acquiring information other than via the sense of hearing. In the case of AI we are not talking of a human agent which is necessarily limited to learning via any of its five senses. I could argue that an artificial intelligence, able to access and process data in ways that humans cannot, could develop an even deeper, more complex, more consistent and more complete semantic understanding of a language than any human is capable of – and it would not necessarily need to have any particular human sense, or any direct experience, to be able to do this.

At the same time there is also no reason why our AI should not be equipped with information-input devices corresponding to the human senses of vision, hearing, touch, even smell and taste, IF we should so wish – the AI is not necessarily restricted to any particular sense, in terms of its learning ability. Thus I do not see the relevance of your Tim thought-experiment to the question of whether a machine can semantically understand.

TheStatutoryApe said:
Also I am asking this question because it is a step in my process but I wish to establish where we stand in this matter before continuing to the next step. I believe that by imagining Tim's situation we come closer to imagining the situation of a computer without sensory input attempting to understand.
The analogy is completely inappropriate. In Tim’s case, the only source of information he has about the outside world is via his sense of hearing.
In the case of a computer there are many ways in which the information can be imparted to the computer.

TheStatutoryApe said:
The CR computer at least has access to only one information input.
The CR is ALREADY PROGRAMMED. It already possesses the information it needs to do the job it is supposed to do, by definition. It does not necessarily NEED any senses at all for the purpose of learning – because Searle has not indicated whether the CR has ANY ability to learn from new information at all – this is one of the questions I already asked a long time ago. The CR experiment does not ask “how does the CR acquire its understanding in the first place?”, it asks “does the CR semantically understand?”
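The "already programmed" character of the CR can be sketched in a few lines. The following is a hypothetical toy, not Searle's own formulation: the rulebook is modelled as a fixed mapping from input symbol strings to output symbol strings, applied with no reference to meaning and no learning.

```python
# Toy "Chinese Room": a fixed, pre-programmed rulebook mapping input
# symbol strings to output symbol strings. The room manipulates symbols
# by rule alone; nothing in it refers to meanings, and nothing is ever
# learned from new inputs (the rulebook never changes).
RULEBOOK = {
    "ni hao": "ni hao",          # roughly: "hello" -> "hello"
    "ni hao ma": "wo hen hao",   # roughly: "how are you" -> "I am fine"
}

def chinese_room(symbols: str) -> str:
    """Apply the rulebook; fall back to a fixed token for unknown input."""
    return RULEBOOK.get(symbols, "dui bu qi")  # roughly: "sorry"

print(chinese_room("ni hao ma"))  # -> wo hen hao
```

The whole debate is then whether a (vastly larger) system of this kind would semantically understand, not how its table came to be filled in.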

MF
 
  • #183
TheStatutoryApe said:
I would add that the understanding between agents (e.g. communication by means of language) is contingent upon significant similarity in "informational experience". A gap is expected, but a significant gap will result in the breakdown of communicative understanding.

I would say that understanding between agents is contingent upon similarities in semantic understanding, and NOT necessarily on similarities in “informational experience”.

Take my example of Houston and the Aliens.

If our semantic understanding of “red” is based simply on “Red is the experiential quality when one perceives a red object”, then Houston and the Aliens cannot reach any level of understanding about what is meant by the term red, BECAUSE defining red as the experiential quality when one perceives a red object is a circular argument, and the Aliens have no way of directly experiencing seeing a red object without knowing in advance what a red object is. Thus if one’s definition of red is indeed “Red is the experiential quality when one perceives a red object” then I can see how one might erroneously conclude that understanding between agents is contingent upon significant similarity in "informational experience".

But if our semantic understanding of red is based on “Red is the experiential quality that you have when you sense-perceive electromagnetic radiation with wavelengths of the order of 650nm”, then Houston and the Aliens can indeed understand each other, even though their “informational experiences” (ie exactly how they have arrived at that definition and understanding) may be very different.

Thus: Understanding between agents is contingent simply upon significant similarities in semantic understanding, and not necessarily on similarities in informational experience.
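The Houston/Aliens point can be illustrated with a small sketch (all names, labels, and the wavelength range below are illustrative assumptions): if both agents anchor "red" to a wavelength range rather than to a private experience, their verdicts necessarily coincide, however different their informational experiences are.

```python
def is_red(wavelength_nm: float) -> bool:
    """Shared semantic definition: red = EM radiation of roughly
    620-750 nm (centred near the 650 nm figure used above)."""
    return 620.0 <= wavelength_nm <= 750.0

class Agent:
    """An agent with a private, incommunicable experience label but a
    shared, wavelength-based semantic definition of 'red'."""
    def __init__(self, private_experience: str):
        self.private_experience = private_experience  # differs per agent

    def calls_it_red(self, wavelength_nm: float) -> bool:
        return is_red(wavelength_nm)

houston = Agent("warm-fire-sensation")    # hypothetical quale label
aliens = Agent("hull-glow-resonance")     # hypothetical quale label

# Different "informational experience", identical semantic verdicts,
# because the shared definition is anchored to the wavelength:
for wl in (650.0, 532.0, 700.0):
    assert houston.calls_it_red(wl) == aliens.calls_it_red(wl)
print(is_red(650.0), is_red(532.0))  # -> True False
```

The private labels never enter the classification, which is the sense in which the shared understanding is independent of what red "looks like" to either agent.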

MF
 
  • #184
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

That is a silly objection.

It is not an objection, it is an observation. Do you dispute it?

If it is not an objection, it has no relevance to the debate...

Anyone I can actually speak to obviously has a functioning brain.

Tournesol, your arguments are becoming very sloppy.
I can (if I wish) “speak to” my table – does that mean my table has a functioning brain?



I am not going on their external behaviour alone; I have an insight into how their behaviour is implemented, which is missing in the CR and the TT.

What “insight” do you have which is somehow independent of observing their behaviour?

The insight that "someone with a normal, functioning brain has consciousness and understanding broadly like mine, because my consciousness and understanding are generated by my brain".

How would you know “by insight” that a person in a coma cannot understand you, unless you put it to the test?
How would you know “by insight” that a 3-year old child cannot understand you, unless you put it to the test?


The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

I have already suggested a better test. You were not very receptive.

Sorry, I missed that one. Where was it?

"figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness."

It would help if you spelt out what, IYO, the (strong) AI argument does say.

I am not here to defend the AI argument, strong or otherwise.
I am here to support my own position, which is that machines are in principle capable of possessing understanding, both syntactic and semantic.

You certainly should be defending the strong AI argument, since that is what this thread is about.
If you really are only saying that "machines are in principle capable of possessing understanding, both syntactic and semantic" then you are probably wasting my time and yours. Neither Searle nor I rule out machine understanding in itself. The argument is about whether machine understanding can be achieved purely by a system of abstract rules (SMART). It can be read as favouring one approach to AI, the Artificial Life or bottom-up approach, over another, the top-down or GOFAI.


Note that this distinction (between syntax and SMART) makes no real difference to the CR.

Can you show this, or are you simply asserting it?

Re-read the CR, and see if it refers to syntactic rules to the exclusion of all others.


In fact, the idea that there are rules for semantics has never struck anybody except yourself, since there are so many objections to it.

I do not think this is true. Even if it were true, what relevance does this have to the argument?

It explains why it is quite natural to assume SMART is restricted to syntax -- this is not some malicious misreading of what you are saying.

The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

How is that relevant to the CR? Are you saying that the CR can *acquire* semantics despite its lack of interaction and sensory contact with an environment?


I am saying that the information and knowledge to understand semantics can be encoded into the CR, and once encoded it does not need continued contact with the outside world in order to understand

You have not explained how it is encoded. It cannot be acquired naturally, by interaction with an environment, and it cannot be cut-and-pasted.


Are you saying you can "download" the relevant information from a human -- although you have already conceded that information may fail to make sense when transplanted from one context to another?


Where did I say that the information needs to be downloaded from a human?

It was a guess. So far you haven't said anything at all about where it is to come from, which is not greatly to the advantage of your case.

Are you perhaps suggesting that semantic understanding can only be transferred from a human?
The only “information” which I claim would fail to make sense when transplanted from one agent to another is subjective experiential information – which as you know by now is not necessary for semantic understanding.

If the variability of subjective information is based on variability of brain anatomy, why doesn't that affect everything else as well?

And if you think non-subjective information can be downloaded from a brain -- why mention that? Are you saying the CR rulebook is downloaded from a brain, or what?

By virtue of SMART?

By virtue of the fact that semantic understanding is rule-based.

You have yet to support that.

By standard semantics, possession of understanding is *necessary* to report.

The question is not whether “the ability to report requires understanding” but whether “understanding requires the ability to report”
If you place me in situation where I can no longer report what I am thinking (ie remove my ability to speak and write etc), does it follow that I suddenly cease to understand? Of course not.

You are blurring the distinction between being able to report under specific circumstances and being able to report under any circumstances. When we say people are conscious, we do not mean they are awake all the time. When we say understanding requires the ability to report, we do not mean that people produce an endless monologue on their internal state.

Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?


1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"


Mary can semantically understand the statement “what red looks like” without knowing what red looks like.

Mary has a partial understanding. She acquires a deeper understanding when she leaves her prison and sees red for the first time.

The statement means literally “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “what red looks like”.

It is not the meaning that "what red looks like" has to someone who has actually seen red. It is not the full meaning.

Mary can semantically understand the statement “the experiential qualities of red which cannot be written down” without knowing the experiential qualities of red. The statement means literally “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “the experiential qualities of red which cannot be written down”.

If you have seen a cat, you can understand the sentence "a cat you have not seen". If you have not seen a cat it is more difficult. If you have not seen a mammal it is more difficult still...etc...etc. There is no fixed border between not understanding and understanding.


Thus I have shown that Mary can indeed semantically understand both your examples.

Given a minimal definition of "understanding" -- setting the bar low, in other words.


Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?

I already have: given that semantics means semantics, and not just the bits of semantics Mary can understand while still in the room.

What red looks like is nothing to do with semantic understanding of the term red – it is simply “what red looks like”.

How can the semantic meaning of "what red looks like" fail to have anything to do with what red, in fact, looks like?


What red looks like to Tournesol may be very different to what red looks like to MF,

Possibly. About as possible as Zombies.


but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of what red looks like.

Tu quoque.

The experiential qualities of red are nothing to do with semantic understanding of the term red – these are simply “the experiential qualities of red”.

Tu quoque.


The experiential qualities of red for Tournesol may be very different to The experiential qualities of red for MF,


Do you know that? Or do you just mean they are not necessarily the same? How does that relate to meaning anyway?
"What is my name", "What is the time" and "Where are we" tend to have different meanings according to who says them, when, and where. Are you completely sure that "X has different meanings for different people" equates to "X has no (objective) semantic meaning"?

but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of the experiential qualities of red.

Tu quoque. You seem to be asserting that, in Fregean terminology, meaning is
purely sense and not reference. But, for Frege,
sense and reference are both constituents of meaning.

The confusion between “experiential qualities” and “semantic understanding” arises because there are two possible, and very different, meanings of (interpretations of) the simple question “what is the colour red?”
One meaning (based on subjective experiential knowledge of red) would be expressed “what does the colour red look like?”.

The other meaning (the objective semantic meaning of red) would be expressed as “what is the semantic meaning of the term red?”.

The fact that the second meaning is "objective" does not imply that it and it
alone is semantic. You are avoiding the idea that meaning can have a
subjective component, rather than arguing against it.

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Using "objective" and "semantic" as interchangeable synonyms is confusing
meanings.

To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.

You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.

I already have. To recap:

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by substituting "pigeon" for "strawberry"
are cancelled out by further substitutions. Hence the "global" versus "local"
aspect. Therefore, one cannot safely infer
that one's interlocutor has the same semantics as oneself just on the basis
that they fail to make errors (relative to your semantic model) in respect of symbol-manipulation.
3) The CR argument itself.
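(The "global swap" in argument 2 can be made concrete with a small sketch. The toy rulebook, relation names and permutation below are all invented for illustration: if every content word is swapped consistently, the rulebook's purely syntactic structure is unchanged, so an examiner checking only symbol-manipulation sees no error at all.)

```python
# Toy "rulebook": purely syntactic facts relating symbols to one another.
# All symbols and relations here are invented purely for illustration.
rules = {
    ("strawberry", "is_a"): "fruit",
    ("pigeon", "is_a"): "bird",
    ("fruit", "property"): "edible",
    ("bird", "property"): "feathered",
}

# A global, consistent word-swap: an "inverted" semantics.
swap = {"strawberry": "pigeon", "pigeon": "strawberry",
        "fruit": "bird", "bird": "fruit",
        "edible": "feathered", "feathered": "edible"}

def permute(rulebook, mapping):
    """Apply the same substitution to every symbol in every rule."""
    sub = lambda s: mapping.get(s, s)
    return {(sub(a), rel): sub(b) for (a, rel), b in rulebook.items()}

# The globally swapped rulebook is syntactically identical to the
# original: every substitution is cancelled out by another, so no
# symbol-manipulation test can detect the inversion.
assert permute(rules, swap) == rules
```

A merely local swap, by contrast, does disturb the syntactic structure, which is why small variations in semantics are expected to show up in behaviour while a global inversion is not.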


No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at
all?

Very relevant. If the CR passes most of the Turing test, but fails to understand one or two words because those words are simply defined differently between the CR and the human interrogator, that in itself is not sufficient to conclude “the CR does not understand”.

The floogle/blint/zimmoid argument shows that a CR could systematically
misunderstand (have the wrong semantic model for) all its terms without displaying any errors with regard to
symbol-manipulation.

The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.

“Grounded” in what in your opinion? Experiential knowledge?
What experiential knowledge do I necessarily need to have in order to have semantic understanding of the term “house”?

If experiential knowledge is unnecessary, you should have no trouble with
"nobbles are made of gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made of vob"
etc, etc.

IOW, it only *seems* to you that experience is unnecessary because YOU ALREADY
KNOW what terms like "brick" , "window" and "door" mean.

IOW if your theory is correct you should be able to tell me what
nobbles, gulds, plobs, giffles, frint and vob are.


So...can you ?
 
Last edited:
  • #185
If most people agree on the definitions of the words in a sentence, what
would stop that sentence from being an analytic truth, if it is analytic?

If X and Y agree on the definitions of words in a statement then they may also agree it is analytic. What relevance does this have?

Your highly selective objections to analytic truths. You claim that
"understanding requires consciousness" is tantamount to a falsehood
but "understanding does not require experience" is close to a necessary
truth. Yet both claims depend on the definitions of the terms involved.



There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities,

How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?


You are not reading my replies, are you? I never said there should be no definitions, I said there is a balance to be struck. Once again you seem to be making things up to suit your argument.

Indeed there is a balance to be struck. You cannot claim that your approach to
the nature of understanding does not depend on the way you define
"understanding", and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.

Why should it matter that the “experiential knowledge of red” is purely subjective?
Are you supposing that subjective knowledge doesn't matter for semantics ?

I am suggesting that subjective experiential knowledge is not necessary for semantic understanding. How many times do you want me to repeat that?


You have made it abundantly clear that you are suggesting it.

The (unanswered) question is why you are suggesting it.

They should be broadly similar if our brains are broadly similar.
“Broadly similar” is not “identical”.
A horse is broadly similar to a donkey, but they are not the same animal.

So? My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate. IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.

You are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.

You are perhaps trying to read things into my arguments that are not there, to support your own unsupported argument. When I say “there is no a priori reason why they should be identical” this means exactly what it says. With respect if we are to continue a meaningful discussion I suggest you start reading what I am writing, instead of making up what you would prefer me to write.

There is a good a posteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.


You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.

I certainly do not. Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know?

What "red" looks like.

Whether or not I have known the experiential quality of seeing red makes absolutely no difference to this semantic understanding.


According to your tendentious definition of "semantic understanding".


What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

that is a classically anti-physicalist argument.


It may be a true argument, but it is not necessarily anti-physicalist.

How can conscious experience vary in a way that is not accounted for by
variations in physical brain states? That knocks out the possibility
that CE is caused by brain-states, and also the possibility that it
is identical with brain states. Any other possibility is surely
mind-body dualism. Have you thought this issue through at all?

Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.

The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.

Given your tendentious definition of "semantic meaning".

What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

What they look like, experientially.

What they look like is an experiential quality, it is not semantic understanding.

Given your tendentious definition of "semantic understanding".

Perhaps you would claim that I also do not have a full understanding of red because I have not tasted red? And what about smelling red?

What the full meaning of a term in a human language is depends on human
senses. Perhaps Martians can taste "red", but the CR argument is about human
language.

Clearly experiential semantics conveys understanding of
experience.

I can semantically understand what is meant by the term “experience” without actually “having” that experience.

You need to demonstrate that you can understand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world. You have never
really tackled the problem of explaining how an abstract SMART-based
semantics bootstraps itself.

We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?

The CR already contains information in the form of the rulebook

But how do you propose to get it into the CR?

You haven't supplied any other way the CR can acquire semantics.

Semantics is rule-based; why should the CR not possess the rules for semantic understanding?

Nobody but you assumes a priori that semantics is rule-based. You are making
the extraordinary claim; the burden is on you to defend it.

I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.

Please define the Hard Problem.

The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).

Surf "chalmers hard problem consciousness"


Because human languages contain vocabulary relating to human senses.

And I can have complete semantic understanding of the term red, without ever seeing red.

Given your tendentious definition of "semantic meaning".

If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?

By reasoning and experimental test.

Reasoning involves definitions, and experimental test requires pre-existing
standards.

Do I have a way of knowing whether physicalism is true?

You don’t. And I understand that many people do not believe it is true.

Are you one of them ? Are you going to get off the fence on the issue ?

We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.

And my position is that I believe arbitrary definitions such as “understanding requires consciousness” and “understanding requires experiential knowledge” are too anthropocentrically biased and cannot be defended rationally

Well, that's the anti-physicalist's argument.

It’s my argument. I’m not into labelling people or putting them into boxes.
I see no reason why X’s subjective experience of seeing red should be the same as Y’s

Oh puh-leaze! This isn't a PC-thing. Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.



The question was about the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X Ray" on analogy with "light". How
can you infer "qualia" without any analogies?

By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding of the experiential quality of seeing red?

What red actually looks like.

You claim to be a physicalist.

To my knowledge I have made no such claim in this thread

So people have immaterial souls ? But then how can you be sure
that AI's have real understanding or consciousness. Can't
you see that "SMART is sufficient for understanding" is
a more-physicalist-than-physicalism stance. Not only
does it imply that no non-physical component is needed,
it also implies that the nature of the physical basis
is fairly unimportant.

Anyway, experience has to do with the semantics of experiential language.

Semantic understanding has nothing necessarily to do with experiential qualities, as I have shown several times above

You have not shown it to be true apart from your
tendentious definition of "semantic meaning".


It is not part of your definition of understanding -- how remarkably
convenient.

And remarkably convenient that it is part of yours?
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary experiment for example.

The point of the Mary parable is exactly the opposite of what you are trying
to argue. And you complain about being misunderstood !


Ask a blind person what red looks like.
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,

It obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".

which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. Experiential knowledge is not part of this semantic understanding.

It is not part of your tendentiously stripped-down definition of
"understanding", certainly.

Yes there is: all brains are broadly similar anatomically

As before, “broadly similar” is not synonymous with “identical”.

Again, that is not relevant. The question is whether gross subjective
differences could emerge from slight physical differences.

If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker).

Genetically identical twins may behave similarly, but not necessarily identically. Genetic makeup is only one factor in neurophysiology.

Again irrelevant. You keep using "not identical" to mean "radically
different".

This is a style of
argument you dislike when others use it.

It is not a question of “disliking”.
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”)

Your position is based on definitions that DON'T EVEN HAVE popular support!

You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.


then it is worthy of discussion. I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.

According to your tendentious definition of "semantic understanding".


You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.

What “explanatory gap” is this?

Surf "explanatory gap Levine".
 
  • #186
moving finger said:
Tisthammerw said:
Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?

If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried.

I see, so it is only analytic depending on how one defines the terms, something I have been saying from the beginning and yet you continued to ignore and misconstrue my words.

Anyway, let's move on.


Tisthammerw said:
In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.

With respect, what part of “you did NOT specify that the computer you have in mind is interpreting the data/information” do you not understand?

With respect, what part of "I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms" do you not understand? I described the computer, and asked you if this computer was "interpreting data/information" according to your definitions of those terms (in addition to perceiving).

Let’s recap:

Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered to be “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)?


So tell me – is the computer you have in mind interpreting the data/information?

You tell me.


I also trim the parts on “my definition is better than yours” since I consider this rather puerile.

Then why did you start it?


Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?


There is more than one meaning of “to perceive”.

Fine, but that doesn’t answer my question. Does this entity perceive using your definition of term (whatever that is)?


For an entity to “sense-perceive” a bright light it must possess suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?

It is not a different question; it is crucial to the question I asked: Under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?

My definition of consciousness is the same as I defined it earlier.

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Thus my definition of consciousness is such that if an entity possesses awareness, the entity possesses consciousness.

For those who wish further precision, these individual characteristics could also be defined. “Perception” is the state of being able to perceive. Using the Eleventh edition of Merriam-Webster’s dictionary, I refer to “perceive” (definitions 1a and 2), “sensation” (1b), “thought” (1a), and “awareness” (2). These definitions are also available at http://www.m-w.com/


And I have shown repeatedly that you have “shown” no such thing

And I have shown repeatedly that what you have shown no such thing regarding what I have shown.

Okay, this is getting a bit unwieldy…

(see post #256 in the “can artificial intelligence……” thread)

See my response to that post.


Suggestion: If you wish to continue discussing the Program X argument can we please do that in just one thread (let’s say the AI thread and not this one)? That way we do not have to keep repeating ourselves and cross-referencing.

As you wish.
 
  • #187
Tournesol said:
If it is not an objection, it has no relevance to the debate...
Why is an observation necessarily not relevant to a debate?

Tournesol said:
"figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness."
Yes, this is one possible approach. But not necessarily a good one. It implicitly assumes that “the physics which produces consciousness in the brain is the only kind of physics which can produce consciousness”, which is not necessarily the case.

Tournesol said:
You certainly should be defending the strong AI argument, since that is
what this thread is about.
Thank you for telling me what I “should” be doing. With respect, I’ll ignore your advice. I do what I wish to do, not what you think I should do.

Tournesol said:
If you really are only saying that
"machines are in principle capable of possessing understanding, both syntactic and semantic"
you are probably wasting my time and yours. Neither Searle nor I rule
out machine understanding in itself. The argument is about whether machine
understanding can be achieved purely by a system of abstract rules (SMART).
It can be read as favouring one approach to AI, the Artificial Life, or
bottom-up approach , over another, the top-down or GOFAI.
You must be reading a different version of the CR to me.

Tournesol said:
Re-read the CR, and see if it refers to syntactic rules to the exclusion
of all others.
You must be reading a different version of the CR to me.

Tournesol said:
It explains why it is quite natural to assume SMART is restricted to syntax --
this is not some malicious misreading of what you are saying.
As I said, you must be reading a different version of the CR to me.

Tournesol said:
You have not explained how it is encoded. It cannot be acquired naturally, by
interaction with an environment, and it cannot be cut-and-pasted.
Why can it not be cut and pasted? Have you shown why?

Tournesol said:
So far you haven't said anything at all about where it is to come
from, which is not greatly to the advantage of your case.
I am not the one asserting that the “CR shows semantic understanding in machines is impossible”. The onus is on the owner of the thought experiment to defend the logic and conclusions of the thought experiment. The basic assumption of the CR (that the premise “syntax gives rise to semantics” is false) is false – because the CR does not need a premise that “syntax gives rise to semantics”.

Tournesol said:
If the variability of subjective information is based on variability of brain
anatomy, why doesn't that affect everything else as well?
Not everything is subjective

Tournesol said:
And if you think non-subjective information can be downloaded from a brain --
why mention that? Are you saying the CR rulebook is downloaded from a brain,
or what?
I’m saying the CR rulebook must be created, but not necessarily “downloaded from a brain”.

moving finger said:
By virtue of the fact that semantic understanding is rule-based
Tournesol said:
You have yet to support that.
I am not the one asserting that semantic information is NOT rule-based – Searle is. Let Searle (or his supporters) defend the CR experiment by showing that semantic understanding is NOT rule-based.

Tournesol said:
You are blurring the distinction between being able to report under specific
circumstances, and being able to report under any circumstances. When
we say people are conscious, we do not mean they are awake all the time.
When we say understanding requires the ability to report, we
do not mean that people produce an endless monologue on their internal
state.
I dispute that understanding requires the ability to report. Are you saying it does?

Tournesol said:
Mary has a partial understanding. She acquires a deeper understanding when she
leaves her prison and sees red for the first time.
I would say that understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much DEEPER understanding of “red” than simply “knowing the experiential quality of seeing red”.

Tournesol said:
It is not the meaning that "what red looks like" has to someone who has
actually seen red. It is not the full meaning.
What is “full meaning”? Is this similar to the “complete understanding” of a horse which quantumcarl insists can only be obtained by someone who feeds, grooms and mucks out a horse? What makes you think that you have access to the “full meaning” of red?
I would say that understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much FULLER understanding of “red” than simply “knowing the experiential quality of seeing red”.

moving finger said:
Thus I have shown that Mary can indeed semantically understand both your examples.
Tournesol said:
Given a minimal definition of "understanding" -- setting the bar low, in other
words.
Again, to my mind understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much HIGHER understanding of “red” than simply “knowing the experiential quality of seeing red”

moving finger said:
Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?
Tournesol said:
I already have :- given that semantics means semantics and not the bits of
semantics Mary can understand while still in the room.
Are you saying that Mary’s “semantic understanding of red” while still in the room is NOT in fact a semantic understanding of red?

Tournesol said:
To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.
The onus is on Searle, or his followers, to defend their thought-experiment which is based on the assumption that AI necessarily posits that syntax gives rise to semantics. The assumption is false, hence the thought experiment needs re-stating.

moving finger said:
You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.

Tournesol said:
I already have. To recap:

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
I know that this does not follow – why can a machine not know the same?
That a “gift” is also a “present”, whereas “present” has more than one meaning, is still following a “rule”. I know this rule, a machine can know it also.
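(The circularity being argued over here is easy to state computationally. A toy sketch, with an invented two-word mini-dictionary: if every definition is just another word in the same system, lookup goes round in a loop and never bottoms out in anything outside the symbols themselves.)

```python
# Toy dictionary in which each definition is simply another word.
# The two-entry dictionary is invented purely for illustration.
definitions = {"gift": "present", "present": "gift"}

def chase(word, dictionary):
    """Follow definitions until a word repeats; return the path taken."""
    path = []
    while word not in path:
        path.append(word)
        word = dictionary[word]
    return path + [word]  # the repeated word closes the cycle

# "gift" -> "present" -> "gift": the lookup loops forever inside the
# symbol system, which is the grounding worry in miniature.
print(chase("gift", definitions))  # ['gift', 'present', 'gift']
```

Whether a machine "knowing the rule" that the lookup loops counts as understanding is, of course, exactly the point in dispute.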

Tournesol said:
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by substituting "pigeon" for "strawberry"
are cancelled out by further substitutions. Hence the "global" versus "local"
aspect. Therefore, one cannot safely infer
that one's interlocutor has the same semantics as oneself just on the basis
that they fail to make errors (relative to your semantic model) in respect of symbol-manipulation.
Then don’t just test their symbol manipulation – test their understanding of the language. This is within the scope of the Turing test. The TT is not meant to be ONLY a test of symbol manipulation, it is meant also to be a test of understanding.

Tournesol said:
3) The CR argument itself.
The CR argument neither assumes nor shows that semantics is not rule-based. It assumes that AI necessarily posits “syntax gives rise to semantics”, which it does not. Hence the premises of the CR argument are untrue.

Thus NONE of the above shows that semantics is NOT rule-based.
NOTHING in the CR argument shows that semantics is NOT rule-based.

Tournesol said:
The floogle/blint/zimmoid argument shows that a CR could systematically
misunderstand (have the wrong semantic model for) all its terms without displaying any errors with regard to
symbol-manipulation.
Which is why the TT must test not only symbol manipulation (syntax), but also understanding (semantics).

Tournesol said:
If experiential knowledge is unnecessary, you should have no trouble with
"nobbles are made of gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made of vob"
etc, etc.
I have no trouble with understanding these terms at all.
Whether you define them in the same way as me is another issue.
If we wanted to share our definitions of the meanings of these words, does it follow that either of us needs to have any particular experiential knowledge in order to understand those meanings?

Tournesol said:
IOW if your theory is correct you should be able to tell me what
nobbles, gulds, plobs, giffles, frint and vob are.
I can tell you if you wish. Whether you will agree with me or not is another matter. Would the fact that you do not agree with me mean that I do not understand the terms?

MF
 
Last edited:
  • #188
MF said:
TheStatutoryApe said:
With respect, every version of the question I have asked has included the word "How"...
With respect, this is simply untrue. Your original question (which I even quoted in my last post, but you obviously failed to read) was

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?
Which I interpreted to mean “can Tim convey his concept of red to others?”. And that original interpretation has stuck in my mind as the question you are asking, until you quite pointedly stated that you mean something completely different.
Do you see now how the confusion is caused by your ambiguity?
It would seem obvious to me that you are the one failing to read and pay attention. If you refer back to my last post you will see that I have quoted every instance in which I asked a question regarding Tim in order from the very first. I have been worried that perhaps I am not being clear enough in my posts but it seems quite obvious to me that you really aren't paying any attention.
Thank you for your time.
Have a nice day.
 
  • #189
MF said:
But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
That's brilliant. I simply need to assert that something is possible without at all indicating how it is possible and it should be accepted as fact. I wonder why scientists go through all the trouble then.

MF said:
When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?”
Searle may not have gone into detail but he did explain how the rule book functioned with regard to the CR.

MF said:
I do not need to, because I still do not see the relevance of your argument to the CR question.
If you don't pay attention and simply dismiss this without regard for the implications, then you never will see the relevance, will you?
 
  • #190
moving finger said:
And this definition implies to you that to have semantic understanding of the term "horse" an agent must necessarily have groomed a horse, and shovelled the horses dung?

Yes. A true understanding requires every type of experiential input. If one relies solely on text or binary data or even word of mouth and pictures, there is only enough knowledge in these formats to form a definition of a horse. Not an understanding of a horse.


moving finger said:
"Standing under something" in this context does not mean literally "physically placing your body underneath that thing" - or perhaps you do not understand what a metaphor is?

Clearly you are the one who does not understand what a metaphor is. Are you telling me that the word "understanding" is a metaphor? Please let us know how "understanding" works as a metaphor.

Can you please explain what the word "inside" is a metaphor for.


moving finger said:
I am talking here about semantic understanding - which is the basis of Searle's argument. Semantic understanding means understanding the meanings of words as used in a language

Where did you get your definition of "semantic understanding"? "Semantic understanding" is an oxymoron. "Semantic" implies a standard system, while "understanding" implies relative individuality and points of view.

moving finger said:
An agent can semantically understand what is meant by the term "horse" without ever having seen a horse, let alone mucked out the horses dung.

More like "an agent can use semantic knowledge to know the definition of a horse". However, if the agent has never been in close proximity to a horse, the agent does not understand horses... not how to ride them... not how to care for them... nor pretty well anything else about a horse.


moving finger said:
No, I have admitted no such thing. You are mistaken here.

I think what you are referring to is that I read how you think of understanding as an individual experience that cannot be quantified or qualified by another person... very easily.


moving finger said:
And by your definition, I and at least 95% of the human race do not semantically understand what is meant by the term "horse". Ridiculous.

"Semantically understand" is an empty and ill-defined phrase. I've already shown it to be null and void. Semantic knowledge is what you are referring to.


moving finger said:
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?

I haven't suggested this.


moving finger said:
You choose to define understanding such that only an agent who has shared an intimate physical connection with a horse (maybe one needs to have spent the night sleeping with the horse as well?) can semantically understand the term "horse".

Those who read about horses or hear about them have semantic knowledge of horses. They have no experience with horses... they do not understand the actual, true implications of the animal that is the horse.

moving finger said:
I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything.

In your case I do see the folly.

moving finger said:
As I have said many times already, you are entitled to your rather eccentric definition of understanding, but I don't share it.

You are entitled to your rather uninteresting and misleading definition of understanding, but I don't share it.

b' Bye.
 
  • #191
Tournesol said:
You claim that
"understanding requires consciousness" is tantamount to a falsehood
This is a false allegation. If one wishes to claim that "understanding requires consciousness" is a true statement then one must first SHOW that "understanding requires consciousness". This has not been done (except tautologically)

Tournesol said:
but "understanding does not require experience" is close to a necessary
truth.
I am not saying this is necessarily true. Just that this is what I believe, based on rational examination of understanding. You clearly believe differently. Hence the reason for our debate here.

Tournesol said:
Yet both claims depend on the definitions of the terms involved.
How many times do I need to repeat it?
Whether a statement is analytic or not depends on the definitions of the terms used. If two agents do not agree on the definitions of the terms used then they may also not agree on whether the statement is analytic. Period.

Tournesol said:
You cannot claim that your approach to
the nature of understanding does not depend on the way you define
"understanding" and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.
Where did I claim this? All terms used in arguments need to be defined either implicitly or explicitly. I choose to define understanding one way, you choose to define it another. Your choice assumes consciousness and experiential qualities are required, mine does not. Simple as that.

Tournesol said:
The (unanswered) question is why you are suggesting it.
Because that is my belief, and I believe it because nobody, including Tournesol, has come up with a rational and coherent argument to show experiential knowledge IS necessary for semantic understanding, except by the tautological method of defining semantic understanding such that it requires experiential knowledge “by definition”.

If someone claimed “a human heart is necessary for semantic understanding”, but they were unable to SHOW this to be the case, then why should I believe them? The same applies to experiential knowledge, and consciousness. The onus is on the person making the claim of “necessity” to show rationally why the necessity follows (without resorting to tautological arguments). In absence of such demonstration there is no reason to believe in necessity.

I have answered your question of why I believe that experiential knowledge is not necessary for semantic understanding. Can you now answer the question of why you think it IS necessary?

Tournesol said:
My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate.
I do not see that this follows at all. The genetic differences (the differences in the genes) between humans and chimpanzees are very, very tiny – but the consequences of these tiny differences in genetic makeup are enormous. One cannot assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.

Tournesol said:
IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.
I see no reason why my subjective sensation of red should not be more similar to your subjective sensation of green than it is to your subjective sensation of red. We have absolutely no idea how the precise “qualia” arise in the first place, thus there is no a priori reason to assume that qualia are the same in different agents.

Tournesol said:
There is a good a posteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.
I have never suggested it does. I am simply suggesting that “similar physical substrate” does not necessarily imply “similar details of operating system”. The way that a system behaves can be critically dependent on very minor properties of the substrate, such that small changes in substrate result in enormous changes in system behaviour. Heard of chaos?

moving finger said:
Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know?
Tournesol said:
what "red" looks like.
This is a purely subjective quality and imho does not add to my semantic understanding of red. If the subjective quality “what red looks like” suddenly changed for me overnight, nothing would change in my semantic understanding of red.

Tournesol said:
According to your tendentious definition of "semantic understanding".
I can argue that understanding red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a far greater and deeper semantic understanding of red than simply defining it tautologically as “red is the experiential quality associated with seeing a red object”, the latter of which arguably gives us no “understanding” at all.

moving finger said:
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Tournesol said:
that is a classically anti-physicalist argument.
Call it what you like. The conclusion is true. Or do you perhaps deny this conclusion is true?

Tournesol said:
How can conscious experience vary in a way that is not accounted for by
variations in physical brain states ?
Where have I suggested that it does? I am claiming that two different agents necessarily have two different physical substrates (even if only slightly different), and we have no a priori reason for assuming that a small difference in physical substrate necessarily equates to a small difference in conscious experience.

Tournesol said:
That knocks out the possibility
that CE is caused by brain-states, and also the possibility that it
is identical with brain states. Any other possibility is surely
mind-body dualism. Have you thought this issue through at all?
Have you thought out the fact that you are STILL not reading my posts correctly, and continuing to invent your own incorrect ideas?
Where have I suggested that variations in CE are not accounted for by differences in brain-states?

Tournesol said:
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.
It could be. And it could be that I am a Zombie and I have no understanding at all. That is why we need to develop objective tests for syntax and semantics.

moving finger said:
The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.
Tournesol said:
Given your tendentious definition of "semantic meaning".
And you think that defining red as “the subjective experiential quality associated with seeing a red object” is a deeper and more insightful meaning of red?

Tournesol said:
What the full meaning of a term in a human language is, depends on human
senses. Perhaps Martians can taste "red", but the CR argument is about human
language.
I cannot be sure of what red looks like to you, and (despite your claims to the contrary) you cannot be sure of what red looks like to me. The ONLY reason that we share a common understanding of red is BECAUSE “red” literally MEANS “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” – and it means this to ALL agents – and that is THE common semantic understanding of red.

Tournesol said:
You need to demonstrate that you can understrand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world.
Because that is the only way a human has of acquiring information and knowledge in the first place. Humans are born without any semantic understanding – the only way for a human to acquire the information and knowledge needed to develop that understanding is via the conduits of the senses. But this limitation does not necessarily apply to all agents, and it does not follow that direct sense-experience is the only way for all agents to acquire information and knowledge.

Tournesol said:
You have never
really tackled the problem of explaining how an abstract SMART-based
semantics bootstraps itself.
Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.

Tournesol said:
We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
moving finger said:
The CR already contains information in the form of the rulebook
Tournesol said:
But how do you propose to get it into the CR?
Ask Searle – it's HIS thought experiment, not mine. Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.

Tournesol said:
You haven't supplied any other way the CR can acquire semantics.
I am claiming that semantics is rule-based. Can you show it is not?
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?
Since semantics is rule-based, it follows algorithms, and can be programmed into a machine. Whether we humans yet consciously understand all these rules and algorithms and thus whether we yet have the capability to program the CR is a separate practical problem.
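The rule-based claim could be sketched, very loosely, as follows. Assuming, purely for illustration, that a word's meaning is captured by a set of defining rules (the specific rules, wavelength thresholds, and names below are invented, not a real semantic theory), a machine could apply those rules mechanically:

```python
# Toy sketch of "rule-based semantics": each word's meaning is a set of
# rules (predicates) over describable properties. Purely illustrative;
# the rules and the 620-750 nm range are hypothetical placeholders.
SEMANTIC_RULES = {
    "red": [
        lambda stimulus: stimulus.get("kind") == "electromagnetic radiation",
        lambda stimulus: 620 <= stimulus.get("wavelength_nm", 0) <= 750,
    ],
}

def satisfies(word, stimulus):
    """True if the stimulus meets every rule defining the word."""
    return all(rule(stimulus) for rule in SEMANTIC_RULES[word])

print(satisfies("red", {"kind": "electromagnetic radiation",
                        "wavelength_nm": 650}))  # True
print(satisfies("red", {"kind": "electromagnetic radiation",
                        "wavelength_nm": 530}))  # False
```

Whether rules of this form could ever be complete enough to constitute understanding is of course exactly what is in dispute in this thread; the sketch only shows that such rules are programmable in principle.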

Tournesol said:
Nobody but you assumes apriori that semantics is rule-based. You are making
the extraordinary claim, the burden is on you to defend it.
How do you know that I am the only person who believes semantics is rule-based, or is this simply your opinion again? And why does this make any difference to the argument anyway?
The proof of the pudding is in the eating. Give me any word in the English language and I can give you some of the rules that define my semantic understanding of the meaning of that word.
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

Tournesol said:
I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
moving finger said:
Please define the Hard Problem.
Tournesol said:
The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).
We do not fully understand the detailed mechanisms underlying the operating of the conscious brain. I do not see how this strawman is relevant to the question of whether semantic understanding is rule-based or not.

Tournesol said:
Because human languages contain vocabulary relating to human senses.
moving finger said:
And I can have complete semantic understanding of the term red, without ever seeing red.
Tournesol said:
Given your tendentious definition of "semantic meaning".
I cannot be sure of what red looks like to you, and (despite your claims to the contrary) you cannot be sure of what red looks like to me. The ONLY reason that we share a common understanding of red is BECAUSE “red” literally MEANS “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” – and it means this to ALL agents – and that is THE common semantic understanding of red.

Tournesol said:
If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?
moving finger said:
By reasoning and experimental test.
Tournesol said:
Reasoning involves definitions and experimental test requires pre-existing
standards.
I never said otherwise. Are you suggesting that “ability to understand” can be established any other way than by reasoning and experimental test? If so would you care to explain how you would go about it?

Tournesol said:
Are you one of them ? Are you going to get off the fence on the issue ?
It seems important to you that you call me either a believer or disbeliever in physicalism. Please define exactly what you mean by physicalism and I might be able to tell you whether I believe in it or not.

Tournesol said:
Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.
But I have already shown above that your argument is unsound. One cannot assume that small differences in substrate necessarily lead to small differences in system behaviour. To use your terminology - this is not anti-physicalist, it is purely physicalist.

Tournesol said:
The question was the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X Ray" on analogy with "light". How
can you infer "qualia" without any analogies?
moving finger said:
By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding of the experiential quality of seeing red?
Tournesol said:
What red actually looks like.
Which, as I have already said many times, is a subjective experiential quality and is not necessary for semantic understanding.

Tournesol said:
You have not shown it to be true apart from your
tendentious definition of "semantic meaning".
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning".

moving finger said:
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary experiment for example.
Tournesol said:
The point of the Mary parable is exactly the opposite of what you are trying
to argue. And you complain about being misunderstood !
I was referring to the Mary argument that I gave in this thread (see post #157), and not to some other Mary argument that you might have in mind. I apologise for not making that clear.

Tournesol said:
Ask a blind person what red looks like.
moving finger said:
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,
Tournesol said:
it obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".
Would you care to define what you mean by “full understanding”?
Why is understanding red to be “my subjective experiential quality of seeing a red object” necessarily a more full understanding of red than understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”?

Tournesol said:
It is not part of your tendentiously stripped-down definition of
"understanding", certainly.
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning".

Tournesol said:
The question is whether gross subjective
differences could emerge from slight physical differences.
The question is whether similar substrates necessarily give rise to similar system behaviour. See above for an answer to this question.

Tournesol said:
You keep using "not identical" to mean "radically
different".
You keep assuming that similar substrates necessarily give rise to similar system behaviour. This does not follow.

moving finger said:
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”)
Tournesol said:
Your position is based on definitions that DON'T EVEN HAVE popular support!
That is your opinion. I am not claiming that my argument is based on popular support – I am claiming it is based on defensible rationality and logic.

Tournesol said:
You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.
I have stated that a balance needs to be drawn. One could choose to “define everything”, assume everything in one’s definitions, and leave nothing to rational argument or empirical investigation (this seems to be your tactic), or one could choose to start with minimal definitions and then use these plus empirical investigation to construct a rational and consistent model of the world.

moving finger said:
I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.
Tournesol said:
According to your tendentious definition of "semantic understanding".
Your opinion again.

Tournesol said:
You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
moving finger said:
What “explanatory gap” is this?
Tournesol said:
Surf "explanatory gap Levine".
Another strawman. How is this relevant to the question of whether semantic understanding is rule-based or not?

MF
 
Last edited:
  • #192
TheStatutoryApe said:
It would seem obvious to me that you are the one failing to read and pay attention. If you refer back to my last post you will see that I have quoted every instance in which I asked a question regarding Tim in order from the very first. I have been worried that perhaps I am not being clear enough in my posts but it seems quite obvious to me that you really aren't paying any attention.
And there it is in post #169, where you said, quite clearly :

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

Which is the question I actually answered, and which is an ambiguous question, as explained already.

Ignore this fact if you wish.

As I have explained many times, the other meaning of the question, which is “how would you go about teaching Tim”, is a strawman – it is not relevant to the question of whether the CR can possess understanding, thus it does not NEED to be answered in a debate on whether the CR can possess understanding.

Bye

MF
 
Last edited:
  • #193
moving finger said:
But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
TheStatutoryApe said:
That's brilliant. I simply need to assert that something is possible without at all indicating how it is possible and it should be accepted as fact.
With respect, your thinking is very irrational here.
“Showing how an agent has acquired its understanding” (what you insist I must do) is NOT synonymous with “showing that an agent understands” (which is what I am claiming can be done) – or perhaps you think the two are synonymous?
TheStatutoryApe said:
I wonder why scientists go through all the trouble then.
Maybe because scientists are (by and large) rational agents – and your above argument is irrational.

moving finger said:
When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?”
TheStatutoryApe said:
Searle may not have gone into detail but he did explain how the rule book functioned with regard to the CR.
Again, “showing how the rulebook was created in the first place” is not synonymous with “showing how the rulebook functions” (which latter Searle only described in a very cursory and simplistic way). By showing the basic principles of how the rulebook functions, Searle did not show (and did not need to show) how the rulebook was created in the first place.
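The distinction drawn here, between how the rulebook functions and how it was written, can be made concrete with a toy sketch. Assuming, purely for illustration, that the rulebook is a symbol-to-symbol lookup table (the entries below are invented placeholder strings, not real Chinese dialogue), its functioning is fully specified even though its origin is not:

```python
# Toy sketch of how a CR-style rulebook *functions*: map input symbol
# strings to output symbol strings with no reference to meaning.
# The entries are invented placeholders, not real Chinese sentences.
RULEBOOK = {
    "SYMBOLS-IN-A": "SYMBOLS-OUT-X",
    "SYMBOLS-IN-B": "SYMBOLS-OUT-Y",
}

def chinese_room(input_symbols):
    """Return whatever output the rulebook dictates; the operator
    applying this rule need not understand either string."""
    return RULEBOOK.get(input_symbols, "SYMBOLS-OUT-DEFAULT")

print(chinese_room("SYMBOLS-IN-A"))  # SYMBOLS-OUT-X
```

Nothing in the lookup itself says where the mapping came from, which is the sense in which "how it functions" and "how it was created" come apart.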

moving finger said:
I do not need to, because I still do not see the relevance of your argument to the CR question.
TheStatutoryApe said:
If you don't pay attention and simply dismiss this without regard for the implications then you never will see the relevence will you?
The “implications” as you call them are based on your irrational attempt to equate “showing how an agent has acquired its understanding” with “showing that an agent understands”. I do not accept the two are equated. Your insistence that I must explain how an agent acquires its understanding is thus a strawman.

Bye

MF
 
  • #194
moving finger said:
If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried.
Tisthammerw said:
I see, so it is only analytic depending on how one defines the terms, something I have been saying from the beginning.
This is exactly what I have been saying all along – see post #107 in this thread :

moving finger said:
Whether “conscious is required for understanding” is either an analytic or a synthetic statement is open to question, and depends on which definition of understanding one accepts.

If you agreed with this why didn’t you just say so at the time, and save us all this trouble?
Tisthammerw said:
would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”?
The above does NOT tell me whether or not the computer is interpreting the data.
It’s your computer and your example – you tell me if it is interpreting or not – it’s not up to me to “guess”.
IF your computer is acquiring, storing, processing and interpreting the data, then by definition it is perceiving. But only YOU can tell me if the computer you have in mind is doing any interpretation – I cannot guess it from your simple description.
Please answer the question – is the computer you have in mind doing any interpretation of the data?

Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?
moving finger said:
For an entity to “sense-perceive” a bright light it must possess suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?
Tisthammerw said:
It is not a different question it is crucial to the question I asked:
Perception is NOT synonymous with awareness, thus “does the agent perceive?” is a DIFFERENT question to “is the agent aware?”
I cannot answer the question you asked unless you first tell me how you wish to define “aware”.
Please answer the question.
(Note : The definition you gave of consciousness in your last post is NOT a definition of awareness – it USES the concept of awareness as one of the defining characteristics of consciousness, without defining awareness itself)

MF
 
Last edited:
  • #195
quantumcarl said:
A true understanding requires every type of experiential input. If one relies solely on text or binary data or even word of mouth and pictures, there is only enough knowledge in these formats to form a definition of a horse. Not an understanding of a horse.
Thus, following your definition of semantic understanding, at least 95% of the human race has no semantic understanding of the term “horse”.
I see.
And you expect me to agree with this? :yuck:
quantumcarl said:
"Semantic understanding" is an oxymoron. "Semantic" implies a standard system while "understanding" implies relative individuality and points of view.
“Semantic” relates to the meaning of words as used in a language
“Understanding” relates to the intelligent use of knowledge and information generally, and is not necessarily restricted to languages and words
There are other types of “understanding” apart from “semantic understanding” – thus the phrase “semantic understanding” is not an oxymoron - or perhaps you do not understand this?

quantumcarl said:
However, if the agent has never been in close proximity with a horse, the agent does not understand horses... not how to ride them... not how to care for them... and pretty well everything else about a horse.
This is where you are confusing “understanding” with “semantic understanding” – and why it is important to understand the difference. I can semantically understand what a “Chinese person” is, but I cannot understand that person (in the sense of understanding his language).

quantumcarl said:
"Semantically understand" is an empty and ill defined phrase. I've already shown it to be null and void. Semantic knowledge is what you are referring to.
And I’ve shown where you misunderstand understanding.
Knowledge is a different concept – knowledge forms the basis of understanding, but the fact that an agent possesses knowledge does not imply that the agent also possesses any understanding. Searle's whole CR argument is based on the premise that an agent can possess knowledge of a language without understanding the semantics of that language.

moving finger said:
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?
quantumcarl said:
I haven't suggested this.
Then you won’t have a problem with the idea that we can gain understanding by communicating in binary.

quantumcarl said:
Those who read about horses or hear about them have semantic knowledge of horses. They have no experience with horses... they do not understand the actual, true implications of the animal the horse.
Again you are referring to a different type of understanding – not simply semantic understanding of the term “horse”, but instead an intimate, physical, empathic, possibly even emotional and psychic, connection with the agent “horse”.

moving finger said:
I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything.
quantumcarl said:
In your case I do see the folly.
I’m glad to see that though you lost the argument you nevertheless haven’t lost your wit :biggrin:

MF
 
Last edited:
  • #196
moving finger said:
What “explanatory gap” is this?
Tournesol said:
Surf "explanatory gap Levine".
moving finger said:
Another strawman. How is this relevant to the question of whether semantic understanding is rule-based or not?

a) How can an answer to a question -- your question -- be a strawman argument?

b) If you did the surfing you might be able to figure out for yourself.
 
  • #197
Conscious understanding

Here's an article about a person with global aphasia, which is a type of brain damage, who processes semantic knowledge without a conscious understanding of the act they are performing.

Semantic processing without conscious understanding in a global aphasic: evidence from auditory event-related brain potentials.
Revonsuo A, Laine M.
Academy of Finland.
We report a global aphasic who showed evidence of implicit semantic processing of spoken words. Auditory event-related brain potentials (ERPs) to semantically congruous and incongruous final words in spoken sentences were recorded. 17 elderly adults served as control subjects. Their ERPs were more negative to incongruous than to congruous final words between 300 and 800 ms after stimulus onset (N400), and more positive between 800 and 1500 ms (Late Positivity). The aphasic showed an exactly similar pattern of ERP components as the controls did, but his performance in a task demanding explicit differentiation between semantically congruous and incongruous sentences was at the chance level. During follow-up, his explicit understanding recovered over the chance level but the ERPs remained fairly similar. We conclude that implicit semantic activation at the conceptual level can take place even in the absence of conscious (explicit) comprehension of the meaningfulness of linguistic stimuli.

The accounts in this article demonstrate how semantic processing of information still takes place in the absence of conscious understanding. This narrows the field of the definition of the word and concept "understanding" and closely associates understanding with consciousness.

In fact MF may be right in that "understanding" is a "metaphor" for the state of consciousness. This sort of idea could make "understanding" synonymous with consciousness. And this is what TH and SA have also been proposing as well. Thank you QC.
 
  • #198
MovingFinger said:
You claim that
"understanding requires consciousness" is tantamount to a falsehood
This is a false allegation. If one wishes to claim that "understanding requires consciousness" is a true statement then one must first SHOW that "understanding requires consciousness". This has not been done (except tautologically)

Tautologies are truths.


Yet both claims depend on the definitions of the terms involved.
How many times do I need to repeat it?
Whether a statement is analytic or not depends on the definitions of the terms used. If two agents do not agree on the definitions of the terms used then they may also not agree on whether the statement is analytic. Period.


Whether a statement is true or not depends on the definitions of the terms
involved, because AS YOU YOURSELF ADMIT you cannot empirically test
a statement without first understanding it. Thus both analytical
and synthetic/empirical truth depend on definitions.

Whether a statement is analytically true depends ONLY on the
definitions of the terms involved.


You cannot claim that your approach to
the nature of understanding does not depend on a the way you define
"understanding" and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.
Where did I claim this? All terms used in arguments need to be defined either implicitly or explicitly. I choose to define understanding one way, you choose to define it another. Your choice assumes consciousness and experiential qualities are required, mine does not. Simple as that.

So can I dismiss your claims as "tautologous", as though that meant "false"?


The (unanswered) question is why you are suggesting it.
Because that is my belief, and I believe it because nobody, including Tournesol, has come up with a rational and coherent argument to show experiential knowledge IS necessary for semantic understanding, except by the tautological method of defining semantic understanding such that it requires experiential knowledge “by definition”.

I have appealed to the common observation that people who become personally
acquainted with something understand it better than those who have not.



If someone claimed “a human heart is necessary for semantic understanding”, but they were unable to SHOW this to be the case, then why should I believe them? The same applies to experiential knowledge, and consciousness. The onus is on the person making the claim of “necessity” to show rationally why the necessity follows (without resorting to tautological arguments). In absence of such demonstration there is no reason to believe in necessity.

Show that bachelorhood requires unmarriedness.

I have answered your question of why I believe that experiential knowledge is not necessary for semantic understanding. Can you now answer the question of why you think it IS necessary?

I have argued SMART is not sufficient for semantics and suggested experiential
understanding as one of the things that could ground an abstract system of
rules.



My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate.
I do not see that this follows at all. The genetic differences (the differences in the genes) between humans and chimpanzees are very, very tiny – but the consequences of these tiny differences in genetic makeup are enormous.
One cannot assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.


Physicalism requires one to assume simple and uniform natural laws, in the
absence of specific evidence to the contrary.
One can and should assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.

IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.
I see no reason why my subjective sensation of red should not be more similar to your subjective sensation of green than it is to your subjective sensation of red. We have absolutely no idea how the precise “qualia” arise in the first place, thus there is no a priori reason to assume that qualia are the same in different agents.


Yes there is: the physicalist assumption of the uniformity of nature.

There is a good aposteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.
I have never suggested it does. I am simply suggesting that “similar physical substrate” does not necessarily imply “similar details of operating system”.

No they don't necessarily. I am arguing for an initial assumption that can be
overridden by specific data. But above you say we should not even assume
same-cause-same-effect. I sometimes wonder if you know what "necessarily"
means.


I can argue that understanding red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a far greater and deeper semantic understanding of red than simply defining it tautologically as “red is the experiential quality associated with seeing a red object”, which latter arguably gives us no “understanding” at all.

I dare say, but that is not what I mean by an expereiential understanding of
red: I mean what this looks like.

IOW, it is Reference, not Sense.



What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

that is a classically anti-physicalist argument.
Call it what you like. The conclusion is true. Or do you perhaps deny this conclusion is true?

You cannot be sceptical about other people's qualia without
being sceptical about everything that makes up a scientific
world-view (for instance whether far-distant galaxies have the
same laws of physics as us). Scepticism is a universal solvent.

I am claiming that two different agents necessarily have two different physical substrates (even if only slightly different), and we have no a priori reason for assuming that a small difference in physical substrate necessarily equates to a small difference in conscious experience.

Yes we do: the apriori assumption of simplicity and universality that I call
"physicalism".

It could be. And it could be that I am a Zombie and I have no understanding at all

It might seem to me that you could be a Zombie. Does it seem remotely possible to you
that you could be a zombie ? if not, why not ? (Hint: it begins with C...)

You need to demonstrate that you can understand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world.
Because that is the only way a human has of acquiring information and knowledge in the first place. Humans are born without any semantic understanding – the only way for a human to acquire the information and knowledge needed to develop that understanding is via the conduits of the senses. But this limitation does not necessarily apply to all agents, and it does not follow that direct sense-experience is the only way for all agents to acquire information and knowledge.

But this debate is about whether abstract rules, SMART, are sufficient for
semantics. So agent A acquires information by interacting with
its surroundings. Can you then cut-and-paste the resulting data into
agent B? (In the way you can't cut-and-paste C into FORTRAN or
French into German). And if you succeed in translating the
abstract rules to work in the new agent, are they
working by virtue of SMART, or by virtue of the causal
interactions, the informational inputs and performative
outputs of the total, embodied, system?


We might be able to transfer information directly
form one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
The CR already contains information in the form of the rulebook
But how do you propose to get it into the CR?
Ask Searle – it's HIS thought experiment, not mine. Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.


If it were, it would just be another argument against AI - a stronger
version of the original CR. If the CR, as technology, fails to work, the
CR, as an argument succeeds.

In any case, Searle is attacking a particular approach to AI, the top-down
approach, so he would probably say: "Ask Marvin Minsky where to get the
rule-book from".


You haven't supplied any other way the CR can acquire semantics.
I am claiming that semantics is rule-based. Can you show it is not?
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

I have already listed arguments against rule-based semantics, and you didn't
reply. Here is the list again

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by
substituting "pigeon" for "strawberry" are canceled out by further
substitutions. Hence the "global" versus "local" aspect. Therefore, one
cannot safely infer that one's interlocutor has the same semantics as oneself
just on the basis that they fail to make errors (relative to your semantic
model) in respect of symbol-manipulation.
3) The CR argument itself.

And while we are on the subject: being able to present a verbal definition
for one term or another does not mean you can define all terms that
way without incurring circularity.
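The circularity point can be put as a toy sketch (the two-entry dictionary below is just the gift/present example; everything else is illustrative): a program that chases purely verbal definitions never bottoms out in anything outside the symbol system, it just loops.

```python
# Toy dictionary in which every word is defined only by another word,
# mirroring the "gift"/"present" circularity example.
DICTIONARY = {"gift": "present", "present": "gift"}

def chase_definition(word, dictionary):
    """Follow definitions until a word repeats; return the words visited."""
    visited = []
    while word in dictionary and word not in visited:
        visited.append(word)
        word = dictionary[word]
    return visited

# Starting from "gift" we visit both words and arrive back where we
# started: the chain of verbal definitions closes on itself without
# ever pointing at anything beyond the dictionary.
cycle = chase_definition("gift", DICTIONARY)
```

Nothing in the loop distinguishes a dictionary about gifts from one about any other pair of mutually defined nonsense words, which is the worry being raised.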

Nobody but you assumes apriori that semantics is rule-based. You are making
the extraordinary claim, the burden is on you to defend it.
How do you know that I am the only person who believes semantics is rule-based, or is this simply your opinion again?

You're the only one I've heard of.

And why does this make any difference to the argument anyway?

It means you have the burden of proof since you are making the extraordinary
claim.

The proof of the pudding is in the eating. Give me any word in the English language and I can give you some of the rules that define my semantic understanding of the meaning of that word.

And while we are on the subject: being able to present a verbal definition
for one term or another does not mean you can define all terms that
way without incurring circularity.

Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

Yet another point I have already answered.

If semantics is really rule-based, you should be able to
tell me what "plobs" are, based on the following rules:-

If experiential knowledge is unnecessary, you should have no trouble with
"nobbles made gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made vob"
etc, etc.
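What such rule-following amounts to can be sketched as a toy program (the rule table below transcribes only the two clear "made of" rules above; the query format is an illustrative assumption): it answers questions about plobs flawlessly while "frint" remains connected to nothing at all.

```python
# Rule table transcribed from the nonsense sentences quoted above.
MADE_OF = {"plobs": "frint", "giffles": "vob"}

def answer(query):
    """Answer queries of the form 'what are X made of?' by table lookup."""
    word = query[len("what are "):-len(" made of?")]
    return MADE_OF.get(word, "unknown")

# Pure symbol manipulation produces the "right" answers, even though
# neither the questioner nor the program has any idea what frint or
# vob are supposed to be.
```

Whether producing such answers counts as semantic understanding, or merely as correct symbol-manipulation, is exactly what the two sides here dispute.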



I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
Please define the Hard Problem.
The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).
We do not fully understand the detailed mechanisms underlying the operating of the conscious brain. I do not see how this strawman is relevant to the question of whether semantic understanding is rule-based or not.

It would give you a way of programming in semantics from scratch, as I said.
This doesn't seem important to you, because you think you can
defend SMART-based semantics without specifiying where the
rules come from. But you do need to specify
that, because that's the side of the debate you are on.
Searle doesn't.


Are you suggesting that “ability to understand” can be established any other way than by reasoning and experimental test? If so would you care to explain how you would go about it?

No: you are suggesting that "consciousness is not part of understanding" is
independent of definitions in a way that "consciousness is part of understanding"
is not. To judge that a system understands purely because it passes a TT
leans on an interpretation of "understanding" just as much as the judgement
that it doesn't.


Are you one of them ? Are you going to get off the fence on the issue ?
It seems important to you that you call me either a believer or disbeliever in physicalism. Please define exactly what you mean by physicalism and I might be able to tell you whether I believe in it or not.

Physicalism requires one to assume simple and uniform natural laws, in the
absence of specific evidence to the contrary.

Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.
But I have already shown above that your argument is unsound. One cannot assume that small differences in substrate necessarily lead to small differences in system behaviour. To use your terminology - this is not anti-physicalist, it is purely physicalist.

One should assume it -- but revisably, not "necessarily".


You have not shown it to be true apart from your
tendentious definition of "semantic meaning".
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

So perhaps we could advance the argument by seeing which definition is correct
-- which does mean appealing to popular support, since meanings do not work
like facts.

Ask a blind person what red looks like.
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,
it obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".
Would you care to define what you mean by “full understanding”?

eg. including what Mary learns when she leaves her prison.

Why is understanding red to be “my subjective experiential quality of seeing a red object” necessarily a more full understanding of red than understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”?

The one cannot be inferred from the other; it is therefore new information, in
the strictest sense of "information".


You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

You have not shown experiential qualities to be unnecessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

You keep using "not identical" to mean "radically
different".
You keep assuming that similar substrates necessarily give rise to similar system behaviour. This does not follow.

It is not necessarily true; it is a required assumption of physicalism, but
only as a revisable working hypothesis.

Your position is based on defintions that DON'T EVEN HAVE popular support!
That is your opinion. I am not claiming that my argument is based on popular support – I am claiming it is based on defensible rationality and logic

It is partly based on definitions, and any argument based on definitions
should have
popular support. Definitional correctness - unlike factual accuracy --
is based on convention.

You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.
I have stated that a balance needs to be drawn. One could choose to “define everything”, assume everything in one’s definitions, and leave nothing to rational argument or empirical investigation (this seems to be your tactic),

False. To appeal to one particular analytical argument ("Fred is unmarried, so he is a
bachelor" -- "prove it!") is not to assume that all truths can be established
analytically.
 
  • #199
moving finger said:
This is exactly what I have been saying all along – see post #107 in this thread :

In post #107 you said this:

moving finger said:
it follows that the statement “understanding requires consciousness” is not analytic after all

Which is false, or at least not entirely true. It is analytic given my definitions. Of course, the statement is not necessarily analytic for other definitions. Whether or not the statement “understanding requires consciousness” is analytic cannot be answered yes or no until the terms are defined.


If you agreed with this why didn’t you just say so at the time, and save us all this trouble?

You're acting as if I didn't say that whether or not a statement is analytic depends on the definitions. Please read post #128 where I list some quotes from this thread. Why didn't you pay attention to what I said, and save us all this trouble?


Tisthammerw said:
Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)?

The above does NOT tell me whether or not the computer is interpreting the data.

What more information about the computer do you require?

It’s your computer and your example – you tell me if it is interpreting or not – it’s not up to me to “guess”.

I’m not asking you to guess, simply tell me whether the computer process as described here fits your definition of “interpretation.” If you need more information (e.g. the processing speed of the computer), feel free to ask questions.


Please answer the question – is the computer you have in mind doing any interpretation of the data?

I do not know if the computer process fits your definition of interpretation, because you have not defined the word. I described this scenario in part because I wanted to know what this definition of yours was.


Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?

moving finger said:
Perception is NOT synonymous with awareness, thus “does the agent perceive?” is a DIFFERENT question to “is the agent aware?”

So is that a yes to my question?


I cannot answer the question you asked unless you first tell me how you wish to define “aware”.
Please answer the question.

I already referred to you the definition I’m using in my last post (post #186).


(Note : The definition you gave of consciousness in your last post is NOT a definition of awareness – it USES the concept of awareness as one of the defining characteristics of consciousness, without defining awareness itself)

I'll bold the part you apparently missed:

Tisthammerw said:
My definition of consciousness is the same as I defined it earlier.

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Thus my definition of consciousness is such that if an entity possesses awareness, the entity possesses consciousness.

For those who wish further precision, these individual characteristics could also be defined. “Perception” is the state of being able to perceive. Using the Eleventh edition of Merriam Webster’s dictionary, I refer to “perceive” (definitions 1a and 2), “sensation” (1b), “thought” (1a), and “awareness” (2). These definitions are also available at http://www.m-w.com/

(Note: I corrected a typo in the quote.)
 
  • #200
moving finger said:
Thus, following your definition of semantic understanding, at least 95% of the human race has no semantic understanding of the term “horse”.
I see.
And you expect me to agree with this? :yuck:

No, I don't expect you to agree with anything.

But I will point out that the phrase "semantic understanding" is never used in any well established profession. If you have an example where it is I would like you to direct our attention to its use and the context in which it is used.

I am convinced that what you are describing when you use the words "semantic understanding" is actually "the possession of semantic knowledge"... not understanding.

As I have pointed out several times and as you have agreed with me ... (and as the latest article I posted points out)... it is entirely possible to possess semantic knowledge of a subject without UNDERSTANDING it.

For instance I have semantically processed some information about horses to do with their digestive system... but I do not understand the full process or implications of their digestive process. I only have knowledge of the process... not understanding. I only have words and diagrams that describe the process, I have the knowledge... not the understanding.

I will never understand this information until I have shoveled road apples, slept with the horse in the woods, kept the horse moving for fear of freezing or discovered the proper vegetation that will help it digest the badger it accidentally ate on the way back to the barn.

Medical students carry reams of semantic knowledge they processed during school but there is not one of them who understands the implications of the semantic, book knowledge, lab demos or videos.

That is why it is mandatory that they perform a practicum for years before understanding a very small amount of the duties of being a doctor. The practicum allows their consciousness to be "standing under" the problems and solutions employed in the many, various medical professions and situations.

You, sir, have missed the mark in your bid to dilute and discredit the true meaning of understanding.

If you wish to assign the neuro-biological trait of "understanding" to a machine built by humans... then the machine will have to be one that has evolved to such a complexity, over millions of years, that it may as well be, and no doubt will become, a biological unit itself.
 
  • #201
I apologize for not acknowledging the contributions of Tournesol who has been with this discussion from the top. Thank you "T".
 
  • #202
I have to admit that I only found John Searle's Chinese Room because I was googling for examples of a China room.

A China room is a room decorated in the style known as Chinoiserie. Chinoiserie is a French term for the type of painting the Chinese did for centuries before they even knew of an outside world. It is axonometric in nature and pictures village life, fishing, nature scenes, ancestors and deities. It is painted in gold gilding, bronzing powders, sometimes raised, on exotic red, black and green backgrounds. Then it is lacquered around 100 times for depth, and it's sometimes all done on linen-finished furniture or panels.

There are only 4 China Rooms in the world. I thought one was in the Brighton Pavilion but couldn't find reference to it. I just finished designing and producing some of the panels for the 4th-only China Room in the Bishop's Gate Estate in London.

There are over 40 Chinoiserie panels for that 16x16 foot room. It's all a deep antique red with the highlights of gold figures and islands and mountains etc...

Even after 2 months working on this project I don't understand the original techniques the Chinese used to arrive at the "chinoiserie". And I don't understand the significance of the images they used other than the fact that they depict family life and social survival techniques.

When I found the John Searle Chinese Room thought experiment, I thought they'd used the wrong terminology when they tried to determine if the Chinese Room "understood" Chinese.

To me, the room was able to interpret the Chinese characters. It had an ability to translate one character into other characters that related to the original.

Understanding is far too rare a state of mind and therefore a word that must be used sparingly with sincerity.

It is more often than not that the word understanding is used in this sentence... "I do not understand" rather than "I understand".

The latter statement is subject to scrutiny, or should be, when people hear it because, as I have already stated, understanding is a rare state of mind. And this is why I would petition John Searle, the originator of the Chinese Room thought experiment, to look a little closer at, and research, his use of the word "understanding".

The word understanding has already been diluted to the point of no return. When used to describe computing it will be the end of its true meaning, which is rooted in shared consciousness, caring and compassionate listening. When it is applied to the function of a computer, millions of people who still see understanding as a compassionate act and a sharing of points of view or awarenesses will slowly succumb to an artificial misrepresentation of understanding... for $9.99 a minute.
 
