Can computers understand? Can understanding be simulated by computers?

  • Thread starter: quantumcarl
  • Tags: China
AI Thread Summary
The discussion centers on John Searle's "Chinese Room" argument, which posits that computers cannot genuinely understand language or concepts, as they merely follow formal rules without comprehension. Critics argue that understanding may be an emergent property of complex systems, suggesting that the entire system, including the individual and the book, could possess understanding despite individual components lacking it. The conversation also explores the potential of genetic algorithms in artificial intelligence, questioning whether such systems can achieve a form of understanding without consciousness. Some participants believe that a sufficiently complex algorithm could surpass human understanding in specific contexts, while others maintain that true understanding requires consciousness. The debate highlights the need for clear definitions of "understanding" and "consciousness" to facilitate meaningful discussion on the capabilities of computers.
  • #151
Please excuse the lack of links for each of the descriptors associated with "understanding" in my post.

Somehow I have lost the ability to edit and parse links. I personally can't parse why this is happening, and I've had no time to utilize my program X to find out.

May the electricity be with you at a moderate price!

Spooky All Hallow's Eve!
 
  • #152
MF said:
To say that "understanding is a process" is not the same as saying that "all processes possess understanding". It simply means that "being a process" is necessary, but not sufficient, for understanding. This would explain your "an abacus does not understand" position.
I agree completely. It is exactly this that makes me reconsider the idea that a calculator could potentially be considered to possess understanding. The simple fact that it utilizes a process does not mean it understands. Again, I'm not sure exactly how a calculator's program works, but if it is really just a more complex version of what a slide rule or abacus does, then I would say that it does not understand.
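The contrast drawn here can be made concrete. Below is a minimal sketch (my own hypothetical example, not anything from the thread) of a "calculator" that adds digit strings purely by table lookup and carry rules, manipulating symbols by rote much as an abacus or slide rule does, with no claim that it understands arithmetic:

```python
# Purely syntactic addition: every step is a rote table lookup or carry rule.
ADD_TABLE = {(a, b): divmod(a + b, 10) for a in range(10) for b in range(10)}

def rote_add(x: str, y: str) -> str:
    """Add two digit strings by table lookup alone, right to left."""
    x, y = x.zfill(len(y)), y.zfill(len(x))  # pad to equal length
    result, carry = [], 0
    for da, db in zip(reversed(x), reversed(y)):
        carry_in = carry
        carry, digit = ADD_TABLE[(int(da), int(db))]
        digit += carry_in
        if digit >= 10:                       # rote carry rule
            digit -= 10
            carry += 1
        result.append(str(digit))
    if carry:
        result.append(str(carry))
    return "".join(reversed(result))

print(rote_add("478", "645"))  # 1123
```

The machine produces correct sums without any representation of quantity at all; whether such pure rule-following could ever count as "understanding" is exactly what is at issue.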

MF said:
I agree that designed agents can and do reflect to some extent the understanding of the designer. But is this the same as saying that "no designed agent can be said to understand"? I think not.
Again I agree. I would not argue that any designed agent can only reflect understanding. I only argue that this "passive understanding" may better be described as "reflected understanding". The designer or teacher has done the work of "actively understanding" and deriving information based on this. The designer or teacher then passes the information on to a device or pupil. The device or pupil may be capable of conveying the information on to yet another agent, but the information is not necessarily "understood" by any of the successive agents that may memorize and transmit it. In this case I would prefer to consider that the agents reflect the understanding of the information's progenitor rather than possess any sort of understanding themselves. The logic here does not preclude the ability of a device or pupil to "actively understand"; it only changes the concept of "passive understanding" to what I personally think is a more logical conception of what is occurring.
 
  • #153
Tisthammerw said:
You implied it when you said that you do not agree that my statement was an analytic statement and did not agree with my definitions.
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.
There are such things as “matters of opinion”. You and I may have different opinions on some issues (such as the definition of understanding), which means that “we do not agree on the definition of understanding”, but does NOT mean that one of us is necessarily “wrong”.
Tisthammerw said:
In these circumstances, “disagree” usually means believing that the statement in question is false.
I disagree. I thought that we already established that definitions of words are normally not things that can be “true” or “false”... or are you now changing your mind?
Tisthammerw said:
Why is the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic?
The answer is given already in my previous post. Because
moving finger said:
it is NOT clear that all possible forms of understanding DO require consciousness
As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?
Tisthammerw said:
The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.
To my mind, whether or not “understanding requires consciousness” needs to be determined by observation. Thus the statement is indeed synthetic.
moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
Because I have described it to you.
In fact you did NOT specify that the computer you have in mind is interpreting the data/information. Interpreting is part of perceiving.
Tisthammerw said:
I am merely pointing out that my definitions are not the ones that are unconventional.
My definitions can also be found in dictionaries, scientific textbooks, encyclopaedias and reference works. Just because I do not use the same dictionary as you does not mean my definitions are unconventional. In psychology and the cognitive sciences, the word perception (= the act of perceiving) is usually defined as “the process of acquiring, interpreting, selecting, and organising (sensory) information”. It follows from this that “to perceive” is to acquire, interpret, select, and organise (sensory) information.
Tisthammerw said:
Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses?
I have already answered this in a previous post, thus :
moving finger said:
I am not saying this, where did you get this idea?
Tisthammerw said:
Programs are nothing more than a set of instructions (albeit often a complex set of instructions). Bob is not an instruction, he is the processor of the program X.
What makes you think that a person is anything more than a “complex set of instructions”?
Tisthammerw said:
Can a computer (the model of a computer being manipulating input via a complex set of instructions etc.) have TH-understanding?
If the computer in question is conscious, then yes, it can (in principle) possess TH-Understanding.
Can you show that no possible computer can possess consciousness?
Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
This does not follow at all. How do you arrive at this conclusion?
It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR to have any ability of acquiring new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).
I am simply saying that memorising is not synonymous with understanding. This applies to all agents, humans as well as computers.
MF
 
  • #154
MovingFinger said:
How does MF know whether the agent Tournesol understands English? The only way MF has of determining whether the agent Tournesol understands English or not is to put it to the test (in fact, the Turing test): to ask it various questions designed to test its understanding of English. If the agent Tournesol passes the test, then I conclude that the agent understands English.
Why should it be any different for a machine?
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc. Silicon is not currently known to; it is not known
*not* to, it is just not known to.
I am not suggesting that Turing's test is definitive. But in the absence of any other test it is the best we have (and certainly imho better than defining our way out of the problem). I am sure we would all love to see a better test, if you can suggest one.
If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?
The TT is more doubtful in the case of a machine than that of a human.
You are missing another point as well: the point is whether syntax is
sufficient for semantics. What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and consciousness.
However, even if that is true, it does not mean the computer's
possession of semantics is solely due to syntactic abilities,
and Searle's point is still true.
Whether it is a valid analytical argument depends on whether the definitions it relies on are conventional or eccentric.
Conventional by whose definition? Tournesol's?
In rational debate we use words as tools; so long as we clearly define what we mean by the tools we use, we may use whatever tools we wish.
Redefine "fanny" to mean "dick" and your auntie is your uncle.
Yes, by definition. That is the difference between understanding and instinct, intuition, etc. A beaver can build dams, but it cannot give lectures on civil
engineering.
I cannot report that I know anything if my means of reporting has been removed.
A beaver might in principle understand civil engineering, but it can't give lectures if it cannot speak.
Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil
engineering is its lack of a voicebox?
You don't seem to have an alternative.
Consciousness imho is the internal representation and manipulation of a self-model within an information-processing agent, such that the agent can ask rational questions of itself, for example "what do I know?", "how do I know?", "do I know that I know?", etc. The ability of an agent to do this is NOT necessary for understanding per se,
That depends on what you mean by "understanding".
The question is whether syntax is sufficient for semantics.
I'm glad that you brought us back to the Searle CR argument again, because I see no evidence that the CR does not understand semantics
Well, I have already given you a specific reason; there are words in human languages which refer specifically to sensory experiences.
Why do you consider this is evidence that the CR does not understand semantics? Sensory experiences are merely conduits for information transfer, they do not endow understanding per se, much less semantic understanding.
There are terms in language with specifically sensory meanings, such as "colour", "taste", etc.
Given the set of sentences:
"all floogles are blints"
"all blints are zimmoids"
"some zimmoids are not blints"

[ ... etc ... ]
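The ambiguity in this kind of example can be checked mechanically. Here is a minimal sketch (my own illustration; the three-element universe is an arbitrary assumption) that treats floogle, blint and zimmoid as predicates, enumerates every assignment of extensions, and confirms that many distinct semantic models satisfy the same three sentences:

```python
from itertools import combinations

UNIVERSE = (0, 1, 2)

def powerset(xs):
    """All subsets of xs, as frozensets."""
    return [frozenset(c) for r in range(len(xs) + 1) for c in combinations(xs, r)]

def satisfies(floogles, blints, zimmoids):
    """Do these extensions make all three sentences true?"""
    return (floogles <= blints            # "all floogles are blints"
            and blints <= zimmoids        # "all blints are zimmoids"
            and bool(zimmoids - blints))  # "some zimmoids are not blints"

models = [(f, b, z)
          for f in powerset(UNIVERSE)
          for b in powerset(UNIVERSE)
          for z in powerset(UNIVERSE)
          if satisfies(f, b, z)]

print(len(models))  # 37 distinct models over a 3-element universe
```

Since the sentences alone leave dozens of mutually incompatible interpretations open, syntax underdetermines semantics even in this tiny language.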
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context.
That's the point! There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.
Now, the point you should be arguing is that the CR does in fact have semantics; you need to show
that it is an *exception* to the general rule.
Agreeing that its specific inability to grasp semantics through syntax
is an instance of a general rule is not mounting an argument
against the CR -- it is tantamount to accepting the CR.
This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantics of two different agents.
If there is a difference between two agents, each is failing to grasp the semantics of the other, although they agree on syntax. So why should the CR specifically be able to avoid that problem?
Human agents are likely to do so using their embeddedness in the world ("*this* [points] is what I mean by zimmoid"),
but the CR does not have that capacity.
That is, more than one semantic model can be consistently given to
the same symbols.
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantic understanding of two different agents.
So... are you saying that the CR has the wrong semantics, and that having the wrong semantics
counts as "understanding" in a way that having no semantics does not? And how does a TT
distinguish between having the wrong semantics and having no semantics (since the
symbol-manipulation is the same in each case)? And *how* does the CR have the
wrong semantics, since it does not get them from syntax alone?
Now the strong AI-er could object that the examples are too
simple to be realistic, and if you threw in enough symbols,
you would be able to resolve all ambiguities successfully.
See above.
Why? You haven't said what it is that allows the CR to plug semantic gaps. Your observation
that there is nothing special about the CR's inability to derive semantics from syntax falls
far short of a demonstration that Searle is wrong and that it actually *can* derive semantics from syntax.
(To put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
Who has suggested that semantics is redundant?
You have, in effect. If semantics can be derived from syntax, it is informationally redundant.
Using your logic, one might equally ask why we have the separate concepts of programmable computer and pocket calculator; both are in fact calculating machines, therefore why not just call them both calculating machines and be done with it?
I am not saying that semantics is redundant as a term because it can be subsumed under some more
general term, in the way that "computer" can be subsumed under "calculating machine";
people who directly counter Searle's argument are effectively saying that semantic information
is redundant, since it can be derived from syntactic information.
I have been consistently suggesting that establishing definitions is completely different to establishing facts. Defining a word in a certain way does not
demonstrate that anything corresponding to it actually exists.
Excellent! Therefore we can finally dispense with this stupid idea that understanding requires consciousness because it is defined that way
Dreadful! Truths-by-definition may not amount to empirical truths, but that does not mean they
are empirical falsehoods... or do you think there are no unmarried bachelors?
How can you establish a fact without definitions?
Ask yourself: what are the essential qualities of understanding that allow me to say "this agent understands"? Avoid prejudicial definitions, and avoid anthropocentrism.
You haven't shown how to do all that without any pre-existing definitions.
I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.
Experiential qualities are agent-dependent (i.e. subjective). Tournesol's experiential quality of seeing red is peculiar to Tournesol (it is subjective); it is meaningless to any other agent.
Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.
And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism. Need I point
out the eccentricity of appealing to anti-physicalism to support AI?
This does not mean that writing the definition is impossible; it just means that it is a subjective definition, hence not easily accessible to other agents.
The semantics of a phrase like "the taste of caviare" are easily supplied by non-syntactic means: one
just tastes caviare. Are you saying we should reject the easy option and stick to the hard-to-impossible one just to keep the semantics-is-really-syntax flag flying?
I have denied that experiential knowledge is synonymous with understanding; I have NOT denied that experiential knowledge is knowledge.
I have said that experiential semantics is part of semantics, not all of it.
As to your distinction between knowledge and understanding , I don't think it is
sustainable. To know what a word means is to understand it.
Your assertion assumes that vision is required for understanding. Vision provides experiential information, not understanding. I understand the terms X-ray, ultra-violet, infrared and microwave even though I possess no experiential information associated with these terms. What makes you think I need experiential information to understand the terms red and green? The onus is on you to show why experiential information is indeed necessary for understanding red and green, but not for X-rays or ultra-violet rays.
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
You don't need experience to grasp the semantics of "infra-red" and "ultra violet" as well as other people, because
they don't have appropriate experiences to base their semantics on.
You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
Information and knowledge are required for understanding, not senses.
You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge
is required for understanding. Since experience is needed for
experiential knowledge, that means experience is required for
understanding.
However, I do not need to argue that non-experiential knowledge is not knowledge.
Why not - is this perhaps yet another analytic statement?
It doesn't affect my conclusion.
It affects whether your conclusion is simply your opinion or not
"Non-experiential knowledge is not knowledge" is not something
I need to assume, not something I am claiming, and not my opinion
(not that that matters).
Sigh... that is a very anthropocentric viewpoint.
AI needs to be anthropocentric... up to a point.
Humans acquire most of their information from their senses, in the form of reading, listening, etc.; the same information could be programmed directly into a machine.
We could copy the data across (as in standard, non-AI computing) but would that be sufficient for
meaning and understanding? If a system has language, you can use that to convey
third-person non-experiential knowledge. But how do you bootstrap that process, i.e. arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment. As the floogle/blint/zimmoid argument shows, you cannot safely
conclude that you have the right semantics just because you have the right syntax.
An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
The fact that humans are so dependent on sense-receptors for their information gathering does not lead to the conclusion that understanding is impossible in the absence of sense-receptors in all possible agents.
And that argument does not show that syntax is sufficient for human-type semantics. Would a putative AI have
quite a different form of language to a human (the Lion problem)? Then Searle
has made his case. Would it have the same understanding, but not achieved solely
by virtue of syntax? Again, Searle has made his case.
It makes the point that the ability to fly a plane is not synonymous with understanding flight.
The ability is part of an understanding which is more than merely theoretical understanding.
What red looks like is not understanding; it is simply subjective experiential information.
And you don't live in a house, you live in a building made of bricks with doors and windows.
What red looks like to Mary is not necessarily the same as what red looks like to Tournesol
Naturalistically , it should be.
The full meaning of the word "red" (remember, this is ultimately about semantics).
Your argument continues to betray a peculiarly anthropocentric perspective.
Strong AI is about duplicating human intelligence; it should be anthropocentric.
What makes you think that the experiential quality of red is the same to you as it is to me?
Physicalism. Same cause, same effect. Why do you think it isn't ?
If the experiential qualities are not the same between two agents, then why should it then matter (in terms of understanding semantics) if the experiential quality of red is in fact totally absent in one of the agents? How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever
to draw on?
I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand semantically just what is meant by the term x-rays without ever experiencing seeing x-rays.
They are not analogous, as I have shown. You don't need experience to understand X-rays as well
as anyone can understand them because no-one has experience OF THAT PARTICULAR PHENOMENON, not
because experience in general never contributes to semantics.
I claim that experience is necessary for a *full* understanding of *sensory* language, and that an entity without sensory experience therefore lacks full
semantics.
There is no reason why all of the information required to understand red, or to understand a concept cannot be encoded directly into the computer (or CR) as part of its initial program.
Yes there is. If no-one can write down a definition of the experiential nature of "red", no-one can encode it into a program.
Now (with respect) you are being silly. Nobody can write down a universal definition of the experiential nature of "red" because it is a purely subjective experience.
It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.
There is a pattern of information in Tournesol's brain which corresponds to Tournesol seeing red, but that pattern means absolutely nothing to any other agent.
So you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain, because it would no longer make sense in a different context.
Note that data is not information for precisely that reason: information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.
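The data/information distinction invoked here can be illustrated in a couple of lines (my own example, not from the thread): the same four bytes of context-free data become different information under different interpreting contexts.

```python
import struct

raw = b"\x00\x00\x80\x3f"               # four bytes of bare data

as_float = struct.unpack("<f", raw)[0]  # read in a little-endian float context
as_int = struct.unpack("<i", raw)[0]    # read in a little-endian int context

print(as_float)  # 1.0
print(as_int)    # 1065353216
```

The bytes carry no meaning of their own; only a context that knows how to decode them turns them into information, which is precisely the sticking point for the CR.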
Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)
We can ask questions about how things look (about the "carriers" of information as opposed to
information itself), and a system with full semantics needs to understand those questions.
In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.
Yes. No-one knows how to encode all the information. You don't.
Oh really, Tournesol. Whether MF knows how to do it or not is irrelevant.
Whether anyone else does is highly relevant: "No-one knows how to encode all the information".
You still haven't shown, specifically, how to encode experiential information.
I don't see why you seem to think it's such a problem.
Information is information.
If that were true, you could write a definition of the
experiential nature of "red". You can't, so it isn't.
The interesting aspect of experiential information is that it has meaning only to the agent to which it relates. In other words the information contained in the experiential state of Tournesol seeing red only means something to the agent Tournesol, the same information means nothing (indeed does not exist) to any other agent.
Wrong on several scores. I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.
 
  • #155
When we staunchly remain steadfast in our separate and personal definitions of "understanding", we do not allow for any progress in the refinement of a "universal" and terminologically correct use of the concept, the word "understanding", and the phenomenon it refers to.

It may be more constructive if we were to find, amongst ourselves, commonalities in our definitions that help us "reach an understanding" between all parties with regard to the meaning, definition and concept of the descriptor, "understanding".

1.) I propose that "understanding" is a result of a series of processes... not a process in itself. Understanding is a description of a plateau one reaches and from which one is able to continue in pursuit of other plateaus of understanding. (If you agree, please indicate by repeating the number of this proposal with an "agree" or "disagree" beside it. If in disagreement, please offer an explanation.)

2.) Understanding is a result of cognitive processes. (agree or disagree)

Here are some secondary descriptors of the primary (in this case) descriptor "cognitive":

of, relating to, or being conscious intellectual activity (as thinking, reasoning, remembering, imagining, or learning words)
www.prostate-cancer.org/resource/gloss_c.html

* Awareness with perception, reasoning and judgement, intuition, and memory; The mental process by which knowledge is acquired.
www.finr.com/glossary.html

* Refers to the ability to think, learn and remember.
www.handsandvoices.org/resource_guide/19_definitions.html

* brain functions related to sense perception or understanding.
altweb.jhsph.edu/education/glossary.htm

* Relating mental awareness and judgment.
science.education.nih.gov/supplements/nih3/alcohol/other/glossary.htm

* Pertaining to the mental processes of perceiving, thinking, and remembering; used loosely to refer to intellectual functions as opposed to physical functions.
professionals.epilepsy.com/page/glossary.html

* Refers to a mental process of reasoning, memory, judgement and comprehension - as contrasted with emotional and volitional processes.
www.into.ie/downloads/gloss1.htm

* Pertaining to cognition, the process of being aware, knowing, thinking, learning and judging.
www.memorydisorder.org/glossaryterms.htm

* thought processes

www.macalester.edu/~psych/whathap/UBNRP/synesthesia/terms.html
* relating to or involving the act or process of knowing, including both awareness and judgement. Cognition is characterized by the following: attention, language/symbols, judgement, reasoning, memory, problem-solving.
www.inspection.gc.ca/english/corpaffr/publications/riscomm/riscomm_appe.shtml

* Pertaining to functions of the brain such as thinking, learning, and processing information.
www.azspinabifida.org/gloss.html

* Thinking, getting, evaluating and synthesizing information.
oaks.nvg.org/wm6ra3.html

* in cognitive psychology this is the component of attitude that involves perceptual responses and beliefs about something (knowledge and assumption)
www.oup.com/uk/booksites/content/0199274894/student/glossary/glossary.htm

* an adjective referring to the processes of thinking, learning, perception, awareness, and judgment.
www.nutrabio.com/Definitions/definitions_c.htm


* Having to do with a person's thoughts, beliefs, and mental processes including intelligence.
access.autistics.org/resources/glossary/main.html

* function, all the normal processes associated with our thoughts and mental processes.
www.srht.nhs.uk/sah/Glossary/Glossary.htm


* Mental ability to gain knowledge, including perception, and reason.
www.care.org.uk/student/abortion/fs/fs11.htm

* of or being or relating to or involving cognition; "cognitive psychology"; "cognitive style"
wordnet.princeton.edu/perl/webwn

* The term cognition is used in several different loosely related ways. In psychology it is used to refer to the mental processes of an individual, with particular relation to a view that argues that the mind has internal mental states (such as beliefs, desires and intentions) and can be understood in terms of information processing, especially when a lot of abstraction or concretization is involved, or processes such as involving knowledge, expertise or learning for example are at work. ...
en.wikipedia.org/wiki/Cognitive

We may note that 99.9% of the above descriptors describe "mental" processes: beliefs, reasoning, thought/mental processes (this brings up the previously noted idea that the physics of the brain and the physics of a computer are vastly different and perhaps require vastly different terminology to distinguish the two different processes), awareness, judgement, thinking and intuition.

What I'm still trying to demonstrate is that terminology serves the purpose of distinguishing processes and states that belong in certain categories.

To use the word "understanding" to describe a set of data being stored in a computer (on or off) is like describing an "organelle" in an animal cell as an "organ". It is clearly an example of incorrect use of terminology.

If you want to write prose or poetry about a computer... you are more than welcome to use the word "understanding" to describe what a computer does. However, in the world of professionals, I believe there are more appropriate terms that apply to the digital domain.

Thanks.
 
  • #156
moving finger said:
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.

But normally that is what is implied when it comes to philosophical discussions e.g. “I do not agree with ethical relativism.”



I thought that we already established that definitions of words are normally not things that can be “true” or “false”

Yes we did, but at the time I was not aware you knew this.


Tisthammerw said:
Why is the fact that other people use different definitions of the terms make the statement “understanding requires consciousness” synthetic?

The answer is given already in my previous post. Because

it is NOT clear that all possible forms of understanding DO require consciousness

It is not clear why “understanding is a synthetic statement” logically follows from this. “Synthetic” in this context means “something that can be determined by observation.” As I said before:

Tisthammerw said:
The only sense I can think of is that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.

To which you reply:


To my mind, whether or not “understanding requires consciousness” is true needs to be determined by observation.

Which really doesn’t answer my question why you believe it is synthetic. So far your argument is this:

  • People mean different things when they use the term “understanding.”

Therefore: whether understanding requires consciousness can be determined by observation.

It is terribly unclear why this is a valid argument, except for the sense that what definitions a person is using must be determined by observation, but once that is done (e.g. in my case) “understanding requires consciousness” becomes analytic.


As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?

Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?


Tisthammerw said:
moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?

Because I have described it to you.

In fact you did NOT specify that the computer you have in mind is interpreting the data/information.

In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.



Tisthammerw said:
I am merely pointing out that my definitions are not the ones that are unconventional.

My definitions can also be found in dictionaries, scientific textbooks, encyclopaedias and reference works.

Can they? You yourself use terms whose intended meaning is unclear, in part because you have refused to answer my questions of clarification.

Tisthammerw said:
Are you saying an entity can “perceive” an intensely bright light without being aware of it through the senses?

I have already answered this in a previous post

Yes and no. Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?


What makes you think that a person is anything more than a “complex set of instructions”?

Because people possess understanding (using my definition of the term, what you have called TH-understanding), and I have shown repeatedly that a complex set of instructions is insufficient for TH-understanding to exist.


Tisthammerw said:
Can a computer (the model of a computer being manipulating input via a complex set of instructions etc.) have TH-understanding?

If the computer in question is conscious, then yes, it can (in principle) possess TH-understanding.
Can you show that no possible computer can possess consciousness?

Given the computer model in question, I think I can with my program X argument (since program X stands for any computer program that would allegedly produce TH-understanding, and yet no TH-understanding is produced when program X is run).


Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”

This does not follow at all. How do you arrive at this conclusion?

As I said earlier, because the man in the Chinese room models a computer program. I can also refer to my program X argument, in which case there is no “new understanding” in this case either.


It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR to have any ability of acquiring new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).

Again, the variant I put forth does use a dynamic database and a dynamic program. We can do the same thing for program X.
 
  • #157
Part One of my reply :

Tournesol said:
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc.
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

I know that MF’s human brain “implements semantics”, but I have no evidence that any other brain, human or otherwise, does – apart from the evidence afforded by interrogating the owners of the brains – asking them questions to test their knowledge and understanding of semantics.

moving finger said:
If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?
Tournesol said:
The TT is more doubtful in the case of a machine than that of a human.
The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

Tournesol said:
You are missing another point as well: the point is whether syntax is
sufficient for semantics.
I’m not missing that point at all. Searle assumes that a computer would not be able to understand semantics – I disagree with him. It is not a question of “syntax being sufficient for semantics”, I have never asserted that “syntax is sufficient for semantics” or that “syntax somehow gives rise to semantics”.
That the AI argument is necessarily based on the premise “syntax gives rise to semantics” is a fallacy that Searle has promulgated, and which you have swallowed.

Tournesol said:
What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and cosnciousness.
Once again – place me in a state of sensory deprivation, and I still understand semantics. The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

Tournesol said:
However, even if that is true, it does not mean the computer's
posession of semantics is solely due to syntactic abilities,
and Searle's point is still true.
I am not arguing that “syntax gives rise to semantics”, which seems to be Searle’s objection. I am arguing that a computer can understand both syntax and semantics.

Tournesol said:
Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil
engineering is its lack of a voicebox?
I am saying that “possession of understanding alone” is not sufficient to be able to also “report understanding” – to report understanding the agent also needs to be able “to report”.

moving finger said:
Consciousness imho is the internal representation and manipulation of a self model within an information-processing agent, such that the agent can ask rational questions of itself, for example: what do I know? how do I know? do I know that I know? etc. The ability of an agent to do this is NOT necessary for understanding per se,
Tournesol said:
That depends on what you mean by "understanding".
Goes without saying that we disagree on some definitions

Tournesol said:
There are terms in language with specifically sensory meanings, such as "colour", "taste", etc.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Tournesol said:
There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.
But semantics is not derived from syntax. Why do you think it needs to be? Because Searle wrongly accuses the AI argument of assuming this?

Tournesol said:
Although they agree on syntax. So why should the CR specifically be able to avoid that problem ?
Again : Semantics is not derived from syntax.

Tournesol said:
Human agents are likely to do so using their embededness in the world -- "*this* [points] is what I mean by zimmoid"--
but the CR does not have that capacity.
There are many other ways of conveying and learning both meaning and knowledge apart from “pointing and showing”.


moving finger said:
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantic understanding of two different agents.
Tournesol said:
So... are you saying that the CR has the wrong semantics, and that having the wrong semantics
counts as "understanding" in a way that having no semantics does?
No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

Tournesol said:
And how does a TT
distinguish between having the wrong semantics and having no semantics (since the
symbol-manipulation is the same in each case)? And *how* does the CR have the
wrong semantics, since it does not get them from syntax alone?
The rules of semantics, along with the rules of syntax, are learned (or programmed). Any agent, including a human, can incorporate errors in learning (or programming) – making one error in syntax or semantics, or disagreeing on the particular semantics in one instance, does not show that the agent “does not understand semantics”.

Tournesol said:
Your observation
that there is nothing special about the CR's inability to derive semantics from syntax is
far short of a demonstration that Searle is wrong, and it actually *can* derive semantics from syntax ?
Once again - I have never said that semantics is derived from syntax – you have said this!
Both syntax and semantics follow rules, which can be learned, but it does not follow that “one is derived from the other”. Syntax and semantics are quite different concepts. A “programmable computer” and a “pocket calculator” are both calculating machines, but one cannot necessarily construct a programmable computer by connecting together multiple pocket calculators.

Tournesol said:
(Too put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
moving finger said:
Who has suggested that semantics is redundant?
Tournesol said:
You have, in effect. If semantics can be derived from syntax, it is informationally redundant.
There you go again! I have never said that semantics is derivable from syntax, you have!

Tournesol said:
Truths-by-definition may not amount to empirical truths, but that does not mean they
are empirical falsehoods -- or do you think there are no unmarried bachelors?
We may agree on the definitions of some words, but it does not follow that we agree on the definitions of all words.

Tournesol said:
How can you establish a fact without definitions?
moving finger said:
Ask yourself “what are the essential qualities of understanding that allow me to say this agent understands?”; avoid prejudicial definitions and avoid anthropocentrism
Tournesol said:
You haven't shown how to do all that without any pre-existing definitions.
There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities, and from there work towards discovering whether it is possible for a non-conscious agent to possess those qualities of understanding. I am not saying your definition is wrong, just that I do not agree with it.

moving finger said:
Experiential qualities are agent-dependent (ie subjective). Tournesol’s experiential quality of seeing red is peculiar to Tournesol – it is subjective, and meaningless to any other agent.
Tournesol said:
Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.
I never said that “red has no meaning”. But the “experiential knowledge of red” is purely subjective.
Tournesol – Really, if you wish to continue misquoting me there is not much point in continuing this discussion.

Tournesol said:
And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism.
Why should it be the case that the precise “data content” of MF seeing red should necessarily be the same as the “data content” of “Tournesol seeing red”? Both are subjective states, there is no a priori reason why they should be identical.

Tournesol said:
Need I point
out the eccentricity of appealing to anti-physicalism to support AI?
Need I point out that I have never appealed to such a thing?
Again you are either “making things up to suit your arguments”, or you are misquoting me.

Tournesol said:
Are you saying we should reject the easy option, and stick to the hard-to-impossible one just to keep the semantics-is-really-syntax flag flying?
One more time : I never said that “semantics is syntax”! You really have swallowed Searle’s propaganda hook, line and sinker!

Tournesol said:
As to your distinction between knowledge and understanding, I don't think it is
sustainable. To know what a word means is to understand it.
“experiential knowledge of red” has nothing to do with “knowing what the word red means”. I know what the word “x-ray” means yet I have no experiential knowledge of x-rays.

Tournesol said:
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Tournesol said:
You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.
Only “partial understanding”?
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

Tournesol said:
You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge
is required for understanding. Since experience is needed for
experiential knowledge, that means experience is required for
understanding.
I have said that knowledge is necessary for understanding, but it does not follow from this that all knowledge conveys understanding.
Experiential knowledge is 100% subjective, it does not convey any understanding at all.

Tournesol said:
We could copy the data across -- as in standard, non-AI computing-- but would that be sufficient for
meaning and understanding? If a system has language, you can use that to convey
3rd-person non-experiential knowledge. But how do you bootstrap that process -- arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment.
And once learned, all that data and knowledge is contained within the brain – there is no need for continued interaction with the environment in order for the agent to continue understanding. Thus interaction with the environment is simply one possible way of “programming” the data and knowledge that is required for understanding. It does not follow that this is the only way to program data and knowledge.

(see part 2)

MF
 
  • #158
Part 2 :

Tournesol said:
you cannot safely
conclude that you have the right semantics just because you have the right syntax.
Wrong assumption again. I have never suggested that syntax gives rise to semantics.

Tournesol said:
An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
In a poorly constructed Turing Test it might, yes. Just as any human also might. That is why we need to look at (a) trying to get a better understanding of just what understanding is (the qualities of understanding) without resorting to “defining our way” out of the problem, and (b) improving the Turing Test

moving finger said:
The fact that humans are so dependent on sense-receptors for their information gathering does not lead to the conclusion that understanding is impossible in the absence of sense-receptors in all possible agents.
Tournesol said:
And that argument does not show that syntax is sufficient for human-type semantics.
There you go again! I have never said that “syntax is sufficient for semantics” – this is a false assumption attributed to AI which is promulgated by Searle and his disciples.

Tournesol said:
Would a putative AI have
quite a different form of language to a human (the Lion problem) ? Then Searle
has made his case.
Do you mean spoken language? Why would it necessarily be any different to an existing human language? Why would it necessarily be the same? What bearing does this have on the agent’s ability to understand?

Tournesol said:
Would it have the same understanding , but not achieved solely
by virtue of syntax ? Again, Searle has made his case.
Again, Searle’s case and yours is based on a false assumption – that AI posits syntax gives rise to semantics!

moving finger said:
It makes the point that ability to fly a plane is not synonymous with understanding flight
Tournesol said:
The ability is part of an understanding which is more than merely theoretical understanding.
Ability does not necessarily have anything to do with understanding. I can learn something “by rote” and reproduce it perfectly – it does not follow that I understand what I am doing.

moving finger said:
What red looks like is not understanding; it is simply subjective experiential information.
Tournesol said:
And you don't live in a house, you live in a building made of bricks with doors and windows.
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is

moving finger said:
What red looks like to Mary is not necessarily the same as what red looks like to Tournesol
Tournesol said:
Naturalistically , it should be.
You have no way of knowing whether it is or not

moving finger said:
Your argument continues to betray a peculiar anthropocentic perspective.
Tournesol said:
Strong AI is about duplicating human intelligence -- it should be anthropocentric.
Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.

moving finger said:
What makes you think that the experiential quality of red is the same to you as it is to me?
Tournesol said:
Physicalism. Same cause, same effect.
There is reason to doubt, because you have no way of knowing if the effect is indeed the same (you have no way of knowing what red looks like to Mary).

Tournesol said:
Why do you think it isn't ?
I said “What red looks like to Mary is not necessarily the same as what red looks like to Tournesol”.

Tournesol said:
How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever
to draw on?
If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand semantically just what is meant by the term x-rays without ever experiencing seeing x-rays.

Tournesol said:
They are not analogous, as I have shown. You don't need experience to understand X-rays as well
as anyone can understand them because no-one has experience OF THAT PARTICULAR PHENOMENON, not
because experience in general never contributes to semantics.

Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
MF claims that despite her lack of experiential knowledge, Mary nevertheless has complete semantic understanding of red.
Tournesol (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
Tournesol said:
If no-one can write down a definition of the experiential nature of "red" no-one can encode it into a programme.
Maybe so – but as you recall, experiential knowledge is not part of understanding so it doesn’t really matter

Tournesol said:
It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.
I have simply cautioned you against the opposite unwarranted assumption – that the precise data connected with your experience of red is necessarily the same as the precise data connected with Mary’s experience of red.

Tournesol said:
SO you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain -- because it would no longer make sense in a different context.
Nor would you need to in order to impart understanding to the computer. Experiential knowledge has nothing to do with understanding, remember?

Tournesol said:
Note that data is not information for precisely that reason -- information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred,if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.
Good point. You could in principle transfer the precise data corresponding to “Tournesol sees red” into Mary’s brain, but it does not follow that Mary’s brain will be able to make any sense of that data. But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway

moving finger said:
Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)
Tournesol said:
We can ask questions about how things look (about the "carriers" of information as opposed to
infomation itself) , and a system with full semantics needs to understand those questions.
What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

Once again, experiential knowledge has nothing to do with understanding

Tournesol said:
No-one knows how to encode all the information. You don't.
moving finger said:
Oh really, Tournesol. Whether MF knows how to do it or not is irrelevant.
Tournesol said:
Whether anyone else does is highly relevant:"No-one knows how to encode all the information".

With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Tournesol said:
If that were true, you could write a definition of the
experiential nature of "red". You can't, so it isn't.
Again very churlish, Tournesol.
Just because “MF cannot do it” does not lead to the conclusion “it cannot be done”

The subjective experiential nature of red is different for each agent. There IS no universal definition – the experience is subjective. Do I need to explain what subjective means?

And besides all this, experiential knowledge is not part of understanding so it doesn’t matter (in the context of producing a machine with understanding) if nobody ever succeeds in writing it down anyway.

Tournesol said:
I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.
It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Neither MF nor anyone else can accurately predict the weather, but there is no doubt in my mind that it is a deterministically chaotic process which is entirely physical.
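As an aside, the "deterministic yet unpredictable" character MF attributes to the weather can be made concrete with a small sketch (my own construction, not from the thread), using the textbook logistic map as a stand-in chaotic system:

```python
# A minimal sketch of deterministic chaos: the logistic map
# x -> r*x*(1 - x) at r = 4 is a fully deterministic rule, yet two
# almost-identical starting states diverge until long-range prediction
# is hopeless -- the same character MF attributes to the weather.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0, returning the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)  # initial states differ by only 1e-6

early = abs(a[1] - b[1])                             # still tiny after one step
late = max(abs(a[i] - b[i]) for i in range(30, 51))  # grown by orders of magnitude
print(early, late)
```

The point of the sketch: every step is perfectly computable, but any finite-precision "forecast" of the system is eventually worthless, so determinism and practical predictability come apart.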

MF
 
  • #159
moving finger said:
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is

If you do not see a house or red (or a red house), you will never attain a complete understanding of red, house or red house. Experiencing the visual stimulus that is caused by a house or the colour red is part of completely understanding the colour or the structure.

In fact, humans are able to experience red without seeing the colour. This is because humans are comprised of cells, and every one of these cells reacts to colours in a photosensitive manner that releases hormones in the human body. Experiencing this hormonal release results in part of what I'd call the experiential understanding of the colour red. Similarly, the hormonal reaction to seeing a house would entail about 2 million years of instinctual, hormonal and basic human reactions to the concept of attaining shelter.

It is by way of these processes that humans are able to experience and understand red in a way that is thoroughly separate and distinguished from the programmed or auto-programmed data storage and physics of a computer.

As I have already pointed out, when professional computer scientists describe digital processes they are, or should be, bound by terminological protocol to distinguish these processes and results from biological processes and results, to aggressively avoid confusion and the misdirected trust of the lay public.



moving finger said:
Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.

Intelligence was defined by humans in the first place.

When it comes to digital computing we call it "artificial intelligence".


moving finger said:
(you have no way of knowing what red looks like to Mary).

Yes, isn't it amazing? However, we can tell what red looks like to any computer because we built the things and we set up their parameters of definition. If we had built Mary, we'd know what red looked like to her. She does, however, have the ability to describe "red" to us using her unique "understanding" of the colour.
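The claim that we can say exactly what red "is" to a machine can be illustrated with a toy sketch (hypothetical, my construction, not from the thread): a program's entire "red" is whatever data convention its builders chose, here an RGB triple and a tolerance:

```python
# Hypothetical illustration: to this program, "red" is nothing beyond a
# designer-chosen convention -- an RGB triple. Because we wrote the
# definition ourselves, we can state exhaustively what "red" is to the
# machine, in a way we cannot for Mary.

RED = (255, 0, 0)  # the machine's complete "notion" of red

def looks_red(rgb, tolerance=60):
    """Return True if every channel of rgb is within tolerance of RED."""
    return all(abs(c - r) <= tolerance for c, r in zip(rgb, RED))

print(looks_red((250, 10, 5)))   # a reddish pixel -> True
print(looks_red((10, 240, 30)))  # a greenish pixel -> False
```

The triple and the tolerance are arbitrary choices, which is the point: the machine's "red" is exhausted by conventions its designers can fully inspect.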


moving finger said:
If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

I can understand semantically just what is meant by the term red without ever experiencing seeing red, just as I can understand sematically just what is meant by the term x-rays without ever experiencing seeing x-rays.

I maintain that an incomplete "understanding" is not "understanding" per se. An incomplete understanding demonstrates a process that is working toward understanding. That understanding may never be reached, and there is no law that says it will be.




moving finger said:
Maybe so – but as you recall, experiential knowledge is not part of understanding so it doesn’t really matter.

The way I see it, all knowledge is experiential. I can't say this is true for computers because they do not "experience", as far as I know. I would maintain that understanding is fully dependent upon experience (which requires a form of consciousness).

moving finger said:
But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway

Why do you keep saying this?


moving finger said:
What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

See what I've written about the effect of red on hormones... (in biologically active agents)

moving finger said:
Once again, experiential knowledge has nothing to do with understanding

Repeating a false statement does not make it correct.



moving finger said:
With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Actually, the idea of a programmable computer stems from the mechanisms involved in "programming" a loom for weaving, and this process dates from before the 1700s. Once a "card with holes in it" was introduced to facilitate speedy programming of the loom, the path was clear for IBM to intervene... some 100 years later.


moving finger said:
Again very churlish, Tournesol.
Just because “MF cannot do it” does not lead to the conclusion “it cannot be done”

The subjective experiential nature of red is different for each agent. There IS no universal definition – the experience is subjective. Do I need to explain what subjective means?

And besides all this, experiential knowledge is not part of understanding so it doesn’t matter (in the context of producing a machine with understanding) if nobody ever succeeds in writing it down anyway.


It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Neither MF nor anyone else can accurately predict the weather, but there is no doubt in my mind that it is a deterministically chaotic process which is entirely physical.

MF

These are weak arguments. The very fact that Mary and T are biological agents is enough to warrant the use of the word "understanding" to describe their personal experiences of phenomena.

The only thing that warrants the consideration of using human terms and terminology to describe a machine's functions, such as those in a computer, is the fact that humans created computers. When we create something, we use our own impression of how we function to serve as a blueprint in our machines. However, this does not warrant confusing hordes of people with words that apply to subtle human interactions by applying them to the machines that a very small number of people have built.
 
  • #160
MF said:
I don’t need to “see” a house to understand what a house is.
I don’t need to “see” red to understand what red is.
"See", I believe, has just been used as an arbitrary way in which to experience something, not the sole way: as QC has pointed out, there are other fashions by which a person could gain experiential knowledge, either directly or indirectly, to help them understand a concept. There are senses other than sight.
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to them. None others whatsoever. How would you go about teaching this person what the colour "red" is?
 
  • #161
quantumcarl said:
If you do not see a house or red (or a red house), you will never attain a complete understanding of red, house or red house. Experiencing the visual stimulus that is caused by a house or the colour red is part of completely understanding the colour or the structure.

What do you “understand” about the “colour” of red simply by experiencing seeing red? You “know what red looks like for quantumcarl”, yes – but “knowing what red looks like for quantumcarl” is NOT “semantic understanding of red”. And “knowing what red looks like” tells you nothing about the “structure” of red (whatever that might mean).

“A picture paints a thousand words” – that is indeed a common expression in English.
All (ALL) of the “information” contained in any picture (visual image) can be reduced to a string of binary digits. A house is “defined” by the relational aspects of components such as door, windows, roof, etc. “What a house looks like” can be reduced to words, and also to mathematical language.
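The claim that any picture reduces to a string of binary digits can be illustrated with a minimal sketch. The tiny 2x2 "image" below is a hypothetical example invented for illustration, not anything from this discussion; it assumes the common convention of 8 bits per colour channel:

```python
# A hypothetical 2x2 "image": each pixel is an (R, G, B) triple of 0-255 values.
image = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]

# Reduce the entire picture to one string of binary digits:
# 8 bits per channel, 24 bits per pixel, scanned row by row.
bits = "".join(
    format(channel, "08b")
    for row in image
    for pixel in row
    for channel in pixel
)

print(len(bits))   # 2 x 2 pixels x 24 bits = 96
print(bits[:8])    # the first channel (255) encodes as "11111111"
```

Nothing about the picture is lost in this encoding; the original pixel values can be recovered from the bit string, which is the sense in which all of the image's information is "just digits".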

Granted, my 6-year-old son has a “picture dictionary” with nice images of houses inside. Why? Because a young child “takes in more information, and more easily” through pictures rather than through words. This is clearly the case with children.

But my own dictionary does not use any pictures or visual images in its definitions of words. Why? Because images (though sometimes useful, especially for young people) are not essential to convey the meaning of words (ie to convey semantic understanding).

Suppose that Mary claims to possess semantic understanding of the term “house”, but she has never seen a house. Presumably quantumcarl would say there is something missing from Mary’s semantic understanding of “house”.

What exactly is missing? What is it that Mary necessarily CANNOT understand about the term “house”, which she WOULD understand if only she could see a house? Would you care to tell us?

quantumcarl said:
In fact, humans are able to experience red without seeing the colour. This is because humans are comprised of cells, and every one of these cells reacts to colours in a photosensitive manner that releases hormones in the human body. Experiencing this hormonal release results in part of what I'd call the experiential understanding of the colour red.
Does this “hormonal release” convey any semantic understanding to the human?
No.

quantumcarl said:
Similarly, the hormonal reaction to seeing a house would entail about 2 million years of instinctual, hormonal and basic human reactions to the concept of attaining shelter.
“Hormonal reaction” is not “semantic understanding”

quantumcarl said:
It is by way of these processes that humans are able to experience and understand red in a way that is thoroughly separate and distinguished from the programmed or auto-programmed data-storage and physics of a computer.
“Hormonal reaction” is not “semantic understanding”
quantumcarl said:
when professional computer scientists describe digital processes they are, or should be, bound by terminological protocol to distinguish these processes and results from biological processes and results to aggressively avoid confusion and the mis-directed trust of the lay-public.
With respect, I suggest it is the “lay-public” confusion between “knowing what a colour looks like” (ie subjective “hormonal reaction” to use your phrase) and “knowing what a colour IS” (ie semantic understanding of the term) which is responsible for your own confusion here. These are two very different types of “knowing”.

In science, we have a duty to avoid lay-person-type confusion and to distinguish very carefully between the subjective “experiential knowledge of X” (which has nothing to do with semantic understanding of X) and the objective “definitional understanding of X”, which has everything to do with semantic understanding of X.

quantumcarl said:
Intelligence was defined by humans in the first place.
All human words are defined by humans. It does not follow that all words must be defined anthropocentrically (unless we deliberately wish to create an anthropocentric bias in everything).

quantumcarl said:
we can tell what red looks like to any computer because we built the things and we set up their parameters of definition.
With respect, if we create a computer which is able to “consciously and subjectively perceive the colour red” then we will have no way of knowing what that subjective experience is like for the computer, whether we “set up the parameters” or not.

Even if I know everything there is to know (objectively) about a bat, I can NEVER know “what it feels like” to be a bat, because “what it feels like” is purely subjective.

quantumcarl said:
If we had built Mary, we'd know what red looked like to her.
Why should this follow?
“Mary knowing what red looks like” is a subjective experience that is peculiar to Mary, nobody on the outside of Mary “knows what this is like for Mary”, only Mary does.

quantumcarl said:
She does, however, have the ability to describe "red" to us using her unique "understanding" of the colour.
What does red look like? Can you describe “what red looks like” to someone who can only see shades of grey?
No.
Why? Because your subjective experience of “red” is peculiar to you, it has no objective basis in the outside world, it cannot be described in objective terms which another person can understand.

quantumcarl said:
I maintain that an incomplete "understanding" is not "understanding" per se. An incomplete understanding demonstrates a process that is working toward understanding. It may never be reached and there is no law that says it will be reached.
Have you shown that Mary has incomplete understanding of the term “red” simply because she has never experienced seeing red?

What Mary does not understand about red
Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
Quantumcarl (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can quantumcarl provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
quantumcarl said:
The way I see it, all knowledge is experiential.
This simply betrays your anthropocentric bias again.
Objective knowledge is derived from information and the relational rules of that information (which in turn is also information). All information can be encoded into binary digits and “programmed” into an agent. Humans cannot be programmed (yet), therefore the only way they can acquire knowledge of or from the outside world is through their senses. But nevertheless I can still acquire a complete semantic understanding of the term “red” without ever seeing red.

quantumcarl said:
I can't say this is true for computers because they do not "experience" as far as I know. I would maintain that understanding is fully dependent upon experience (which requires a form of consciousness).
As I said, this simply shows your anthropocentric bias.

moving finger said:
But as you recall, experiential knowledge is not part of understanding so it doesn’t matter anyway
quantumcarl said:
Why do you keep saying this?
Because it is true!
I have suggested a thought experiment (“What Mary does not understand about red” above) which would allow you to show (if you can) that experiential knowledge is a necessary part of understanding – can you do so?

quantumcarl said:
See what I've written about the effect of red on hormones... (in biologically active agents)
“Hormonal reaction” is not “semantic understanding”

quantumcarl said:
Repeating a false statement does not make it correct.
Can you show that experiential knowledge is necessary for understanding (rather than simply saying that it is)? Can you answer the “What Mary does not understand about red” example above?

quantumcarl said:
These are weak arguments.
With respect, a weak argument is better than none at all.
My position is that I can “semantically understand” all there is to know about red without ever seeing red, and there has been no rational counter-argument provided!
All quantumcarl and Tournesol have been able to do is to assert the equivalent of “experiential knowledge is required for semantic understanding” – but this has NOT been shown to be the case! Where is your evidence that this is the case? Can you answer the “What Mary does not understand about red” example above?

quantumcarl said:
In the absence of any evidence, the very fact that Mary and T are biological agents is enough to warrant the use of the word "understanding" to describe their personal experiences of phenomena.
This again shows a lay-person’s confused use of the word “know”. To “know what red looks like” is subjective experiential knowledge, it has nothing to do with semantically understanding what is meant by the term red.
If you genuinely believe that Mary needs to see red in order to understand what is meant by red, then please reply to the “what Mary does not understand about red” argument above
quantumcarl said:
The only thing that warrants the consideration of using human terms and terminology to describe a machine's functions, such as those in a computer, is the fact that humans created computers. When we create something, we use our own impression of how we function to serve as a blueprint in our machines. However, this does not warrant confusing hordes of people with words that apply to subtle human interactions by applying them to the machines that a very small number of people have built.
A more responsible and objective approach to scientific understanding of “understanding” would be to avoid anthropocentric bias at all costs.

MF
 
Last edited:
  • #162
TheStatutoryApe said:
"See", I believe, has just been used as an arbitrary way in which to experience something, not the sole way: as QC has pointed out, there are other fashions by which a person could gain experiential knowledge, either directly or indirectly, to help them understand a concept. There are senses other than sight.
Agreed. The role of the senses as far as human semantic understanding is concerned is to convey information and knowledge about the world – the senses are merely conduits for information and knowledge transfer to the brain. If you like, they are the means by which we “program” our brains. But once the brain is programmed and we “understand” by virtue of the information and knowledge that we possess, then we do not need the senses in order to “continue understanding”.
TheStatutoryApe said:
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to it. None others what so ever. How would you go about teaching this person what the colour "red" is?
That is a very good question – and it gets right to the heart of the matter, hence I will answer it fully so that we all might understand what is going on.

The confusion between “experiential knowledge” and “semantic understanding” arises because there there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”

One meaning (based on subjective experiential knowledge of red) would be better expressed “what does the colour red look like?”. Let us call this question A.

The other meaning (the objective semantic meaning of red) would be better expressed as “what is the semantic meaning of the term red?”. Let us call this question B.

Now, TheStatutoryApe, which question have you asked above? Is it A or B? I will answer both.

A - “what does the colour red look like?”
What the colour red looks like is a purely subjective experiential brain state. I have no idea what the colour red looks like for TheStatutoryApe, I only know what it looks like for MF. I cannot describe in objective terms what this colour looks like. Can you? Can anyone? The best I can do is to point to a red object and to say “there, if you look at that object then you will see what the colour red looks like”, but that STILL does not mean that the colour red “looks the same” for TheStatutoryApe as it does for MF. And seeing the colour red is NOT necessary in order to convey any semantic understanding of the term “red”.

B - “what is the semantic meaning of the term red?”
The semantic meaning of the term red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. I do not need to be able to see red in order to understand from this definition that “this is what red is”.
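This definitional point can even be written as a rule that an agent with no visual apparatus could follow. The sketch below is a hypothetical illustration: the function name and the wavelength bands are approximate values chosen for the example, and the point is only that the rule refers to a number, never to "what the colour looks like":

```python
def colour_name(wavelength_nm: float) -> str:
    """Classify visible light purely by wavelength in nanometres.

    The band boundaries are approximate illustration values. No
    perception of colour is involved -- only arithmetic on a number.
    """
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 495 <= wavelength_nm < 570:
        return "green"
    if 450 <= wavelength_nm < 495:
        return "blue"
    return "other"

print(colour_name(650))  # "red" -- matching the ~650 nm definition above
```

An agent executing this rule correctly answers "what colour is 650 nm light?" without ever having seen red, which is exactly the sense in which question B is separable from question A.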

Thus, it all depends on what your question is.

If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

MF
 
Last edited:
  • #163
Because there is another piece of information you have about me: I have a
human brain, and human brains are known to be able to implement semantics,
consciousness, etc.

Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

That is a silly objection. Anyone I can actually speak to obviously has a
functioning brain. I am not going on their external behaviour alone; I have
an insight into how their behaviour is implemented, which is missing in the CR
and the TT.

If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?

The TT is more doubtful in the case of a machine than that of a human.

The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

I have already suggested a better test. You were not very receptive.


You are missing another point as well: the point is whether syntax is
sufficient for semantics.
I’m not missing that point at all. Searle assumes that a computer would not be able to understand semantics – I disagree with him. It is not a question of “syntax being sufficient for semantics”; I have never asserted that “syntax is sufficient for semantics” or that “syntax somehow gives rise to semantics”.
That the AI argument is necessarily based on the premise “syntax gives rise to semantics” is a fallacy that Searle has promulgated, and which you have swallowed.


It would help if you spelt out what, IYO, the (strong) AI argument does say.

Throughout this debate you seem to be assuming that there is some set of
rules that are sufficient for semantics. Above you reject the idea that
they are the same rules as syntax. Very well: let us call the thesis
you are promoting
"The Symbol Manipulation According to Rules Technique is sufficient for semantics thesis"
or
"The SMART is sufficient for semantics thesis"
Where SMART is any kind of Symbol Manipulation According to Rules.

Note, that this distinction makes no real difference to the CR.
Searle uses "syntax" and "symbol manipulation" interchangeably because it
does not strike him that semantics is or could be entirely rule-based. In fact,
it has never struck anybody except yourself, since there are so many objections
to it.

What fills the gap in humans, setting aside
immaterial souls, is probably the physical embodiment and interactions
with the surroundings. Of course,
any actual computer will have a physical embodiment, and its
physical embodiment *might* be sufficient for semantics and consciousness.
Once again – place me in a state of sensory deprivation, and I still understand semantics.
Once again, BECAUSE YOU HAVE ALREADY ACQUIRED SEMANTICS.

The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

How is that relevant to the CR ? Are you saying that the CR can *acquire*
semantics despite its lack of interaction and sensory contact with an
environment? Are you saying you can "download" the relevant information
from a human -- although you have already conceded that information may
fail to make sense when transplanted from one context to another?


However, even if that is true, it does not mean the computer's
possession of semantics is solely due to syntactic abilities,
and Searle's point is still true.

I am not arguing that “syntax gives rise to semantics”, which seems to be Searle’s objection. I am arguing that a computer can understand both syntax and semantics.

By virtue of SMART?


Are you seriously asserting that the only thing that prevents a beaver from lecturing on civil
engineering is its lack of a voicebox ?
I am saying that “possession of understanding alone” is not sufficient to be able to also “report understanding” – to report understanding the agent also needs to be able “to report”.

By standard semantics, possession of understanding is *necessary* to report.


Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"


There is nothing special about the inability of the CR to derive semantics from syntax, since
that is not possible in general.

But semantics is not derived from syntax. Why do you think it needs to be? Because Searle wrongly accuses the AI argument of assuming this?

To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics. You have also yet to explain what
you consider the "correct" AI argument to be.


Human agents are likely to do so using their embeddedness in the world -- "*this* [points] is what I mean by zimmoid" --
but the CR does not have that capacity.

There are many other ways of conveying and learning both meaning and knowledge apart from “pointing and showing”.

*Some* direct demonstrations can be deferred in *some* cases. It is not clear
whether they can all be removed completely. The alternative would be a system
of essentially circular definitions -- like
"Gift: present"
"Present: gift"
but more complex.



No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at all?


The rules of semantics, along with the rules of syntax, are learned (or programmed). Any agent, including humans, can incorporate errors in learning (or programming) – making one error in syntax or semantics, or disagreeing on the particular semantics in one instance, does not show that the agent “does not understand semantics”.

The fact that errors in semantics may be undetectable to a TT implies absence of
semantics may be undetectable to a TT.


Once again - I have never said that semantics is derived from syntax – you have said this!
Both syntax and semantics follow rules, which can be learned, but it does not follow that “one is derived from the other”. Syntax and semantics are quite different concepts. Just as a “programmable computer” and a “pocket calculator” are both calculating machines, but one cannot necessarily construct a programmable computer by connecting together multiple pocket calculators.

The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.


Truths-by-definition may not amount to empirical truths, but that does not mean they
are empirical falsehoods -- or do you think there are no unmarried bachelors?

We may agree on the definitions of some words, but it does not follow that we agree on the definitions of all words.

If most people agree on the definitions of the words in a sentence, what
would stop that sentence from being an analytic truth, if it is analytic?


How can you establish a fact without definitions?

Ask yourself what are the essential qualities of understanding that allow me to say “this agent understands”; avoid prejudicial definitions, and avoid anthropocentrism.

You haven't shown how to do all that without any pre-existing definitions.

There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities,

How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?


Experiential qualities are agent-dependent (ie subjective). Tournesol’s experiential quality of seeing red is peculiar to Tournesol and subjective – it is meaningless to any other agent.

Even if that is true, it is a far cry from the argumentatively relevant point that "red" has no meaning.

I never said that “red has no meaning”. But the “experiential knowledge of red” is purely subjective.
Tournesol – Really, if you wish to continue misquoting me there is not much point in continuing this discussion.


It is not a question of misquoting you, it is a question of guessing how your
comments relate to the CR. Why should it matter that "the “experiential knowledge of red” is purely subjective"?
Are you supposing that subjective knowledge doesn't matter for semantics ?

And I don't see why it should be true anyway; if consciousness is generated by the brain, then anatomically normal
brains should generate the same qualia. To argue otherwise is to assume some degree of non-physicalism.

Why should it be the case that the precise “data content” of MF seeing red should necessarily be the same as the “data content” of “Tournesol seeing red”? Both are subjective states, there is no a priori reason why they should be identical.

They should be broadly similar if our brains are broadly similar.
They should be precisely similar if our brains are precisely similar.
They should *not* be radically different if our brains are similar -- that
would be a violation of the physicalist "same cause, same effect" principle.


Need I point
out the eccentricity of appealing to anti-physicalism to support AI?

Need I point out that I have never appealed to such a thing?

The idea that similar brains can have radically different qualia is
non-physicalism in my and most people's book.

Again you are either “making things up to suit your arguments”, or you are misquoting me.

Or you are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.
 
  • #164
As to your distinction between knowledge and understanding , I don't think it is
sustainable. To know what a word means is to understand it.
“experiential knowledge of red” has nothing to do with “knowing what the word red means”. I know what the word “x-ray” means yet I have no experiential knowledge of x-rays.

You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.

What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Aagh! Firstly that is a classically anti-physicalist argument.
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.


You have no grounds to suppose that understanding X-rays is just understanding per se -- it is
only partial understanding compared to understanding visible colours.

Only “partial understanding”?
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

What they look like, experientially.


You have conceded that experiential knowledge is knowledge.
If knowledge is required for understanding, as you say, experiential knowledge
is required for understanding. Since experience is needed for
experiential knowledge, that means experience is required for
understanding.

I have said that knowledge is necessary for understanding, but it does not follow from this that all knowledge conveys understanding.
Experiential knowledge is 100% subjective, it does not convey any understanding at all.

That does not follow. Clearly experiential semantics conveys understanding of
experience.


We could copy the data across -- as in standard, non-AI computing-- but would that be sufficient for
meaning and understanding ? If a system has language, you can use that to convey
3rd-person non-experiential knowledge. But how do you bootstrap that process -- arrive
at linguistic understanding in the first place? Humans learn language through interaction
with the environment.

And once learned, all that data and knowledge is contained within the brain – there is no need for continued interaction with the environment in order for the agent to continue understanding. Thus interaction with the environment is simply one possible way of “programming” the data and knowledge that is required for understanding. It does not follow that this is the only way to program data and knowledge.


What is the alternative ? We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?


you cannot safely
conclude that you have the right semantics just because you have the right syntax.
Wrong assumption again. I have never suggested that syntax gives rise to semantics.

You haven't supplied any other way the CR can acquire semantics.


An AI that produces the right answers in a TT might have the wrong semantics or no semantics.
In a poorly constructed Turing Test it might have, yes. Just as any human also might have. That is why we need to look at (a) trying to get a better understanding of just what understanding is (the qualities of understanding) without resorting to “defining our way” out of the problem and (b) improving the Turing Test

I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP. But that does not show
Searle is wrong; solving the HP is showing how mind emerges
from physics; it might tell you that artificially intelligent
agents need certain material substrates (or that certain semantics --
the semantics of feelings and experiences -- needs a certain physical
embeddedness). The resulting AI would not therefore
have its intelligence/consciousness/semantics purely by virtue
of SMART.


Would a putative AI have
quite a different form of language to a human (the Lion problem) ? Then Searle
has made his case.

Do you mean spoken language? Why would it necessarily be any different to an existing human language?

Because human languages contain vocabulary relating to human senses.


Why would it necessarily be the same? What bearing does this have on the agent’s ability to understand?

If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?


Would it have the same understanding, but not achieved solely
by virtue of syntax? Again, Searle has made his case.

Again, Searle’s case and yours is based on a false assumption – that AI posits that syntax gives rise to semantics!

Achieved solely by SMART has the same problems as achieved solely by syntax.



What red looks like to Mary is not necessarily the same as what red looks like to Tournesol

Naturalistically, it should be.

You have no way of knowing whether it is or not

Do I have a way of knowing whether physicalism is true?


Your argument continues to betray a peculiarly anthropocentric perspective.

Strong AI is about duplicating human intelligence -- it should be anthropocentric.

Humans are also carbon based – does that mean all intelligent agents must necessarily be carbon-based? Of course not.
AI is about creating intelligence artificially. Humans happen to be just one example of a species that we know possesses intelligence, it does not follow that intelligence must be defined anthropocentrically.


It does if we are to avoid a situation where "is this a computer" is a matter
of idiosyncratic definition. We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.

What makes you think that the experiential quality of red is the same to you as it is to me?

Physicalism. Same cause, same effect.

There is reason to doubt, because you have no way of knowing if the effect is indeed the same (you have no way of knowing what red looks like to Mary).

Well, that's the anti-physicalist's argument.


How could such an agent attach any meaning to a term like "qualia" if it has no examples whatsoever
to draw on?

If I have never seen a house I can nevertheless attach a meaning to the word “house” by the way the word is defined and the way it is used in language and reasoning. I can attach a meaning to “x-rays” even though I have absolutely no experiential knowledge (no qualia) associated with x-rays whatsoever. I can do the same with the word “red”.

The question was the term "qualia". You could infer "house" by analogy with
"palace" or "hut". You could infer "X-ray" by analogy with "light". How
can you infer "qualia" without any analogies?


If no-one can write down a definition of the experiential nature of "red" no-one can encode it into a programme.

Maybe so – but as you recall, experiential knowledge is not part of understanding, so it doesn’t really matter.

Tu quoque.

It is a subjective experience because no-one can write down a definition.
And that holds true without making the physicalistically unwarranted assumption that
similar brains produce radically different qualia.

I have simply cautioned you against the opposite unwarranted assumption – that the precise data connected with your experience of red is necessarily the same as the precise data connected with Mary’s experience of red.

It is not necessarily the same; it is naturalistically the same. For all your
adherence to the central dogma of anti-physicalism, inverted spectra, you
claim to be a physicalist.


SO you can't teach a computer what "red" means by cutting-and-pasting information (or rather data)
from a human brain -- because it would no longer make sense in a different context.

Nor would you need to in order to impart understanding to the computer. Experiential knowledge has nothing to do with understanding, remember?

Why should I "remember" something that relates to a definition of
"understanding" which *I* don't accept... as *you* point out.

Anyway, experience has to do with the semantics of experiential language.

Note that data is not information for precisely that reason -- information is data that
makes sense in a context. You seem to have got the transferability of data mixed
up with the transferability of information. Information can be transferred,if the "receiving" context
has the appropriate means to make sense of the data already present, but how the CR is to
have the means is precisely what is at stake.

Good point. You could in principle transfer the precise data corresponding to “Tournesol sees red” into Mary’s brain, but it does not follow that Mary’s brain will be able to make any sense of that data. But as you recall, experiential knowledge is not part of understanding, so it doesn’t matter anyway.

It is not part of your definition of understanding -- how remarkably
convenient.

Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)

We can ask questions about how things look (about the "carriers" of information as opposed to
infomation itself) , and a system with full semantics needs to understand those questions.

What colour is a red object? Red. What is there to “understand semantically” about that which requires me to have experiential knowledge of “what red looks like”?

Ask a blind person what red looks like.

No-one knows how to encode all the information. You don't.

Oh really, Tournesol. Whether MF knows how to do it or not is irrelevant.

Whether anyone else does is highly relevant:"No-one knows how to encode all the information".

With respect, to suggest that “it is not possible because nobody yet knows how to do it” seems like a rather churlish and infantile argument. Nobody “knew how to construct a programmable computer” in the 18th century, but that did not stop it from eventually happening.

Is that any easier than solving the Hard problem, or is it part of the Hard
problem?


I would have no way of telling whether a detailed
description of a brain state was a description of a "red" or "green" quale,
even if it was my own brain. So the "everyone has different qualia" definition
of "subjectivity" you are appealing to -- which is contradicted by physicalism --
is not the same as the "explanatory gap" version, which applies even to one's own
brain-states.

It is not “contradicted by physicalism”. Tournesol is not MF, therefore there is no reason to expect that Tournesol’s brain-states will be identical to MF’s brain states when both agents are “seeing red”.

Yes there is: all brains are broadly similar anatomically. If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker.)

Simply because I cannot write down a complete description of either of these brain-states does not lead to the conclusion that they cannot in principle be fully described physically.

Summary:-

You might be able to give the CR full semantics by solving the HP; but that
leads to a version of AI that Searle does not disagree with.

You might wriggle off the hook of full semantics by stipulating that
experience has nothing to do with understanding; but this is a style of
argument you dislike when others use it.

You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
 
  • #165
moving finger said:
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.
Tournesol said:
That is a silly objection.
It is not an objection, it is an observation. Do you dispute it?
Tournesol said:
Anyone I can actually speak to obviously has a
functioning brain.
Tournesol, your arguments are becoming very sloppy.
I can (if I wish) “speak to” my table – does that mean my table has a functioning brain?
Tournesol said:
I am not going on their external behaviour alone; I have
an insight into how their behaviour is implemented, which is missing in the CR
and the TT.
What “insight” do you have which is somehow independent of observing their behaviour?
How would you know “by insight” that a person in a coma cannot understand you, unless you put it to the test?
How would you know “by insight” that a 3-year old child cannot understand you, unless you put it to the test?
moving finger said:
The solution to this problem is to try and develop a better test, not to “define our way out of the problem”
Tournesol said:
I have already suggested a better test. You were not very receptive.
Sorry, I missed that one. Where was it?
Tournesol said:
It would help if you spelt out what, IYO, the (strong) AI argument does say.
I am not here to defend the AI argument, strong or otherwise.
I am here to support my own position, which is that machines are in principle capable of possessing understanding, both syntactic and semantic.
Tournesol said:
you seem to be assuming that there is some set of
rules that are sufficient for semantics.
Agreed
Tournesol said:
let us call the thesis
you are promoting
"The Symbol Manipulation According to Rules Technique is sufficient for semantics thesis"
or
"The SMART is sufficient for semantics thesis"
Where SMART is any kind of Symbol Manipulation According to Rules.
Not “any kind” of symbol manipulation – a particular symbol manipulation
Tournesol said:
Note, that this distinction makes no real difference to the CR.
Can you show this, or are you simply asserting it?
Tournesol said:
Searle uses "syntax" and "symbol manipulation" interchangeably because it
does not strike him that semantics is or could be entirely rule-based.
That’s his opinion. I do not agree
Tournesol said:
In fact,
it has never struck anybody except yourself, since there are so many objections
to it.
I do not think this is true. Even if it were true, what relevance does this have to the argument?
moving finger said:
The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.
Tournesol said:
How is that relevant to the CR? Are you saying that the CR can *acquire*
semantics despite its lack of interaction and sensory contact with an
environment?
I am saying that the information and knowledge to understand semantics can be encoded into the CR, and once encoded it does not need continued contact with the outside world in order to understand
Tournesol said:
Are you saying you can "download" the relevant information
from a human -- although you have already conceded that information may
fail to make sense when transplanted from one context to another?
Where did I say that the information needs to be downloaded from a human?
Are you perhaps suggesting that semantic understanding can only be transferred from a human?
The only “information” which I claim would fail to make sense when transplanted from one agent to another is subjective experiential information – which as you know by now is not necessary for semantic understanding.
Tournesol said:
By virtue of SMART?
By virtue of the fact that semantic understanding is rule-based
Tournesol said:
By standard semantics, possession of understanding is *necessary* to report.
The question is not whether “the ability to report requires understanding” but whether “understanding requires the ability to report”
If you place me in situation where I can no longer report what I am thinking (ie remove my ability to speak and write etc), does it follow that I suddenly cease to understand? Of course not.
moving finger said:
Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?
Tournesol said:
1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"
Mary can semantically understand the statement “what red looks like” without knowing what red looks like. The statement means literally “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “what red looks like”.
Mary can semantically understand the statement “the experiential qualities of red which cannot be written down” without knowing the experiential qualities of red. The statement means literally “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “the experiential qualities of red which cannot be written down”.
Thus I have shown that Mary can indeed semantically understand both your examples.
Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?
What red looks like is nothing to do with semantic understanding of the term red – it is simply “what red looks like”. What red looks like to Tournesol may be very different to what red looks like to MF, but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of what red looks like.
The experiential qualities of red are nothing to do with semantic understanding of the term red – these are simply “the experiential qualities of red”. The experiential qualities of red for Tournesol may be very different to The experiential qualities of red for MF, but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of the experiential qualities of red.
The confusion between “experiential qualities” and “semantic understanding” arises because there are two possible, and very different, meanings of (interpretations of) the simple question “what is the colour red?”
One meaning (based on subjective experiential knowledge of red) would be expressed “what does the colour red look like?”.
The other meaning (the objective semantic meaning of red) would be expressed as “what is the semantic meaning of the term red?”.
This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.
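MF's claim that the semantic meaning of a colour term can be captured by rules, without any experiential data, can be sketched in a few lines of Python. This is purely a toy illustration (not from the original thread), and the wavelength bands are rough assumed values, not authoritative colourimetry:

```python
# Toy "rule-based semantics" for colour terms: a lookup from wavelength
# (in nanometres) to a colour word. The band boundaries below are rough,
# illustrative assumptions only.
COLOUR_RULES = [
    ((380, 450), "violet"),
    ((450, 495), "blue"),
    ((495, 570), "green"),
    ((570, 590), "yellow"),
    ((590, 620), "orange"),
    ((620, 750), "red"),
]

def colour_term(wavelength_nm):
    """Return the colour term whose (assumed) wavelength band contains the input."""
    for (low, high), term in COLOUR_RULES:
        if low <= wavelength_nm < high:
            return term
    return "outside the visible spectrum"

print(colour_term(650))  # -> red
```

On MF's view, an agent running rules like these "knows what red IS" (radiation of roughly 650nm) while knowing nothing about "what red looks like"; on Tournesol's view, exactly that second component is what the rules leave out.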
Tournesol said:
To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.
You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.
Tournesol said:
You have also yet to explain what
you consider the "correct" AI argument to be.
Answered above
moving finger said:
No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.
Tournesol said:
What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at all
?
Very relevant. If the CR passes most of the Turing test, but fails to understand one or two words because those words are simply defined differently between the CR and the human interrogator, that in itself is not sufficient to conclude “the CR does not understand”
Tournesol said:
The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.
“Grounded” in what in your opinion? Experiential knowledge?
What experiential knowledge do I necessarily need to have in order to have semantic understanding of the term “house”?
Tournesol said:
If most people agree on the definitions of the words in a sentence, what
would stop that sentence being an analytic truth, if it is analytic?
If X and Y agree on the definitions of words in a statement then they may also agree it is analytic. What relevance does this have?
moving finger said:
There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities,
Tournesol said:
How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?
You are not reading my replies, are you? I never said there should be no definitions, I said there is a balance to be struck. Once again you seem to be making things up to suit your argument.
Tournesol said:
Why should it matter that "the experiential knowledge of red is purely subjective"?
Are you supposing that subjective knowledge doesn't matter for semantics?
I am suggesting that subjective experiential knowledge is not necessary for semantic understanding. How many times do you want me to repeat that?
Tournesol said:
They should be broadly similar if our brains are broadly similar.
“Broadly similar” is not “identical”.
A horse is broadly similar to a donkey, but they are not the same animal.
Tournesol said:
you are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.
You are perhaps trying to read things into my arguments that are not there, to support your own unsupported argument. When I say “there is no a priori reason why they should be identical” this means exactly what it says. With respect if we are to continue a meaningful discussion I suggest you start reading what I am writing, instead of making up what you would prefer me to write.
Tournesol said:
You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.
I certainly do not. Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know? Whether or not I have known the experiential quality of seeing red makes absolutely no difference to this semantic understanding.
moving finger said:
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.
Tournesol said:
that is a classically anti-physicalist argument.
It may be a true argument, but it is not necessarily anti-physicalist.
Tournesol said:
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.
The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.
moving finger said:
What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?
Tournesol said:
What they look like, experientially.
What they look like is an experiential quality, it is not semantic understanding.
Perhaps you would claim that I also do not have a full understanding of red because I have not tasted red? And what about smelling red?
Tournesol said:
Clearly experiential semantics conveys understanding of
experience.
I can semantically understand what is meant by the term “experience” without actually “having” that experience.
Tournesol said:
We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
The CR already contains information in the form of the rulebook
Tournesol said:
You haven't supplied any other way the CR can acquire semantics.
Semantics is rule-based; why should the CR not possess the rules for semantic understanding?
Tournesol said:
I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
Please define the Hard Problem.
Tournesol said:
Because human languages contain vocabulary relating to human senses.
And I can have complete semantic understanding of the term red, without ever seeing red.
Tournesol said:
If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?
By reasoning and experimental test.
Tournesol said:
Achieved solely by SMART has the same problems as achieved solely by syntax.
You have not shown that semantic understanding requires anything other than knowledge and understanding of the relevant semantic rules.
Tournesol said:
Do I have a way of knowing whether physicalism is true?
You don’t. And I understand that many people do not believe it is true.
Tournesol said:
We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.
And my position is that I believe arbitrary definitions such as “understanding requires consciousness” and “understanding requires experiential knowledge” are too anthropocentrically biased and cannot be defended rationally.
Tournesol said:
Well, that's the anti-physicalist's argument.
It’s my argument. I’m not into labelling people or putting them into boxes.
I see no reason why X’s subjective experience of seeing red should be the same as Y’s.
Tournesol said:
The question was the term "qualia". You could infer "house" by analogy with
"palace" or "hut". You could infer "X-ray" by analogy with "light". How
can you infer "qualia" without any analogies?
By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding of the experiential quality of seeing red?
Tournesol said:
you
claim to be a physicalist.
To my knowledge I have made no such claim in this thread.
Tournesol said:
Anyway, experience has to do with the semantics of experiential language.
Semantic understanding has nothing necessarily to do with experiential qualities, as I have shown several times above.
Tournesol said:
It is not part of your definition of understanding -- how remarkably
convenient.
And remarkably convenient that it is part of yours?
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary experiment, for example.
Tournesol said:
Ask a blind person what red looks like.
He or she has no idea what red looks like, but it does not follow from this that he or she does not have semantic understanding of the term red, which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. Experiential knowledge is not part of this semantic understanding.
Tournesol said:
Is that any easier than solving the Hard problem, or is it part of the Hard
problem?
Please define what you understand to be the Hard Problem.
Tournesol said:
Yes there is: all brains are broadly similar anatomically
As before, “broadly similar” is not synonymous with “identical”.
Tournesol said:
If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker.)
Genetically identical twins may behave similarly, but not necessarily identically. Genetic makeup is only one factor in neurophysiology.
Tournesol said:
You might be able to give the CR full semantics by solving the HP; but that
leads to a version of AI that Searle does not disagree with.
Please define what you mean by the Hard Problem.
Tournesol said:
this is a style of
argument you dislike when others use it.
It is not a question of “disliking”.
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”) then it is worthy of discussion. I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.
Tournesol said:
You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
What “explanatory gap” is this?
MF
 
  • #166
moving finger said:
What do you “understand” about the “colour” of red simply by experiencing seeing red? You “know what red looks like for quantumcarl”, yes – but “knowing what red looks like for quantumcarl” is NOT “semantic understanding of red”. And “knowing what red looks like” tells you nothing about the “structure” of red (whatever that might mean).

“A picture paints a thousand words” – that is indeed a common expression in English.
All (ALL) of the “information” contained in any picture (visual image) can be reduced to a string of binary digits. A house is “defined” by the relational aspects of components such as door, windows, roof, etc. “What a house looks like” can be reduced to words, and also to mathematical language.

Granted, my 6-year-old son has a “picture dictionary” with nice images of houses inside. Why? Because a young child “takes in more information, and more easily” through pictures rather than through words. This is clearly the case with children.

But my own dictionary does not use any pictures or visual images in its definitions of words. Why? Because images (though sometimes useful, especially for young people) are not essential to convey the meaning of words (ie to convey semantic understanding).

Suppose that Mary claims to possess semantic understanding of the term “house”, but she has never seen a house. Presumably quantumcarl would say there is something missing from Mary’s semantic understanding of “house”.

What exactly is missing? What is it that Mary necessarily CANNOT understand about the term “house”, which she WOULD understand if only she could see a house? Would you care to tell us?

An incomplete understanding of a house means not experiencing the house as a whole. It means not receiving the entire gamut of information with regard to a house. Mary does not see, smell or understand the plumbing of the house... therefore, her understanding of a house is incomplete... and this does not constitute an understanding of a house. Mary has not seen the blueprints and has not inspected the foundations. Mary has not walked into the house and inspected, or had an inspection done, for insects or pests. These are things that a house can contain. She perhaps doesn't understand that, and so her understanding is incomplete and still in the process of being formed. Mary does not understand these implications with regard to the term "house".

moving finger said:
Does this “hormonal release” convey any semantic understanding to the human?
No.

Yes. The hormonal release demonstrates how red makes the human feel in the presence of red. This is true in every human, although not widely known. Note, e.g., red-light districts. Note, e.g., the popular saying in advertising: "red sells".


moving finger said:
“Hormonal reaction” is not “semantic understanding”

Hormonal reaction is an experience and therefore constitutes knowledge. As noted above, it is understood by many professionals as "common knowledge" or a "semantic understanding".


moving finger said:
“Hormonal reaction” is not “semantic understanding”

You're repeating yourself again.



moving finger said:
With respect, I suggest it is the “lay-public” confusion between “knowing what a colour looks like” (ie subjective “hormonal reaction” to use your phrase) and “knowing what a colour IS” (ie semantic understanding of the term) which is responsible for your own confusion here. These are two very different types of “knowing”.

With continued respect, part of knowing what a colour is includes experiencing its effects. When a colour stimulates the cones of the retina, this experience helps one toward an understanding of the physics of colour. When you read an equation that explains the physical properties of colour... the experience helps one toward an understanding of colour.

moving finger said:
In science, we have a duty to avoid lay-person-type confusion and to distinguish very carefully between the subjective “experiential knowledge of X” (which has nothing to do with semantic understanding of X) and the objective “definitional understanding of X”, which has everything to do with semantic understanding of X.


All human words are defined by humans. It does not follow that all words must be defined anthropocentrically (unless we deliberately wish to create an anthropocentric bias in everything).

We don't have to "create" the anthropocentric bias. As humans we cannot escape the bias. However, we can avoid using terms that only apply to human physiology, function and process, such as the word "understanding". If you can build a computer that "loves" and "understands" you... I'd really like to see that.


moving finger said:
With respect, if we create a computer which is able to “consciously and subjectively perceive the colour red” then we will have no way of knowing what that subjective experience is like for the computer, whether we “set up the parameters” or not.

I agree. In fact, there are more things that we don't know or understand than there are things we do know or understand. That's why caution is imperative in every endeavour, especially in the sciences that directly affect humankind, such as computer sciences.

moving finger said:
Even if I know everything there is to know (objectively) about a bat, I can NEVER know “what it feels like” to be a bat, because “what it feels like” is purely subjective.

That's right. That's why you will never have a complete understanding of a bat (and therefore will not understand a bat). That's why, when you asked me whether the individual components of the brain, i.e. neurons, "understand" the tasks they perform and their implications... I said: I don't know, I'm not a neuron.


moving finger said:
“Mary knowing what red looks like” is a subjective experience that is peculiar to Mary, nobody on the outside of Mary “knows what this is like for Mary”, only Mary does.

Yes, that's what I'm getting at. That is one of the many determiners of understanding. It is individual, relative and dependent upon the person understanding. I have given examples of "reaching an understanding" between more than one person, however.


moving finger said:
What does red look like? Can you describe “what red looks” like to someone who can only see shades of grey?
No.
Why? Because your subjective experience of “red” is peculiar to you, it has no objective basis in the outside world, it cannot be described in objective terms which another person can understand.

Yes it can... but it's not a scientific language people use... it's also not binary language. One uses words to describe red like "warm", "to the yellow", "a little blue", "makes me horny"... "makes me want to buy"... and so on... There are some studies that have yielded standard results with regard to the qualities of red... but... you'll just have to believe me because we're almost out of time.


moving finger said:
Have you shown that Mary has incomplete understanding of the term “red” simply because she has never experienced seeing red?

What Mary does not understand about red
Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
Quantumcarl (presumably) would argue that experiential knowledge is necessary for full semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can quantumcarl provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?

Mary said sitting in the red room made her feel sexy, horny and warm all over. This room also triggered a memory of her father, who happened to be Satan, and it reminded her of the places that glowed deep like the reddened coals of a fire where her dad would take her on Christmas eve.



moving finger said:
This simply betrays your anthropocentric bias again.
Objective knowledge is derived from information and the relational rules of that information (which in turn is also information). All information can be encoded into binary digits and “programmed” into an agent. Humans cannot be programmed (yet), therefore the only way they can acquire knowledge of or from the outside world is through their senses. But nevertheless I can still acquire a complete semantic understanding of the term “red” without ever seeing red.

I disagree. Please see my above statements.


moving finger said:
As I said, this simply shows your anthropocentric bias.

Oh my god, is it showing!... yes I admit it... I'm a f@cking human!


moving finger said:
I have suggested a thought experiment (“What Mary does not understand about red” above) which would allow you to show (if you can) that experiential knowledge is a necessary part of understanding – can you do so?

Done and done(r). All knowledge is experiential. It must be experienced before it can be stored as knowledge.

Let me tell you how things are shaping up in my mind with regard to the terminology of the CR experiment.

Computers store data.

Humans understand.

-----


Computers are programmed.

Humans experience.

That's my take on it.

The rest of your argument seems to repeat most of the above points. Got to run. This is most enlightening because the more you try to discredit the state of understanding as a specifically human trait... the more you expose how it really is. Thank you.


moving finger said:
With respect, a weak argument is better than none at all.

Yes, and I'm sorry I threw that in. Just getting cocky I suppose. My respect. Cheers.
MF
 
  • #167
quantumcarl said:
An incomplete understanding of a house means not experiencing the house as a whole. It means not receiving the entire gamut of information with regard to a house. Mary does not see, smell or understand the plumbing of the house... therefore, her understanding of a house is incomplete... and this does not constitute an understanding of a house.

Ahhhh, I see now! Perhaps Mary actually needs to “be” the house to really understand it? Mary cannot really understand what a house is unless she is part of the house. Yes, I see what you mean…… :smile:

quantumcarl said:
Mary has not seen the blueprints and has not inspected the foundations. Mary has not walked into the house and inspected or had an inspection done for insects or pests. These are things that a house can contain. She perhaps doesn't understand that and so, her understanding is incomplete and still in the process of being formed. Mary does not understand these implications with regard to the term "house".

Yes. And it follows that Mary can never truly understand what a house “IS” unless she herself is part of the house….. built into the foundations…. Cemented into the brickwork….. why didn’t I see that before? :biggrin:
moving finger said:
Does this “hormonal release” convey any semantic understanding to the human?
quantumcarl said:
Yes. The hormonal release demonstrates how red makes the human feel in the presence of red. This is true in every human although not widely known. Note: eg. red light districts. Note: eg. (the popular saying in advertising) "red sells".
“how red makes the human feel” – this is semantic understanding to you?
Or is it perhaps an emotional response to a stimulus?

With respect - I think you and I are on different planets.

Bye!

MF
 
Last edited:
  • #168
moving finger said:
“Y does not agree with X” is not synonymous with “Y thinks X is wrong”.
Tisthammerw said:
But normally that is what is implied when it comes to philosophical discussions e.g. “I do not agree with ethical relativism.”
If X does not agree with Y’s opinion it does not follow that X thinks Y is wrong. They may simply have different opinions. X can have different opinions to Y and still respect Y’s right to hold his/her opinion. Simple as that.
I can see that perhaps some may not respect the right of others to hold different opinions, but I'm not one of them.

I here shall trim most of the parts on “analytic vs synthetic” because I consider we have done that to death, we are simply repeating things over and over again.
moving finger said:
As I pointed out several times already, two people may not be able to agree on whether a given statement is analytic or not if those two people are not defining the terms used in the statement in the same way. Do you agree with this?
Tisthammerw said:
Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?
If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried. I cannot tell you since I do not know what Tisthammerw’s “different definition of bachelor” actually is.

Whether a statement is “analytic or not” depends on the definitions of the words used in the statement.

moving finger said:
How do I know whether the computer you have in mind is acquiring, interpreting, selecting, and organising (sensory) information?
Tisthammerw said:
Because I have described it to you.
moving finger said:
In fact you did NOT specify that the computer you have in mind is interpreting the data/information.
Tisthammerw said:
In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.
With respect, what part of “you did NOT specify that the computer you have in mind is interpreting the data/information” do you not understand?
If I do not know whether the computer YOU have in mind is interpreting the data/information then I have no idea whether it is perceiving or not. So tell me – is the computer you have in mind interpreting the data/information?

I also trim the parts on “my definition is better than yours” since I consider this rather puerile.

Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?
There is more than one meaning of “to perceive”. There is the “introspective-perception” meaning, which does NOT require any sense receptors. There is the “sense-perception” meaning, which does require sense receptors.
For an entity to “sense-perceive” a bright light it must possess suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?

moving finger said:
What makes you think that a person is anything more than a “complex set of instructions”?
Tisthammerw said:
Because people possess understanding (using my definition of the term, what you have called TH-understanding), and I have shown repeatedly that a complex set of instructions is insufficient for TH-understanding to exist.
And I have shown repeatedly that you have “shown” no such thing – your argument is not necessarily sound because the premise "Bob's consciousness is the only consciousness in the system" is not necessarily true - (see post #256 in the “can artificial intelligence……” thread)

moving finger said:
Can you show that no possible computer can possess consciousness?
Tisthammerw said:
Given the computer model in question, I think I can with my program X argument (since program X stands for any computer program that would allegedly produce TH-understanding, and yet no TH-understanding is produced when program X is run).
see above

Tisthammerw said:
If what you say is true, it seems that computers are not capable of new understanding at all (since the person in the Chinese room models the learning algorithms of a computer), only “memorizing.”
moving finger said:
This does not follow at all. How do you arrive at this conclusion?
Tisthammerw said:
As I said earlier, because the man in the Chinese room models a computer program. I can also refer to my program X argument, in which case there is no “new understanding” in this case either.
see above

moving finger said:
It is not clear from Searle’s description of the thought experiment whether or not he “allows” the CR to have any ability of acquiring new understanding (this would require a dynamic database and dynamic program, and it is not clear that Searle has in fact allowed this in his model).
Tisthammerw said:
Again, the variant I put forth does use a dynamic database and a dynamic program. We can do the same thing for program X.
see above

Suggestion : If you wish to continue discussing the Program X argument can we please do that in just one thread (let’s say the AI thread and not this one)? That way we do not have to keep repeating ourselves and cross-referencing.

MF
 
Last edited:
  • #169
moving finger said:
Agreed. The role of the senses as far as human semantic understanding is concerned is to convey information and knowledge about the world – the senses are merely conduits for information and knowledge transfer to the brain. If you like, they are the means by which we “program” our brains. But once the brain is programmed and we “understand” by virtue of the information and knowledge that we possess, then we do not need the senses in order to “continue understanding”.
This last part again is a straw man. No one here has argued that continued sensory information is necessary for understanding. I myself have said this a number of times, so continually responding with it gets us nowhere.
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary as opposed to the "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
When I say "experience" I am referring to the acquisition and correlation of information in one fashion or another. I agree that continuous acquisition (a steady feed) of information is not necessary.

MF said:
That is a very good question – and it gets right to the heart of the matter, hence I will answer it fully so that we all might understand what is going on.

The confusion between “experiential knowledge” and “semantic understanding” arises because there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”

One meaning (based on subjective experiential knowledge of red) would be better expressed “what does the colour red look like?”. Let us call this question A.

The other meaning (the objective semantic meaning of red) would be better expressed as “what is the semantic meaning of the term red?”. Let us call this question B.

Now, TheStatutoryApe, which question have you asked above? Is it A or B? I will answer both.

A - “what does the colour red look like?”
What the colour red looks like is a purely subjective experiential brain state. I have no idea what the colour red looks like for TheStatutoryApe, I only know what it looks like for MF. I cannot describe in objective terms what this colour looks like. Can you? Can anyone? The best I can do is to point to a red object and to say “there, if you look at that object then you will see what the colour red looks like”, but that STILL does not mean that the colour red “looks the same” for TheStatutoryApe as it does for MF. And seeing the colour red is NOT necessary in order to convey any semantic understanding of the term “red”.

B - “what is the semantic meaning of the term red?”
The semantic meaning of the term red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. I do not need to be able to see red in order to understand from this definition that “this is what red is”.

Thus, it all depends on what your question is.

If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

MF
Unfortunately this does not answer my question but I'll respond to it before I go back to what my original question was.
For one I would say that, unless there is some sort of difference between our "software" and "hardware", you and I do in fact see the same or very nearly the same thing when we look at red. Considering that the software and hardware are nearly identical there is no reason to believe otherwise, and after we have correlated our experiences side by side with samples of colours I'd say that we will find we can deduce that we see the same thing.
Next, why is the sensory experience of "red" insufficient information for understanding? As far as I see it this is one of multiple viable manners by which to acquire information for the purpose of understanding.
Would you say that your average kindergartener has no understanding of what the word "red" means because they have never had the scientific definition of "red" explained to them? If so then we'd probably have to say that the majority of the people in the world have no idea what the word "red" (or its equivalent in their own language) means. We'd further probably have to conclude that the persons who came up with the word themselves had no idea what the word meant. I wonder what the word meant to the people who came up with it?
At any rate, I'd personally say that the two manners of acquiring the information about red you have defined above would probably best be described as "direct" and "indirect" understanding. Or rather understanding by virtue of direct experience or understanding by virtue of parallel experience. Let me get into the "parallel experience" a bit more.
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this. With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
 
  • #170
TheStatutoryApe said:
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary as opposed to the "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
In the sense that “acquisition of data” must be followed by “interpretation of data” before the agent can make sensible use of that data, then yes I agree that the precise form of the acquisition of the data may “colour” the interpretation of the data. But it does not follow from this that a particular sense-experience is necessary for semantic understanding – only that all understanding may be coloured by the manner in which data is acquired and interpreted. A Chinese person’s understanding of the meaning of the word “house” (or its Chinese equivalent) may not be precisely the same as an English person’s understanding of the meaning of the same word – but this does not mean that “one of them understands and the other does not” – it means simply that they attach slightly different meanings to the same word.

moving finger said:
If you are asking “what does the colour red look like?”, then it is not possible for anyone to objectively describe this, and it is impossible to “teach” this to an agent who cannot “see” red. But “what the colour red looks like” has nothing to do with semantic understanding of the term “red”, which is in fact the second question. An agent can semantically understand the term “red” (question B) without being able to see “red” (question A).
TheStatutoryApe said:
Unfortunately this does not answer my question
With respect, I have indeed answered your question(s), but perhaps it was not the answer you wanted.

TheStatutoryApe said:
but I'll respond to it before I go back to what my original question was.
For one I would say that unless there is some sort of difference between our "software" and "hardware" that you and I do in fact see the same or very nearly the same thing when we look at red.
In the case of human beings, I agree our experiential data may be similar (but not necessarily identical). But what happens if you confront an intelligent alien (with visual ability)? The alien can look at a red object, and it will have an experiential quality associated with seeing that red object, but it may be completely different to the experiential quality that you have when you look at the same object. Yet the alien can learn English, and it can then refer to the colour it sees as “red”. The key point is that the meaning of the term red is the same for both you and the alien, even though the experiential data may be vastly different – because the definition of the term red (the meaning of the word) is not determined by any particular experiential quality.

TheStatutoryApe said:
Considering that the software and hardware are nearly identical there is no reason to believe otherwise, and after we have correlated our experiences side by side with samples of colours I'd say that we will find we can deduce that we see the same thing.
I will agree that you and I would probably see similar things – but not that we necessarily see exactly the same.
However in the above example of the alien intelligence, you cannot deduce that you and the alien see even similar things. Yet the alien can still have a semantic understanding of the word red which may be identical to your understanding – because semantics is determined by the definitions of and relationships between words, not necessarily by any associated sense-experience

TheStatutoryApe said:
why is the sensory experience of "red" insufficient information for understanding? As far as I see it this is one of multiple viable manners by which to acquire information for the purpose of understanding.
Would you say that your average kindergartener has no understanding of what the word "red" means because they have never had the scientific definition of "red" explained to them?
A young child may have a semantic understanding of red, but it would not necessarily be the same as my own semantic understanding of red. Presumably quantumcarl (based on his recent post regarding "understanding a house") would say that a young child does not understand anything.

As to the question "is experiential knowledge alone sufficient for semantic understanding?" - Would you say that an agent can claim to understand the meaning of any word before the agent has developed a reasonably basic vocabulary? A newly-born baby might start visually-sensing objects which have the colour “red” at a very early age, but that sensory experience alone is not sufficient for it to claim that it understands the meaning of the word red. The infant needs to develop a basic vocabulary before it can do that. It follows that sensory experience of "red" is insufficient for understanding the meaning of the term “red”.

TheStatutoryApe said:
If so then we'd probably have to say that the majority of the people in the world have no idea what the word "red" (or its equivalent in their own language) means.
To take the position that “everyone must mean exactly what I mean when I refer to the word red” would be unreasonable arrogance akin to the anthropocentric arrogance displayed by those who insist on defining terms with a human bias. Of course it goes without saying that the meaning of a word may be different to different agents, and the meaning of a word to an agent may change over time as the agent acquires more knowledge and understanding of the world. None of this makes one agent “right” and the other “wrong”.

Mary may start to grasp the understanding of a word like “consciousness” at a very early age – let's say as a young teenager. If Mary grows to become a neurophysiologist, her understanding of the word will likely change dramatically as she learns more about the subject. This does not mean that the teenager Mary “did not understand the meaning of the word consciousness”, just that the word consciousness had a different meaning to the teenager Mary compared to what it does to the adult Mary. But at the same time I think it is reasonable to assume that the adult Mary’s understanding of the meaning of the word is much more highly developed, more complex, and more subtle, and shows a much greater “depth” of understanding, than the teenager Mary’s understanding. Wouldn’t you agree?

TheStatutoryApe said:
We'd further probably have to conclude that the persons who came up with the word themselves had no idea what the word meant. I wonder what the word meant to the people who came up with it?
Why would we conclude this? The meanings of words change over time. As we have seen above, the meanings of words can even change over an agent’s lifetime. A caveman might attach a meaning to the word “red”, but he certainly knows nothing of “wavelengths of electromagnetic radiation”. He would be totally confused by such words. From his position of a very limited and shallow understanding of the world about him, he might even deny that there IS any other meaning to the word “red” apart from his experiential knowledge of red (shock, horror!) - simply because his semantics is more basic, more crude and less developed than the semantics of modern man. From our vantage point we can see that the caveman’s semantics are not the only possible semantics, we can also see it is not the case that one meaning is “wrong” and the other is “right”, there can be enormous differences in the complexity, subtlety and depth of understanding of meaning between two agents. The important thing is that (unlike the caveman) we know that this depth of understanding is possible without experiential knowledge.

TheStatutoryApe said:
I'd personally say that the two manners of acquiring the information about red you have defined above would probably best be described as "direct" and "indirect" understanding. Or rather understanding by virtue of direct experience or understanding by virtue of parallel experience.
That is your opinion and of course you are entitled to it. I would personally say that experiential knowledge is not synonymous with, and is not necessary for, semantic understanding.

TheStatutoryApe said:
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this.
With respect, remember my answer? I pointed out that your question is ambiguous. I pointed out the two different possible meanings, and I asked you which one you actually meant. You never actually clarified what you meant. In fact I did provide answers to both possible meanings of your question. Perhaps my answers were not the answers you wanted, but that is beside the point. Please go check again.

TheStatutoryApe said:
With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
Again, your question is ambiguous and I ask again the same question that you still have not answered. Your question is (if I understand correctly) “is his experience sufficient for conveying the concept of red?” – yes?

This question MIGHT mean :

A : “is his experience sufficient for conveying his experiential quality of seeing red?”

Or it MIGHT mean :

B : “is his experience sufficient for conveying his semantic understanding of red?”

These are two very different questions. Only the latter question (B) explicitly refers to semantic understanding.

As before, I must ask : Which question are you asking?

Let me know which one, and I will then answer it.

Let me repeat once again what I said in my previous reply, in case you missed it - this is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Now I have a question for you.

Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
TheStatutoryApe (presumably) would argue that experiential knowledge is necessary for semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can TheStatutoryApe provide an example of any sentence in the English language which includes the term “red” which Mary must necessarily and demonstrably fail to semantically understand by virtue of her lack of experiential knowledge of red?

Thanks

MF
 
Last edited:
  • #171
moving finger said:
Ahhhh, I see now! Perhaps Mary actually needs to “be” the house to really understand it? Mary cannot really understand what a house is unless she is part of the house. Yes, I see what you mean…… :smile:
Mary must experience every aspect of a house in order to understand the implications of the word house. If all Mary possessed in order to understand a house was binary language... all Mary would understand would be the binary interpretation of a house.
moving finger said:
Yes. And it follows that Mary can never truly understand what a house “IS” unless she herself is part of the house….. built into the foundations…. Cemented into the brickwork….. why didn’t I see that before? :biggrin:
That's your understanding of what I have written?
moving finger said:
“how red makes the human feel” – this is semantic understanding to you?
Or is it perhaps an emotional response to a stimulus?
Yes. Evaluating emotional response to a phenomenon is a part of understanding the effects of a phenomenon, and thus the phenomenon itself.
moving finger said:
With respect - I think you and I are on different planets.
This really sums things up (not). Let's hope it's true.
StatuApe said:
The contention is that acquisition of information is necessary for understanding. You have said that "possession" of information is what is necessary as opposed to the "acquisition". The fact is that you cannot possess information unless you acquire it in some fashion, and the manner in which you acquire that information will influence your "understanding" of it.
When I say "experience" I am referring to the acquisition and correlation of information in one fashion or another. I agree that continuous acquisition (a steady feed) of information is not necessary.
Hi. Once information is stored in a human brain, the stimulus of retrieving it is a connection and an experience that is a building block of understanding.
Drawing on an earlier understanding is an experience that leads to further understanding... or can... but not necessarily in every case... in fact it is a rare occurrence by my observations.

Let's say Larry got hit by lightning. Now, no one understands the phrase "hit by lightning" as well or as correctly as Larry and the 1400 other people who have been hit by lightning and lived.

Let's say Tim, who has never seen anything and relies on his hearing to experience the world, has several different coloured lights shone on his skin. One is red. Now Tim has a tactile understanding of red... and can use that to build on his total understanding of the colour red.

Let's say Hal the computer can only interpret the world through its use of binary language. He can't experience anything because he is not as complex as an organic organism like a human.

The data Hal recovers from its environment is immediately translated into 0s and 1s and stored in a specific area of its storage capacity. When it either stimulates itself to build a database on what it has analyzed, or is asked to regurgitate what it has compiled with regard to its environment, it does so, unknowing of the purpose of the task or even how it feels to be following the orders of an independent operator or a program that was installed to provoke this process.

Understanding is a rare commodity. It doesn't come fast, cheap or easy. It requires an empathetic and gallant attempt by someone willing to put themselves in the actual circumstances they need to understand. That's how I understand it.
 
Last edited:
  • #172
What is red?

Suppose Houston makes radio contact with an alien intelligence on the far side of our galaxy. Over a long period of time the Houston scientists are teaching the aliens, via radio, the meanings of our words and language - in other words our semantics.

It has been established already in previous communications that the aliens have visual sense-perception organs similar in function to our "eyes", and they can sense-perceive electromagnetic radiation with wavelengths in the range 400nm to 700nm.

Here we listen in on one of the radio conversations between Houston and the Aliens…..

Alien: You have taught me that to understand your language, to grasp your semantics, I must understand the meanings of your words, which in turn means I must understand the defining properties and characteristics of your words as used within your language. Is this not so?
Houston: Yes, that’s exactly right
Alien: Then I have a question please, to help me understand
Houston: OK, let’s have it
Alien: What is “red”?
Houston: Ummmm, well, red is a colour
Alien: What is “colour”?
Houston: “Colour” is the set of things such as red, yellow, green
Alien: Thus, red is characterised by being a member of the set of colours, and a colour is the set of things one member of which is red. Is this supposed to help me truly understand what is meant either by “red” or by “colour”?
Houston: Ummmm, well no I guess not
Alien: When you were teaching me what is “horse”, you did not simply tell me “horse is an animal”. This tells me only that horse is a member of the set of animals, it does not tell me what is horse. Likewise, telling me that “red” is a member of the set of “colours” does not tell me what is red.
Houston: Yes, I guess you are right
Alien: For me to understand what is horse, I needed to know the defining properties and characteristics of horse, was it not so?
Houston: Yes, that’s true
Alien: Thus – for me to understand what is red, I need to know the defining properties and characteristics of red, is it not so?
Houston: Yes, that’s correct
Alien: Thus – what are the defining properties and characteristics of red?
Houston: (after a long pause) – ummmmm, redness?
Alien: Redness? If I ask what are the defining properties and characteristics of horse, and you reply “horsiness”, would you expect me to then understand from this reply what is horse?
Houston: Ummm, well no of course not
Alien: Then let us stop wasting time. Let me ask again - what are the defining properties and characteristics of red?
Houston: OK, we’ll try a bit more detail. You aliens have visual sense-receptors, right? Red is the experiential quality that you have when you sense-perceive a red object with those visual sense-receptors
Alien: Human, do I need to explain the circularity of your explanation? “Red is the experiential quality when one perceives a red object” – this is much like saying “horse is the entity which is characterised by being a horse object”. How can I understand from this what is horse if I do not first know what a horse object looks like? How can I understand what is red from your explanation, if I have no idea what is a red object?
Houston: (embarrassed silence)
Alien: With respect, human, for me to understand what is red, you need to explain the defining properties and characteristics of red without tautologically defining these characteristics in terms of redness.
Houston: (still more silence)
Alien: Let me help you. It is clear from what you have told me so far that “red” is an experiential quality associated with visual sense-receptors. As you know, we aliens have visual sense-receptors similar in function to your human “eyes”, and like you humans we aliens can sense-perceive electromagnetic radiation in the wavelength range 400nm to 700nm.
Houston: Yes, that’s correct
Alien: Then is it possible to define red in terms of something we both understand - sense-perceiving electromagnetic radiation of a particular wavelength?
Houston: (lots of applause and cheers) Yes – of course! That’s it!
Alien: Well?
Houston: OK, here goes…. Red is the experiential quality that you have when you sense-perceive electromagnetic radiation with wavelengths of the order of 650nm
Alien: Ahhhhh, NOW I see! We call that experiential quality “qrkzmnthlog” – thus “red” is equivalent to qrkzmnthlog. NOW I understand. Thank you!
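The non-circular definition Houston and the Alien finally settle on, tying a colour name to a wavelength band of electromagnetic radiation rather than to a private experiential quality, can be sketched in a few lines. The band boundaries below are conventional approximate values, not anything stated in the thread:

```python
# A hypothetical "objective" colour definition of the kind Houston and the
# Alien agree on: a colour name is defined by a band of electromagnetic
# wavelengths, so any agent with suitable sense-receptors (human or alien)
# can apply it. Band edges are approximate, conventional values.
BANDS = [
    (380, 450, "violet"),
    (450, 495, "blue"),
    (495, 570, "green"),
    (570, 590, "yellow"),
    (590, 620, "orange"),
    (620, 750, "red"),
]

def colour_name(wavelength_nm):
    """Return the conventional colour name for a visible wavelength in nm."""
    for low, high, name in BANDS:
        if low <= wavelength_nm < high:
            return name
    return "outside visible range"

print(colour_name(650))  # "red" - the ~650nm wavelength Houston cites
```

On this scheme the Alien's "qrkzmnthlog" and our "red" pick out the same band, which is the whole point of the dialogue: the shared, objective part of the definition is the wavelength, not the experience.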

MF
 
Last edited:
  • #173
moving finger said:
… (the Houston/Alien “What is red?” dialogue, quoted in full in the post above) …
MF
Shared knowledge about a colour or an animal doesn't constitute a shared understanding of the colour or the animal.
It is the shared experience of a horse or red that can allow for an understanding to take place between Houston and Alien.
If the Alien has its visual receptors in its armpits, its understanding of red will be a different understanding of red from a human's, whose eyes are in his head. In fact it may be that aliens perceive green to be what we call red. Please note "colour blindness". This isn't necessarily a condition of "blindness" but perhaps a different way of seeing and experiencing colour. In fact, if you look at the red, green and yellow lights on traffic controllers, you will notice that the red is shifted toward the yellow, the green toward the blue and the yellow toward the red, to accommodate the percentage of the population that has difficulty understanding a pure red, etc.
Similarly, if the alien has never been on a horse, brushed down the horse, shoveled the fresh dung of the horse or fed a horse, the alien does not possess a complete understanding of a horse.
The stimulus that is the shared knowledge or data of a phenomenon is insufficient for arriving at what I am terming an understanding of said phenomenon.
 
  • #174
quantumcarl said:
Shared knowledge about a colour or an animal doesn't constitute a shared understanding of the colour or the animal.
Neither does "shared sense perception".
Understanding is not an absolute property like "the speed of light". No two agents (even two genetically identical human twins) will have precisely the same understanding of the world. Nevertheless, two agents can still reach a "shared understanding" about a word or concept, even about the world in general, even if their individual understandings of some ideas and concepts are not identical. If you are demanding that agent A must have completely identical understanding to agent B in order for them to share understanding, then no two agents share understanding in your sense of the word.

And just because there may be some differences in understanding between agent A and agent B does not necessarily give agent A the right to claim that agent B "does not understand".

quantumcarl said:
It is the shared experience of a horse or red that can allow for an understanding to take place between Houston and Alien.
It is the shared information and knowledge that can allow for an understanding to take place between Houston and Alien

quantumcarl said:
If the Alien has its visual receptors in its armpits, its understanding of red will be a different understanding of red from a human's, whose eyes are in his head.
I disagree. If I could transplant your eyes from your head to your armpits, your semantic understanding of red could remain exactly the same - what red "is" to you does not necessarily change just because your eyes have changed location.

quantumcarl said:
In fact it may be that aliens perceive green to be what we call red.
In fact it may be that I perceive green to be what you call red (and red to be green - ie just a colour-swap). How would we ever find out? We could not. Would it make any difference at all to the understanding of red and green of either of us, or between us? It would not.

quantumcarl said:
Please note "colour blindness". This isn't necessarily a condition of "blindness" but perhaps a different way of seeing and experiencing colour.
Colour blindness is an inability to correctly distinguish between two or more different colours.

quantumcarl said:
In fact, if you look at the red, green and yellow lights on traffic controllers, you will notice that the red is shifted toward the yellow, the green toward the blue and the yellow toward the red, to accommodate the percentage of the population that has difficulty understanding a pure red, etc.
If red looked to me the same as green looks to you, and green looks to me the same as red looks to you, this would not change anything about either my understanding of traffic lights, or about your understanding of traffic lights, or about the understanding between us.

quantumcarl said:
Similarly, if the alien has never been on a horse, brushed down the horse, shoveled the fresh dung of the horse or fed a horse, the alien does not possess a complete understanding of a horse.
In that case neither do I.
What is "complete understanding"?
I could argue that no agent possesses "complete understanding" of Neddy the horse except possibly Neddy himself (but arguably not even Neddy "completely understands" Neddy). Thus no agent possesses complete understanding of anything. Is this a very useful concept? Of course not. There is no absolute in understanding; different agents understand differently. This does not necessarily give one agent the right to claim its own understanding is "right" and all others wrong.

quantumcarl said:
The stimulus that is the shared knowledge or data of a phenomenon is insufficient for arriving at what I am terming an understanding of said phenomenon.
You are entitled to your rather unusual definition of understanding - I do not share it. Information and knowledge are required for semantic understanding, experiential data are not necessary.

MF
 
Last edited:
  • #175
MF said:
As before, I must ask : Which question are you asking?

Let me know which one, and I will then answer it.
I have already agreed with you several times that one can arrive at an understanding without direct experiential knowledge. Why you continue to treat my statements as if I don't agree, I have no idea.
I think I have also been quite clear that by "experiential knowledge/information" I mean any sort of information that is acquired and correlated, without any restrictions on the apparatus used for this purpose, while you seem to continue to regard my statements as if I mean a person must have eyes like I have.
So from now on, please remember that I have agreed a person can understand the concept of red without ever having actually seen "red". And also please remember that when I state "experience" I am not referring to "seeing" with one's eyes, or to acquiring information in exactly the manner that I do, but only to the acquisition and correlation of information in whatever form it may take.

Now my question was: How would you teach a person with 'hearing' as their sole means of acquiring information what 'red' is, so that the person understands?
I would think the question of whether I mean the direct visual experience or not is rather moot by the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?

MF said:
Let me repeat once again what I said in my previous reply, in case you missed it - this is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Now I have a question for you.

Suppose Mary understands objectively everything there is to understand about “red”, but she has no subjective experiential knowledge of red.
Mary claims that despite her lack of experiential knowledge, she nevertheless has complete semantic understanding of red.
TheStatutoryApe (presumably) would argue that experiential knowledge is necessary for semantic understanding, hence there must be something which Mary “does not semantically understand” about red.
Can TheStatutoryApe provide an example of any sentence in the English language which includes the term “red” which Mary must necessarily and demonstrably fail to semantically understand by virtue of her lack of experiential knowledge of red?
I have never asserted that Mary must have experienced red to understand what it is, or that she lacks semantic understanding. I have asserted that her definition will be different from others', and I have never stated that this makes her definition any less viable or usable. The only other thing that I have asserted is that she does in fact require some sort of "experiential knowledge" to understand what red is (at this point please refer to what I have stated is my definition of "experience" or "experiential knowledge"). I believe that Mary is capable of understanding through indirect experience. That is to say that her experiential knowledge contains information that she can parallel with the information she is attempting to understand, and by virtue of this parallel she can come to understand this new information. I believe I more or less already stated this here...
With Mary, even though she lacks the ability to see colour, she still has several other forms of experiential knowledge to fall back on and use, by parallel, to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise, nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences, otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red, do you think?
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again, note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim English?", "How would we go about this?", "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
 
  • #176
moving finger said:
… (moving finger's replies, quoted in full from the post above) …
MF
Yes I got your drift upon reading one of your first posts.

My "unusual (on your planet) definition" of "understanding" stems from the original meaning of the word which is described in a number of dictionaries. The origin is middle english and it describes standing under something.

Standing under something is a circumstance one attains by going to meet a thing and experiencing it, in order to further oneself toward an understanding of it.

When you speak of "understanding' a world or "understanding" a traffic light without ever having seen one or having experienced the effects of its emfs etc... what you mean... by my standard of english and use of certain terminologies... is knowledge as in Having knowledge of a world or a traffic light. This is not what I would term as an understanding of a world or of a traffic light.


You can share knowledge about a world, or you can share knowledge about a traffic light... but, by your own admission, you cannot share understanding without both parties having experienced being in the circumstances created by the subject.


So, when we program a computer are we getting it to brush the horse and shovel the horse hockeys? No.

We are sharing the knowledge we have of a horse with the computer through the use of binary language.


By this process, and by many scholars' definitions of "understanding", does the computer understand what a horse is? Or does the computer only hold a repository of data that defines, for its records, a horse? I choose the latter.


(Don't forget to buy our Flammable Safety Cabinets, they burn like hell!) another example of poor English... what what?
 
Last edited:
  • #177
quantumcarl said:
My "unusual (on your planet) definition" of "understanding" stems from the original meaning of the word which is described in a number of dictionaries. The origin is middle english and it describes standing under something.
And this definition implies to you that to have semantic understanding of the term "horse" an agent must necessarily have groomed a horse, and shovelled the horse's dung? With respect, that is plain silly. But if this is what you choose to believe then that's OK with me.

quantumcarl said:
Standing under something is a circumstance one attains by going to meet a thing and experiencing it, in order to further oneself toward an understanding of it.
"Standing under something" in this context does not mean literally "physically placing your body underneath that thing" - or perhaps you do not understand what a metaphor is?

quantumcarl said:
When you speak of "understanding" a world, or "understanding" a traffic light without ever having seen one or having experienced the effects of its emfs etc... what you mean... by my standard of English and use of certain terminologies... is knowledge, as in having knowledge of a world or a traffic light. This is not what I would term an understanding of a world or of a traffic light.
I am talking here about semantic understanding - which is the basis of Searle's argument. Semantic understanding means understanding the meanings of words as used in a language - it does not mean (as you seem to think) making some kind of "intimate physical or spiritual connection" with the objects that those words represent. An agent can semantically understand what is meant by the term "horse" without ever having seen a horse, let alone mucked out the horses dung.

quantumcarl said:
You can share knowledge about a world, or you can share knowledge about a traffic light... but, by your own admission, you cannot share understanding without both parties having experienced being in the circumstances created by the subject.
No, I have admitted no such thing. You are mistaken here.

quantumcarl said:
So, when we program a computer are we getting it to brush the horse and shovel the horse hockeys? No.
And by your definition, I and at least 95% of the human race do not semantically understand what is meant by the term "horse". Ridiculous.

quantumcarl said:
We are sharing the knowledge we have of a horse with the computer through the use of binary language.
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?

quantumcarl said:
By this process, and by many scholars' definitions of "understanding", does the computer understand what a horse is? Or does the computer only hold a repository of data that defines, for its records, a horse? I choose the latter.
You choose to define understanding such that only an agent who has shared an intimate physical connection with a horse (maybe one needs to have spent the night sleeping with the horse as well?) can semantically understand the term "horse".

I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything. There are many different levels of understanding, no two agents ever have the same understanding on everything, none of these levels of understanding can ever be said to be "complete", and no agent can simply assert that "my understanding is right and yours is wrong"

As I have said many times already, you are entitled to your rather eccentric definition of understanding, but I don't share it.

Bye

MF
 
Last edited:
  • #178
TheStatutoryApe said:
I have already agreed with you several times that one can arrive at an understanding without direct experiential knowledge. Why you continue to treat my statements as if I don't agree, I have no idea.
Perhaps you will understand later in this post why your position on “experience is not necessary for understanding” seems rather confusing and contradictory to me…..

TheStatutoryApe said:
Now my question was: How would you teach a person with 'hearing' as their sole means of acquiring information what 'red' is, so that the person understands?
Actually this was NOT your original question. Never mind.

TheStatutoryApe said:
I would think the question of whether I mean the direct visual experience or not is rather moot by the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?
It is always good to seek confirmation of meaning where any ambiguity of meaning is even remotely possible – this is all part of better understanding. Wouldn’t you agree?

Thus – we now agree that the meaning of your original question was NOT “is his experience sufficient for conveying his experiential quality of seeing red?”, but in fact it was “is his experience sufficient for conveying his semantic understanding of red?”

And my answer to this is very clearly “YES”.

Tim’s semantic understanding of “red” is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. It makes no difference whether or not Tim actually has the physical ability to “perceive electromagnetic radiation with wavelengths of the order of 650nm”, his blindness does not change the semantic understanding of red that he has. Even though blind himself, Tim can introspectively perceive the possibility of an agent which does have such sense receptors, and which can then “perceive electromagnetic radiation with wavelengths of the order of 650nm”. Similarly, Tim does not need to have seen, heard, touched, smelled, or mucked out a horse to semantically understand the term “horse”.

Thus, when you ask “is his experience sufficient for conveying his semantic understanding of red?” the answer is Yes – because Tim can simply state that “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” IS his understanding of the term red. By the way, it’s also my understanding of the term red. And it is the only understanding of the term red which allows any mutual understanding of the concept red to take place between Houston and the aliens in my earlier example.

TheStatutoryApe said:
I have never asserted that Mary must have experienced red to understand what it is, or that she lacks semantic understanding. I have asserted that her definition will be different from others', and I have never stated that this makes her definition any less viable or usable.
If one probes closely enough, I think one will find that most agents do not agree on all aspects of semantic understanding – there are differences in semantic understanding between most humans (witness my discussion with quantumcarl on the semantic understanding of the term “semantic understanding” in this very thread). And it seems we agree that this does not necessarily give one human agent the right to claim that the other human agent “does not understand”. It simply means that all agents, human or otherwise, might understand some terms or concepts in different ways.

In my example of Houston and the Aliens, could one say that they had the same understanding of the term red? On one (objective) level they did have the same understanding – because both Houston and the Aliens finally agreed that red is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. But on another (subjective) level they did not, because the data corresponding to the sense-experience of red to a human agent means nothing to an alien agent (in fact the precise data corresponding to the sense-experience of red to TheStatutoryApe does not necessarily mean anything to MF). The ONLY common factor between all agents is the objective definition of red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. It seems to me that such an objective semantic understanding, which can then be understood by all agents, is a much greater and deeper level of understanding than one based solely on a subjective experientially-based semantic understanding.

TheStatutoryApe said:
The only other thing that I have asserted is that she does in fact require some sort of "experiential knowledge" to understand what red is (at this point please refer to what I have stated is my definition of "experience" or "experiential knowledge").
I’m sorry, but there is that ambiguous phrase again – “what red is”. Please explain exactly what you mean by this phrase, because (as I have shown several times now) this phrase has at least two very different possible meanings. It might mean “what red looks like”, or it might mean “the semantic meaning of the term red”.
Remember - you have already agreed quite pointedly that experiential knowledge is not necessary for the latter. Now - Which meaning of “what red is” do you actually mean here?

TheStatutoryApe said:
I believe that Mary is capable of understanding through indirect experience. That is to say that her experiential knowledge contains information that she can parallel with the information she is attempting to understand, and by virtue of this parallel she can come to understand this new information. I believe I more or less already stated this here...
You have in fact stated something rather stronger than this - that experience is not necessary for understanding. I agree with this.

TheStatutoryApe said:
With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is?
That ambiguous phrase again!
Please clarify.
If you mean “how is he to have any semantic understanding of the term red” then I have already shown how – and you have agreed that he needs no experience to understand.
If you mean “how is he to know what red looks like” then I think we both agree this is a meaningless question in Tim’s case – but this is not important because we already agree that an agent does not “need to know what X looks like” to have semantic understanding of X.

TheStatutoryApe said:
It would somehow have to be based on that which he experiences otherwise he will not be capable of comprehending.
This statement contradicts your earlier very strong assertion that experience is not necessary for semantic understanding. Are you now saying that experience IS necessary for semantic understanding?

TheStatutoryApe said:
There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red do you think?
By “the concept of red” do you mean some universal, objective concept of red? The only possible universal concept of red that means the same to you as to me, and the same to Houston and the Aliens, is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”.

TheStatutoryApe said:
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim English?"
Of course we can. Blind people can learn English, or are you suggesting otherwise? The additional lack of the perceptions of touch-feeling, smell and taste I do not see as any greater impediments to understanding than the lack of sight.

TheStatutoryApe said:
"How would we go about this?" "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
His precise understanding may differ in some ways from yours and mine. But my understanding differs from yours anyway. There is no “absolute” in understanding. ALL agents understand differently to a greater or lesser extent. We have already agreed this does not mean that one agent understands and the other does not. And arguably a semantic understanding of a concept like “red” in objective terms, which can be translated between all agents, is a much greater and deeper understanding of red than a simple subjective experiential understanding.

MF
 
  • #179
I'm not sure where to start and what to quote so I will try to just recreate your points that we are having issues with...

Is experience required for understanding?
I never stated that experience is not required for understanding. I stated that direct experience is not required for understanding, that indirect experience could suffice.
Remember again my definition of experience: acquisition and correlation of information, in whatever manner this may take shape. If a person cannot acquire and correlate information about something then they cannot understand it. Again, you say that possession and correlation of the information is enough, but all you are doing is skipping the step of acquiring the information in the first place. In my mind it needs to be considered because I believe that the actual act of gathering the information, aside from being necessary to possessing it, is important to the process of understanding in and of itself. I'll expand more on this later if you would like me to.
So once again I have not stated that experience is not necessary but that direct experience is not necessary.

MF said:
That ambiguous phrase again!
Please clarify.
If you mean “how is he to have any semantic understanding of the term red” then I have already shown how – and you have agreed that he needs no experience to understand.
If you mean “how is he to know what red looks like” then I think we both agree this is a meaningless question in Tim’s case – but this is not important because we already agree that an agent does not “need to know what X looks like” to have semantic understanding of X.
Yes I have already agreed that "seeing" isn't necessary and stated I do not mean "seeing" since this would be moot in Tim's case because we already know he can not "see" red.
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is, and asserted a definition that he would be theoretically capable of understanding. You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW, as in the manner, method, or means by which.

MF said:
Of course we can. Blind people can learn English, or are you suggesting otherwise? The additional lack of the perceptions of touch-feeling, smell and taste I do not see as any greater impediments to understanding than the lack of sight.
It's so obvious, is it? Then explain how.
I do not suggest that blind people are unable to learn English. Yet another strawman. It would help us greatly in our discussion if you would drop this tactic.
How it is that you do not see any stronger impediment to understanding for Tim than for any other blind person is really quite beyond me.
Your average human possesses five avenues by which to gather information. A blind person has only four, which will naturally hinder the blind person's ability to gather information by which to understand, relative to the person with a total of five. Helen Keller, both blind and deaf, had only three and had great difficulty in learning to understand things. Tim has only one avenue by which to gather information. Considering this extreme lack of ability to gather information in comparison to your average human, how do you justify your idea that it should pose no more of an impediment to be in Tim's shoes than it does to be in those of a blind person?
Can you accurately define the location of an object in three dimensional space with only one coordinate? Do you see the parallel between this and the difficulty that an agent with only one avenue for information gathering may have in understanding the world around it let alone the words that are supposed to describe it?
 
  • #180
TheStatutoryApe said:
I'm not sure where to start and what to quote so I will try to just recreate your points that we are having issues with...

Is experience required for understanding?
I never stated that experience is not required for understanding. I stated that direct experience is not required for understanding, that indirect experience could suffice.

Remember again my definition of experience: acquisition and correlation of information, in whatever manner this may take shape. If a person cannot acquire and correlate information about something then they cannot understand it.
Given your definition I agree with this conclusion, though I do find your definition rather strange. Because of this strange definition we must be very careful to distinguish between “purely informational experience” on the one hand (which does not involve any “sensory experiential quality or data”), and “sensory experience” on the other hand (which is directly associated with “sensory experiential quality or data”). Sensory experience may be classed as a subset of informational experience, but purely informational experience involves no sensory experience. Would you agree?

TheStatutoryApe said:
Again, you say that possession and correlation of the information is enough, but all you are doing is skipping the step of acquiring the information in the first place. In my mind it needs to be considered because I believe that the actual act of gathering the information, aside from being necessary to possessing it, is important to the process of understanding in and of itself. I'll expand more on this later if you would like me to.
I understand what you are saying. And I think you will find that I have already, in a previous post, agreed that the precise form in which the data and information are acquired may “colour” the interpretation of that information and hence may also “colour” any subsequent understanding that the agent may derive from that data and information. This is also at the root of my argument that all agents understand things differently to a greater or lesser extent, because our understanding is based on our experience (your definition of experience), and our experiences are all different. Thus it seems we agree that agents may understand some things (more or less) differently because their experiences are different, yes?

Getting back to the subject of this thread - the important point (I suggest) is “whether a machine can in principle semantically understand at all”, and not whether “all agents understand everything in exactly the same way”

TheStatutoryApe said:
So once again I have not stated that experience is not necessary but that direct experience is not necessary.
Understood. I apologise, because I always have in mind a slightly different definition of experience, which is “sensory experience”, rather than “informational experience”. It is now clear to me what you mean by “indirect experience”. I suggest to avoid future misunderstanding that we explicitly refer to indirect experience whenever we mean indirect experience (and not simply to experience, which may be misinterpreted).

TheStatutoryApe said:
Yes I have already agreed that "seeing" isn't necessary and stated I do not mean "seeing" since this would be moot in Tim's case because we already know he can not "see" red.
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is and asserted a definition that he would be theoretically capable of understanding.
I do not understand what you mean by “theoretically” here.
Do I only “theoretically” semantically understand what is “horse” if I have never seen, heard, smelled or touched a horse?
To my mind, Tim “semantically understands” the definition of red as given. Period. There is nothing "theoretical" about it. Semantic understanding is semantic understanding.

If you can explain your distinction between “theoretical semantic understanding” and “practical semantic understanding” then I may be able to grasp what you are trying to say here.

BTW – I have indeed answered the question that you asked.

Your original question was :
TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

My answer was :

moving finger said:
Tim can simply state that “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” IS his understanding of the term red.

Once again, you may not “like” this answer, and it may not answer the question that YOU think you asked, but it certainly answered the question as I understood it. I'm sorry, but I cannot help it if the meanings of your questions are ambiguous.

TheStatutoryApe said:
You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW as in the manner method or means by which.

You now seem to be asking “how is this definition of red imparted to Tim?”. With respect, this was not your original question (at least it was not my understanding of your question - again perhaps because the way you phrased the question was ambiguous).

Tim has the sense of hearing, yes? In Tim’s case he can learn his semantic understanding of English, including his semantic understanding of the term “red”, via his sense of hearing. Is this the answer you are looking for?

TheStatutoryApe said:
I do not suggest that blind people are unable to learn english. Yet another strawman. It would help us greatly in our discussion if you would drop this tactic.
With the greatest respect, TheStatutoryApe, it would help us greatly in our discussion if you would state your argument clearly at the outset, instead of making one ambiguous statement and question after another, which forces me to guess at your meaning and to ask for clarifications, after which you then frequently (it seems to me) change the sense of your questions.

When I ask a question in an attempt to clarify what I see as an ambiguity or an uncertainty in a post, that is just what it is - a question to seek clarification. You may call it a "strawman" if that makes you feel any better - but I'm afraid that as long as your statements and questions remain ambiguous then I must continue to ask questions to seek clarification on your meaning.

Please understand that I’m not guessing at the meaning of your questions because I want to – with the greatest respect, I am forced to guess at your meanings, and to offer up a question in the form of what you term a "strawman", because your questions are unclear, ambiguous, or keep changing each time you re-state them.

TheStatutoryApe said:
Your average human possesses five avenues by which to gather information. A blind person has only four, which will naturally hinder the blind person's ability to gather information by which to understand, relative to the person with a total of five. Helen Keller, both blind and deaf, had only three and had great difficulty in learning to understand things. Tim has only one avenue by which to gather information. Considering this extreme lack of ability to gather information in comparison to your average human, how do you justify your idea that it should pose no more of an impediment to be in Tim's shoes than it does to be in those of a blind person?
Here, with respect, is the error in your reasoning at this point : It is not the “number” of senses which are available to an agent which is important – it is the information content imparted via those senses. In most human agents the data and information required for semantic understanding of a language are imparted mainly via the two senses of hearing and sight – thus in humans these two senses are much more critical in learning semantics than the other 3 senses. An agent which possesses neither hearing nor sight must try to acquire almost all of the external data and information about a language via the sense of touch (the senses of taste and smell would not be very efficient conduits for most semantically useful information transfer). In other words, most of the agent's learning about language would be via braille. This would indeed be a massive problem for the agent - but it STILL would not necessarily mean that the agent would have NO semantic understanding, which is the point of the CR argument.
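This “information content, not number of channels” point has a well-known analogue in information theory: what matters is total channel capacity, not the channel count. A toy sketch using the Shannon–Hartley formula (the bandwidth and signal-to-noise figures below are invented for illustration):

```python
import math

def capacity(bandwidth_hz: float, snr: float) -> float:
    """Shannon-Hartley capacity of a noisy channel, in bits per second."""
    return bandwidth_hz * math.log2(1 + snr)

# Five narrow channels with the same total bandwidth and the same
# signal-to-noise ratio as one wide channel carry the same information.
narrow = sum(capacity(1000, 15) for _ in range(5))
wide = capacity(5000, 15)
print(abs(narrow - wide) < 1e-9)  # True
```

On this view, losing four of five senses is a loss of capacity, not a loss of any capability in principle.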

TheStatutoryApe said:
Can you accurately define the location of an object in three dimensional space with only one coordinate? Do you see the parallel between this and the difficulty that an agent with only one avenue for information gathering may have in understanding the world around it let alone the words that are supposed to describe it?
No, I don’t see the parallel. Why does having only the sense of hearing necessarily limit an agent to “one coordinate space”? Or is this a bad metaphor?

I am not suggesting that Tim will not have problems learning. Of course he will. He needs to learn everything about the outside world via his sense of hearing alone. Of course this will be a problem. But your analogy with Helen Keller is poor – as I have stated already, arguably the two most important senses for human language learning are sight and hearing, both of which Helen lacked, and one of which Tim has. Thus I could argue that Tim’s problem will in fact be less severe than Helen’s.

But I have no interest in going off at a tangent just to satisfy your curiosity about Tim’s unusual predicament in your pet thought experiment. What relevance does any of this have to the subject of this thread, and what is the point you are trying to make? With respect I am tired of continually guessing your meanings.

The CR argument is not based on the problems that a severely disabled human agent will have in “learning about the world”. The CR argument is based on the premise that a machine “cannot in principle semantically understand a language”. If you can show the relevance of Tim’s predicament and your thought experiment to the subject of this thread then I’ll be happy to continue this line of reasoning.

MF
 
  • #181
MF said:
Thus it seems we agree that agents may understand some things (more or less) differently because their experiences are different, yes?
Yes. Though I would add that the understanding between agents (e.g. communication by means of language) is contingent upon significant similarity in "informational experience". A gap is expected, but a significant gap will result in the breakdown of communicative understanding.

MF said:
I do not understand what you mean by “theoretically” here.
Do I only “theoretically” semantically understand what is “horse” if I have never seen, heard, smelled or touched a horse?
To my mind, Tim “semantically understands” the definition of red as given. Period. There is nothing "theoretical" about it. Semantic understanding is semantic understanding.

If you can explain your distinction between “theoretical semantic understanding” and “practical semantic understanding” then I may be able to grasp what you are trying to say here.
I say "theoretical" because it is your theory that based off of only one avenue of information gathering Tim will be able to gain a semantic understanding of what the word "red" means. However you have yet to explain how Tim will accomplish this. I have set up a scenario where Tim does not understand what "red" means and asked you how you would teach him what red means. You have replied by saying that he can understand and "X" is the definition that he will be capable of understanding. You have continually failed to adress the "how" portion of my question. How will you teach him. How will he come to understand. How would you theorize the process occurring in his mind would unfold.

MF said:
You now seem to be asking “how is this definition of red imparted to Tim?”. With respect, this was not your original question (at least it was not my understanding of your question - again perhaps because the way you phrased the question was ambiguous).
With respect, every version of the question I have asked has included the word "How"...
post 160 said:
But consider this. Imagine a person has been born with only one of five senses working. We'll say the person's hearing is the only sense available to it. None others whatsoever. How would you go about teaching this person what the colour "red" is?
post 169 said:
Remember my question? How would you go about teaching a person who possesses only hearing and no other sense whatsoever? You never actually answered this. With Mary, even though she lacks the ability to see colour, she still had several other forms of experiential knowledge to fall back on and use by parallel to understand in some fashion what the colour "red" is. With our new student, let's call him Tim, we are severely limited. Until he somehow discovers otherwise nothing exists outside of the realm of sound. Sound is the only sort of information he has by which to understand anything. Now how is he to understand what red is? It would somehow have to be based on that which he experiences otherwise he will not be capable of comprehending. There needs to be a parallel drawn. Is his experience sufficient for conveying the concept of red do you think?
post 175 said:
Now my question was: How would you teach a person with 'hearing' as their sole means for acquiring information what 'red' is so that the person understands?
I would think the question of whether I mean the direct visual experience or not is rather moot by the sheer fact that Tim's eyes do not work and never have. Tim must then learn what "red" is via some indirect method. You have already communicated the definition you would give in this instance which I already knew. You have yet to give me the "How". HOW would you communicate your definition in such a manner that such a person understands what you mean?
post 175 said:
In that last line I am specifically referring to Tim. The reason for my Tim scenario is for us to discuss what I see as the importance of experience to understanding (again note my earlier definition of experience). Ultimately the questions in my mind are "Can we even teach Tim English?" "How would we go about this?" "Does his experience afford sufficient information for understanding?" and "Will his understanding be limited by the nature of the information his experience can afford him?".
post 179 said:
The problem here is that you still have not answered the question. You have asserted that theoretically he can understand what "red" is, and asserted a definition that he would be theoretically capable of understanding. You have yet to explain how this definition would be imparted to him in such a manner that he would understand. HOW, as in the manner, method, or means by which.
I have very plainly asked how from the very beginning. I have reworded my questions and tossed in a couple of extra questions along the way because I am trying to help you understand what I am asking of you, and you obviously aren't getting it. Only now do you seem to begin to understand, with this...
MF said:
Tim has the sense of hearing, yes? In Tim’s case he can learn his semantic understanding of English, including his semantic understanding of the term “red”, via his sense of hearing. Is this the answer you are looking for?
But you still don't seem to understand what I mean by "how". I did not mean to ask "with what?". His ears/hearing/auditory sense is obviously what he will be using to gather information by which to understand; this is very plain from the setup of the scenario.
Hopefully in the previous part of this particular post I have cleared up what I mean by "how" and perhaps you will take a stab at answering the question.

MF said:
With the greatest respect, TheStatutoryApe, it would help us greatly in our discussion if you would state your argument clearly at the outset, instead of making one ambiguous statement and question after another, which forces me to guess at your meaning and to ask for clarifications, after which you then frequently (it seems to me) change the sense of your questions.

When I ask a question in an attempt to clarify what I see as an ambiguity or an uncertainty in a post, that is just what it is - a question to seek clarification. You may call it a "strawman" if that makes you feel any better - but I'm afraid that as long as your statements and questions remain ambiguous then I must continue to ask questions to seek clarification on your meaning.

Please understand that I’m not guessing at the meaning of your questions because I want to – with the greatest respect, I am forced to guess at your meanings, and to offer up a question in the form of what you term a "strawman", because your questions are unclear, ambiguous, or keep changing each time you re-state them.
With respect, I must again point out that every single one of my posts requested that you answer "How". It's been the one thing that has not changed at all whatsoever. The other words may have changed in order to adapt to the manner in which you were misinterpreting what I mean by "How", and I may have asked other questions in conjunction with the one main question in order to flesh out my meaning, but the word "How" has been quite consistent throughout, and you seem to have glossed over it every time.
Please, in the future, if you do not understand a question simply ask me to clarify it. Do not make assumptions, because we all know what happens when we "assume", right?

MF said:
Here, with respect, is the error in your reasoning at this point : It is not the “number” of senses which are available to an agent which is important – it is the information content imparted via those senses. In most human agents the data and information required for semantic understanding of a language are imparted mainly via the two senses of hearing and sight – thus in humans these two senses are much more critical in learning semantics than the other 3 senses. An agent which possesses neither hearing nor sight must try to acquire almost all of the external data and information about a language via the sense of touch (the senses of taste and smell would not be very efficient conduits for most semantically useful information transfer). In other words, most of the agent's learning about language would be via braille. This would indeed be a massive problem for the agent - but it STILL would not necessarily mean that the agent would have NO semantic understanding, which is the point of the CR argument.
____________________________________________________

I am not suggesting that Tim will not have problems learning. Of course he will. He needs to learn everything about the outside world via his sense of hearing alone. Of course this will be a problem. But your analogy with Helen Keller is poor – as I have stated already arguably the two most important senses for human language learning are sight and hearing, both of which Helen lacked, and one of which Tim has. Thus I could argue that Tim’s problem will in fact be less severe than Helen’s.
Have you tried imagining yourself in Tim's or Helen's shoes?
A human being takes advantage of all five senses to learn and understand. If you watch a child you will see it looking constantly at everything and reacting to just about every noise. You will also see it grab for and touch anything it can get its hands on. When it does get its hands on things they go straight to its mouth and nose. One of the issues here is that we take for granted so much of our sensory input that we don't realize just how important those senses are.

MF said:
No, I don’t see the parallel. Why does having only the sense of hearing necessarily limit an agent to “one coordinate space”? Or is this a bad metaphor?
Perhaps it is a weak metaphor, but its purpose is to point out the importance of multiple senses. When you look to establish an object's location you use multiple coordinates. When you look to establish its size you use multiple dimensions. When you look to establish its composition you run multiple tests. The correlation of data from multiple sources is always used to determine the validity of information and to understand that information. Without multiple sources, or rather with only one source, you are stuck regarding only a single aspect of anything and are largely unable to substantiate much in the way of logical conclusions.
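To make the coordinate metaphor concrete: a single coordinate leaves a point in three-dimensional space underdetermined, while the full set of coordinates pins it down. A minimal sketch (the candidate points are arbitrary examples I have made up):

```python
# A single coordinate underdetermines a point in 3-D space: every point on
# an entire plane shares it. The full coordinate triple pins the point down.
candidates = [(3, 0, 0), (3, 7, -2), (3, 1, 9)]  # arbitrary examples

# Knowing only x = 3 cannot distinguish between them...
matches_x_only = [p for p in candidates if p[0] == 3]
print(len(matches_x_only))  # 3

# ...but knowing all three coordinates does.
matches_full = [p for p in candidates if p == (3, 7, -2)]
print(len(matches_full))  # 1
```

This is the sense in which a single information source leaves an agent unable to cross-check and so unable to substantiate conclusions.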
Helen Keller had three sources of information to compare and draw conclusions with. She relied mainly on her tactile sense, which is actually a very important sense, though you seem to regard it as lesser than vision and hearing.
Bats rely on hearing quite a bit. The problem, though, is that bats have very specialized hearing mechanisms and possibly even an instinctual program for how to interpret the information. Even still, a bat's hearing ability is not terribly reliable and is easily thrown off. Only a couple of species of bat are actually blind and rely heavily on their hearing ability, but they always augment this with their other senses, probably most notably their sense of smell.
But really have you imagined what it must be like for Tim?
How do you determine what noises are, or where they come from? You can hear when a human speaks, but how do you know what a human is, or that you are a human, or that the noises you hear are words? How do you know that there are such things as "tangible objects"? You could maybe tell when you are moving because you can hear the "air" passing your "ears" – but wait, what if there is a wind, and that is why air is passing your ears? How do you tell the difference? You can't "taste" and you can't "feel", but do you eat? Do you realize when you are eating? If so, how? Does someone else feed you? Can you tell that someone else is feeding you? Can you figure this out because they tell you? How do you know what they mean when they tell you, if you have no idea what it is they are doing to you, because you can't feel, taste, see or smell the food, or their hands, or the spoon, or the plate, or anything of the like? All you know is what you hear. Can you even tell the difference between being awake and being asleep?


I am asking this because I am trying to establish, regardless of the answers, that your personal "direct experience" is vital to your ability to understand even those things that are "indirect experiences". Note that I have not said that Tim is unable to possess semantic understanding. I do believe Tim can possess semantic understanding, but that it would likely be severely limited by his situation.
Also, I am asking this question because it is a step in my process, but I wish to establish where we stand in this matter before continuing to the next step. I believe that by imagining Tim's situation we come closer to imagining the situation of a computer without sensory input attempting to understand. The CR computer at least has access to only one information input. After we discuss the parallel, if the discussion is even necessary after we determine where we stand in regard to Tim, I would like to discuss the question I previously posed in regard to "justification" of knowledge.
This is how I am linking my questions regarding Tim with the CR and its situation. Right now I just want to focus on what we can or cannot agree about Tim, and then run with whatever information I glean from that.
 
  • #182
TheStatutoryApe said:
I say "theoretical" because it is your theory that based off of only one avenue of information gathering Tim will be able to gain a semantic understanding of what the word "red" means. However you have yet to explain how Tim will accomplish this. I have set up a scenario where Tim does not understand what "red" means and asked you how you would teach him what red means.
You have replied by saying that he can understand and "X" is the definition that he will be capable of understanding. You have continually failed to adress the "how" portion of my question. How will you teach him. How will he come to understand. How would you theorize the process occurring in his mind would unfold.

It is my position that Tim can “possess” understanding. It is also my position that an agent with NO senses at all can “possess” understanding – but obviously the question of how the agent is to acquire that understanding in the first place is a separate problem. In the case of an arbitrary agent I have many more possibilities than the 5 human senses. But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
The problem of how Tim “acquires” that understanding in the first place is a separate issue. Tim is able to communicate – he can speak and he can hear. Given a means of communication, it is possible to transfer information. I am not suggesting that teaching Tim will be easy, but it will not be impossible. My concern here is only that the transfer of information can take place – you have certainly not shown that transfer of information cannot take place. I have no interest in going into the details of how Tim’s complete education would be accomplished in practice, so if you need to know these details then please go ask someone else.
This is a thought-experiment, not a detailed lesson in “how to teach Tim”. The relevance for the CR argument lies in the idea that Tim can in principle semantically understand “red”, just as the CR can in principle semantically understand “red”. When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?” – because that is a practical problem which does not change the in principle nature of the argument. Everyone KNOWS that “writing the rulebook” is one hell of a practical problem, and nobody has attempted to show how it could be done in practice, but this does not invalidate the thought experiment.

TheStatutoryApe said:
With respect, every version of the question I have asked has included the word "How"...
With respect, this is simply untrue. Your original question (which I even quoted in my last post, but you obviously failed to read) was

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

Which I interpreted to mean “can Tim convey his concept of red to others?”. And that original interpretation has stuck in my mind as the question you are asking, until you quite pointedly stated that you mean something completely different.
Do you see now how the confusion is caused by your ambiguity?

TheStatutoryApe said:
A human being takes advantage of all five senses to learn and understand. If you watch a child you will see it looking constantly at everything and reacting to just about every noise. You will also see it grab for and touch anything it can get its hands on. When it does get its hands on things they go straight to its mouth and nose. One of the issues here is that we take so much of our sensory input for granted that we don't realize just how important those senses are.
And your point is?
All intelligent agents will utilise whatever information sources are available to them. The more sources of information then (in general) the easier it will be for the agent to gain knowledge of the world around them. I have never said otherwise.

TheStatutoryApe said:
When you look to establish its size you use multiple dimensions. When you look to establish its composition you run multiple tests. The correlation of data from multiple sources is always used to determine the validity of information and to understand that information. Without multiple sources, or rather with only one source, you are stuck regarding only a single aspect of anything and are largely unable to substantiate much in the way of logical conclusions.
Tim is stuck with his one sense of hearing. It’s still stereo hearing and he can still develop spatial awareness and an understanding of coordinate systems, distances, motion etc based on this. The fact that he only has one sense is going to make it very tough for Tim to learn. But it does NOT mean that Tim cannot develop a semantic understanding of any particular concept or language. It simply means it will not be as easy for him as it would be for an agent with multiple other senses. But this is not relevant to the CR argument anyway.

TheStatutoryApe said:
But really have you imagined what it must be like for Tim?
I do not need to, because I still do not see the relevance of your argument to the CR question.

TheStatutoryApe said:
I am asking this because I am trying to establish, regardless of the answers, that your personal "direct experience" is vital to your ability to understand even those things that are "indirect experiences".
You have so far not shown that “direct experience” is necessary for semantic understanding.
If this is now your position then you are contradicting your earlier position, which was that “indirect” and not “direct” experience is necessary for semantic understanding.
In the case of Tim, the agent has NO other means of acquiring information other than via the sense of hearing. In the case of AI we are not talking of a human agent which is necessarily limited to learning via any of its five senses. I could argue that an artificial intelligence, able to access and process data in ways that humans cannot, could develop an even deeper, more complex, more consistent and more complete semantic understanding of a language than any human is capable of – and it would not necessarily need to have any particular human sense, or any direct experience, to be able to do this.

At the same time there is also no reason why our AI should not be equipped with information-input devices corresponding to the human senses of vision, hearing, touch, even smell and taste, IF we should so wish – the AI is not necessarily restricted to any particular sense, in terms of its learning ability. Thus I do not see the relevance of your Tim thought-experiment to the question of whether a machine can semantically understand.

TheStatutoryApe said:
Also I am asking this question because it is a step in my process but I wish to establish where we stand in this matter before continuing to the next step. I believe that by imagining Tim's situation we come closer to imagining the situation of a computer without sensory input attempting to understand.
The analogy is completely inappropriate. In Tim’s case, the only source of information he has about the outside world is via his sense of hearing.
In the case of a computer there are many ways in which the information can be imparted to the computer.

TheStatutoryApe said:
The CR computer at least has access to only one information input.
The CR is ALREADY PROGRAMMED. It already possesses the information it needs to do the job it is supposed to do, by definition. It does not necessarily NEED any senses at all for the purpose of learning – because Searle has not indicated whether the CR has ANY ability to learn from new information at all – this is one of the questions I already asked a long time ago. The CR experiment does not ask “how does the CR acquire its understanding in the first place?”, it asks “does the CR semantically understand?”

MF
 
  • #183
TheStatutoryApe said:
I would add that the understanding between agents (e.g. communication by means of language) is contingent upon significant similarity in "informational experience". A gap is expected but a significant gap will result in the breakdown of communicative understanding.

I would say that understanding between agents is contingent upon similarities in semantic understanding, and NOT necessarily on similarities in “informational experience”.

Take my example of Houston and the Aliens.

If our semantic understanding of “red” is based simply on “Red is the experiential quality when one perceives a red object”, then Houston and the Aliens cannot reach any level of understanding about what is meant by the term red, BECAUSE defining red as the experiential quality when one perceives a red object is a circular argument, and the Aliens have no way of directly experiencing seeing a red object without knowing in advance what a red object is. Thus if one’s definition of red is indeed “Red is the experiential quality when one perceives a red object” then I can see how one might erroneously conclude that understanding between agents is contingent upon significant similarity in "informational experience".

But if our semantic understanding of red is based on “Red is the experiential quality that you have when you sense-perceive electromagnetic radiation with wavelengths of the order of 650nm”, then Houston and the Aliens can indeed understand each other, even though their “informational experiences” (i.e. exactly how they have arrived at that definition and understanding) may be very different.

Thus : Understanding between agents is contingent simply upon significant similarities in semantic understanding, and not necessarily on similarities in informational experience.

MF
 
  • #184
Not all human brains “implement semantics”.
A person in a coma is not “implementing any semantics” – a severely brain-damaged person may be conscious but may have impaired “implementation of semantics”.

That is a silly objection.

It is not an objection, it is an observation. Do you dispute it?

If it is not an objection, it has no relevance to the debate...

Anyone I can actually speak to obviously has a
functioning brain.

Tournesol, your arguments are becoming very sloppy.
I can (if I wish) “speak to” my table – does that mean my table has a functioning brain?



I am not going on their external behaviour alone; I have
an insight into how their behaviour is implemented, which is missing in the CR
and the TT.

What “insight” do you have which is somehow independent of observing their behaviour?

The insight that "someone with a normal, functioning brain has consciousness and understanding broadly like mine, because my consciousness and understanding are generated by my brain".

How would you know “by insight” that a person in a coma cannot understand you, unless you put it to the test?
How would you know “by insight” that a 3-year old child cannot understand you, unless you put it to the test?


The solution to this problem is to try and develop a better test, not to “define our way out of the problem”

I have already suggested a better test. You were not very receptive.

Sorry, I missed that one. Where was it?

"figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness."

It would help if you spelt out what, IYO, the (strong) AI argument does say.

I am not here to defend the AI argument, strong or otherwise.
I am here to support my own position, which is that machines are in principle capable of possessing understanding, both syntactic and semantic.

You certainly should be defending the strong AI argument, since that is
what this thread is about.
If you really are only saying that
"machines are in principle capable of possessing understanding, both syntactic and semantic"
you are probably wasting my time and yours. Neither Searle nor I rule
out machine understanding in itself. The argument is about whether machine
understanding can be achieved purely by a system of abstract rules (SMART).
It can be read as favouring one approach to AI, the Artificial Life or
bottom-up approach, over another, the top-down or GOFAI approach.


Note that this distinction between syntax and SMART makes no real difference to the CR.

Can you show this, or are you simply asserting it?

Re-read the CR, and see if it refers to syntactic rules to the exclusion
of all others.


In fact, the idea that there are rules for semantics has never struck anybody except
yourself, since there are so many objections
to it.

I do not think this is true. Even if it were true, what relevance does this have to the argument?

It explains why it is quite natural to assume SMART is restricted to syntax --
this is not some malicious misreading of what you are saying.

The semantic knowledge and understanding is “encoded in the information in my brain” – I do not need continued contact with the outside world in order to continue understanding, syntactically or semantically.

How is that relevant to the CR? Are you saying that the CR can *acquire*
semantics despite its lack of interaction and sensory contact with an
environment?


I am saying that the information and knowledge to understand semantics can be encoded into the CR, and once encoded it does not need continued contact with the outside world in order to understand

You have not explained how it is encoded. It cannot be acquired naturally, by
interaction with an environment, and it cannot be cut-and-pasted.


Are you saying you can "download" the relevant information
from a human -- although you have already conceded that information may
fail to make sense when transplanted from one context to another ?


Where did I say that the information needs to be downloaded from a human?

It was a guess.
So far you haven't said anything at all about where it is to come
from, which is not greatly to the advantage of your case.

Are you perhaps suggesting that semantic understanding can only be transferred from a human?
The only “information” which I claim would fail to make sense when transplanted from one agent to another is subjective experiential information – which as you know by now is not necessary for semantic understanding.

If the variability of subjective information is based on variability of brain
anatomy, why doesn't that affect everything else as well ?

And if you think non-subjective information can be downloaded from a brain --
why mention that? Are you saying the CR rulebook is downloaded from a brain,
or what?

By virtue of SMART ?

By virtue of the fact that semantic understanding is rule-based

You have yet to support that.

By standard semantics, possession of understanding is *necessary* to report.

The question is not whether “the ability to report requires understanding” but whether “understanding requires the ability to report”
If you place me in situation where I can no longer report what I am thinking (ie remove my ability to speak and write etc), does it follow that I suddenly cease to understand? Of course not.

You are blurring the distinction between being able to report under specific
circumstances, and being able to report under any circumstances. When
we say people are conscious, we do not mean they are awake all the time.
When we say understanding requires the ability to report, we
do not mean that people produce an endless monologue on their internal
state.

Can Tournesol provide an example of any sentence in the English language which includes the term “red” which Mary necessarily cannot “semantically understand” by virtue of her lack of experiential knowledge of red?


1) "What red looks like"
2) "The experiential qualities of red which cannot be written down"


Mary can semantically understand the statement “what red looks like” without knowing what red looks like.

Mary has a partial understanding. She acquires a deeper understanding when she
leaves her prison and sees red for the first time.

The statement means literally “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “what red looks like”.

It is not the meaning that "what red looks like" has to someone who has
actually seen red. It is not the full meaning.

Mary can semantically understand the statement “the experiential qualities of red which cannot be written down” without knowing the experiential qualities of red. The statement means literally “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. This is the semantic meaning of the statement “the experiential qualities of red which cannot be written down”.

If you have seen a cat, you can understand the sentence "a cat you have not
seen". If you have not seen a cat it is more difficult. If you have not seen
a mammal it is more difficult still... etc., etc. There is no fixed border between not
understanding and understanding.


Thus I have shown that Mary can indeed semantically understand both your examples.

Given a minimal definition of "understanding" -- setting the bar low, in other
words.


Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?

I already have: given that semantics means semantics, and not just the bits of
semantics Mary can understand while still in the room.

What red looks like is nothing to do with semantic understanding of the term red – it is simply “what red looks like”.

How can the semantic meaning of "what red looks like" fail to have anything to
do with what red, in fact, looks like?


What red looks like to Tournesol may be very different to what red looks like to MF,

Possibly. About as possible as Zombies.


but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of what red looks like.

Tu quoque.

The experiential qualities of red are nothing to do with semantic understanding of the term red – these are simply “the experiential qualities of red”.

Tu quoque.


The experiential qualities of red for Tournesol may be very different to The experiential qualities of red for MF,


Do you know that? Or do you just mean they are not necessarily the same? How does that relate to meaning anyway?
"What is my name?", "What is the time?" and "Where are we?" tend to have different meanings according to who says them, when, and where. Are you completely sure that "X has different meanings for different people" equates to "X has no (objective) semantic meaning"?

but nevertheless we both have the same semantic understanding of what is meant by red, because that semantic understanding is independent of the experiential qualities of red.

Tu quoque. You seem to be asserting that, in Fregean terminology, meaning is
purely sense and not reference. But, for Frege,
sense and reference are both constituents of meaning.

The confusion between “experiential qualities” and “semantic understanding” arises because there are two possible, and very different, meanings to (interpretations of) the simple question “what is the colour red?”
One meaning (based on subjective experiential knowledge of red) would be expressed “what does the colour red look like?”.

The other meaning (the objective semantic meaning of red) would be expressed as “what is the semantic meaning of the term red?”.

The fact that the second meaning is "objective" does not imply that it and it
alone is semantic. You are avoiding the idea that meaning can have a
subjective component, rather than arguing against it.

This is a perfect illustration of the fact that we need to be very careful when using everyday words in scientific debate, to make sure that we are not confusing meanings.

Using "objective" and "semantic" as interchangeable synonyms is confusing
meanings.

To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.

You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.

I already have. To recap:

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by substituting "pigeon" for "strawberry"
are cancelled out by further substitutions. Hence the "global" versus "local"
aspect. Therefore, one cannot safely infer
that one's interlocutor has the same semantics as oneself just on the basis
that they fail to make errors (relative to your semantic model) in respect of symbol-manipulation.
3) The CR argument itself.


No, I’m saying that any two agents may differ in their semantic understanding, including human agents. Two human agents may “semantically understand” a particular concept differently, but it does not follow that one of them “understands” and the other “does not understand”.

What relevance does that have to the CR? If the TT cannot establish that a
system understands correctly, how can it establish that it understands at all?

Very relevant. If the CR passes most of the Turing test, but fails to understand one or two words because those words are simply defined differently between the CR and the human interrogator, that in itself is not sufficient to conclude “the CR does not understand”

The floogle/blint/zimmoid argument shows that a CR could systematically
misunderstand (have the wrong semantic model for) all its terms without displaying any errors with regard to
symbol-manipulation.

The argument that syntax underdetermines semantics relies on the fact that
syntactical rules specify transformations of symbols relative to each other --
the semantics is not "grounded". Appealing to another set of rules --
another SMART -- would face the same problem.

“Grounded” in what in your opinion? Experiential knowledge?
What experiential knowledge do I necessarily need to have in order to have semantic understanding of the term “house”?

If experiential knowledge is unnecessary, you should have no trouble with
"nobbles are made of gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made of vob"
etc, etc.

IOW, it only *seems* to you that experience is unnecessary because YOU ALREADY
KNOW what terms like "brick" , "window" and "door" mean.

IOW if your theory is correct you should be able to tell me what
nobbles, gulds, plobs, giffles, frint and vob are.


So...can you ?
 
Last edited:
  • #185
If most people agree on the definitions of the words in a sentence, what
would stop that sentence being an analytic truth, if it is analytic?

If X and Y agree on the definitions of words in a statement then they may also agree it is analytic. What relevance does this have?

Your highly selective objections to analytic truths. You claim that
"understanding requires consciousness" is tantamount to a falsehood
but "understanding does not require experience" is close to a necessary
truth. Yet both claims depend on the definitions of the terms involved.



There is a balance to be struck. You seem to wish to draw the balance such that “understanding requires consciousness by definition, and that’s all there is to it”, whereas I prefer to define understanding in terms of its observable and measurable qualities,

How do you know they are its qualities, in the complete absence of a
definition? Do they have name-tags sewn into their shorts?


You are not reading my replies, are you? I never said there should be no definitions, I said there is a balance to be struck. Once again you seem to be making things up to suit your argument.

Indeed there is a balance to be struck. You cannot claim that your approach to
the nature of understanding does not depend on the way you define
"understanding", and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.

Why should it matter that "the “experiential knowledge of red” is purely subjective".
Are you supposing that subjective knowledge doesn't matter for semantics ?

I am suggesting that subjective experiential knowledge is not necessary for semantic understanding. How many times do you want me to repeat that?


You have made it abundantly clear that you are suggesting it.

The (unanswered) question is why you are suggesting it.

They should be broadly similar if our brains are broadly similar.
“Broadly similar” is not “identical”.
A horse is broadly similar to a donkey, but they are not the same animal.

So? My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate. IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.

you are not aware that some of the things you are saying have implications
contrary to what you are trying to assert explicitly.

You are perhaps trying to read things into my arguments that are not there, to support your own unsupported argument. When I say “there is no a priori reason why they should be identical” this means exactly what it says. With respect if we are to continue a meaningful discussion I suggest you start reading what I am writing, instead of making up what you would prefer me to write.

There is a good a posteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.


You need experience to grasp the semantics of "red" and "green" as well as other people, because
they base their semantic grasp of these terms on their experiences.

I certainly do not. Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know?

what "red" looks like.

Whether or not I have known the experiential quality of seeing red makes absolutely no difference to this semantic understanding.


According to your tendentious definition of "semantic understanding".


What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

that is a classically anti-physicalist argument.


It may be a true argument, but it is not necessarily anti-physicalist.

How can conscious experience vary in a way that is not accounted for by
variations in physical brain states? That knocks out the possibility
that CE is caused by brain states, and also the possibility that it
is identical with brain states. Any other possibility is surely
mind-body dualism. Have you thought this issue through at all?

Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.

The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.

Given your tendentious definition of "semantic meaning".

What then, do I NOT understand about X-rays, which I WOULD necessarily understand if I could “see” X-rays?

What they look like, experientially.

What they look like is an experiential quality, it is not semantic understanding.
Given your tendentious definition of "semantic understanding".

Perhaps you would claim that I also do not have a full understanding of red because I have not tasted red? And what about smelling red?

What the full meaning of a term in a human language is depends on human
senses. Perhaps Martians can taste "red", but the CR argument is about human
language.

Clearly experiential semantics conveys understanding of
experience.

I can semantically understand what is meant by the term “experience” without actually “having” that experience.

You need to demonstrate that you can understand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world. You have never
really tackled the problem of explaining how an abstract SMART-based
semantics bootstraps itself.

We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?

The CR already contains information in the form of the rulebook

But how do you propose to get it into the CR?

You haven't supplied any other way the CR can acquire semantics.

Semantics is rule-based; why should the CR not possess the rules for semantic understanding?

Nobody but you assumes a priori that semantics is rule-based. You are making
the extraordinary claim; the burden is on you to defend it.

I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.

Please define the Hard Problem.

The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).

Surf "chalmers hard problem consciousness"


Because human languages contain vocabulary relating to human senses.

And I can have complete semantic understanding of the term red, without ever seeing red.

Given your tendentious definition of "semantic meaning".

If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?

By reasoning and experimental test.

Reasoning involves definitions, and experimental test requires pre-existing
standards.

Do I have a way of knowing whether physicalism is true?

You don’t. And I understand that many people do not believe it is true.

Are you one of them ? Are you going to get off the fence on the issue ?

We have been through all this: you can be too
anthropocentric, but you can be insufficiently anthropocentric too.

And my position is that I believe arbitrary definitions such as “understanding requires consciousness” and “understanding requires experiential knowledge” are too anthropocentrically biased and cannot be defended rationally

Well, that's the anti-physicalist's argument.

It’s my argument. I’m not into labelling people or putting them into boxes.
I see no reason why X’s subjective experience of seeing red should be the same as Y’s

Oh puh-leaze! This isn't a PC-thing. Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.



The question was the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X-ray" on analogy with "light". How
can you infer "qualia" without any analogies?

By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding of the experiential quality of seeing red?

What red actually looks like.

you
claim to be a physicalist.

To my knowledge I have made no such claim in this thread

So people have immaterial souls? But then how can you be sure
that AIs have real understanding or consciousness? Can't
you see that "SMART is sufficient for understanding" is
a more-physicalist-than-physicalism stance? Not only
does it imply that no non-physical component is needed,
it also implies that the nature of the physical basis
is fairly unimportant.

Anyway, experience has to do with the semantics of experiential language.

Semantic understanding has nothing necessarily to do with experiential qualities, as I have shown several times above

You have not shown it to be true apart from your
tendentious definition of "semantic meaning".


It is not part of your definition of understanding -- how remarkably
convenient.

And remarkably convenient that it is part of yours?
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary experiment for example.

The point of the Mary parable is exactly the opposite of what you are trying
to argue. And you complain about being misunderstood !


Ask a blind person what red looks like.
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,

It obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".

which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. Experiential knowledge is not part of this semantic understanding.

It is not part of your tendentiously stripped-down definition of
"understanding", certainly.

Yes there is: all brains are broadly similar anatomically

As before, “broadly similar” is not synonymous with “identical”.

Again, that is not relevant. The question is whether gross subjective
differences could emerge from slight physical differences.

If they were not,
you could not form a single brain out of the two sets of genes you get from your
parents. (Argument due to Steven Pinker).

Genetically identical twins may behave similarly, but not necessarily identically. Genetic makeup is only one factor in neurophysiology.

Again irrelevant. You keep using "not identical" to mean "radically
different".

this is a style of
argument you dislike when others use it.

It is not a question of “disliking”.
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”)

Your position is based on definitions that DON'T EVEN HAVE popular support!

You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.


then it is worthy of discussion. I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.

According to your tendentious definition of "semantic understanding".


You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.

What “explanatory gap” is this?

Surf "explanatory gap Levine".
 
  • #186
moving finger said:
Tisthammerw said:
Yes and no. Think of it this way. Suppose I “disagree” with your definition of “bachelor.” Does it then logically follow that “bachelors are unmarried” is no longer an analytic statement because we as two people do not agree on the term “bachelor”?

If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried.

I see, so it is only analytic depending on how one defines the terms, something I have been saying from the beginning and yet you continued to ignore and misconstrue my words.

Anyway, let's move on.


Tisthammerw said:
In fact I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms. To recap: I described the scenario, and I have subsequently asked you the questions regarding whether or not this fits your definition of perceiving etc.

With respect, what part of “you did NOT specify that the computer you have in mind is interpreting the data/information” do you not understand?

With respect, what part of "I DID describe the computer I had in mind and I left it to you to tell me whether or not this scenario involves “interpreting the data/information” as you have defined those terms" do you not understand? I described the computer, and asked you if this computer was "interpreting data/information" according to your definitions of those terms (in addition to perceiving).

Let’s recap:

Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)?
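For concreteness, the hypothetical computer described here can be sketched in a few lines. This is a minimal illustration (the image data, threshold values, and function names are all invented for the example, not any real system): an acquire step, a store step, and a mechanical "process" step that marks blue pixels, with no awareness anywhere in the loop.

```python
# Minimal sketch of the hypothetical camera-and-blue-squares computer:
# acquire -> store -> process, entirely automated.

def acquire_image():
    """Stand-in for a camera frame: a grid of (r, g, b) pixels."""
    return [
        [(255, 0, 0), (0, 0, 255)],
        [(0, 0, 255), (0, 255, 0)],
    ]

def is_blue(pixel):
    """Crude threshold rule deciding whether a pixel counts as blue."""
    r, g, b = pixel
    return b > 200 and r < 100 and g < 100

def mark_blue(image):
    """'Process' step: return coordinates of the blue pixels to circle."""
    return [(row, col)
            for row, line in enumerate(image)
            for col, px in enumerate(line)
            if is_blue(px)]

stored = acquire_image()      # "store it in its databanks"
print(mark_blue(stored))      # -> [(0, 1), (1, 0)]
```

The whole pipeline is a fixed mapping from input pixels to output coordinates, which is exactly what makes it a useful test case for the definitional question being asked.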


So tell me – is the computer you have in mind interpreting the data/information?

You tell me.


I also trim the parts on “my definition is better than yours” since I consider this rather puerile.

Then why did you start it?


Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?


There is more than one meaning of “to perceive”.

Fine, but that doesn’t answer my question. Does this entity perceive using your definition of term (whatever that is)?


For an entity to “sense-perceive” a bright light it must possess suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?

It is not a different question; it is crucial to the question I asked: Under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?

My definition of consciousness is the same as I defined it earlier.

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Thus my definition of consciousness is such that if an entity possesses awareness, the entity possesses consciousness.

For those who wish further precision, these individual characteristics could also be defined. “Perception” is the state of being able to perceive. Using the Eleventh edition of Merriam-Webster’s dictionary, I refer to “perceive” (definitions 1a and 2), “sensation” (1b), “thought” (1a), and “awareness” (2). These definitions are also available at http://www.m-w.com/


And I have shown repeatedly that you have “shown” no such thing

And I have shown repeatedly that what you have shown no such thing regarding what I have shown.

Okay, this is getting a bit unwieldy…

(see post #256 in the “can artificial intelligence……” thread)

See my response to that post.


Suggestion : If you wish to continue discussing the Program X argument can we please do that in just one thread (let’s say the AI thread and not this one)? That way we do not have to keep repeating ourselves and cross-referencing.

As you wish.
 
  • #187
Tournesol said:
If it is not an objection, it has no relevance to the debate...
Why is an observation necessarily not relevant to a debate?

Tournesol said:
"figure out how the brain produces consciousness physically, and see if the AI has the right kind of physics to produce consciousness."
Yes, this is one possible approach. But not necessarily a good one. It implicitly assumes that “the physics which produces consciousness in the brain is the only kind of physics which can produce consciousness”, which is not necessarily the case.

Tournesol said:
You certainly should be defending the strong AI argument, since that is
what this thread is about.
Thank you for telling me what I “should” be doing. With respect, I’ll ignore your advice. I do what I wish to do, not what you think I should do.

Tournesol said:
If you really are only saying that
"machines are in principle capable of possessing understanding, both syntactic and semantic"
you are probably wasting my time and yours. Neither Searle nor I rule
out machine understanding in itself. The argument is about whether machine
understanding can be achieved purely by a system of abstract rules (SMART).
It can be read as favouring one approach to AI, the Artificial Life, or
bottom-up approach, over another, the top-down or GOFAI.
You must be reading a different version of the CR to me.

Tournesol said:
Re-read the CR, and see if it refers to syntactic rules to the exclusion
of all others.
You must be reading a different version of the CR to me.

Tournesol said:
It explains why it is quite natural to assume SMART is restricted to syntax --
this is not some malicious misreading of what you are saying.
As I said, you must be reading a different version of the CR to me.

Tournesol said:
You have not explained how it is encoded. It cannot be acquired naturally, by
interaction with an environment, and it cannot be cut-and-pasted.
Why can it not be cut and pasted? Have you shown why?

Tournesol said:
So far you haven't said anything at all about where it is to come
from, which is not greatly to the advantage of your case.
I am not the one asserting that the “CR shows semantic understanding in machines is impossible”. The onus is on the owner of the thought experiment to defend the logic and conclusions of the thought experiment. The basic assumption of the CR – that AI rests on the premise “syntax gives rise to semantics” – is false, because AI does not need that premise.

Tournesol said:
If the variability of subjective information is based on variability of brain
anatomy, why doesn't that affect everything else as well ?
Not everything is subjective

Tournesol said:
And if you think non-subjective information can be downloaded from a brain --
why mention that ? Are you saying the CR rulebook is downloaded from a brain
or what.
I’m saying the CR rulebook must be created, but not necessarily “downloaded from a brain”.

moving finger said:
By virtue of the fact that semantic understanding is rule-based
Tournesol said:
You have yet to support that.
I am not the one asserting that semantic understanding is NOT rule-based – Searle is. Let Searle (or his supporters) defend the CR experiment by showing that semantic understanding is NOT rule-based.

Tournesol said:
You are blurring the distinction between being able to report under specific
circumstances, and being able to report under any circumstances. When
we say people are conscious, we do not mean they are awake all the time.
When we say understanding requires the ability to report, we
do not mean that people produce an endless monologue on their internal
state.
I dispute that understanding requires the ability to report. Are you saying it does?

Tournesol said:
Mary has a partial understanding. She acquires a deeper understanding when she
leaves her prison and sees red for the first time.
I would say that understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much DEEPER understanding of “red” than simply “knowing the experiential quality of seeing red”.

Tournesol said:
It is not the meaning that "what red looks like" has to someone who has
actually seen red. It is not the full meaning.
What is “full meaning”? Is this similar to the “complete understanding” of a horse which quantumcarl insists can only be obtained by someone who feeds, grooms and mucks out a horse? What makes you think that you have access to the “full meaning” of red?
I would say that understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much FULLER understanding of “red” than simply “knowing the experiential quality of seeing red”.

moving finger said:
Thus I have shown that Mary can indeed semantically understand both your examples.
Tournesol said:
Given a minimal definition of "understanding" -- setting the bar low, in other
words.
Again, to my mind understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a much HIGHER understanding of “red” than simply “knowing the experiential quality of seeing red”

moving finger said:
Now, can you provide an example of a statement containing the word “red” which Mary CANNOT semantically understand?
Tournesol said:
I already have :- given that semantics means semantics and not the bits of
semantics Mary can understand while still in the room.
Are you saying that Mary’s “semantic understanding of red” while still in the room is NOT in fact a semantic understanding of red?

Tournesol said:
To say that semantics is not derived from the syntactical SMART does not mean
it is derived from some other SMART. You have yet to issue a positive argument
that SMART is sufficient for semantics.
The onus is on Searle, or his followers, to defend their thought-experiment which is based on the assumption that AI necessarily posits that syntax gives rise to semantics. The assumption is false, hence the thought experiment needs re-stating.

moving finger said:
You are the one asserting that semantics is necessarily NOT rule-based. I could equally say the onus is on you to show why it is not.

Tournesol said:
I already have. To recap:

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
I know that this does not follow – why can a machine not know the same?
That a “gift” is also a “present”, whereas “present” has more than one meaning, is still following a “rule”. I know this rule, a machine can know it also.
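The "rule" in question can be made explicit with a toy sketch. The sense inventory below is invented purely for illustration (it is not taken from any real lexical resource): a machine can store that "gift" and "present" share one sense while "present" carries extra senses, and apply that rule mechanically.

```python
# Toy lexicon: each word maps to a set of sense labels (invented here).
senses = {
    "gift":    {"thing-given"},
    "present": {"thing-given", "current-time", "to-hand-over"},
}

def synonymous_in_some_sense(w1, w2):
    """Rule: two words are synonyms in some sense if they share a sense."""
    return bool(senses[w1] & senses[w2])

print(synonymous_in_some_sense("gift", "present"))   # True
print(senses["present"] - senses["gift"])            # the extra senses of "present"
```

The point of the sketch is only that this fact about "gift" and "present" is representable and checkable as a rule, which is all the quoted paragraph claims.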

Tournesol said:
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by substituting "pigeon" for "strawberry"
are cancelled out by further substitutions. Hence the "global" versus "local"
aspect. Therefore, one cannot safely infer
that one's interlocutor has the same semantics as oneself just on the basis
that they fail to make errors (relative to your semantic model) in respect of symbol-manipulation.
Then don’t just test their symbol manipulation – test their understanding of the language. This is within the scope of the Turing test. The TT is not meant to be ONLY a test of symbol manipulation, it is meant also to be a test of understanding.

Tournesol said:
3) The CR argument itself.
The CR argument neither assumes nor shows that semantics is not rule-based. It assumes that AI necessarily posits “syntax gives rise to semantics”, which it does not. Hence the premises of the CR argument are untrue.

Thus NONE of the above shows that semantics is NOT rule-based.
NOTHING in the CR argument shows that semantics is NOT rule-based.

Tournesol said:
The floogle/blint/zimmoid argument shows that a CR could systematically
misunderstand (have the wrong semantic model for) all its terms without displaying any errors with regard to
symbol-manipulation.
Which is why the TT must test not only symbol manipulation (syntax), but also understanding (semantics).

Tournesol said:
If experiential knowledge is unecessary, you should have no trouble with
"nobbles made gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made vob"
etc, etc.
I have no trouble with understanding these terms at all.
Whether you define them in the same way as me is another issue.
If we wanted to share our definitions of the meanings of these words, does it follow that either of us needs to have any particular experiential knowledge in order to understand those meanings?

Tournesol said:
IOW if your theory is correct you should be able to tell me what
nobbles, gulds, plobs, giffles, frint and vob are.
I can tell you if you wish. Whether you will agree with me or not is another matter. Would the fact that you do not agree with me mean that I do not understand the terms?

MF
 
Last edited:
  • #188
MF said:
TheStatutoryApe said:
With respect, every version of the question I have asked has included the word "How"...
With respect, this is simply untrue. Your original question (which I even quoted in my last post, but you obviously failed to read) was

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?
Which I interpreted to mean “can Tim convey his concept of red to others?”. And that original interpretation has stuck in my mind as the question you are asking, until you quite pointedly stated that you mean something completely different.
Do you see now how the confusion is caused by your ambiguity?
It would seem obvious to me that you are the one failing to read and pay attention. If you refer back to my last post you will see that I have quoted every instance in which I asked a question regarding Tim in order from the very first. I have been worried that perhaps I am not being clear enough in my posts but it seems quite obvious to me that you really aren't paying any attention.
Thank you for your time.
Have a nice day.
 
  • #189
MF said:
But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
That's brilliant. I simply need to assert that something is possible without at all indicating how it is possible and it should be accepted as fact. I wonder why scientists go through all the trouble then.

MF said:
When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?”
Searle may not have gone into detail but he did explain how the rule book functioned with regard to the CR.

MF said:
I do not need to, because I still do not see the relevance of your argument to the CR question.
If you don't pay attention and simply dismiss this without regard for the implications then you will never see the relevance, will you?
 
  • #190
moving finger said:
And this definition implies to you that to have semantic understanding of the term "horse" an agent must necessarily have groomed a horse, and shovelled the horse's dung?

Yes. A true understanding requires every type of experiential input. If one relies solely on text or binary data or even word of mouth and pictures, there is only enough knowledge in these formats to form a definition of a horse. Not an understanding of a horse.


moving finger said:
"Standing under something" in this context does not mean literally "physically placing your body underneath that thing" - or perhaps you do not understand what a metaphor is?

Clearly you are the one who does not understand what a metaphor is. Are you telling me that the word "understanding" is a metaphor? Please let us know how "understanding" works as a metaphor.

Can you please explain what the word "inside" is a metaphor for.


moving finger said:
I am talking here about semantic understanding - which is the basis of Searle's argument. Semantic understanding means understanding the meanings of words as used in a language

Where did you get your definition of "semantic understanding"? "Semantic understanding" is an oxymoron. "Semantic" implies a standard system while "understanding" implies relative individuality and points of view.

moving finger said:
An agent can semantically understand what is meant by the term "horse" without ever having seen a horse, let alone mucked out the horse's dung.

More like "an agent can use semantic knowledge to know the definition of a horse". However, if the agent has never been in close proximity with a horse, the agent does not understand horses... not how to ride them... not how to care for them... and pretty well everything else about a horse.


moving finger said:
No, I have admitted no such thing. You are mistaken here.

I think what you are referring to is that I read how you think of understanding as an individual experience that can not be quantified or qualified by another person... very easily.


moving finger said:
And by your definition, I and at least 95% of the human race do not semantically understand what is meant by the term "horse". Ridiculous.

"Semantically understand" is an empty and ill-defined phrase. I've already shown it to be null and void. Semantic knowledge is what you are referring to.


moving finger said:
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?

I haven't suggested this.


moving finger said:
You choose to define understanding such that only an agent who has shared an intimate physical connection with a horse (maybe one needs to have spent the night sleeping with the horse as well?) can semantically understand the term "horse".

Those who read about horses or hear about them have semantic knowledge of horses. They have no experience with horses... they do not understand the actual, true implications of the animal, the horse.

moving finger said:
I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything.

In your case I do see the folly.

moving finger said:
As I have said many times already, you are entitled to your rather eccentric definition of understanding, but I don't share it.

You are entitled to your rather uninteresting and misleading definition of understanding, but I don't share it.

b' Bye.
 
  • #191
Tournesol said:
You claim that
"understanding requires consciousness" is tantamount to a falsehood
This is a false allegation. If one wishes to claim that "understanding requires consciousness" is a true statement then one must first SHOW that "understanding requires consciousness". This has not been done (except tautologically)

Tournesol said:
but "understanding does not require experience" is close to a necessary
truth.
I am not saying this is necessarily true. Just that this is what I believe, based on rational examination of understanding. You clearly believe differently. Hence the reason for our debate here.

Tournesol said:
Yet both claims depend on the definitions of the terms involved.
How many times do I need to repeat it?
Whether a statement is analytic or not depends on the definitions of the terms used. If two agents do not agree on the definitions of the terms used then they may also not agree on whether the statement is analytic. Period.

Tournesol said:
You cannot claim that your approach to
the nature of understanding does not depend on the way you define
"understanding", and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.
Where did I claim this? All terms used in arguments need to be defined either implicitly or explicitly. I choose to define understanding one way, you choose to define it another. Your choice assumes consciousness and experiential qualities are required, mine does not. Simple as that.

Tournesol said:
The (unanswered) question is why you are suggesting it.
Because that is my belief, and I believe it because nobody, including Tournesol, has come up with a rational and coherent argument to show experiential knowledge IS necessary for semantic understanding, except by the tautological method of defining semantic understanding such that it requires experiential knowledge “by definition”.

If someone claimed “a human heart is necessary for semantic understanding”, but they were unable to SHOW this to be the case, then why should I believe them? The same applies to experiential knowledge, and consciousness. The onus is on the person making the claim of “necessity” to show rationally why the necessity follows (without resorting to tautological arguments). In absence of such demonstration there is no reason to believe in necessity.

I have answered your question of why I believe that experiential knowledge is not necessary for semantic understanding. Can you now answer the question of why you think it IS necessary?

Tournesol said:
My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate.
I do not see that this follows at all. The genetic differences (the differences in the genes) between humans and chimpanzees are very, very tiny – but the consequences of these tiny differences in genetic makeup are enormous. One cannot assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.

Tournesol said:
IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.
I see no reason why my subjective sensation of red should not be more similar to your subjective sensation of green than it is to your subjective sensation of red. We have absolutely no idea how the precise “qualia” arise in the first place, thus there is no a priori reason to assume that qualia are the same in different agents.

Tournesol said:
There is a good aposteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.
I have never suggested it does. I am simply suggesting that “similar physical substrate” does not necessarily imply “similar details of operating system”. The way that a system behaves can be critically dependent on very minor properties of the substrate, such that small changes in substrate result in enormous changes in system behaviour. Heard of chaos?

moving finger said:
Red is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm. This is a semantic understanding of red. What more do I need to know?
Tournesol said:
what "red" looks like.
This is a purely subjective quality and imho does not add to my semantic understanding of red. If the subjective quality “what red looks like” suddenly changed for me overnight, nothing would change in my semantic understanding of red.

Tournesol said:
According to your tendentious definition of "semantic understanding".
I can argue that understanding red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a far greater and deeper semantic understanding of red than simply defining it tautologically as “red is the experiential quality associated with seeing a red object”, which latter arguably gives us no “understanding” at all.

moving finger said:
What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

Tournesol said:
that is a classically anti-physicalist argument.
Call it what you like. The conclusion is true. Or do you perhaps deny this conclusion is true?

Tournesol said:
How can conscious experience vary in a way that is not accounted for by
variations in physical brain states ?
Where have I suggested that it does? I am claiming that two different agents necessarily have two different physical substrates (even if only slightly different), and we have no a priori reason for assuming that a small difference in physical substrate necessarily equates to a small difference in conscious experience.

Tournesol said:
That knocks out the possibility
that CE is caused by brain-states, and also the possibility that it
is identical with brain states. Any other possibility is surely
mind-body dualism. Have you thought this issue through at all?
Have you thought out the fact that you are STILL not reading my posts correctly, and continuing to invent your own incorrect ideas?
Where have I suggested that variations in CE are not accounted for by differences in brain-states?

Tournesol said:
Secondly, it doesn't mean that we are succeeding in grasping the experiential
semantics in spite of spectrum inversion; it could perfectly well be a
situation in which the syntax is present and the semantics are absent.
It could be. And it could be that I am a Zombie and I have no understanding at all. That is why we need to develop objective tests for syntax and semantics.

moving finger said:
The semantics is completely embodied in the meaning of the term red – which is the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm.
Tournesol said:
Given your tendentious definition of "semantic meaning".
And you think that defining red as “the subjective experiential quality associated with seeing a red object” is a deeper and more insightful meaning of red?

Tournesol said:
What the full meaning of a term in a human language is, depends on human
senses. Perhaps Martians can taste "red", bu the CR argument is about human
language.
I cannot be sure of what red looks like to you, and (despite your claims to the contrary) you cannot be sure of what red looks like to me. The ONLY reason that we share a common understanding of red is BECAUSE “red” literally MEANS “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” – and it means this to ALL agents – and that is THE common semantic understanding of red.

Tournesol said:
You need to demonstrate that you can understand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world.
Because that is the only way a human has of acquiring information and knowledge in the first place. Humans are born without any semantic understanding – the only way for a human to acquire the information and knowledge needed to develop that understanding is via the conduits of the senses. But this limitation does not necessarily apply to all agents, and it does not follow that direct sense-experience is the only way for all agents to acquire information and knowledge.

Tournesol said:
You have never
really tackled the problem of explaining how an abstract SMART-based
semantics bootstraps itself.
Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.

Tournesol said:
We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
moving finger said:
The CR already contains information in the form of the rulebook
Tournesol said:
But how do you propose to get it into the CR?
Ask Searle – it's HIS thought experiment, not mine. Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.

Tournesol said:
You haven't supplied any other way the CR can acquire semantics.
I am claiming that semantics is rule-based. Can you show it is not?
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?
Since semantics is rule-based, it follows algorithms, and can be programmed into a machine. Whether we humans yet consciously understand all these rules and algorithms and thus whether we yet have the capability to program the CR is a separate practical problem.
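The claim "semantics is rule-based, so it can be programmed" can at least be made concrete with a toy sketch using the thread's running example of "red". The wavelength bands and the representation of a "rule" as a predicate are illustrative assumptions only, not a claim about how any real system (or the CR) encodes meaning.

```python
# Toy rule-based semantics: each word maps to a definitional rule,
# here a predicate over a stimulus wavelength in nanometres.
semantic_rules = {
    "red":   lambda nm: 620 <= nm <= 750,   # "radiation of the order of 650nm"
    "green": lambda nm: 495 <= nm <= 570,
}

def applies(word, wavelength_nm):
    """Apply the stored definitional rule for `word` to a stimulus."""
    return semantic_rules[word](wavelength_nm)

print(applies("red", 650))    # True: 650 nm satisfies the rule for "red"
print(applies("green", 650))  # False
```

Whether such stored rules amount to semantic understanding is, of course, precisely the point under dispute in the thread; the sketch only shows that the rules themselves are mechanically encodable.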

Tournesol said:
Nobody but you assumes apriori that semantics is rule-based. You are making
the extraordinary claim, the burden is on you to defend it.
How do you know that I am the only person who believes semantics is rule-based, or is this simply your opinion again? And why does this make any difference to the argument anyway?
The proof of the pudding is in the eating. Give me any word in the English language and I can give you some of the rules that define my semantic understanding of the meaning of that word.
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

Tournesol said:
I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
moving finger said:
Please define the Hard Problem.
Tournesol said:
The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).
We do not fully understand the detailed mechanisms underlying the operation of the conscious brain. I do not see how this strawman is relevant to the question of whether semantic understanding is rule-based or not.

Tournesol said:
Because human languages contain vocabulary relating to human senses.
moving finger said:
And I can have complete semantic understanding of the term red, without ever seeing red.
Tournesol said:
Given your tendentious definition of "semantic meaning".
I cannot be sure of what red looks like to you, and (despite your claims to the contrary) you cannot be sure of what red looks like to me. The ONLY reason that we share a common understanding of red is BECAUSE “red” literally MEANS “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” – and it means this to ALL agents – and that is THE common semantic understanding of red.

Tournesol said:
If its ability to understand cannot be established on the basis of having
the same features as human understanding -- how else can it be established?
By definition?
moving finger said:
By reasoning and experimental test.
Tournesol said:
Reasoning involves definitions and experimental test requires pre-existing
standards.
I never said otherwise. Are you suggesting that “ability to understand” can be established any other way than by reasoning and experimental test? If so would you care to explain how you would go about it?

Tournesol said:
Are you one of them ? Are you going to get off the fence on the issue ?
It seems important to you that you call me either a believer or disbeliever in physicalism. Please define exactly what you mean by physicalism and I might be able to tell you whether I believe in it or not.

Tournesol said:
Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.
But I have already shown above that your argument is unsound. One cannot assume that small differences in substrate necessarily lead to small differences in system behaviour. To use your terminology - this is not anti-physicalist, it is purely physicalist.

Tournesol said:
The question was the term "qualia". You could infer "house" on analogy with
"palace" or "hut". You could infer "X Ray" on analogy with "light". How
can you infer "qualia" without any analogies?
moving finger said:
By “how do I semantically understand the term qualia”, do you mean “how do I semantically understand the term experiential quality”?
Let me give an example – “the experiential quality of seeing red” – which is “the sense-experiences created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”. What is missing from this semantic understanding the experiential quality of seeing red?
Tournesol said:
What red actually looks like.
Which, as I have already said many times, is a subjective experiential quality and is not necessary for semantic understanding.

Tournesol said:
You have not shown it to be true apart from your
tendentious definition of "semantic meaning".
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

moving finger said:
The difference is that I can actually defend my position that experiential knowledge is not part of understanding with rational argument and example – the Mary experiment for example.
Tournesol said:
The point of the Mary parable is exactly the opposite of what you are trying
to argue. And you complain about being misunderstood !
I was referring to the Mary argument that I gave in this thread (see post #157), and not to some other Mary argument that you might have in mind. I apologise for not making that clear.

Tournesol said:
Ask a blind person what red looks like.
moving finger said:
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,
Tournesol said:
it obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".
Would you care to define what you mean by “full understanding”?
Why is understanding red to be “my subjective experiential quality of seeing a red object” necessarily a more full understanding of red than understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”?

Tournesol said:
It is not part of your tendentiously stripped-down definition of
"understanding", certainly.
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

Tournesol said:
The question is whether gross subjective
differences could emerge from slight physical differences.
The question is whether similar substrates necessarily give rise to similar system behaviour. See above for an answer to this question.

Tournesol said:
You keep using "not identical" to mean "radically
different".
You keep assuming that similar substrates necessarily give rise to similar system behaviour. This does not follow.

moving finger said:
If a position can be supported and defended with rational argument (and NOT by resorting solely to “definition” and “popular support”)
Tournesol said:
Your position is based on definitions that DON'T EVEN HAVE popular support!
That is your opinion. I am not claiming that my argument is based on popular support – I am claiming it is based on defensible rationality and logic

Tournesol said:
You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.
I have stated that a balance needs to be drawn. One could choose to “define everything”, assume everything in one’s definitions, and leave nothing to rational argument or empirical investigation (this seems to be your tactic), or one could choose to start with minimal definitions and then use these plus empirical investigation to construct a rational and consistent model of the world.

moving finger said:
I have put forward the “What Mary does not understand about red” thought experiment in defence of my position that experiential knowledge is not necessary for semantic understanding, and so far I am waiting for someone to come up with a statement including the term red which Mary cannot semantically understand. The two statements you have offered so far I have shown can be semantically understood by Mary.
Tournesol said:
According to your tendentious definition of "semantic understanding".
Your opinion again

Tournesol said:
You might be able to give the CR full semantics by closing the explanatory
gap in some unspecified way; but that is speculation.
moving finger said:
What “explanatory gap” is this?
Tournesol said:
Surf "explanatory gap Levine".
Another strawman. How is this relevant to the question of whether semantic understanding is rule-based or not?

MF
 
Last edited:
  • #192
TheStatutoryApe said:
It would seem obvious to me that you are the one failing to read and pay attention. If you refer back to my last post you will see that I have quoted every instance in which I asked a question regarding Tim in order from the very first. I have been worried that perhaps I am not being clear enough in my posts but it seems quite obvious to me that you really aren't paying any attention.
And there it is in post #169, where you said, quite clearly :

TheStatutoryApe said:
Is his experience sufficient for conveying the concept of red do you think?

Which is the question I actually answered, and which is an ambiguous question, as explained already.

Ignore this fact if you wish.

As I have explained many times, the other meaning of the question which is “how would you go about teaching Tim” is a strawman – it is not relevant to the question of whether the CR can possess understanding, thus it does not NEED to be answered in a debate on whether the CR can possess understanding.

Bye

MF
 
Last edited:
  • #193
moving finger said:
But I do not need to show how an agent has “acquired” its understanding in order to claim that it can “possess” understanding.
TheStatutoryApe said:
That's brilliant. I simply need to assert that something is possible without at all indicating how it is possible and it should be accepted as fact.
With respect, your thinking is very irrational here.
“Showing how an agent has acquired its understanding” (what you insist I must do) is NOT synonymous with “showing that an agent understands” (which is what I am claiming can be done) – or perhaps you think the two are synonymous?
TheStatutoryApe said:
I wonder why scientists go through all the trouble then.
Maybe because scientists are (by and large) rational agents – and your above argument is irrational

moving finger said:
When Searle proposed his CR thought experiment, nobody asked him “but HOW would you go about writing the rulebook in the first place?”
TheStatutoryApe said:
Searle may not have gone into detail but he did explain how the rule book functioned with regard to the CR.
Again, “showing how the rulebook was created in the first place” is not synonymous with “how the rulebook functions” (which latter Searle only described in a very cursory and simplistic way). By showing the basic principles of how the rulebook functions, Searle did not show (and did not need to show) how the rulebook was created in the first place.

moving finger said:
I do not need to, because I still do not see the relevance of your argument to the CR question.
TheStatutoryApe said:
If you don't pay attention and simply dismiss this without regard for the implications then you never will see the relevance, will you?
The “implications” as you call them are based on your irrational attempt to equate “showing how an agent has acquired its understanding” with “showing that an agent understands”. I do not accept the two are equated. Your insistence that I must explain how an agent acquires its understanding is thus a strawman.

Bye

MF
 
  • #194
moving finger said:
If Tisthammerw has a different definition of bachelor then it is up to Tisthammerw to decide whether the statement “bachelors are unmarried” is analytic or not according to his definitions of bachelor and unmarried.
Tisthammerw said:
I see, so it is only analytic depending on how one defines the terms, something I have been saying from the beginning
This is exactly what I have been saying all along – see post #107 in this thread :

moving finger said:
Whether “conscious is required for understanding” is either an analytic or a synthetic statement is open to question, and depends on which definition of understanding one accepts.

If you agreed with this why didn’t you just say so at the time, and save us all this trouble?
Tisthammerw said:
would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”?
The above does NOT tell me whether or not the computer is interpreting the data.
It’s your computer and your example – you tell me if it is interpreting or not – it’s not up to me to “guess”.
IF your computer is acquiring, storing, processing and interpreting the data, then by definition it is perceiving. But only YOU can tell me if the computer you have in mind is doing any interpretation – I cannot guess it from your simple description.
Please answer the question – is the computer you have in mind doing any interpretation of the data?

Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?
moving finger said:
For an entity to “sense-perceive” a bright light it must possesses suitable sense receptors which respond to the stimulus of that light.
Whether that entity is necessarily “aware” of that bright light is a different question and it depends on one’s definition of awareness. I am sure that you define awareness as requiring consciousness. Which definition would you like to use?
Tisthammerw said:
It is not a different question it is crucial to the question I asked:
Perception is NOT synonymous with awareness, thus “does the agent perceive?” is a DIFFERENT question to “is the agent aware?”
I cannot answer the question you asked unless you first tell me how you wish to define “aware”.
Please answer the question.
(Note : The definition you gave of consciousness in your last post is NOT a definition of awareness – it USES the concept of awareness as one of the defining characteristics of consciousness, without defining awareness itself)

MF
 
Last edited:
  • #195
quantumcarl said:
A true understanding requires every type of experiential input. If one relies solely on text or binary data or even word of mouth and pictures, there is only enough knowledge in these formats to form a definition of a horse. Not an understanding of a horse.
Thus, following your definition of semantic understanding, at least 95% of the human race has no semantic understanding of the term “horse”.
I see.
And you expect me to agree with this?
quantumcarl said:
"Semantic understanding" is an oxymoron. "Semantic" implies a standard system while "understanding" implies relative individuality and points of view.
“Semantic” relates to the meaning of words as used in a language
“Understanding” relates to the intelligent use of knowledge and information generally, and is not necessarily restricted to languages and words
There are other types of “understanding” apart from “semantic understanding” – thus the phrase “semantic understanding” is not an oxymoron - or perhaps you do not understand this?

quantumcarl said:
However, if the agent has never been in close proximity with a horse, the agent does not understand horses... not how to ride them... not how to care for them... and pretty well everything else about a horse.
This is where you are confusing “understanding” with “semantic understanding” – and why it is important to understand the difference. I can semantically understand what a “Chinese person” is, but I cannot understand that person (in the sense of understanding his language).

quantumcarl said:
"Semantically understand" is an empty and ill-defined phrase. I've already shown it to be null and void. Semantic knowledge is what you are referring to.
And I’ve shown where you misunderstand understanding
Knowledge is a different concept – knowledge forms the basis of understanding, but the fact that an agent possesses knowledge does not imply that the agent also possesses any understanding. Searle's whole CR argument is based on the premise that an agent can possess knowledge of the English language without understanding the semantics of that language.

moving finger said:
Billions of humans share knowledge with each other through the use of binary language - what do you think the internet is? Are you suggesting that the internet does not contribute towards shared understanding?
quantumcarl said:
I haven't suggested this.
Then you won’t have a problem with the idea that we can gain understanding by communicating in binary

quantumcarl said:
Those who read about horses or hear about them have semantic knowledge of horses. They have no experience with horses... they do not understand the actual, true implications of the animal the horse.
Again you are referring to a different type of understanding – not simply semantic understanding of the term “horse”, but instead an intimate, physical, empathic, possibly even emotional and psychic, connection with the agent “horse”.

moving finger said:
I noticed that you chose not to reply to my criticism of your rather quaint phrase "complete understanding". Perhaps because you now see the folly of suggesting that any agent can ever have complete understanding of anything.
quantumcarl said:
In your case I do see the folly.
I’m glad to see that though you lost the argument you nevertheless haven’t lost your wit :biggrin:

MF
 
Last edited:
  • #196
moving finger said:
What “explanatory gap” is this?
Tournesol said:
Surf "explanatory gap Levine".

moving finger said:
Another strawman. How is this relevant to the question of whether semantic understanding is rule-based or not?

a) How can an answer to a question -- your question -- be a strawman argument?

b) If you did the surfing you might be able to figure out for yourself.
 
  • #197
Conscious understanding

Here's an article about a person with global aphasia, a type of brain damage, who processes semantic knowledge without conscious understanding of the act they are performing.

Semantic processing without conscious understanding in a global aphasic: evidence from auditory event-related brain potentials.
Revonsuo A, Laine M.
Academy of Finland.
We report a global aphasic who showed evidence of implicit semantic processing of spoken words. Auditory event-related brain potentials (ERPs) to semantically congruous and incongruous final words in spoken sentences were recorded. 17 elderly adults served as control subjects. Their ERPs were more negative to incongruous than to congruous final words between 300 and 800 ms after stimulus onset (N400), and more positive between 800 and 1500 ms (Late Positivity). The aphasic showed an exactly similar pattern of ERP components as the controls did, but his performance in a task demanding explicit differentiation between semantically congruous and incongruous sentences was at the chance level. During follow-up, his explicit understanding recovered over the chance level but the ERPs remained fairly similar. We conclude that implicit semantic activation at the conceptual level can take place even in the absence of conscious (explicit) comprehension of the meaningfulness of linguistic stimuli.

The accounts in this article demonstrate how semantic processing of information still takes place in the absence of conscious understanding. This narrows the field of the definition of the word and concept "understanding" and closely associates understanding with consciousness.

In fact MF may be right in that "understanding" is a "metaphor" for the state of consciousness. This sort of idea could make "understanding" synonymous with consciousness. And this is what TH and SA have also been proposing as well. Thank you QC.
 
  • #198
MovingFinger said:
You claim that
"understanding requires consciousness" is tantamount to a falsehood
This is a false allegation. If one wishes to claim that "understanding requires consciousness" is a true statement then one must first SHOW that "understanding requires consciousness". This has not been done (except tautologically)

Tautologies are truths.


Yet both claims depend on the definitions of the terms involved.
How many times do I need to repeat it?
Whether a statement is analytic or not depends on the definitions of the terms used. If two agents do not agree on the definitions of the terms used then they may also not agree on whether the statement is analytic. Period.


Whether a statement is true or not depends on the definitions of the terms
involved, because AS YOU YOURSELF ADMIT you cannot empirically test
a statement without first understanding it. Thus both analytical
and synthetic/empirical truth depend on definitions.

Whether a statement is analytically true depends ONLY on the
definitions of the terms involved.


You cannot claim that your approach to
the nature of understanding does not depend on a the way you define
"understanding" and you have not made it clear how your definition is
preferable to the "understanding requires consciousness" definition.
Where did I claim this? All terms used in arguments need to be defined either implicitly or explicitly. I choose to define understanding one way, you choose to define it another. Your choice assumes consciousness and experiential qualities are required, mine does not. Simple as that.

So can I dismiss your claims as "tautologous", as though that meant "false"?


The (unanswered) question is why you are suggesting it.
Because that is my belief, and I believe it because nobody, including Tournesol, has come up with a rational and coherent argument to show experiential knowledge IS necessary for semantic understanding, except by the tautological method of defining semantic understanding such that it requires experiential knowledge “by definition”.

I have appealed to the common observation that people who become personally
acquainted with something understand it better than those who have not.



If someone claimed “a human heart is necessary for semantic understanding”, but they were unable to SHOW this to be the case, then why should I believe them? The same applies to experiential knowledge, and consciousness. The onus is on the person making the claim of “necessity” to show rationally why the necessity follows (without resorting to tautological arguments). In absence of such demonstration there is no reason to believe in necessity.

Show that bachelorhood requires unmarriedness.

I have answered your question of why I believe that experiential knowledge is not necessary for semantic understanding. Can you now answer the question of why you think it IS necessary?

I have argued SMART is not sufficient for semantics and suggested experiential
understanding as one of the things that could ground an abstract system of
rules.



My point is that naturalistically you would expect variations in
conscious experience to be proportionate to variations in the physical
substrate.
I do not see that this follows at all. The genetic differences (the differences in the genes) between humans and chimpanzees are very, very tiny – but the consequences of these tiny differences in genetic makeup are enormous.
One cannot assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.


Physicalism requires one to assume simple and uniform natural laws, in the
absence of specific evidence to the contrary.
One can and should assume that a small difference in physical substrate results in a similarly small difference in the way the system behaves.

IOW, while your red might be a slightly different red to my
red, there is no way it is going to be the same as my green, unless
one of us has some highly unusual neural wiring.
I see no reason why my subjective sensation of red should not be more similar to your subjective sensation of green than it is to your subjective sensation of red. We have absolutely no idea how the precise “qualia” arise in the first place, thus there is no a priori reason to assume that qualia are the same in different agents.


Yes there is: the physicalist assumption of the uniformity of nature.

There is a good aposteriori reason. If consciousness doesn't follow the
same-cause-same-effect rule, it is the only thing that doesn't.
I have never suggested it does. I am simply suggesting that “similar physical substrate” does not necessarily imply “similar details of operating system”.

No they don't necessarily. I am arguing for an initial assumption that can be
overridden by specific data. But above you say we should not even assume
same-cause-same-effect. I sometimes wonder if you know what "necessarily"
means.


I can argue that understanding red as “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm” is a far greater and deeper semantic understanding of red than simply defining it tautologically as “red is the experiential quality associated with seeing a red object”, which latter arguably gives us no “understanding” at all.

I dare say, but that is not what I mean by an experiential understanding of
red: I mean what this [rendered in red in the original post] looks like.

IOW, it is Reference, not Sense.



What I see as green, you may see as red, and another person may see as grey – yet that would not change the “semantic understanding that each of us has of these colours” one iota.

that is a classically anti-physicalist argument.
Call it what you like. The conclusion is true. Or do you perhaps deny this conclusion is true?

You cannot be sceptical about other people's qualia without
being sceptical about everything that makes up a scientific
world-view (for instance whether far-distant galaxies have the
same laws of physics as us). Scepticism is a universal solvent.

I am claiming that two different agents necessarily have two different physical substrates (even if only slightly different), and we have no a priori reason for assuming that a small difference in physical substrate necessarily equates to a small difference in conscious experience.

Yes we do: the a priori assumption of simplicity and universality that I call
"physicalism".

It could be. And it could be that I am a Zombie and I have no understanding at all

It might seem to me that you could be a Zombie. Does it seem remotely possible to you
that you could be a zombie? If not, why not? (Hint: it begins with C...)

Tournesol said:
You need to demonstrate that you can understand "experience" without having
any experience. All your attempts to do so lean on some pre-existing
semantics acquired by interaction with a world.
moving finger said:
Because that is the only way a human has of acquiring information and knowledge in the first place. Humans are born without any semantic understanding – the only way for a human to acquire the information and knowledge needed to develop that understanding is via the conduits of the senses. But this limitation does not necessarily apply to all agents, and it does not follow that direct sense-experience is the only way for all agents to acquire information and knowledge.

But this debate is about whether abstract rules, SMART, are sufficient for
semantics. So agent A acquires information by interacting with
its surroundings. Can you then cut-and-paste the resulting data into
agent B? (In the way you can't cut-and-paste C into FORTRAN or
French into German). And if you succeed in translating the
abstract rules to work in the new agent, are they
working by virtue of SMART, or by virtue of the causal
interactions, the informational inputs and performative
outputs of the total, embodied, system?


Tournesol said:
We might be able to transfer information directly
from one computer to another, or even from one brain to another
anatomically similar one. But how do you propose to get it into the CR?
moving finger said:
The CR already contains information in the form of the rulebook
Tournesol said:
But how do you propose to get it into the CR?
moving finger said:
Ask Searle – it's HIS thought experiment, not mine. Searle has never tackled the problem of explaining how he would “write” the CR rulebook in the first place – but that is not used as an objection to the CR thought-experiment.


If it were, it would just be another argument against AI - a stronger
version of the original CR. If the CR, as technology, fails to work, the
CR, as an argument, succeeds.

In any case, Searle is attacking a particular approach to AI, the top-down
approach, so he would probably say: "Ask Marvin Minsky where to get the
rule-book from".


Tournesol said:
You haven't supplied any other way the CR can acquire semantics.
moving finger said:
I am claiming that semantics is rule-based. Can you show it is not?
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

I have already listed arguments against rule-based semantics, and you didn't
reply. Here is the list again

1) The circularity argument: "gift" means "present", "present" means "gift",
etc.
2) The floogle/blint/zimmoid argument. Whilst small, local variations in
semantics will probably show up as variations in symbol-manipulation, large,
global variations conceivably won't -- whatever variations are entailed by substituting "pigeon" for "strawberry"
are cancelled out by further substitutions. Hence the "global" versus "local"
aspect. Therefore, one cannot safely infer
that one's interlocutor has the same semantics as oneself just on the basis
that they fail to make errors (relative to your semantic model) in respect of symbol-manipulation.
3) The CR argument itself.

And while we are on the subject: being able to present a verbal definition
for one term or another does not mean you can define all terms that
way without incurring circularity.
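Tournesol's circularity point can be made concrete with a small sketch (purely illustrative, not from the thread; the two-word lexicon is a toy): in a lexicon where every word is defined only in terms of other words, following any chain of definitions eventually revisits a word already seen.

```python
# Toy illustration of the circularity objection to purely rule-based
# semantics: every word is defined only in terms of other words.
lexicon = {
    "gift": ["present"],
    "present": ["gift"],
}

def definition_chain(word, lexicon):
    """Follow first definitions until a word repeats; return the chain."""
    chain, seen = [word], {word}
    while True:
        nxt = lexicon[chain[-1]][0]
        chain.append(nxt)
        if nxt in seen:
            return chain  # the chain never "bottoms out" in anything non-verbal
        seen.add(nxt)

print(definition_chain("gift", lexicon))  # ['gift', 'present', 'gift']
```

However large the lexicon, a purely definitional rulebook can only terminate in such a cycle; whether that shows semantics cannot be rule-based is, of course, exactly what is in dispute here.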

Tournesol said:
Nobody but you assumes a priori that semantics is rule-based. You are making
the extraordinary claim, the burden is on you to defend it.
moving finger said:
How do you know that I am the only person who believes semantics is rule-based, or is this simply your opinion again?

You're the only one I've heard of.

And why does this make any difference to the argument anyway?

It means you have the burden of proof since you are making the extraordinary
claim.

moving finger said:
The proof of the pudding is in the eating. Give me any word in the English language and I can give you some of the rules that define my semantic understanding of the meaning of that word.

And while we are on the subject: being able to present a verbal definition
for one term or another does not mean you can define all terms that
way without incurring circularity.

moving finger said:
Can you give me an example of “the meaning of a word in English” which is NOT based on rules?

Yet another point I have already answered.

If semantics is really rule-based, you should be able to
tell me what "plobs" are, based on the following rules:-

If experiential knowledge is unnecessary, you should have no trouble with
"nobbles are made of gulds, plobs and giffles"
"plobs are made of frint"
"giffles are made of vob"
etc, etc.
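The challenge above can also be put in code (an illustrative sketch only; the nonsense terms are Tournesol's): a program can answer relational queries from the rules alone, yet nothing in the rules says what a "plob" actually is.

```python
# The "plobs" rules as pure symbol-manipulation: relational queries are
# answerable, but every chain bottoms out in an ungrounded token.
rules = {
    "nobbles": {"made_of": ["gulds", "plobs", "giffles"]},
    "plobs": {"made_of": ["frint"]},
    "giffles": {"made_of": ["vob"]},
}

def made_of(term):
    """Return what a term is made of, per the rulebook; [] if ungrounded."""
    return rules.get(term, {}).get("made_of", [])

print(made_of("plobs"))  # ['frint']
print(made_of("frint"))  # [] -- the rulebook is silent on what frint is
```

Whether such rule-following amounts to semantic understanding, or merely to the symbol-shuffling of Searle's room, is precisely the point at issue between the two sides.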



Tournesol said:
I'll concede that if you solve the Hard Problem, you might be able to
programme in semantics from scratch. There are a lot of things
you could do if you could solve the HP.
moving finger said:
Please define the Hard Problem.
Tournesol said:
The problem of how brains, as physical things (not abstract SMART systems),
generate conscious experience (as opposed to information processing).
moving finger said:
We do not fully understand the detailed mechanisms underlying the operation of the conscious brain. I do not see how this strawman is relevant to the question of whether semantic understanding is rule-based or not.

It would give you a way of programming in semantics from scratch, as I said.
This doesn't seem important to you, because you think you can
defend SMART-based semantics without specifying where the
rules come from. But you do need to specify
that, because that's the side of the debate you are on.
Searle doesn't.


moving finger said:
Are you suggesting that “ability to understand” can be established any other way than by reasoning and experimental test? If so would you care to explain how you would go about it?

No: you are suggesting that "consciousness is not part of understanding" is
independent of definitions in a way that "consciousness is part of understanding"
is not. To judge that a system understands purely because it passes a TT
leans on an interpretation of "understanding" just as much as the judgement
that it doesn't.


Are you one of them ? Are you going to get off the fence on the issue ?
It seems important to you that you call me either a believer or disbeliever in physicalism. Please define exactly what you mean by physicalism and I might be able to tell you whether I believe in it or not.

Physicalism requires one to assume simple and uniform natural laws, in the
absence of specific evidence to the contrary.

Saying that physically similar brains produce consciousness
in the same way is just a value-neutral statement, like saying that physically
similar kidneys generate urine the same way.
But I have already shown above that your argument is unsound. One cannot assume that small differences in substrate necessarily lead to small differences in system behaviour. To use your terminology - this is not anti-physicalist, it is purely physicalist.

One should assume it -- but revisably, not "necessarily".


You have not shown it to be true apart from your
tendentious definition of "semantic meaning".
Whether my definition is tendentious or not is a matter of opinion.
You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

So perhaps we could advance the argument by seeing which definition is correct
-- which does mean popular support, since meanings do not work like
facts.

Ask a blind person what red looks like.
He/she has no idea what red looks like, but it does not follow from this that he does not have semantic understanding of the term red,
it obviously follows that they do not have the same semantic understanding as
someone who actually does know what red looks like. You keep trying
to pass off "some understanding" as "full understanding".
Would you care to define what you mean by “full understanding”?

eg. including what Mary learns when she leaves her prison.

Why is understanding red to be “my subjective experiential quality of seeing a red object” necessarily a more full understanding of red than understanding red to be “the sense-experience created within an agent which is endowed with visual-sensing apparatus when it perceives electromagnetic radiation with wavelengths of the order of 650nm”?

The one cannot be inferred from the other; it is therefore new information, in
the strictest sense of "information".


You have not shown experiential qualities to be necessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

You have not shown experiential qualities to be unnecessary for semantic understanding, except via your opinion of the definition of "semantic meaning"

You keep using "not identical" to mean "radically
different".
You keep assuming that similar substrates necessarily give rise to similar system behaviour. This does not follow.

It is not necessarily true; it is a required assumption of physicalism, but
only as a revisable working hypothesis.

Your position is based on definitions that DON'T EVEN HAVE popular support!
That is your opinion. I am not claiming that my argument is based on popular support – I am claiming it is based on defensible rationality and logic.

It is partly based on definitions, and any argument based on definitions
should have popular support. Definitional correctness -- unlike factual
accuracy -- is based on convention.

You have already conceded that the question of definitions cannot be
short-circuited by empirical investigation.
I have stated that a balance needs to be drawn. One could choose to “define everything”, assume everything in one’s definitions, and leave nothing to rational argument or empirical investigation (this seems to be your tactic),

False. To appeal to one particular analytical argument ("Fred is unmarried, so he is a
bachelor" -- "prove it!") is not to assume that all truths can be established
analytically.
 
  • #199
moving finger said:
This is exactly what I have been saying all along – see post #107 in this thread :

In post #107 you said this:

moving finger said:
it follows that the statement “understanding requires consciousness” is not analytic after all

Which is false, or at least not entirely true. It is analytic given my definitions. Of course, the statement is not necessarily analytic for other definitions. Whether or not the statement “understanding requires consciousness” is analytic cannot be answered yes or no until the terms are defined.


If you agreed with this, why didn’t you just say so at the time, and save us all this trouble?

You're acting as if I didn't say that whether or not a statement is analytic depends on the definitions. Please read post #128 where I list some quotes from this thread. Why didn't you pay attention to what I said, and save us all this trouble?


Tisthammerw said:
Under your definition, would a computer that acquired visual data (via a camera) store it in its databanks, process it by circling any blue squares in the picture be considered “perceiving” even though the process is automated and does not include consciousness (as I have defined the term)?

The above does NOT tell me whether or not the computer is interpreting the data.

What more information about the computer do you require?

It’s your computer and your example – you tell me if it is interpreting or not – it’s not up to me to “guess”.

I’m not asking you to guess, simply tell me whether the computer process as described here fits your definition of “interpretation.” If you need more information (e.g. the processing speed of the computer), feel free to ask questions.


Please answer the question – is the computer you have in mind doing any interpretation of the data?

I do not know if the computer process fits your definition of interpretation, because you have not defined the word. I described this scenario in part because I wanted to know what this definition of yours was.


Tisthammerw said:
Let me rephrase: under your definition of the term “perceive,” can an entity “perceive” an intensely bright light without being aware of it through the senses?

moving finger said:
Perception is NOT synonymous with awareness, thus “does the agent perceive?” is a DIFFERENT question to “is the agent aware?”

So is that a yes to my question?


I cannot answer the question you asked unless you first tell me how you wish to define “aware”.
Please answer the question.

I already referred to you the definition I’m using in my last post (post #186).


(Note: The definition you gave of consciousness in your last post is NOT a definition of awareness – it USES the concept of awareness as one of the defining characteristics of consciousness, without defining awareness itself)

I'll bold the part you apparently missed:

Tisthammerw said:
My definition of consciousness is the same as I defined it earlier.

  • Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.

Thus my definition of consciousness is such that if an entity possesses awareness, the entity possesses consciousness.

For those who wish further precision, these individual characteristics could also be defined. “Perception” is the state of being able to perceive. Using the Eleventh edition of Merriam Webster’s dictionary, I refer to “perceive” (definitions 1a and 2), “sensation” (1b), “thought” (1a), and “awareness” (2). These definitions are also available at http://www.m-w.com/

(Note: I corrected a typo in the quote.)
 
  • #200
moving finger said:
Thus, following your definition of semantic understanding, at least 95% of the human race has no semantic understanding of the term “horse”.
I see.
And you expect me to agree with this?

No, I don't expect you to agree with anything.

But I will point out that the phrase "semantic understanding" is never used in any well-established profession. If you have an example where it is, I would like you to direct our attention to its use and the context in which it is used.

I am convinced that what you are describing when you use the words "semantic understanding" you are actually describing "the possession of semantic knowledge"... not understanding.

As I have pointed out several times and as you have agreed with me ... (and as the latest article I posted points out)... it is entirely possible to possess semantic knowledge of a subject without UNDERSTANDING it.

For instance, I have semantically processed some information about horses to do with their digestive system... but I do not understand the full process or implications of their digestion. I only have knowledge of the process, not understanding. I only have words and diagrams that describe the process; I have the knowledge... not the understanding.

I will never understand this information until I have shoveled road apples, slept with the horse in the woods, kept the horse moving for fear of freezing, or discovered the proper vegetation that will help it digest the badger it accidentally ate on the way back to the barn.

Medical students carry reams of semantic knowledge they processed during school, but there is not one of them who understands the implications of that semantic book knowledge, the lab demos, or the videos.

That is why it is mandatory that they perform a practicum for years before understanding even a very small amount of the duties of being a doctor. The practicum allows their consciousness to be "standing under" the problems and solutions employed in the many and various medical professions and situations.

You, sir, have missed the mark in your bid to dilute and discredit the true meaning of understanding.

If you wish to assign the neuro-biological trait of "understanding" to a machine built by humans... then the machine will have to be one that has evolved to such complexity, over millions of years, that it may as well be, and no doubt will become, a biological unit itself.
 