Can computers understand? Can understanding be simulated by computers?

  • Thread starter quantumcarl
  • Tags
    China
In summary, the conversation discusses the "Chinese Room" thought experiment and its implications for human understanding and artificial intelligence. John Searle, an American philosopher, argues that computers can only mimic understanding, while others argue that understanding is an emergent property of a system. The conversation also touches on the idea of conscious understanding and the potential of genetic algorithms in solving complex problems.
  • #106
Tournesol said:
In the absence of an objective explanation there is no objective way of
testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.
This subjective “test” that you suggest only allows the subject to determine whether “itself” is conscious. It says nothing about the consciousness of anything else.
Tournesol said:
That consciousness is part of understanding is established by the definitions of the words and the way language is used.
As I said, this is a circular argument.
“consciousness is required for understanding” is a proposition (or statement), which can be either asserted or denied.
Whether “consciousness is required for understanding” is either an analytic or a synthetic statement is open to question, and depends on which definition of understanding one accepts.
According to MF’s definition of understanding, the statement “consciousness is required for understanding” is clearly synthetic.
To simply assert that the statement “consciousness is required for understanding” is true, and then to use this as a premise in an argument which concludes “understanding is not possible without consciousness”, results in a circular argument.
A circular argument shows nothing except that “whatever we assert to be true is true”.
If the conclusion of the argument is already contained within the premises of the argument then the argument is fallacious. You may not like the idea, but that is accepted logic.

The basic problem is that to engage in any rational debate about anything, we need a common language. We clearly do not have a common language, since "understanding" does not mean the same thing to you as it does to me.

An example. If the term "person" means "human being" to you, but to me "person" means "humanoid", then the statement "all persons are examples of the species Homo sapiens" would be an analytic statement to you, but NOT to me.

Until we can agree on the language we are using, we will continue to disagree whether the statement "understanding requires consciousness" is analytic or not.

Tournesol said:
As I have stated several times, the intelligence of an artificial intelligence
needs to be pinned to human intelligence (albeit not in a way that makes it
trivially impossible) in order to make the claim of "artificiality"
intelligible. Otherwise, the computer is just doing something -- something
that might as well be called information-processing, or symbol manipulation.
Imho this is just what the human brain does – information-processing, or symbol manipulation.
Tournesol said:
No-one can doubt that computers can do those things, and Searle doesn't
either. Detaching the intelligence of the CR from human intelligence does
nothing to counteract the argument of the CR; in fact it is suicidal to the
strong AI case.
Where has anyone suggested “detaching the intelligence of the CR from human intelligence”? (whatever this might mean)
Tournesol said:
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
Consciousness is not a definitional quality of understanding in my definition.
Tournesol said:
Is "bachelors are unmarried because bachelors are unmarried" viciously
circular too ? Or is it -- as every logician everywhere maintains -- a
necessary, analytical truth ?
The difference between an analytic statement and a synthetic one is that the former are true “by definition”, therefore to claim that something is an “analytical truth” is a non-sequitur. Analytic statements are essentially uninformative tautologies.
However, whether the statement “consciousness is necessary for understanding” is analytic or synthetic is open to debate. In my world (where I define understanding such that consciousness is not necessary for understanding), it is synthetic.
I guess that Tournesol would claim the statement “all unicorns eat meat” is synthetic, and not analytic?
But if I now declare “I define a unicorn as a carnivorous animal”, then (using your reasoning) I can claim the statement is now analytic, not synthetic.
According to your reasoning, I can now argue “all unicorns eat meat because I define a unicorn as a carnivorous animal”, and this argument is a sound argument?
This is precisely what the argument “understanding requires consciousness because I define understanding as requiring consciousness” boils down to.
Tournesol said:
If you understand something, you can report that you know it, explain how you know it, etc.
Not necessarily. The ability “to report” requires more than just “understanding Chinese”.
Tournesol said:
That higher-level knowing-how-you-know is consciousness by definition.
I suppose this is your definition of consciousness? Is this an analytic statement again?
Tournesol said:
The question is whether syntax is sufficient for semantics.
I’m glad that you brought us back to the Searle CR argument again, because I see no evidence that the CR does not understand semantics.
Tournesol said:
If it is a necessary but insufficient criterion for consciousness, and the CR doesn't have it, the CR doesn't have consciousness.
Not very relevant, since I am not claiming the CR does have consciousness.
Tournesol said:
How else would you do it? Test for understanding without knowing what
"understanding" means? Beg the question in the other direction by
re-defining "understanding" to not require consciousness?
Are you suggesting the “correct” way to establish whether “understanding requires consciousness” is “by definition”?
The correct way to do it is NOT by definition at all. All this achieves is the equivalent of the ancient Greeks deciding how many teeth in a horse’s mouth “by debate” instead of by experiment.
In simple summary, here is the correct way:
Hypothesis : Understanding requires consciousness
Develop the hypothesis further – what predictions would this hypothesis make that could be tested experimentally?
Then carry out experimental tests of the hypothesis (try to falsify it)
Tournesol said:
I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.
“Experiential quality” is not “understanding”.
I do not need the “sensory experiential quality” of red to understand red, any more than I need the “sensory experiential quality” of x-rays to understand x-rays, or the “sensory experiential quality” of flying to understand aerodynamics.
Tournesol said:
You seem to be saying that
non-experiential knowledge ("red light has a wavelength of 500nm") *is*
understanding, and all there is to understanding, and experience is
something extraneous that does not belong to understanding at all
(in contradiction to the conclusion of "What Mary Knew").
The conclusion to “What Mary Knew” is disputed.
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
Perhaps reasonable to you, not to me.
moving finger said:
I disagree. I do not need to have the power of flight to understand aerodynamics.
Tournesol said:
To theoretically understand it.
Sense-experience is not understanding.
moving finger said:
Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”).
Tournesol said:
How remarkably convenient. Tell me, is that true analytically, by definition,
or is it an observed fact?
Yes, isn’t it convenient? Just as convenient as concluding that “understanding requires consciousness because I define consciousness as necessary for understanding”?
Shall we start debating again whether particular statements are analytic or synthetic?
Tournesol said:
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but
more so.
Information and knowledge are required for understanding, not senses.
Tournesol said:
they understand just what Mary doesn't: what it looks like.
“what it looks like” is sense-experience, it is not understanding.
Tournesol said:
However, I do not need to argue that non-experiential knowledge is not knowledge.
Why not - is this perhaps yet another analytic statement?
Tournesol said:
However, if you can do both you clearly have more understanding than someone
who can only do one or the other or neither.
It’s not at all “clear” to me – or perhaps you also “define” understanding as “requiring sense-experience”? Analytic again?
Tournesol said:
(Would you want to fly in a plane piloted by someone who had never been in the
air before ?)
The question is irrelevant – because “ability to fly a plane” is not synonymous with “understanding flight”.
Are you saying you only put your trust in the pilot because he “understands”?
If the same plane is now put onto autopilot, would you suddenly want to bail out with a parachute because (in your definition) machines “do not possess understanding”?
Tournesol said:
They don't know what Mary doesn't know.
We are talking about “understanding”, not simply an experiential quality.
What is it that you think Mary “understands” once she has “experienced seeing red” that she necessarily did NOT understand before she had “experienced seeing red”?
(remember – by definition Mary already “knows all there is to know about the colour red”, and sense-experience is sense-experience, it is not understanding)
Tournesol said:
I claim that experience is necessary for a *full* understanding of *sensory*
language, and that an entity without sensory experience therefore lacks full
semantics.
And I claim they are not. The senses are merely “possible conduits” of information.
There is no reason why all of the information required to “understand red”, or to “understand a concept” cannot be encoded directly into the computer (or CR) as part of its initial program. In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.
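To illustrate the idea, here is a minimal sketch in code (the facts and relation names are hypothetical placeholders, and whether such pre-loaded look-up amounts to understanding is of course exactly the point in dispute): an agent whose entire stock of information about "red" is supplied as part of its initial program, with no sense-receptors involved.

```python
# Minimal sketch (illustrative only): all of the agent's information about "red"
# is pre-programmed; nothing is acquired through sense-receptors.
# The facts and relation names below are hypothetical placeholders.

KNOWLEDGE = {
    ("red", "is_a"): "colour",
    ("red", "approx_wavelength_nm"): "620-750",
    ("red", "typical_example"): "ripe tomato",
    ("red", "complementary_colour"): "green",
}

def answer(subject: str, relation: str) -> str:
    """Answer a query purely from pre-programmed information."""
    return KNOWLEDGE.get((subject, relation), "no information held")

print(answer("red", "approx_wavelength_nm"))  # 620-750
print(answer("red", "typical_example"))       # ripe tomato
```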
Tournesol said:
If you are going to counter this claim as stated, you need to rise to the
challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red".
My claim (and that of strong AI) is that it is simply information, and not necessarily direct access to information from sense-receptors, that is required for understanding. Senses in humans are a means of conveying information – but that is all they are. This need not be the case in all possible agents, and is not the case in the CR. If we could “program the human brain” with the same information then it would have the same understanding, in the absence of any sense-receptors.
with respect
MF
 
  • #107
Tisthammerw said:
Well, yes. I have said many times that the answer to the question depends on the definition of “understanding” and “consciousness” used.
It follows that the statement “understanding requires consciousness” is not analytic after all; it is synthetic (because it is not accepted that the statement and premise “understanding requires consciousness” is necessarily true).
“Consciousness is required for understanding” is a proposition (or statement), which can be either asserted or denied.
Whether “consciousness is required for understanding” is either an analytic or a synthetic statement is open to question, and depends on which definition of understanding one accepts.
According to MF’s definition of understanding, the statement “consciousness is required for understanding” is clearly synthetic.
To simply assert that the statement “consciousness is required for understanding” is true, and then to use this as a premise in an argument which concludes “understanding is not possible without consciousness”, results in a circular argument.
A circular argument shows nothing except that “whatever we assert to be true is true”.
If the conclusion of the argument is already contained within the premises of the argument then the argument is fallacious. You may not like the idea, but that is accepted logic.
The basic problem is that to engage in any rational debate about anything, we need a common language. You and I clearly do not have a common language, since "understanding" does not mean the same thing to you as it does to me.
An example. If the term "person" means "human being" to you, but to me "person" means "humanoid", then the statement "all persons are examples of the species Homo sapiens" would be an analytic statement to you, but NOT to me.
Until we can agree on the language we are using, we will continue to disagree whether the statement "understanding requires consciousness" is analytic or not.
The question “does understanding require consciousness?” cannot be resolved by debate alone.
moving finger said:
I can make the statement “the moon is made of cheese”. Is that statement true or false? How would we know?
Tisthammerw said:
I’m not sure what relevance this has, but I would say the statement is false. We can “know” it is false by sending astronauts up there.
Excellent! I agree 10000%
To continue the analogy – I hope you are not suggesting that the “correct” way to establish whether “understanding requires consciousness” is “by definition”?
The correct way to do it is not by definition at all. All this achieves is the equivalent of the ancient Greeks deciding how many teeth in a horse’s mouth “by debate” instead of by experiment.
The correct way (as you pointed out):
Hypothesis : Understanding requires consciousness
Develop the hypothesis further – what predictions would this hypothesis make that could be tested experimentally?
Then carry out experimental tests of the hypothesis (try to falsify it)
Tisthammerw said:
I’m not really sure you can call it fallacious, because in this case “the moon is made of cheese” is an analytic statement due to the rather bizarre definition of “cheese” in this case. By your logic any justification for the analytic statement “all bachelors are unmarried” is fallacious.
Again, I have not said that any statement is fallacious.
Statements are either true or false.
If a statement is true by definition (an analytic statement) then it becomes essentially an uninformative tautology. A tautology is necessarily true, but that tells us nothing useful.
Tisthammerw said:
Let me try this again. “This is what I mean by ‘understanding’…” is this premise true or false? Obviously it is true, because that is what I mean when I use the term. You yourself may use the word “understanding” in a different sense, but that has no bearing on the veracity of the premise, because what I mean when I use the term hasn’t changed.
It is not what I mean when I use the term, therefore (to me) the premise is false.
Tisthammerw said:
I’m not saying that drawing a conclusion from premises is strange, I’m saying it is strange to call logically sound arguments that demonstrate a statement to be analytic fallacious.
A tautological or circular argument is logically valid, but it nevertheless is a fallacious argument. Why? Because there is no way to know whether a circular argument is sound or not, because the truth of the conclusion is contained in the assumed veracity of the premises.
You may think this is strange, but it is accepted in logic.

I dispute that the conclusion "consciousness is necessary for understanding" is sound because it is not obvious to me that your premise “understanding requires consciousness” is necessarily true. You counter by saying “I DEFINE understanding as requiring consciousness” – does this now make an unsound argument sound? Of course it doesn’t. It simply makes it circular. And circular arguments are fallacious.
Tisthammerw said:
Again, I find it very strange that you call a logically valid argument fallacious.
A tautological or circular argument is logically valid, but its soundness is implicitly assumed in the veracity of the premises – hence by virtue of being circular it is a fallacious argument. You may think this is strange, but it is accepted in logic.
Tisthammerw said:
But let’s trim the fat here regarding this particular sort of circularity claim. In terms of justifying that a statement is analytic (by showing that the statement necessarily follows from the definitions of the terms), I deny that it is fallacious.
I never said any statement was fallacious.
To me, the statement “understanding requires consciousness” is synthetic, not analytic.
Tisthammerw said:
If it were, all justifications for analytical statements would fail (as would most of mathematics).
Justifications for analytic statements require a common language. You and I dispute the meaning and definition of understanding. That is the root problem.
Tisthammerw said:
And in any case this is beside the point, since we already agree that the statement “understanding requires consciousness” is analytic.
Is it? It comes again to having a “common language”. We do not share a common language because “understanding” does not mean to you what it means to me. Thus what is analytic to you is not necessarily so to me, and vice versa.
Tisthammerw said:
since we already agree that “understanding requires consciousness” is analytical, I suggest we simply move on.
We don’t. It may be analytic to you, not to me.
Tisthammerw said:
Usually, circular arguments are fallacious and I recognize that. So you don’t need to preach to the choir regarding that point.
But circular arguments are logically valid. The conclusion does indeed follow from the premises. I thought you found it strange that a logically valid argument could be fallacious? And now you are agreeing with me that circular arguments are fallacious?
Take the example
• Suppose Paul is not lying when he speaks.
• Paul is speaking.
• Therefore, Paul is telling the truth.
Is this, or is it not, a circular argument? It is perfectly valid (the conclusion follows from the premises), but the veracity of the conclusion “Paul is telling the truth” depends on the veracity of the premise “suppose Paul is not lying when he speaks”. If I dispute the premise, the argument is unsound.
Because the argument is circular, the veracity of the conclusion is already assumed in the assumed premise, therefore it is fallacious.
Now replace the premise “suppose Paul is not lying when he speaks” with the premise “suppose understanding requires consciousness”
And replace “Paul is speaking” with “Paul is not conscious”
And replace the conclusion “Therefore, Paul is telling the truth” with the conclusion “Therefore, Paul does not understand”
The entire argument is now :
• Suppose understanding requires consciousness
• Paul is not conscious
• Therefore, Paul does not understand
Which is still a circular argument (you have admitted yourself that your argument is circular!) and it is by definition fallacious.
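Written out in bare logical form (a paraphrase of the substitution above, with the predicates made explicit):

$$
\begin{aligned}
P_1 &: \forall x\,\big(\mathrm{Understands}(x) \rightarrow \mathrm{Conscious}(x)\big)\\
P_2 &: \neg\,\mathrm{Conscious}(\mathrm{Paul})\\
\therefore\;\; & \neg\,\mathrm{Understands}(\mathrm{Paul})
\end{aligned}
$$

The step from P1 and P2 to the conclusion is formally valid, so everything hangs on P1; if P1 is granted only as a stipulated definition, the conclusion adds nothing beyond that stipulation, which is the circularity objected to above.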
moving finger said:
What part of “I disagree with your premise” is unclear?
Tisthammerw said:
It’s unclear how that has any bearing on the matter at hand (which I have pointed out many times).
It’s unclear how “I disagree with your premise” has any bearing?
It’s quite simple. If the premises are not true, the conclusion is not necessarily true, and the argument is then unsound.
Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean?
I have read your statement very carefully, and No, I do not agree.
We have not been debating here about whether “computers can understand” per se. We have been debating whether a non-conscious agent can understand. Your argument thus far has not been that “computers cannot understand” it has been “non-conscious agents cannot understand”. You have not shown that all computer agents are necessarily non-conscious.
Tisthammerw said:
I never claimed otherwise, but that still doesn’t answer my question. What do you mean when you use the term “perceive”?
As simply as possible : “To perceive” is to acquire, process, interpret, select, and organise information as part of a knowledge-base.
Tisthammerw said:
But getting to the more relevant point at hand, let’s revisit my question:
Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.
No, I do not agree.
We have not been debating here about whether “computers can understand” per se. We have been debating whether a non-conscious agent can understand. Your argument thus far has not been that “computers cannot understand” it has been “non-conscious agents cannot understand”. You have not shown that all computer agents are necessarily non-conscious.
MF
 
  • #108
TheStatutoryApe said:
I believe that all information, at its root, is syntactic. To believe otherwise would require that we believe information has some sort of platonic essence which endows it with meaning.
I believe that semantic meaning emerges from the aggregate syntactic context of information. The smallest bit of information has no meaning in and of itself except in relation to other bits of information. This relationship then, lacking semantic meaning on either part, is necessarily a syntactic pattern. Once an entity has acquired a large enough amount of syntactic information and stored it in context it has developed experience, or a knowledge base. When the entity can abstract or learn from these patterns of information it will find significance in the patterns. This significance equates to the "semantic meaning" of the information so far as I believe.
I broadly agree, but in the case of syntax I would put it the other way about.
In my view, at the core is “information”. The information “in and by itself” contains no syntax or semantics. Syntax (to me) is a property which arises or emerges from the subsequent manipulation of the information, the application of rules of correlation between the information (the rules are in themselves also information) and the relationships between one piece of information and another.
It seems to me that semantics is also emergent as just a higher level of the same type of information processing - manipulation of the information, the application of rules of correlation between the information, and the relationships between one piece of information and another.
I guess we are fairly close in our concepts.
moving finger said:
We seem to agree on most of this. However I would not say that the “acquisition of” information is necessary to understanding, rather I would say that the “possession of” information is necessary to understanding.
TheStatutoryApe said:
It seems that I just prefer more "active" words.
To me they just seem to fit better.
The reason I prefer “possession” is that I can envisage in principle an agent which understands (pre-programmed information) but has no sense-receptors (ie cannot acquire new information). This understanding requires possession of information in the first place, but not necessarily the ability to acquire new information.
moving finger said:
Do you believe the CR understands the meanings of the words it is using?
TheStatutoryApe said:
No.
Interesting. I do.
moving finger said:
Do you believe that the words mean to the CR what they mean to people who speak/read Chinese?
TheStatutoryApe said:
No.
I tend to agree with you here.
moving finger said:
Do you believe the CR can hold a coherent and intelligent conversation in Chinese?
TheStatutoryApe said:
Yes, but only given the leeway of it being a hypothetical situation.
OK
TheStatutoryApe said:
I've been trying to cite Searle's argument and make my argument at the same time. Perhaps I haven't maintained a proper division between what is his argument and what is mine, but it seems you often think I agree with Searle when I do not.
Yes, I did tend to think that. I apologise if I was mistaken.
TheStatutoryApe said:
So far there have been several ideas, such as giving the man in the room access to sensory information via camera or allowing the man to be the entire system rather than just the processing chip. Every idea for altering the room, though, is constructed by Searle, or Tisthammerw, in a way that does not reflect reality properly and sets the man in the room up to fail. I know that you believe that the CR does in some sense possess understanding, which I do not agree with, but I will have to come back to this to discuss that later.
OK, I look forward to that, and also to finding out why you believe that the CR does not possess understanding.
One final question, if you have any time:
Do you believe there is a fundamental difference between a “process” and a “perfect simulation of that process”?
(Note I am not talking here about simulating objects – but about simulating processes)
Searle’s argument seems to base itself on the assertion that a “simulated process” is just that – a simulated process – and somehow differs from the “real process”. I dispute that assertion. But I’m interested to know what you think.
MF
 
  • #109
moving finger said:
Tisthammerw said:
Well, yes. I have said many times that the answer to the question depends on the definition of “understanding” and “consciousness” used.

it follows that the statement “understanding requires consciousness” is not analytic after all

Whether the statement is analytic depends on the definitions used. If my definitions of the terms are used, then the statement “understanding requires consciousness” is analytic.



According to MF’s definition of understanding, the statement “consciousness is required for understanding” is clearly synthetic.

I'm not sure how something like that could be determined by observation, but I suppose that might depend on how you define those terms.


If the conclusion of the argument is already contained within the premises of the argument then the argument is fallacious.

...

The basic problem is that to engage in any rational debate about anything, we need a common language. You and I clearly do not have a common language, since "understanding" does not mean the same thing to you as it does to me.

...

But circular arguments are logically valid. The conclusion does indeed follow from the premises. I thought you found it strange that a logically valid argument could be fallacious? And now you are agreeing with me that circular arguments are fallacious?
Take the example
...

You seem to be repeating yourself somewhat. I have responded to this sort of thing in the latter half of post #239 in the other thread.


Tisthammerw said:
I’m not sure what relevance this has, but I would say the statement is false. We can “know” it is false by sending astronauts up there.

Excellent! I agree 10000%
To continue the analogy – I hope you are not suggesting that the “correct” way to establish whether “understanding requires consciousness” is “by definition”?

If the kind of understanding we are talking about is TH-Understanding, then we do not agree; since in this case it can be shown that “understanding requires consciousness” is an analytic statement.


The correct way to do it is not by definition at all. All this achieves is the equivalent of the ancient Greeks deciding how many teeth in a horse’s mouth “by debate” instead of by experiment.

The Greeks were also responsible for various advances in mathematics, which is all done by definition of terms. Some things can be demonstrated via definition (e.g. “all bachelors are unmarried” and “2 + 2 = 4”), others (e.g. how many teeth are inside a horse’s mouth) cannot.



Tisthammerw said:
I’m not really sure you can call it fallacious, because in this case “the moon is made of cheese” is an analytic statement due to the rather bizarre definition of “cheese” in this case. By your logic any justification for the analytic statement “all bachelors are unmarried” is fallacious.

Again, I have not said that any statement is fallacious.

I was referring to the argument used to justify that the statement is analytic.


Tisthammerw said:
Let me try this again. “This is what I mean by ‘understanding’…” is this premise true or false? Obviously it is true, because that is what I mean when I use the term. You yourself may use the word “understanding” in a different sense, but that has no bearing on the veracity of the premise, because what I mean when I use the term hasn’t changed.

It is not what I mean when I use the term, therefore (to me) the premise is false.

No, the premise is true, because the premise is not what you mean when you use the term, it is what I mean when I use the term. My definition of understanding is what you have called “TH-understanding.” Therefore, my conclusion could be rephrased as “consciousness is required for TH-understanding.” I have said time and time again that the statement “understanding requires consciousness” is analytic for my definitions; not necessarily yours.

The premise “This is what I mean by understanding…” is true because it is what I mean by understanding. See #239 in the other thread for more info on this.


Tisthammerw said:
But let’s trim the fat here regarding this particular sort of circularity claim. In terms of justifying that a statement is analytic (by showing that the statement necessarily follows from the definitions of the terms), I deny that it is fallacious.

I never said any statement was fallacious.

Again, I am referring to the argument used to justify that a statement is analytic.


Note on below: I put my quote into full context.


Tisthammerw said:
It’s unclear how that has any bearing on the matter at hand (which I have pointed out many times). I understand that you don’t agree with my definition of “understanding” in that you mean something different when you use the term. But that is irrelevant to the matter at hand. I’m not saying computers can’t understand in your definition of the term, I’m talking about mine. Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.

It’s unclear how “I disagree with your premise” has any bearing?

On the matter the quote is referring to, yes. It is very unclear.

It’s quite simple. If the premises are not true, the conclusion is not necessarily true, and the argument is then unsound.

Ah, so you “disagree” with the premise in that you believe it to be false. But you have not shown that “TH-understanding requires consciousness” is not an analytic statement, whereas I have shown the opposite. And “I don’t mean the same thing you do when I say ‘understanding’” is not at all relevant regarding if computers have the kind of understanding that I mean when I use the term.


Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean?

I have read your statement very carefully, and No, I do not agree.

So you don’t agree that computers (given the model I described) cannot have TH-understanding? Well then, please look at post #102 of this thread where I justify my claim that computers (of this sort) cannot have TH-understanding. Now we can finally get to the matter at hand.


As simply as possible : “To perceive” is to acquire, process, interpret, select, and organise information as part of a knowledge-base.

It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to
http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=perceive).


Your argument thus far has not been that “computers cannot understand”

Please see post #102
 
  • #110
I only read the first few posts in this thread so please forgive me if someone else has already expressed this idea.

Turing says that whatever it is in the room, it thinks. Searle says no. Neither gives a reason why. End of interesting argument.
 
  • #111
jimmysnyder said:
I only read the first few posts in this thread so please forgive me if someone else has already expressed this idea.
Turing says that whatever it is in the room, it thinks. Searle says no.

No, not quite.

Turing says that if something (e.g. a program) can simulate understanding (e.g. a conversation) it necessarily understands. Searle says no, and gives a counterexample called the Chinese room thought experiment.

The original Chinese room thought experiment was in response to something similar to the Turing test. In the scenario Searle addressed, imagine a computer program that is fed a story (e.g. about a man buying pie) and questions regarding the story (e.g. what pie did the man buy?) to which the program gives answers.

The Chinese room thought experiment goes as follows. Suppose we have a man in a room whose only language is English. Near him are stacks of paper written in Chinese. He can recognize and distinguish Chinese characters, but he cannot discern their meaning. He has a rulebook written in English containing a complex set of instructions of what to write given the Chinese characters written on the slips of paper. When he looks at the slips of paper, he writes down another set of Chinese characters according to the rules in the rulebook. Unbeknownst to the man in the room, one stack of papers contains stories written in Chinese, another stack actually contains questions, and the man is writing back answers. The rulebook (the “program”) works well enough so that, from the point of view of a person outside the room, the answers are indistinguishable from those of a native speaker of Chinese.

Variants of this thought experiment could be made, including a kind of Turing test for understanding Chinese. One slips questions written in Chinese under the door, and the man writes back answers using the complex set of instructions contained in the rulebook. Again, the responses are indistinguishable from those of a native speaker of Chinese. Yet it would seem that the man inside the room does not understand, nor does the book, nor the slips of paper. Thus, the thought experiment has been used as a counterexample to the claim that passing the Turing test is sufficient for literal understanding.
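As a rough sketch of the set-up (my illustration, not Searle's own formalism), in the simplest case the rulebook can be modelled as a bare mapping from input symbol strings to output symbol strings; following it never requires knowing what the symbols mean. The entries below are hypothetical placeholders.

```python
# Rough sketch of the rulebook as pure symbol manipulation (illustrative only).
# The entries are hypothetical placeholders, not a real rulebook.

RULEBOOK = {
    "他买了什么馅饼?": "苹果馅饼。",   # "What pie did the man buy?" -> "Apple pie."
    "你好吗?": "我很好。",             # "How are you?" -> "I am fine."
}

def follow_rulebook(symbols: str) -> str:
    """Return whatever output string the rulebook pairs with the input string."""
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback reply, also just symbols

print(follow_rulebook("他买了什么馅饼?"))  # prints: 苹果馅饼。
```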
 
  • #112
moving finger said:
I agreed that understanding is not a prerequisite for translation, but I suggested that to perform an accurate translation of one complex language into another it helps to understand both languages. Do you dispute it?

I have refined my position somewhat during the course of the discussion.
I dispute the use of the word "understand" in the context of comprehending languages. I know "everyone" uses the word to describe a knowledge of the grammatical, linguistic and vocabulary components of a language but, I believe, the word has a rarer and less readily available definition... one that it was intended for during its first use... some millennia ago.

One does not, by my definition, understand a language except by how it feels to speak it and the cultural differences it has to offer one's senses.
One cannot "understand" a language, by my definition, yet one can comprehend what a language is conveying. By my definition, understanding a language would be like trying to understand a grain of sand... impossible... one could understand the processes involved in creating a grain of sand, one can understand what a grain of sand feels like in the butt crack... and one could understand what a grain of sand is made of... etc... but, one cannot understand a grain of sand. Just as one cannot "understand" a language.

I hope this demonstrates to you why I deem the use of the term "understanding" in the context of language comprehension and in the Chinese Room thought experiment incorrect and out of context.



moving finger said:
There is no evidence that empathy is necessarily required for understanding to take place.

If you meant that you, personally, have no necessity for empathy to arrive at an understanding of an issue, I'd question what you mean by "an understanding". This is because, by my definition of the word understanding (which is a compound word composed of "under" and "standing"), empathy and understanding are close to being synonyms of one another.

m f said:
Do you disagree with the statement “There is no microscopic part of the brain which "understands" what it is doing”?

Until I am a microscopic part of a brain I will not be able to answer that.

As I said earlier, neurons behave in ways that conventional cells don't. I may go as far as to say that a cell is capable of understanding because a cell is as evolved as the rest of an organism. So, in that case, it is possible that a single cell (plant or animal) possesses a conscious understanding of its existence and its function.


moving finger said:
“guilty by association”? What is that supposed to mean?

I have used a word out of context and it has rendered my statement invalid, much in the way the CR experiment has used non-contextual terminology, rendering it invalid.





moving finger said:
Do you have an answer for the question?

Here you are asking me what the Chinese Room is translating.

It is translating nothing. The man in the CR is translating, by definition, the calligraphy that is passed to him. He uses the first calligraphy to find a counterpart which has additional information associated with it. This is a form of translation and the actions certainly mimic an act of translation.


moving finger said:
Replace the human with a mechanical device – the CR performs exactly the same as before (because all the human is doing is manipulating symbols on paper – this is his sole function). MF

The human must remain conscious, aware and he must care (empathize) enough to perform what has been requested of him in order to complete a flawed experiment.
 
  • #113
MF said:
One final question, if you have any time:
Do you believe there is a fundamental difference between a “process” and a “perfect simulation of that process”?
(Note I am not talking here about simulating objects – but about simulating processes)
Searle’s argument seems to base itself on the assertion that a “simulated process” is just that – a simulated process – and somehow differs from the “real process”. I dispute that assertion. But I’m interested to know what you think.
Yes, I was meaning to get to this question at the same time, since my explanation and answer to this question are linked.

Searle builds the CR in such a manner that it is a reactionary machine spitting out preformulated answers to predetermined questions. The machine is not "thoughtful" in this process, that is, it does not contemplate the question or the answer; it merely follows its command to replace one predetermined script with another preformulated script. I do not believe that this constitutes "understanding" of the scripts' contents. Why?
First of all, as already pointed out, the machine is purely reactionary. It doesn't "think" about the questions or the answers; it only "thinks" about the rote process of interchanging them, if you would like to even consider that thinking. The whole process is predetermined by the designer, who has already done all of the thinking and formulating (this part is important to my explanation but I will delve into that deeper later). The CR never considers the meanings of the words, only the predetermined relationships between the scripts. In fact the designer does not even give the CR the leeway by which to consider these things; it is merely programmed to do the job of interchanging scripts.
Next, the CR is not privy to the information that is represented by the words it interchanges. Why? It isn't programmed with that information. It is only programmed with the words. The words, though a form of information in and of themselves, are representative of information about reality as determined by the entities who have formulated them. This is what gives them their semantic property, or rather the semantic property exists in the minds and knowledge of those that have agreed upon the system of communication. The CR contains no raw information about reality and has no means by which to acquire it, let alone the leeway or motivation within its program to do so.
Lastly, the manner in which the simulation of understanding is achieved in the CR. The designer of the CR's program undoubtedly has an understanding of Chinese. When the designer predetermines the questions and formulates the answers he is making a mirror of his own understanding in the program. But this does not actually endow the program itself with understanding. If the designer were to write out a letter on a piece of paper in Chinese, the letter itself does not possess understanding; it only reflects the author's understanding. When the designer authors the CR responses he is making them to reflect his own understanding of Chinese; they are simply multiple letters in Chinese instead of just one.
The designer, instead of a program, could create a simple mechanical device. It could be a cabinet. On a table next to the cabinet there could be several wooden balls of various sizes. On these balls we can have questions printed; these parallel the predetermined questions in the CR program. There can be a chute opening on top of the cabinet where one is instructed to insert the ball with the question of their choice, paralleling the slot where the message from the outside world comes to the man in the CR. Inside the cabinet the chute leads to an inclined track with two rails that widen as the track progresses. At the point where the smallest of the balls can fall between the rails, the ball falls and hits a mechanism that drops another ball through a chute and into a cup outside the cabinet, where the questioner is awaiting an answer. On this ball there is printed an answer corresponding to the question printed on the smallest of the balls. The same is done for each of the balls of varying sizes. The cabinet is now answering questions put to it in fundamentally the same manner as the CR. The only elements lacking are the vastness of possible questions and answers and the illusion that the CR does not know the questions before they are asked.
So does that cabinet possess understanding of the questions and answers? Or does it merely reflect the understanding of the designer?

Now on the difference between a "perfect simulation" and the real thing.
I guess this depends really on your definition of "simulation". Mine personally is anything that is made to fool the observer into believing it is something other than it really is. A "perfect simulation" I would classify as something that fools the observer in every way until the simulation is deconstructed, showing that the elements that seemed to be present were in fact not present. If something can be deconstructed completely and still be indistinguishable from the real thing then I would call that a "reproduction" as opposed to a "simulation".
In the case of the CR I would say that once you have deconstructed the simulation you will find that the process occurring is not the same as the process of "understanding" as you define it (or at least as I define it) even though it produces the same output.
And if it produces the same output, then what is the difference between the processes? If all you are concerned about is the output, then none, obviously. If you care about the manner in which they work, then there are obvious differences. Also, which process is more effective and economical? Which process is capable of creative and original thinking? Which process allows for free will (if such a thing exists)? Which process is most dynamic and flexible? I could go on for a while here. It stands that the processes are fundamentally different and allow for differing possibilities, though they are equally capable of communicating in Chinese.
 
  • #114
QC said:
One cannot "understand" a language, by my definition, yet one can comprehend what a language is conveying. By my definition, understanding a language would be like trying to understand a grain of sand... impossible... one could understand the processes involved in creating a grain of sand, one can understand what a grain of sand feels like in the butt crack... and one could understand what a grain of sand is made of... etc... but, one cannot understand a grain of sand. Just as one cannot "understand" a language.

I hope this demonstrates to you why I deem the use of the term "understanding" in the context of language comprehension and in the Chinese Room thought experiment incorrect and out of context.
I see! This is very good, I would have to agree with your conclusions here. So you are defining the process of understanding as a strictly personal experience? So the "understanding" of language then would actually be the personal "understanding" of the perception of language? Purely a personal epistemological issue?
 
  • #115
Tisthammerw said:
Whether the statement is analytic depends on the definitions used. If my definitions of the terms are used, then the statement “understanding requires consciousness” is analytic.

We seem to agree that "for two agents to agree on whether a statement is analytic or not" requires that the agents first agree the definitions of terms used in the statement. This much seems obvious. We do not agree on the definition of understanding, therefore we do not agree the statement “understanding requires consciousness” is analytic.

Tisthammerw said:
You seem to be repeating yourself somewhat.
Yes I am repeating myself, because most of my posts are replying to the same things that you keep repeating. As I have said several times before, we keep repeating the same cycle of questions and answers, and this is becoming pointless.

Tisthammerw said:
If the kind of understanding we are talking about is TH-Understanding, then we do not agree; since in this case it can be shown that “understanding requires consciousness” is an analytic statement.
Let's "trim the fat" as you suggested.

I suggest the following :

“TH-Understanding requires consciousness” is an analytic statement
“MF-Understanding does not require consciousness” is also an analytic statement.
But "Understanding requires consciousness" is a synthetic statement, because we do not agree on the definition of "understanding".

Do you agree?

This imho sums it up.

Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.
Please read my complete answer very carefully this time. I am here replying to your precise question as phrased above. You have not shown that all computer agents are necessarily non-conscious. Therefore I do not agree.

Tisthammerw said:
But you have not shown that “TH-understanding requires consciousness” is not an analytic statement, whereas I have shown the opposite.
I have never denied that the statement “TH-Understanding requires consciousness” is analytic, you again are making mistakes in your reading and comprehension of these posts. Please read more carefully.

Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean?
Please read my complete answer very carefully.
Let me help you out here. I am here replying to your precise question as phrased above. By “understand in the sense that I mean when I use the term” I assume you mean “TH-Understanding”.
Your question is thus “Do you agree that computers cannot TH-Understand?”.
Now TH-Understanding is defined such that it requires consciousness.
But you have not shown that all computer agents are necessarily not conscious, therefore I do not see how the question can be answered.

Tisthammerw said:
So you don’t agree that computers (given the model I described) cannot have TH-understanding? Well then, please look at post #102 of this thread where I justify my claim that computers (of this sort) cannot have TH-understanding. Now we can finally get to the matter at hand.
Post #102 makes NO explicit reference to “TH-Understanding”

moving finger said:
As simply as possible : “To perceive” is to acquire, process, interpret, select, and organise information as part of a knowledge-base.
Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses?
Allow me to explain.
“being aware of an intensely bright light through the senses” is simply one possible mechanism for “acquiring, processing and interpreting information” – which (by my definition) is included in "perception".

Tisthammerw said:
Please see post #102
Separate post, thus I will post a separate reply.
MF
 
  • #116
TheStatutoryApe said:
I see! This is very good, I would have to agree with your conclusions here. So you are defining the process of understanding as a strictly personal experience? So the "understanding" of language then would actually be the personal "understanding" of the perception of language? Purely a personal epistemological issue?

Yes, the understanding of a language would not only be the personal understanding of the language but the personal experience of comprehending the language. Comprehension is close to being another compound verb... like the word understanding, only comprehension describes the ability to "apprehend" a "composition" of data. A personal understanding of the language would be based on personal experiences with the language.

It is rare that I hear someone claiming to "understand a language". The more common declaration of language comprehension is "I speak Yiddish" or "Larry speaks Cantonese" or "Sally knows German".

To say "I understand Yiddish" is a perfect example of the incorrect use of English and represents the misuse and abuse of the word "understand".

The word, "understand", represents the speaker's or writer's position (specifically the position of "standing under") with regard to a topic.

The word "understand" describes that the individual has experienced the phenomenon in question and has developed their own true sense of that phenomenon with the experiential knowledge they have collected about it. This collection of data is the "process of understanding" or the "path to understanding" a phenomenon. This is why I dispute MF's claim that there are shades of understanding. There is understanding and there is the process of attaining an understanding. Although, I suppose the obviously incorrect use of the word understanding could be construed as "shady".

When two people or nations "reach an understanding" during a dispute, they match their interpretations of events that have transpired between them. They find common threads in their interpretations. These commonalities are only found when the two parties' experiences are interpreted similarly by the two parties. There then begins to emerge a sense of truth about certain experiences that both parties have experienced. After much examination and investigation... an understanding (between two parties) is attained by the light of a common, cross-party interpretation of the phenomenon, or specific components of the phenomenon, in question.
 
  • #117
In the absence of an objective explanation there is no objective way of
testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.

This subjective “test” that you suggest only allows the subject to determine whether “itself” is conscious. It says nothing about the consciousness of anything else.

If we want to truly know about the consciousness of anything else, we have to
start with the fact that we know ourselves to be conscious, and find out how
that consciousness is generated. We happen not to have the ability to infer from
a subjective to an objective test of consciousness in that way at the moment, so we may be
tempted
to use things like the Turing test as a stopgap. Searle's argument is that
we should not take the Turing test as definitive, since there is a conceivable
set of circumstances in which the test is passed but the appropriate
consciousness is not present (by the subjective test).



As I have stated several times, the intelligence of an artificial intelligence
needs to be pinned to human intelligence (albeit not in a way that makes it
trivially impossible) in order to make the claim of "artificiality"
intelligible. Otherwise, the computer is just doing something -- something
that might as well be called information-processing, or symbol manipulation.

Imho this is just what the human brain does – information-processing, or symbol manipulation.

So how does that relate to artificial intelligence? You seem to be saying not
so much that computers are artificial brains as that brains are natural computers.

Is "bachelors are unmarried because bachelors are unmarried" viciously
circular too ? Or is it -- as every logician everywhere maintains -- a
necessary, analytical truth ?

The difference between an analytic statement and a synthetic one is that the former are true “by definition”, therefore to claim that something is an “analytical truth” is a non-sequitur.

No, there are analytical falsehoods as well, e.g. "Bachelors are married".


Analytic statements are essentially uninformative tautologies.

However, whether the statement “consciousness is necessary for understanding” is analytic or synthetic is open to debate. In my world (where I define understanding such that consciousness is not necessary for understanding), it is synthetic.
I guess that Tournesol would claim the statement “all unicorns eat meat” is synthetic, and not analytic?
But if I now declare “I define a unicorn as a carnivorous animal”, then (using your reasoning) I can claim the statement is now analytic, not synthetic.
According to your reasoning, I can now argue “all unicorns eat meat because I define a unicorn as a carnivorous animal”, and this argument is a valid argument?

Whether it is a valid analytical argument depends on whether the definitions it
relies on are conventional or eccentric.

This is precisely what the argument “understanding requires consciousness because I define understanding as requiring consciousness” boils down to.

That is what it would boil down to if "understanding requires consciousness"
were unconventional, like "unicorns are carnivores", not conventional like
"bachelors are unmarried".

If you understand something, you can report that you know it, explain how you know it, etc.

Not necessarily.


Yes, by definition. That is the difference between understanding, and instinct,
intuition, etc. A beaver can build dams, but it cannot give lectures on civil
engineering.

The ability “to report” requires more than just “understanding Chinese”.

??

That higher-level knowing-how-you-know is consciousness by definition.

I suppose this is your definition of consciousness? Is this an analytic statement again?

You don't seem to have an alternative.

The question is whether syntax is sufficient for semantics.

I’m glad that you brought us back to the Searle CR argument again. Because I see no evidence that the CR does not understand semantics

Well, I have already given you a specific reason; there are words in human
languages which refer specifically to sensory experiences.

Here is a general reason for the under-determination of semantics by syntax:

given the set of sentences:

"all floogles are blints"
"all blints are zimmoids"
"some zimmoids are not blints"

you could answer the question
"are floogles zimmoids?"
and so on -- without of course knowing
what floogles (etc.) are. Moreover, I could supply
a semantic model for the syntax, such as:

floogle=pigeon
blint=bird
zimmoids=vertebrate

so far, so CR-ish. The internal Operator is using meaningless
symbols such as "floogle", and the external Inquisitor is applying
the semantics above, and getting meaningful results.
But I could go further and supply a second semantic model

floogle=strawberry
blint=fruit
zimmoids=plant

and, in a departure from the CR, supply the second semantics
to the Operator. Now the Operator thinks he understands what
the symbols he is manipulating mean, as does the Inquisitor... but
their interpretations are quite different!

Thus there is a prima facie case that syntax underdetermines
semantics: more than one semantic model can be consistently given to
the same symbols.
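
To make the two-model point concrete, here is a rough sketch (my own toy example in Python, not anything from Searle): the same purely syntactic rule-following gives the same answers under either interpretation, so nothing in the symbol-shuffling itself picks out which meaning is intended.

# Toy illustration only: two rival semantic models for one set of syntactic rules.
RULES = {
    ("floogle", "blint"): "all",    # "all floogles are blints"
    ("blint", "zimmoid"): "all",    # "all blints are zimmoids"
}

def follows(subject, predicate):
    # Purely syntactic inference: chain "all X are Y" rules by transitivity.
    if RULES.get((subject, predicate)) == "all":
        return True
    return any(RULES.get((s, mid)) == "all" and follows(mid, predicate)
               for (s, mid) in RULES if s == subject)

model_1 = {"floogle": "pigeon", "blint": "bird", "zimmoid": "vertebrate"}
model_2 = {"floogle": "strawberry", "blint": "fruit", "zimmoid": "plant"}

print(follows("floogle", "zimmoid"))                    # True, known to the Operator
print(model_1["floogle"], "is a", model_1["zimmoid"])   # pigeon is a vertebrate
print(model_2["floogle"], "is a", model_2["zimmoid"])   # strawberry is a plant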

Now the strong AI-er could object that the examples are too
simple to be realistic, and that if you threw in enough symbols,
you would be able to resolve all ambiguities successfully.

To counter that, the anti-AIer
needs an example of something which could not conceivably be
pinned down that way, and that's just where sensory terminology comes
in.

(To put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)


How else would you do it? Test for understanding without knowing what
"understanding" means? Beg the question in the other direction by
re-defining "understanding" to not require consciousness?

Are you suggesting the “correct” way to establish whether “understanding requires consciousness” is “by definition”?

I have been consistently suggesting that establishing definitions is completely
different to establishing facts. Defining a word in a certain way does not
demonstrate that anything corresponding to it actually exists. Hence my frequent
use of unicorns as an example.

The correct way to do it is NOT by definition at all. All this achieves is the equivalent of the ancient Greeks deciding how many teeth in a horse’s mouth “by debate” instead of by experiment.

How can you establish a fact without definitions? How do you tell that the
creature in front of you is in fact a horse without a definition of "horse"?

Hypothesis : Understanding requires consciousness
Develop the hypothesis further – what predictions would this hypothesis make that could be tested experimentally?
Then carry out experimental tests of the hypothesis (try to falsify it)

How can you falsify a hypothesis without definitions of the terms in which it
is expressed ?


I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.

“Experiential quality” is not “understanding”
I do not need the “sensory experiential quality” of red to understand red, any more than I need the “sensory experiential quality” of x-rays to understand x-rays, or the “sensory experiential quality” of flying to understand aerodynamics.

You seem to be saying that
non-experiential knowledge ("red light has a wavelength of around 700nm") *is*
understanding, and all there is to understanding, and experience is
something extraneous that does not belong to understanding at all
(in contradiction to the conclusion of "What Mary Knew").

The conclusion to “What Mary Knew” is disputed.

Indeed, but in what ways, and with what reasonableness?
If it was a valid response to plonkingly deny that experiential knowledge
is knowledge at all, why didn't Jackson's critics do that
instead of coming out with the more complex responses they
did come out with? Can't you see that you are making an extraordinary claim?



It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.

Perhaps reasonable to you, not to me.

Perhaps you are in the minority. Perhaps you are making an extraordinary claim
with an unshouldered argumentative burden.

Sense-experience is not understanding.

Tu quoque.


It doesn't have any human-style senses at all. Like Wittgenstein's lion, but
more so.

Information and knowledge are required for understanding, not senses.

Aren't senses (at least) channels of information? Don't different senses
convey different information?


“what it looks like” is sense-experience, it is not understanding.

Tu quoque.

However, I do not need to argue that non-experiential knowledge is not knowledge.

Why not - is this perhaps yet another analytic statement?

It doesn't affect my conclusion.

However, if you can do both you clearly have more understanding than someone
who can only do one or the other or neither.

It's not at all “clear” to me – or perhaps you also “define” understanding as “requiring sense-experience”? Analytic again?


Sigh...it's just a common-sense observation. People who have learned from experience
have more understanding than people who haven't.

The question is irrelevant – because “ability to fly a plane” is not synonymous with “understanding flight”.
Are you saying you only put your trust in the pilot because he “understands”?
If the same plane is now put onto autopilot, would you suddenly want to bail out with a parachute because (in your definition) machines “do not possess understanding”?

They do not have as much understanding, or planes would be on autopilot all
the time.

They don't know what Mary doesn't know.

We are talking about “understanding”, not simply an experiential quality.
What is it that you think Mary “understands” once she has “experienced seeing red” that she necessarily did NOT understand before she had “experienced seeing red”?

What red looks like. The full meaning of the word "red" (remember, this is
ultimately about semantics).

(remember – by definition Mary already “knows all there is to know about the colour red”,

No, by stipulation Mary knows all there is to know about red that can be
expressed in physical, 3rd-person, non-experiential terms.

and sense-experience is sense-experience, it is not understanding)

Tu quoque.

I claim that experience is necessary for a *full* understanding of *sensory*
language, and that an entity without sensory experience therefore lacks full
semantics.

And I claim they are not. The senses are merely “possible conduits” of information.
There is no reason why all of the information required to “understand red”, or to “understand a concept” cannot be encoded directly into the computer (or CR) as part of its initial program.

Yes there is. If no-one can write down a definition of the experiential nature
of "red" -- and you have consistently failed to do so -- no-one can encode it into a programme.
Now, you could object that the way "red" looks is just the way the visual
system conveys information and is not information itself; and I could reply
that knowledge about how red looks is still knowledge, even if it isn't
the same knowledge as is conveyed by red as an information-channel.

In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.

Yes. No-one knows how to encode all the information. You don't.


If you are going to counter this claim as stated, you need to rise to the
challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red".

My claim (and that of strong AI) is that it is simply information, and not necessarily direct access to information from sense-receptors, that is required for understanding. Senses in humans are a means of conveying information – but that is all they are. This need not be the case in all possible agents, and is not the case in the CR. If we could “program the human brain” with the same information then it would have the same understanding, in the absence of any sense-receptors.


You still haven't shown, specifically, how to encode experiential information.
Saying "it must be possible because strong AI says it is possible" is, of course,
circular. The truth of strong AI is what is being disputed, and the
uncommunicability of experiential meaning is one of the means of disputing it.
 
Last edited:
  • #118
Tournesol said:
In the absence of an objective explanation there is no objective way of testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.
moving finger said:
This subjective “test” that you suggest only allows the subject to determine whether “itself” is conscious. It says nothing about the consciousness of anything else.
Tournesol said:
If we want to truly know about the consciousness of anything else, we have to start with the fact that we know ourselves to be conscious, and find out how
that consciousness is generated. We do not currently have the ability to infer from
a subjective to an objective test of consciousness in that way, so we may be
tempted to use things like the Turing test as a stopgap. Searle's argument is that
we should not take the Turing test as definitive, since there is a conceivable
set of circumstances in which the test is passed but the appropriate
consciousness is not present (by the subjective test).
How does MF know whether the agent Tournesol “understands” English? The only way MF has of determining whether the agent Tournesol understands English or not is to put it to the test – in fact the Turing test - to ask it various questions designed to “test its understanding of English”. If the agent Tournesol passes the test then I conclude that the agent understands English.
Why should it be any different for a machine?
I am not suggesting that Turing’s test is definitive. But in the absence of any other test it is the best we have (and certainly imho better than “defining” our way out of the problem). I’m sure we would all love to see a better test, if you can suggest one.
If you reject the Turing test as a test of machine understanding, then why should I believe that any human agent truly understands English?
Tournesol said:
So how does that relate to artificial intelligence? You seem to be saying not so much that computers are artificial brains as brains are natural computers.
Both computers and human brains process information and manipulate symbols. If you wish to conclude from this either that computers are artificial brains or that brains are natural computers that is up to you.
Tournesol said:
Whether it is a valid analytical argument depends on whether the definitions it relies on are conventional or eccentric.
Conventional by whose definition? Tournesol’s?
In rational debate we use words as tools – so long as we clearly define what we mean by the tools we use then we may use whatever tools we wish.
Tournesol said:
That is what it would boil down to if "understanding requires consciousness" were unconventional
Unconventional by whose definition? Tournesol’s?
In rational debate we use words as tools – so long as we clearly define what we mean by the tools we use then we may use whatever tools we wish.
Tournesol said:
If you understand something, you can report that you know it, explain how you know it, etc.
moving finger said:
Not necessarily.
Tournesol said:
Yes, by definition. That is the difference between understanding, and instinct, intuition, etc. A beaver can build dams, but it cannot give lectures on civil
engineering.
I cannot report that I know anything if my “means of reporting” has been removed.
A beaver might in principle “understand” civil engineering, but it can’t give lectures if it cannot speak.
Tournesol said:
You don't seem to have an alternative.
Consciousness imho is the internal representation and manipulation of a “self model” within an information-processing agent, such that the agent can ask rational questions “of itself”, for example “what do I know?”, “how do I know?”, “do I know that I know?”, etc etc. The ability of an agent to do this is NOT necessary for understanding per se, although it is probably the case that a certain level of understanding is necessary for any reasonably complex consciousness to exist (hence one reason why understanding and consciousness are associated in homo sapiens).
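Purely as an illustrative sketch of what I mean (my own toy code in Python, with invented names – not a claim about how real consciousness is implemented): an agent whose knowledge base includes a model of its own knowledge can answer exactly those reflexive questions.

class Agent:
    def __init__(self):
        # Object-level knowledge: proposition -> how the agent came to hold it.
        self.knowledge = {"snow is white": "told by the operator",
                          "7 is prime": "checked by trial division"}

    def knows(self, proposition):
        return proposition in self.knowledge

    def knows_how(self, proposition):
        # "How do I know?" - report the recorded source, if any.
        return self.knowledge.get(proposition)

    def knows_that_it_knows(self, proposition):
        # The "self model": the agent directs the same query at itself.
        return self.knows(proposition) and self.knows_how(proposition) is not None

agent = Agent()
print(agent.knows("snow is white"))             # True
print(agent.knows_how("snow is white"))         # told by the operator
print(agent.knows_that_it_knows("7 is prime"))  # True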
Tournesol said:
The question is whether syntax is sufficient for semantics.
moving finger said:
I’m glad that you brought us back to the Searle CR argument again. Because I see no evidence that the CR does not understand semantics
Tournesol said:
Well, I have already given you a specific reason; there are words in human languages which refer specifically to sensory experiences.
Why do you consider this is evidence that the CR does not understand semantics? Sensory experiences are merely conduits for information transfer, they do not endow understanding per se, much less semantic understanding.
Tournesol said:
Here is a general reason for the under-determination of semantics by syntax:
given the set of sentences.
"all floogles are blints"
"all blints are zimmoids"
"some zimmoids are not blints"
you could answer the question
"are floogles zimmoids ?"
and so on -- without of course knowing
what floogles (etc) are. Moreover, I could supply
a semantic model for the syntax, such as:-
floogle=pigeon
blint=bird
zimmoids=vertebrate
so far, so CR-ish.
“CR-ish” by whose definition? Yours?
Tournesol said:
The internal Operator is using meaningless
symbols such as "floogle", and the external Inquisitor is applying
the semantics above, and getting meaningful results.
But I could go further and supply a second semantic model
floogle=strawberry
blint=fruit
zimmoids=plant
and, in a departure from the CR,
“departure from the CR” by whose definition? Yours?
Tournesol said:
supply the second semantics
to the Operator. Now the Operator thinks he understands what
the symbols he is manipulating mean, as does the Inquisitor... but
their interpretations are quite different!
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantics of two different agents.
Tournesol said:
Thus there is a prima facie case that syntax underdetermines
semantics: more than one semantic model can be consistently given to
the same symbols.
This type of misunderstanding can happen between two human agents. There is nothing special about the CR in this context. This argument does not show that the CR does not understand semantics, it shows only that there may be differences between the semantic understanding of two different agents.
Tournesol said:
Now the strong AI-er could object that the examples are too
simple to be realistic, and that if you threw in enough symbols,
you would be able to resolve all ambiguities successfully.
See above.
Tournesol said:
To counter that, the anti-AIer
needs an example of something which could not conceivably be
pinned down that way, and that's just where sensory terminology comes
in.
(To put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
Who has suggested that “semantics is redundant”? Only Tournesol.
Using your logic, one might equally ask why do we have the separate concepts of “programmable computer” and “pocket calculator” – both are in fact “calculating machines” therefore why not just call them both “calculating machines” and be done with it.
Tournesol said:
I have been consistently suggesting that establishing definitions is completely different to establishing facts. Defining a word in a certain way does not
demonstrate that anything corresponding to it actually exists.
Excellent! Therefore we can finally dispense with this stupid idea that “understanding requires consciousness because it is defined that way”
Tournesol said:
How can you establish a fact without definitions? How do you tell that the creature in front of you is in fact a horse without a definition of "horse"?
Ask yourself “what are the essential qualities of understanding that allow me to say “this agent understands”” – avoid prejudicial definitions and avoid anthropocentrism
Tournesol said:
How can you falsify a hypothesis without definitions of the terms in which it is expressed ?
Ask yourself “what are the essential qualities of understanding that allow me to say “this agent understands”” – avoid prejudicial definitions and avoid anthropocentrism
Tournesol said:
I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.
Experiential qualities are agent-dependent (ie subjective). “Tournesol’s experiential quality of seeing red” is peculiar to Tournesol – subjective - it is meaningless to any other agent. This does not mean that “writing the definition” is impossible, it just means that it is a subjective definition, hence not easily accessible to other agents
moving finger said:
The conclusion to “What Mary Knew” is disputed.
Tournesol said:
Indeed, but in what ways, and with what reasonableness?
If it was a valid response to plonkingly deny that experiential knowledge
is knowledge at all, why didn't Jackson's critics do that
instead of coming out with the more complex responses they
did come out with? Can't you see that you are making an extraordinary claim?
You seem plonkingly confused. I have denied that experiential knowledge is synonymous with understanding, I have NOT denied that experiential knowledge is knowledge
Tournesol said:
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
moving finger said:
Perhaps reasonable to you, not to me.
Tournesol said:
Perhaps you are in the minority. Perhaps you are making an extraordinary claim with an unshouldered argumentative burden.
Your assertion assumes that vision is required for understanding. Vision provides experiential information, not understanding. I understand the terms “X-ray” and “ultra-violet” and “infra-red” and “microwave” even though I possess no experiential information associated with these terms. What makes you think I need “experiential information” to understand the terms “red” and “green”? The onus is on you to show why the experiential information is indeed necessary for understanding red and green, but not for x-rays or ultra-violet rays.
Tournesol said:
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
moving finger said:
Information and knowledge are required for understanding, not senses.
Tournesol said:
Aren't senses (at least) channels of information? Don't different senses convey different information?
Yes. But information per se is not understanding. See above argument re x-rays etc.
moving finger said:
“what it looks like” is sense-experience, it is not understanding.
Tournesol said:
Tu quoque.
Yup. Can you show otherwise?
Tournesol said:
However, I do not need to argue that non-experiential knowledge is not knowledge.
moving finger said:
Why not - is this perhaps yet another analytic statement?
Tournesol said:
It doesn't affect my conclusion.
It affects whether your conclusion is simply “your opinion” or not
Tournesol said:
However, if you can do both you clearly have more understanding than someone who can only do one or the other or neither.
moving finger said:
It's not at all “clear” to me – or perhaps you also “define” understanding as “requiring sense-experience”? Analytic again?
Tournesol said:
Sigh...it's just a common-sense observation. People who have learned from experience have more understanding than people who haven't.
Sigh…..that is a very anthropocentric viewpoint. Humans acquire most of their information from their senses in the form of reading, listening etc – the same information could be programmed directly into a machine. The fact that humans are so dependent on sense-receptors for their information gathering does not lead to the conclusion that understanding is impossible in the absence of sense-receptors in all possible agents.
moving finger said:
If the same plane is now put onto autopilot, would you suddenly want to bail out with a parachute because (in your definition) machines “do not possess understanding”?
Tournesol said:
They do not have as much understanding, or planes would be on autopilot all the time.
It makes the point that “ability to fly a plane” is not synonymous with “understanding flight”
moving finger said:
What is it that you think Mary “understands” once she has “experienced seeing red” that she necessarily did NOT understand before she had “experienced seeing red”?
Tournesol said:
What red looks like.
“what red looks like” is not understanding – it is simply “subjective experiential information”
“What red looks like” to Mary is not necessarily the same as “what red looks like” to Tournesol
Tournesol said:
The full meaning of the word "red" (remember, this is ultimately about semantics).
Your argument continues to betray a peculiar anthropocentric perspective. What makes you think that the experiential quality of “red” is the same to you as it is to me? If the experiential qualities are not the same between two agents, then why should it then matter (in terms of understanding semantics) if the experiential quality of “red” is in fact totally absent in one of the agents? I can understand “semantically” just what is meant by the term red without ever “experiencing seeing red”, just as I can understand semantically just what is meant by the term “x-rays” without ever “experiencing seeing x-rays”.
Tournesol said:
No, by stipulation Mary knows all there is to know about red that can be expressed in physical, 3rd-person, non-experiential terms.
As I have oft repeated, sense-experience is sense-experience, it is not understanding.
Do I fail to understand what is meant (semantically) by the term “x-ray” because I have never seen x-rays?
Tournesol said:
I claim that experience is necessary for a *full* understanding of *sensory* language, and that an entity without sensory experience therefore lacks full
semantics.
moving finger said:
And I claim they are not. The senses are merely “possible conduits” of information.
There is no reason why all of the information required to “understand red”, or to “understand a concept” cannot be encoded directly into the computer (or CR) as part of its initial program.
Tournesol said:
Yes there is. If no-one can write down a definition of the experiential nature of "red" -- and you have consistently failed to do so -- no-one can encode it into a programme.
Now (with respect) you are being silly. Nobody can write down a universal “definition of the experiential nature of red" because it is a purely subjective experience. There is a pattern of information in Tournesol’s brain which corresponds to “Tournesol seeing red”, but that pattern means absolutely nothing to any other agent.
Tournesol said:
Now, you could object that the way "red" looks is just the way the visual system conveys information and is not information itself; and I could reply
that knowledge about how red looks is still knowledge, even if it isn't
the same knowledge as is conveyed by red as an information-channel.
Again you are missing the point. Information is not synonymous with understanding (if it was then the AI case would be much easier to make!)
moving finger said:
In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.
Tournesol said:
Yes. No-one knows how to encode all the information. You don't.
Oh really Tournesol. Whether “MF knows how to do it or not” is irrelevant.
Tournesol said:
If you are going to counter this claim as stated, you need to rise to the challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red".
Nobody can write down a universal “definition of the experiential nature of red" because it is a subjective experience. There is a pattern of information in Tournesol’s brain which corresponds to “Tournesol seeing red”, but that pattern means nothing to any other agent.
Tournesol said:
You still haven't shown, specifically, how to encode experiential information.
I don’t see why you seem to think it’s such a problem.
Information is information.
The interesting aspect of experiential information is that it has “meaning” only to the agent to which it relates. In other words the information contained in the experiential state of “Tournesol seeing red” only means something to the agent “Tournesol”, the same information means nothing (indeed does not exist) to any other agent.
MF
 
Last edited:
  • #119
moving finger said:
We seem to agree that "for two agents to agree on whether a statement is analytic or not" requires that the agents first agree the definitions of terms used in the statement.

Sort of. Note what I've been saying for a long time now: I've been claiming that “understanding requires consciousness” in the sense that I mean when I use the terms. I have explicitly stated (many times) that the statement is not necessarily analytic in all definitions of the terms (yours for instance).


This much seems obvious. We do not agree on the definition of understanding, therefore we do not agree the statement “understanding requires consciousness” is analytic.

See above. When calling it analytic, I have been specifically referring to my own definitions. I don’t know why you have insisted on ignoring this.


Tisthammerw said:
Let's "trim the fat" as you suggested.

I suggest the following :

“TH-Understanding requires consciousness” is an analytic statement
“MF-Understanding does not require consciousness” is also an analytic statement.
But "Understanding requires consciousness" is a synthetic statement, because we do not agree on the definition of "understanding".

Do you agree?

This imho sums it up.

Yes and no. That “TH-understanding requires consciousness” is an analytic statement is of course what I’ve been claiming for quite some time now.

The last part, “‘Understanding requires consciousness’ is a synthetic statement”, is not quite true methinks, because it does not seem to be the sort of thing that can be determined by observation. A better statement perhaps would be “‘Understanding requires consciousness’ is analytic depending on how the terms are defined, since the statement is not necessarily analytic for all people’s definitions of the terms.”


Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean? Simply saying, “I don't mean the same thing you do when I say ‘understanding’” doesn't really answer my question at all. So please answer it.

Please read my complete answer very carefully this time. I am here replying to your precise question as phrased above. You have not shown that all computer agents are necessarily non-conscious. Therefore I do not agree.

Since you are replying to my precise question as phrased above, please see post #102 where I provide a reductio ad absurdum (e.g. let “program X” equal any program that could be considered the “right” program…).

Note: I do tend to read your complete answers, what makes you think I have not done so in this case (whatever case you are insinuating)? If anything, I should make that charge against you: since you seem to have ignored some points I’ve been repeating for quite some time (e.g. that my claim “understanding requires consciousness” is analytic for my definitions, not necessarily for all others).


Tisthammerw said:
But you have not shown that “TH-understanding requires consciousness” is not an analytic statement, whereas I have shown the opposite.

I have never denied that the statement “TH-Understanding requires consciousness” is analytic, you again are making mistakes in your reading and comprehension of these posts. Please read more carefully.

I SHOULD SAY THE SAME FOR YOU!

Sorry for that outburst. Let me explain. I have (rather explicitly) been referring to my definitions when I claim that “understanding requires consciousness” is an analytic statement, also explicitly stating that it is analytic using my definitions and not necessarily everyone else’s. Note the context of the quote:

It’s quite simple. If the premises are not true, the conclusion is not necessarily true, and the argument is then unsound.

Ah, so you “disagree” with the premise in that you believe it to be false. But you have not shown that “TH-understanding requires consciousness” is not an analytic statement, whereas I have shown the opposite.

The “premise” in this case is “understanding requires consciousness” but since this claim of mine rather explicitly refers to my definition of the terms (and not necessarily everybody else’s) you can understand my response. Please read my complete responses more carefully.


Tisthammerw said:
Please read carefully this time. Do we agree that computers cannot understand in the sense that I mean when I use the term? That (given the model of a complex set of instructions manipulating input etc.) computers cannot perceive the meaning of words, and they cannot be aware of what the words mean?

Please read my complete answer very carefully. Let me help you out here. I am here replying to your precise question as phrased above. By “understand in the sense that I mean when I use the term “ I assume you mean “TH-Understanding”.
Your question is thus “Do you agree that computers cannot TH-Understand?”.
Now TH-Understanding is defined such that it requires consciousness.
But you have not shown that all computer agents are necessarily not conscious, therefore I do not see how the question can be answered.

My method of attack isn’t showing that computers can’t possess consciousness, merely that they do not possess what you call “TH-understanding.” So why my adamant support of the analytic statement “understanding requires consciousness” (again, using my definitions)? For some rebuttals (e.g. the systems reply) that claim the system can understand, I can make responses like, “Does the combination of the book, the paper, and the man somehow magically create a separate consciousness that understands Chinese? That isn’t plausible.” So the analytic statement can be useful for my rejoinders.


Tisthammerw said:
So you don’t agree that computers (given the model I described) cannot have TH-understanding? Well then, please look at post #102 of this thread where I justify my claim that computers (of this sort) cannot have TH-understanding. Now we can finally get to the matter at hand.

Post #102 makes NO explicit reference to “TH-Understanding”

Please read my complete post very carefully this time. Post #102 of this thread very explicitly defines the definition of understanding that I use (the very same definition you have called “TH-understanding”). Remember, the term “TH-understanding” is a word you defined, not me.


Allow me to explain.
“being aware of an intensely bright light through the senses” is simply one possible mechanism for “acquiring, processing and interpreting information” – which (by my definition) is included in "perception".

I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered to be “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
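Just to pin the hypothetical down (a toy sketch of my own in Python; the tiny "frame" and the blue-detection rule are invented for illustration, not any real vision system): everything below is mechanical data-shuffling, with nothing anyone would call awareness.

# A 2x2 "camera frame" of (R, G, B) pixels, stored in a "databank".
frame = [
    [(200, 10, 10), (10, 10, 230)],
    [(10, 10, 230), (200, 200, 200)],
]
databank = {"frame_1": frame}

def is_blue(pixel):
    r, g, b = pixel
    return b > 150 and r < 100 and g < 100

# "Circle" (here: list the coordinates of) the blue squares in the stored frame.
blue_squares = [(row, col)
                for row, line in enumerate(databank["frame_1"])
                for col, pixel in enumerate(line)
                if is_blue(pixel)]
print(blue_squares)   # [(0, 1), (1, 0)] - automated processing throughout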
 
  • #120
Tisthammerw said:
When calling it analytic, I have been specifically referring to my own definitions. I don’t know why you have insisted on ignoring this.
I have been ignoring nothing – I have been responding literally to the statement referred to. The statement “understanding requires consciousness” is a stand-alone statement. If you wish to call this statement analytic then by the rules of logic it MUST stand or fall on its own (it does not magically become analytic because you add some qualifying remarks outside of the statement).
Imho what you meant to say (should have said) is “understanding as defined by Tisthammerw requires consciousness” is an analytic statement (then I would of course have agreed)
Tisthammerw said:
That “TH-understanding requires consciousness” is an analytic statement is of course what I’ve been claiming for quite some time now.
With respect, this is incorrect. You have been claiming (until very recently) that "understanding requires consciousness” is an analytic statement. I hope you can see and understand the difference in the statements “TH-Understanding requires consciousness” and “understanding requires consciousness”? They are NOT the same.
Tisthammerw said:
The last part, “’Understanding requires consciousness’ is a synthetic statement” is not quite true methinks, because it does not seem to be the sort of thing that can be determined by observation.
We may not yet be able to agree on a “test for understanding”, but that does not mean that such a test is impossible.
Tisthammerw said:
A better statement perhaps would be “‘Understanding requires consciousness’ is analytic depending on how the terms are defined, since the statement is not necessarily analytic for all people’s definitions of the terms.”
If it is “not analytic” for my definition of terms then it is synthetic for me. Why should I accept that it is analytic just because you choose to define understanding differently?
Tisthammerw said:
Since you are replying to my precise question as phrased above, please see post #102 where I provide a reductio ad absurdum (e.g. let “program X” equal any program that could be considered the “right” program…).
OK, I will respond to post #102 separately (when I get around to it)
Tisthammerw said:
Note: I do tend to read your complete answers, what makes you think I have not done so in this case (whatever case you are insinuating)?
Then I must assume that you are misreading or misunderstanding?
Tisthammerw said:
If anything, I should make that charge against you: since you seem to have ignored some points I’ve been repeating for quite some time (e.g. that my claim “understanding requires consciousness” is analytic for my definitions, not necessarily for all others).
Like you, I do read your posts, but the reason I do not agree with you is simply because “I do not agree with you”. As I have pointed out above, we may not yet be able to agree on a “test for understanding”, but that does not mean that such a test is impossible. If a statement is “not analytic” for my definition of terms then it is synthetic for me. Why should I accept that it is analytic just because you choose to define understanding differently?
Tisthammerw said:
But you have not shown that “TH-understanding requires consciousness” is not an analytic statement, whereas I have shown the opposite.
moving finger said:
I have never denied that the statement “TH-Understanding requires consciousness” is analytic, you again are making mistakes in your reading and comprehension of these posts. Please read more carefully.
Tisthammerw said:
I SHOULD SAY THE SAME FOR YOU!
Sorry for that outburst.
That’s OK. I can get frustrated at times too when it seems that people are not understanding what I am saying.
Tisthammerw said:
The “premise” in this case is “understanding requires consciousness” but since this claim of mine rather explicitly refers to my definition of the terms (and not necessarily everybody else’s) you can understand my response. Please read my complete responses more carefully.
I am, as far as I can be, rigorous and methodical in my interpretation of logic and reasoning. If you assert that the statement “understanding requires consciousness” is analytic then I take this assertion at face value – and I disagree.
Imho what you meant to say (should have said) is “understanding as defined by Tisthammerw requires consciousness” is an analytic statement (then I would of course have agreed)
Tisthammerw said:
My method of attack isn’t showing that computers can’t possess consciousness, merely that they do not possess what you call “TH-understanding.”
How have you shown this?
IF we suppose that all possible computers are not conscious THEN it follows that no computer can possess TH-Understanding. But you have not shown that all possible computers are not conscious.
Tisthammerw said:
“Does the combination of the book, the paper, and the man somehow magically create a separate consciousness that understands Chinese? That isn’t plausible.” So the analytic statement can be useful for my rejoinders.
Why should it need to be conscious? MF-Understanding does not require consciousness in order to enable the CR to understand Chinese.
Tisthammerw said:
Please read my complete post very carefully this time. Post #102 of this thread very explicitly defines the definition of understanding that I use (the very same definition you have called “TH-understanding”). Remember, the term “TH-understanding” is a word you defined, not me.
I suggested TH-Understanding as a way to help our understanding of understanding. If you think it does not help then by all means offer a better solution.
If you wish to re-phrase your argument in terms of “TH-Understanding” then I will reply to it in that context. If you insist instead on using “Understanding” (the definition of which we do not agree) then I will reply in that context.
Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered to be “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
Firstly, if you read my entire post #91 of this thread (which I know you have done, because you have just told me that you DO read all my posts carefully) then you will see that “after cogitation” I refined my idea of “what kind of perception is required” for understanding. The only part of perception necessary for understanding is the kind of introspective perception of the kind “I perceive the truth of statement X”. The sense-dependent type of perception which most humans think of when “perceiving” is simply a means of information transfer and is NOT a fundamental part of understanding in itself.
moving finger said:
Actually, having cogitated on this issue for a little longer, I do not see that "perception" (ie the processing of data received from external sense-receptors) is a necessary part of understanding per se. I can imagine a completely self-contained agent which "understands Chinese", but has no sense-receptors at all - hence it could not "perceive", and yet could still claim to understand Chinese. Therefore on reflection I now delete the requirement "to perceive" from my list of "necessary items" for understanding.
(on the other hand, there are other possible meanings to "perceive", for example "to perceive the truth of something, such as a statement" - an agent which understands is able to "perceive the truth of" things, therefore it necessarily perceives in this sense of the word)
Also, with respect, you have not shown that it is impossible for any computer to possess consciousness, therefore your question does not make sense to me (it assumes that all possible computers are necessarily not conscious).
MF
 
  • #121
quantumcarl said:
Yes, the understanding of a language would not only be the personal understanding of the language but the personal experience of comprehending the language. Comprehension is close to being another compound verb... like the word understanding, only comprehension describes the ability to "apprehend" a "composition" of data. A personal understanding of the language would be based on personal experiences with the language.
It is rare that I hear someone claiming to "understand a language". The more common declaration of language comprehension is "I speak Yiddish" or "Larry speaks Cantonese" or "Sally knows German".
To say "I understand Yiddish" is a perfect example of the incorrect use of english and represents the misuse and abuse of the word "understand".
The word, "understand", represents the speaker's or writer's position (specifically the position of "standing under") with regard to a topic.
The word "understand" describes that the individual has experienced the phenomenon in question and has developed their own true sense of that phenomenon with the experiencial knowledge they have collected about it. This collection of data is the "process of understanding" or the "path to understanding" a phenomenon. This is why I dispute MF's claim that there are shades of understanding. There is understanding and there is the process of attaining an understanding. Although, I suppose the obviously incorrect use of the word understanding could be construed as "shady".
When two people or nations "reach an understanding" during a dispute, they match their interpretations of events that have transpired between them. They find common threads in their interpretations. These commonalities are only found when the two parties' experiences are interpreted similarly by the two parties. There then begins to emerge a sense of truth about certain experiences that both parties have experienced. After much examination and investigation... an understanding (between two parties) is attained by the light of a common, cross-party, interpretation of the phenomenon, or specific components of the phenomenon, in question.
I completely agree. I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form, rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?
 
  • #122
Reply to post #102 :
Tisthammerw said:
To recap the definitions I’ll be using:
In terms of a man understanding words, this is how I define understanding:
* The man actually knows what the words mean, i.e. that he perceives the meaning of the words, or to put it another way, that he is aware of the truth of what the words mean.
This particular definition of understanding requires consciousness. The definition of consciousness I’ll be using goes as follows:
* Consciousness is the state of being characterized by sensation, perception, thought, awareness, etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
To see why (given the terms as defined here) understanding requires consciousness, we can instantiate a few characteristics:
* Consciousness is the state of being characterized by sensation, perception (of the meaning of words), thought (knowing the meaning of words), awareness (of the meaning of words), etc. By the definition in question, if an entity has any of these characteristics the entity possesses consciousness.
So, if a person is aware of the meaning of words, then by definition the individual possesses consciousness. A cautionary note: it is entirely possible to create other definitions of the word “understand.” One could define “to grasp the meaning of” in such a way that would not require consciousness, though this form of understanding would perhaps be more metaphorical (at least, metaphorical relative to the definition supplied above). For the purposes of this thread (and all others regarding this issue) these are the definitions I’ll be using, in part because “being aware of what the words mean” seems more applicable to strong AI.
A note at this point. As you know, I do not agree with your definition of understanding. My definition of understanding does not require consciousness. In my responses I may therefore refer to TH-Understanding and MF-Understanding to distinguish between understanding as defined by Tisthammerw and understanding as defined by MF respectively.
Tisthammerw said:
Let's call the “right” program that, if run on the robot, would produce literal understanding “program X.” Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.
You have merely “asserted” that no understanding is taking place. How would you “show” that no understanding (TH or MF type) is taking place? What test do you propose to carry out to support your assertion? In the absence of any test your assertion remains just that – an assertion.
Tisthammerw said:
So it seems that even having the “right” rules and the “right” program is not enough even with a robot.
So it seems your thought experiment is based on “I assert the system as described does not possesses understanding, hence I conclude it does not possesses understanding”.
Where is your evidence? Where have you “shown” that it does not understand?
Tisthammerw said:
Some strong AI adherents claim that having “the right hardware and the right program” is enough for literal understanding to take place. In other words, it might not be enough just to have the right program. A critic could claim that perhaps a human running program X wouldn’t produce literal understanding, but the robot’s other “normal” processor of the program would. But it isn’t clear why that would be a relevant difference if the exact same operations are being made. Is it that the processor of the program has to be made of metal? Then does literal understanding take place? Does the processor require some kind of chemical? Does an inscription need to be engraved on it? Does it need to possesses a magic ball of yarn? What?
The above paragraph is not relevant until the first question has been answered – ie How would you “show” that no understanding is taking place? What test do you propose to carry out to support your assertion?
MF
 
  • #123
TheStatutoryApe said:
Searle builds the CR in such a manner that it is a reactionary machine spitting out preformulated answers to predetermined questions. The machine is not "thoughtful" in this process, that is, it does not contemplate the question or the answer; it merely follows its command to replace one predetermined script with another preformulated script.
I’m not sure I fully agree here. Why do you think the specified machine is not “thoughtful”? Is this stipulated by Searle?
If the human brain is algorithmic then it follows that human thought can be encoded into a machine. Such a machine would be thoughtful in the same way that the human brain is thoughtful.
TheStatutoryApe said:
I do not believe that this constitutes "understanding" of the scripts contents.
I agree if the machine is not thoughtful then it does not understand. Where we seem to disagree is on whether the machine is thoughtful.
TheStatutoryApe said:
First of all as already pointed out the machine is purely reactionary.
I’m not sure exactly what you mean by reactionary. If you simply mean “lacking in thoughtfulness” then as I have said I disagree this is necessarily the case.
TheStatutoryApe said:
It doesn't "think" about the questions or the answers it only "thinks" about the rote process of interchanging them, if you would like to even consider that thinking.
Did Searle actually specify as one of the conditions of his thought-experiment that the machine was not permitted to “think about the questions”? This certainly is not a precondition of the Turing test, so if this is indeed one of Searle’s conditions then his test is clearly a false example or corruption of the Turing test.
TheStatutoryApe said:
The whole process is predetermined by the designer who has already done all of the thinking and formulating (this part is important to my explanation but I will delve into that deeper later).
I don’t see how this is relevant, but will defer to later.
TheStatutoryApe said:
The CR never considers the meanings of the words only the predetermined relationships between the scripts.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
TheStatutoryApe said:
In fact the designer does not even give the CR the leeway by which to consider these things, it is merely programmed to do the job of interchanging scripts.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
TheStatutoryApe said:
Next, the CR is not privy to the information that is represented by the words it interchanges. Why? It isn't programmed with that information. It is only programmed with the words.
Again is this stipulated by Searle as a precondition? If so then it is once again a false Turing test.
The rest of your post is rather lengthy, but I get the gist of it.
What we need to clarify, imho, is whether the machine (or the CR) is in fact “deliberately constrained by Searle” in the way you have suggested. If this is indeed the case then I would agree with you that the machine does NOT possess anything that could be called true understanding, but I would also suggest that it would fail the Turing test (eg adequate cross-examination of its understanding of semantics would reveal that it did not truly understand semantics).
TheStatutoryApe said:
The designer instead of a program could create a simple mechanical device. It could be a cabinet. On a table next to the cabinet there could be several wooden balls of various sizes. On these balls we can have questions printed, these parallel the predetermined questions in the CR program. There can be a chute opening on top of the cabinet where one is instructed to insert the ball with the question of their choice, paralleling the slot where the message from the outside world comes to the man in the CR. Inside the cabinet the chute leads to an inclined track with two rails that widen as the track progresses. At the point where the smallest of the balls will fall between the rails the ball will fall and hit a mechanism that will drop another ball through a chute and into a cup outside the cabinet where the questioner is awaiting an answer. On this ball there is printed an answer corresponding to the question printed on the smallest of the balls. The same is done for each of the balls of varying sizes. The cabinet is now answering questions put to it in fundamentally the same manner as the CR. The only elements lacking are the vastness of possible questions and answers and the illusion that the CR does not know the questions before they are asked.
So does that cabinet possess understanding of the questions and answers? Or does it merely reflect the understanding of the designer?
My belief is that such a simplistic cabinet would fail a properly conducted Turing test. However, this is not to say that understanding could not in principle arise in a sufficiently complex mechanical processing device (the neural pathways in a human brain and the electronic gates in silicon chips are but two substrates for storing and processing information - the same processes of understanding could in principle be enacted by a mechanical device or even by an operator consulting a rule-book)
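To see why, it may help to notice that the cabinet is, in effect, nothing but a fixed lookup table (a rough sketch of my own, in Python, with invented questions): it answers exactly the questions its designer anticipated and falls silent on everything else, which is just what a probing Turing-test interrogator would expose.

# The cabinet as a fixed question-to-answer table, settled at design time.
CABINET = {
    "What colour is the sky?": "Blue.",
    "What is two plus two?": "Four.",
}

def drop_ball(question):
    # The ball falling through the track is just a table lookup.
    return CABINET.get(question, "<no ball comes out>")

print(drop_ball("What colour is the sky?"))   # Blue.
print(drop_ball("What colour is MY car?"))    # <no ball comes out>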
TheStatutoryApe said:
Now on the difference between a "perfect simulation" and the real thing.
I guess this depends really on your definition of "simulation". Mine personally is anything that is made to fool the observer into believing it is something other than it really is. A "perfect simulation" I would classify as something that fools the observer in every way until the simulation is deconstructed, showing that the elements that seemed to be present were in fact not present. If something can be deconstructed completely and still be indistinguishable from the real thing then I would call that a "reproduction" as opposed to a "simulation".
Good point. Reproduction is a better word.
TheStatutoryApe said:
In the case of the CR I would say that once you have deconstructed the simulation you will find that the process occurring is not the same as the process of "understanding" as you define it (or at least as I define it) even though it produces the same output.
Yes, but I would not call the CR “as you just described it” a perfect simulation, it is instead a deliberately constrained translation device, I would not claim that it possesses understanding, and also would not claim that it would pass a properly constructed Turing test.
TheStatutoryApe said:
And if it produces the same output then what is the difference between the processes? If all you are concerned about is the output then none, obviously. If you care about the manner in which they work then there are obviously differences. Also, which process is more effective and economical? Which process is capable of creative and original thinking? Which process allows for freewill (if such a thing exists)? Which process is most dynamic and flexible? I could go on for a while here. It stands that the processes are fundamentally different and allow for differing possibilities, though they are equally capable of communicating in Chinese.
OK, but simply having “two agents with fundamentally different processes” does not allow us to conclude that one agent understands and the other does not. I have no idea whether your process of understanding is the same as mine – indeed I am sure that our respective processes differ in many ways – but that does not mean that one of us understands and the other does not.
The proof of the pudding is in the eating – this is what the Turing test is all about – if an agent “demonstrates consistently and repeatedly in the face of all reasonable tests that it understands” then it understands. This is what I thought the CR argument (being supposedly based on the Turing test) was supposed to show. This is the only way I have of knowing whether another human being understands!

Excellent standard of post by the way,

MF
 
Last edited:
  • #124
As I've stated before, Searle has specifically built a construct that will fail to "understand". Also I do not believe that the CR was meant specifically to target the Turing test but to target a particular AI program of the time. He may have actually built it with parameters mirroring those of the program. I'll see if I can find the description of Searle's manuals for the CR...
Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch." The rules "correlate one set of formal symbols with another set of formal symbols"; "formal" (or "syntactic") meaning you "can identify the symbols entirely by their shapes." A third batch of Chinese symbols and more instructions in English enable you "to correlate elements of this third batch with elements of the first two batches" and instruct you, thereby, "to give back certain sorts of Chinese symbols with certain sorts of shapes in response." Those giving you the symbols "call the first batch 'a script' [a data structure with natural language processing applications], "they call the second batch 'a story', and they call the third batch 'questions'; the symbols you give back "they call . . . 'answers to the questions'"; "the set of rules in English . . . they call 'the program'"
This is probably the closest thing to the original I could find quickly.

First let me comment on some points you are questioning. Specifically the CR's "thoughtfulness" or lack thereof, and whether or not Searle specified this. The fact is that Searle does not specify anything about thoughtfulness on the part of the CR. The fact that he leaves this out automatically means the CR is incapable of it. If a program is to think it must be endowed with the ability, and nothing in Searle's construction of the CR grants this, save for the human, but the human really only represents a processor.
The "program" by the definition quoted above is completely static and I think we have already agreed that a static process does not really qualify as "understanding". The human is a dynamic factor but I do not believe Searle intends the human to tamper with the process occurring. If the human were to experiment by passing script that is not part of the program then it will obviously not be simulating a coherant conversation in chinese any longer. This is why I prefer my cabinet because it does fundamentally the same thing without muddling the experiment by considering the possible actions and qualities of the human involved.

MF said:
Yes, but I would not call the CR “as you just described it” a perfect simulation, it is instead a deliberately constrained translation device, I would not claim that it possesses understanding, and also would not claim that it would pass a properly constructed Turing test.
Yes, I tried to point this out before, but I guess you still did not understand what I meant about the way in which the CR works. I stated that figuring out all of the possible questions and all of the possible answers (or stories), then cross-referencing them in such a way that the CR could simulate a coherent conversation in Chinese, would be impossible in reality. Since you stated that we are only talking about the principle rather than reality, I decided to consider that if such a vast reserve of questions and answers were possible in theory it could carry on a coherent conversation and pass the Turing test. I suspended deciding whether or not the theory would actually work, since it would be hard to argue that it wouldn't if we have a hypothetically infinite instruction manual.
Come to think of it, though, we should find that it is incapable of learning. Should we happen to find a piece of information that it is not privy to, it will neither understand it nor produce valid output regarding it. You could adjust the CR so that it can take in the new words and apply them to the existing rules, but without giving it the ability to actually learn it will only parrot the new information back utilizing the same contexts that it was initially given. Then there's the issue of memory. Will it remember my name when I tell it my name, and then be able to answer me when I ask it later? Of course once you introduce these elements we are no longer talking about Searle's CR and are coming much closer to what you and I would define as "understanding".
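A purely illustrative contrast (again Python; the message format and the rules are invented, not part of Searle's description): a single writable variable is enough to give the room the kind of memory for a name described above, and it is exactly this sort of element that the static CR lacks:

Code:
# Sketch: the same lookup idea plus ONE dynamic variable ("memory").
# The rule format and the inputs are hypothetical, used only to make the point.
memory = {}  # a writable store the static CR does not have

def room_with_memory(message):
    if message.startswith("MY NAME IS "):
        memory["name"] = message[len("MY NAME IS "):]
        return "PLEASED TO MEET YOU"
    if message == "WHAT IS MY NAME?":
        # Without the writable store this question could only be parroted,
        # never answered correctly for an arbitrary interlocutor.
        return memory.get("name", "I DO NOT KNOW")
    return "..."

print(room_with_memory("MY NAME IS WEI"))    # PLEASED TO MEET YOU
print(room_with_memory("WHAT IS MY NAME?"))  # WEI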
The place where I think we can logically disagree with Searle's (and Tisthammerw's) conclusions about the CR is where he seems to believe that no matter how you change it, it will always lack understanding. This is of course in part due to the issue of his conclusions in regard to syntax and semantics.
MF said:
OK, but simply having “two agents with fundamentally different processes” does not allow us to conclude that one agent understands and the other does not. I have no idea whether your process of understanding is the same as mine – indeed I am sure that our respective processes differ in many ways – but that does not mean that one of us understands and the other does not.
The proof of the pudding is in the eating – this is what the Turing test is all about – if an agent “demonstrates consistently and repeatedly in the face of all reasonable tests that it understands” then it understands. This is what I thought the CR argument (being supposedly based on the Turing test) was supposed to show. This is the only way I have of knowing whether another human being understands!
The two of us may not think alike and learn in the same fashion but I believe the very basic elements of how we "understand" are fundamentally the same.
In the case of the CR as built by Searle the process theoretically yields the same results in conversation but likely does not yield the same results in other areas.
 
  • #125
MF said:
I’m not sure exactly what you mean by reactionary. If you simply mean “lacking in thoughtfulness” then as I have said I disagree this is necessarily the case.
Ah! This reminds me. I read an essay not that long ago on the issue of consciousness that a friend sent me. I'm not sure if I'll be able to find it again but I will try. You may find it interesting.
It was written by a determinist who, I believe, was very much against the notion of "free will" and of consciousness as being any sort of special property. He based this partly on the idea that the majority of the tasks we do daily (in his opinion all of them) are done by rote, not requiring meaningful thought for any of them. In this I think he had a point, though he, either intentionally or naively, did not discuss any tasks that seem very obviously to require meaningful thought, such as problem solving.
He touches on the homunculus concept, interlinked with his own personal theory on how the notion of "self" came to be. He believes that it has only been around for perhaps a couple of thousand years. This latter bit of course is based on rather flimsy evidence, and I actually found it rather amusing.

At any rate, the concept that we do not invoke meaningful thought for the majority of our daily activities I thought was a good point and noteworthy in such a discussion as we are having. I'll see if I can find it, and if I can't I'll try to give you a summary of his argument as best I can remember it.
 
  • #126
TheStatutoryApe said:
I completely agree. I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?

If the Artificial Intelligence community lacks the social responsibility and imagination to produce an alternative word to describe being "understood by computer", then I would recommend the Artificial Intelligence community qualify the "type of understanding" they are referring to by terming it "Artificial Understanding", in the same manner that "Artificial Intelligence" distinguishes a quality of intelligence when compared to the human intelligence that created Artificial Intelligence.

Then, I'd like to see some tests whereby the human members of the Artificial Intelligence community attempt to prove that understanding can take place in a currently existing computer... perhaps by replacing one of the AI community member's psychotherapists with a computer, or replacing a member's closest friend with a computer. If this involves downloading all the experiential information of the therapist or friend into the computer... by all means, knock yourself out.
 
  • #127
TheStatutoryApe said:
As I've stated before, Searle has specifically built a construct that will fail to "understand". Also, I do not believe that the CR was meant specifically to target the Turing test but to target a particular AI program of the time. He may have actually built it with parameters mirroring those of the program. I'll see if I can find the description of Searle's manuals for the CR...
Here is a direct quote from Searle in his 1997 book “The Mystery of Consciousness” :
John Searle said:
Imagine I am locked in a room with a lot of boxes of Chinese symbols (the “database”). I get small bunches of Chinese symbols passed to me (questions in Chinese), and I look up in the rule book (the “program”) what I am supposed to do. I perform certain operations on the symbols in accordance with the rules (that is, I carry out the steps in the program) and give back small bunches of symbols (answers to the questions) to those outside the room.
The critical part is the phrase "perform certain operations on the symbols". How do we interpret it? Does it mean that Searle simply manipulates a static information database (in which case I agree: no dynamic variables, no thoughtfulness, no learning), or does it mean that Searle can create and manipulate dynamic variables as part of the process, and possibly also cause changes to the program itself (in which case I would argue that there could be learning, and this also opens the door to thoughtfulness)? It all hinges on whether the database and program are static parts of the process, or whether both database and program are dynamic (i.e. changeable).
TheStatutoryApe said:
First let me comment on some points you are questioning. Specifically the CR's "thoughtfulness" or lack thereof, and whether or not Searle specified this. The fact is that Searle does not specify anything about thoughtfulness on the part of the CR. The fact that he leaves this out automatically means the CR is incapable of it. If a program is to think it must be endowed with the ability, and nothing in Searle's construction of the CR grants this, save for the human, but the human really only represents a processor.
With respect, I think this is a matter of interpretation. If Searle did not “rule it out” but also “did not rule it in”, then it is not clear to me that the CR is indeed incapable of thinking. “Thinking” is a process of symbol manipulation, and I think we already agree that the enactment of the CR includes the manipulation of symbols (because we agree the CR "understands" syntax). I see nothing that allows us to conclude “the symbol manipulation is purely syntactic, and there is no real thinking taking place”.
TheStatutoryApe said:
The "program" by the definition quoted above is completely static and I think we have already agreed that a static process does not really qualify as "understanding".
By static program and process I assume you mean that the program is not itself modified as part of the enactment of the CR?
Again I think this may be open to interpretation. I do agree that dynamic processing/manipulation of information and dynamic evolution of the program would be essential to enable thoughtfulness and learning.

If the CR as described is simply manipulating a static database, with input and output, but with no dynamic variables and no modification of the program then there could be no thoughtfulness and no learning.

As I suggested above, Searle’s brief description of the process of “performing certain operations on the symbols” is very vague, and does not specify whether any dynamic variables are created and manipulated as part of the process. I would agree that the absence of dynamic variables, and the absence of any “evolution” of the program itself (dynamic programming, in the sense of a self-modifying program), would be important missing elements, and such elements are probably essential for any “thinking” to take place; but it is not clear from Searle’s description that both dynamic variables and dynamic programming are indeed missing.
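To illustrate the distinction I am drawing, here is a rough sketch (Python; the rule format is invented, an assumption-laden toy rather than anything Searle describes): “dynamic variables” would be state the process can write to while it runs, whereas “dynamic programming” in the loose sense used here would mean the rule set itself being extended during the run:

Code:
# Toy sketch of a rule set that is modified during the run itself
# ("dynamic programming" in the loose sense of a self-extending rulebook,
#  not the algorithmic technique of the same name). Rule format is invented.
rules = {"HELLO": "HELLO TO YOU"}

def process(message):
    # A hypothetical meta-rule: "DEFINE <x> AS <y>" adds a brand-new rule,
    # so later behaviour depends on what happened earlier in the conversation.
    if message.startswith("DEFINE "):
        key, _, value = message[len("DEFINE "):].partition(" AS ")
        rules[key] = value
        return "NOTED"
    return rules.get(message, "UNKNOWN")

print(process("GREETING"))               # UNKNOWN
print(process("DEFINE GREETING AS HI"))  # NOTED
print(process("GREETING"))               # HI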
TheStatutoryApe said:
The human is a dynamic factor, but I do not believe Searle intends the human to tamper with the process occurring. If the human were to experiment by passing script that is not part of the program then it will obviously no longer be simulating a coherent conversation in Chinese.
Agreed, but it is the human which “enacts” the process, turning the static program + static data into a dynamic process.
TheStatutoryApe said:
This is why I prefer my cabinet because it does fundamentally the same thing without muddling the experiment by considering the possible actions and qualities of the human involved.
OK, I understand. In the cabinet example the “process” is clearly dynamic, but again it would appear that the database is static and there are no dynamic “variables”, and certainly no dynamic programming – again the cabinet is simply taking input and translating that into output. No dynamic variables, no thinking possible.
TheStatutoryApe said:
figuring out all of the possible questions and all of the possible answers (or stories), then cross-referencing them in such a way that the CR could simulate a coherent conversation in Chinese, would be impossible in reality.
Perhaps so, but it is supposed to be a thought experiment.
TheStatutoryApe said:
Since you stated that we are only talking about the principle rather than reality, I decided to consider that if such a vast reserve of questions and answers were possible in theory it could carry on a coherent conversation and pass the Turing test. I suspended deciding whether or not the theory would actually work, since it would be hard to argue that it wouldn't if we have a hypothetically infinite instruction manual.
That is an interesting point – if the instruction manual were potentially infinite then I guess it is conceivable that it could in principle perfectly simulate understanding, and pass the Turing test.
TheStatutoryApe said:
Come to think of it though we should find that it is incapable of learning.
Yes, this follows from the absence of any dynamic variables. But “inability to learn” does not by itself equate with “inability to understand”.
TheStatutoryApe said:
Should we happen to find a piece of information that it is not privy to, it will neither understand it nor produce valid output regarding it. You could adjust the CR so that it can take in the new words and apply them to the existing rules, but without giving it the ability to actually learn it will only parrot the new information back utilizing the same contexts that it was initially given. Then there's the issue of memory. Will it remember my name when I tell it my name, and then be able to answer me when I ask it later? Of course once you introduce these elements we are no longer talking about Searle's CR and are coming much closer to what you and I would define as "understanding".
Yes, to do the above we would need to introduce dynamic variables.
TheStatutoryApe said:
The two of us may not think alike and learn in the same fashion but I believe the very basic elements of how we "understand" are fundamentally the same.
In the case of the CR as built by Searle the process theoretically yields the same results in conversation but likely does not yield the same results in other areas.
If the CR does not contain dynamic variables or dynamic programming then I agree it would be simply a “translating machine” with no understanding. Whether such a room could pass the Turing test is something I’m still not sure about.
MF
 
Last edited:
  • #128
moving finger said:
Tisthammerw said:
When calling it analytic, I have been specifically referring to my own definitions. I don’t know why you have insisted on ignoring this.

I have been ignoring nothing – I have been responding literally to the statement referred to. The statement “understanding requires consciousness” is a stand-alone statement.

If by “stand-alone” you mean it can be known as analytic without defining the terms, I disagree. The terms have to be defined if they are to be considered analytic. And you have not been responding to the statement as it has been referred to: “understanding requires consciousness” refers to the terms as I have defined them.


Imho what you meant to say (should have said) is “understanding as defined by Tisthammerw requires consciousness” is an analytic statement (then I would of course have agreed)

With all due respect, what the @#$% do you think I’ve been saying this whole time? At least, what the @#$% do you think I meant regarding statements like:

Tisthammerw said:
My definition of understanding is what you have called “TH-understanding.” Therefore, my conclusion could be rephrased as “consciousness is required for TH-understanding.” I have said time and time again that the statement “understanding requires consciousness” is analytic for my definitions; not necessarily yours.
[post #109 of this thread]

Tisthammerw said:
Remember, I was talking about the analytic statement using the terms as I have defined them.
[post #99 of this thread]

Tisthammerw said:
To reiterate, my “argument” when it comes to “understanding requires consciousness” is merely to show that the statement is analytical (using the terms as I mean them).
[post #99 of this thread]

Tisthammerw said:
[Justifying the analytic statement]
The first premise is the definition of understanding I'll be using….

To see why (given the terms as defined here) understanding requires consciousness

Note that the premises are true: these are the definitions that I am using; this is what I mean when I use the terms. You may mean something different when you use the terms, but that doesn’t change the veracity of my premises. The argument here is quite sound.
[post #88 of this thread]

Tisthammerw said:
[Explicitly defines what I mean by understanding and consciousness]

To see why (given the terms as defined here) understanding requires consciousness…

Given the definitions I’ve used, the phrase “understanding requires consciousness” is an analytic statement

….

My definition of understanding requires consciousness. Do we agree? Now please understand what I'm saying here. Do all definitions of understanding require consciousness? I'm not claiming that. Does your definition of understanding require consciousness? I'm not claiming that either. But understanding in the sense that I use it would seem to require consciousness.

[post #75 of this thread]

And this is just this thread, I am not counting all the times I explained this to you previously. So you can understand why I find your claim “I have been ignoring nothing” a bit hard to swallow.


Tisthammerw said:
That “TH-understanding requires consciousness” is an analytic statement is of course what I’ve been claiming for quite some time now.

With respect, this is incorrect.

With respect, you are wrong. You have called my definition of understanding “TH-understanding” but methinks I have been rather clear in pointing out that “understanding requires consciousness” is referring to my definition of the term “understanding.”


I hope you can see and understand the difference in the statements “TH-Understanding requires consciousness” and “understanding requires consciousness”? They are NOT the same.

They are the same if I explicitly point out that the definition of “understanding” I am using is my own explicitly defined definition (what you have called TH-understanding).


The last part, “’Understanding requires consciousness’ is a synthetic statement” is not quite true methinks, because it does not seem to be the sort of thing that can be determined by observation.

We may not yet be able to agree on a “test for understanding”, but that does not mean that such a test is impossible.

Doesn’t mean it’s possible either. My point is that you seem to have no grounds for claiming the statement “understanding requires consciousness” to be a synthetic statement.


Why should I accept that it is analytic just because you choose to define understanding differently?

Because I am claiming that “understanding requires consciousness” is analytic if we use my definition of the terms; I’m not claiming this statement is universally analytical for all people’s definitions of those terms.


Tisthammerw said:
Note: I do tend to read your complete answers, what makes you think I have not done so in this case (whatever case you are insinuating)?

Then I must assume that you are misreading or misunderstanding?

Then I must assume that question be better applied to you (see above for an example)?




Tisthammerw said:
The “premise” in this case is “understanding requires consciousness” but since this claim of mine rather explicitly refers to my definition of the terms (and not necessarily everybody else’s) you can understand my response. Please read my complete responses more carefully.

I am, as far as I can be, rigorous and methodical in my interpretation of logic and reasoning. If you assert that the statement “understanding requires consciousness” is analytic then I take this assertion at face value.

You would do better to read it in context.

Tisthammerw said:
My method of attack isn’t showing that computers can’t possess consciousness, merely that they do not possess what you call “TH-understanding.”

How have you shown this?

Post #102, but then you respond to this later.


Tisthammerw said:
“Does the combination of the book, the paper, and the man somehow magically create a separate consciousness that understands Chinese? That isn’t plausible.” So the analytic statement can be useful for my rejoinders.

Why should it need to be conscious? MF-Understanding does not require consciousness in order to enable the CR to understand Chinese.

But I am not referring to your definition of understanding here, I am referring to mine. Please read what I say in context.


Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.

Firstly, if you read my entire post #91 of this thread (which I know you have done, because you have just told me that you DO read all my posts carefully) then you will see that “after cogitation” I refined my idea of “what kind of perception is required” for understanding. The only part of perception necessary for understanding is introspective perception of the kind “I perceive the truth of statement X”. The sense-dependent type of perception which most humans think of as “perceiving” is simply a means of information transfer and is NOT a fundamental part of understanding in itself.

So, is the answer yes? I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual. Hence my previous remark saying:

Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary, http://www.m-w.com/cgi-bin/dictionary?book=Dictionary&va=perceive).
 
  • #129
moving finger said:
Reply to post #102 :
A note at this point. As you know, I do not agree with your definition of understanding.

You don't have to mean the same thing I do when I use the word “understanding,” just know what I’m talking about here.


Tisthammerw said:
Let's call the “right” program that, if run on the robot, would produce literal understanding “program X.” Suppose this robot does indeed have program X. Let’s replace the part of the robot that would normally process the program with Bob. Bob uses a rulebook containing a complex set of instructions identical to program X. Bob does not understand what the strings of binary digits mean, but he can perform the same mathematical and logical operations the computer hardware can. We run program X, get valid output, the robot moves its limbs etc. and yet no real understanding is taking place.

You have merely “asserted” that no understanding is taking place. How would you “show” that no understanding (TH or MF type) is taking place? What test do you propose to carry out to support your assertion?

In the context of post #102 I am of course referring to what you have called “TH-understanding.” But to answer your question, simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world. If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.

Do you wish to propose the systems reply for this thought experiment? If so, let me give yet another response, this time one similar to Searle’s. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.
 
  • #130
TheStatutoryApe said:
I've been considering getting into these things in my discussion with MF but found it difficult to express in a brief and clear manner.
I specifically wanted to get into the importance of "experience". While in theory I more or less agree with MF that you can input the experiential data into the computer in a static form rather than making the computer gather the experience itself, I have reservations regarding just how that might work out.
Imho the real difficulty with directly programming experiential data is that the experiential data is completely subjective – the experiential data corresponding to “MF sees red” is peculiar to MF, and may be completely different to the experiential data corresponding to “TheStatutoryApe sees red”. Thus there is no universal set of data which corresponds to “this agent sees red”. I am not suggesting that "programming a machine with experiential data" would be a trivial task - but it would be possible in principle.
TheStatutoryApe said:
If you added the data that is the experience of "red" to an AI's knowledge base and you asked the AI "How do you know what red is?", will it be able to justify its "understanding", and just how important is that justification to the actual "understanding"?
Ask the same agent “how do you know what x-rays are?” – will it be stumped because it has never seen x-rays? No.
Does “experiencing the sight of red” necessarily help an agent to “know what red is”? I suggest it does not.
I believe the issue of experiential data is irrelevant to the question of understanding. If I have never experienced seeing red, does that affect my understanding in any way? I would argue not.
I have no experience of seeing x-rays, yet I understand (syntactically and semantically) what x-rays are. The “red” part of the visible spectrum is simply another part of the electromagnetic spectrum, just as x-rays are. Why do I need to “experience seeing red” in order to understand, both syntactically and semantically, what red is if I do not need to experience x-rays in order to understand what x-rays are?
The source of confusion lies in the colloquial use of the verb “to know”. We say “I know that 2 + 2 = 4”, and we also say that “I know what red looks like”, but these are two completely different kinds of knowledge. The former contributes to understanding, but imho the latter contributes nothing to understanding.
When Mary “knew all there was to know about red, but had never seen red”, and she then “experienced the sight of red”, did she necessarily understand any more about red than she did before she had first seen red? Some might say “yes, of course, she finally knew what red looked like”. But “knowing what red looks like” is not anything to do with understanding, it is simply “knowing what red looks like”.
MF
 
  • #131
Tisthammerw said:
You don't have to mean the same thing I do when I use the word “understanding,” just know what I’m talking about here.
In the context of post #102 I am of course referring to what you have called “TH-understanding.” But to answer your question, simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world. If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.
Do you wish to propose the systems reply for this thought experiment? If so, let me give yet another response, this time one similar to Searle’s. Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).
If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.

There's something else that is providing clues to the human in the Chinese Room with regard to the content and meaning of the calligraphy being handed to him under the door.

Chinese characters describe certain phenomena in the world. Sometimes the characters resemble the object or concept they describe because the character is a drawing of the object. Historically, the first Chinese characters were actual drawings of the object or concept they described. Like a hieroglyph. As their writing evolved, the drawings remained as a central theme around each character.

If Searle, or the human in place of Searle, has any personal experience with drawing or deciphering images that have been stylized by drawing... they will, consciously or unconsciously, be interpreting the Chinese character to mean "a man beside a house" or "a river with mountains", and this would demonstrate an understanding of the meaning of a Chinese character.

What I have pointed out here is another flaw in the Chinese Room thought experiment, and brings us to what I perceive as a total of three fatal flaws we could point out to John Searle about his experiment.

Using much less universal languages such as Finnish or Hungarian may provide a better control for the experiment. However, I maintain that, in this experiment's set-up, the terminology is incorrect and this fact renders the experiment and any of its conclusions invalid.
 
  • #132
quantumcarl said:
There's something else that is providing clues to the human in the Chinese Room with regard to the content and meaning of the calligraphy being handed to him under the door.
Chinese characters describe certain phenomena in the world. Sometimes the characters resemble the object or concept they describe because the character is a drawing of the object. Historically, the first Chinese characters were actual drawings of the object or concept they described. Like a hieroglyph. As their writing evolved, the drawings remained as a central theme around each character.
If Searle, or the human in place of Searle, has any personal experience with drawing or deciphering images that have been stylized by drawing... they will, consciously or unconsciously, be interpreting the Chinese character to mean "a man beside a house" or "a river with mountains", and this would demonstrate an understanding of the meaning of a Chinese character.

I find that a tad implausible (at least, if we are using my definition of understanding) and a bit speculative, and at the very least it's not a necessary truth. Searle's point is that the man can simulate a conversation in Chinese using the rulebook etc. without knowing the language, and I think this is true (it is logically possible for the man to faithfully use the rulebook without understanding Chinese). Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.

And if need be we could give the person in the room a learning disability preventing him from learning a Chinese character merely through looking at it (and when I look at Chinese words, I sometimes think I have that learning disability).


What I have pointed out here is another flaw in the Chinese Room thought experiment, and brings us to what I perceive as a total of three fatal flaws we could point out to John Searle about his experiment.

And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what’s in dispute is not whether humans can learn, but whether computers can. Even if we teach computers language the “same” way we do with humans (e.g. we introduce sights and sounds to a computer programmed with learning algorithms) that doesn't appear to work, at least in part because we're still following the same model of a complex set of instructions manipulating input etc. The story of program X seems to illustrate that point rather nicely.
 
  • #133
Hi Tisthammerw
I will again trim the fat that keeps coming into the posts.
Tisthammerw said:
When calling it analytic……
With respect, most of your post seems to expect me to agree that the statement “understanding requires consciousness” is analytic, when it most definitely is synthetic. You seem to think that “understanding” is the same as “TH-Understanding”. We can spend the rest of our lives arguing back and forth on this, and I have better things to do than spend my time arguing with someone who refuses to see the truth. Thus I move on. Perhaps you should do so too.
Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
Perceiving in the sense-perception meaning of the word is the process of acquiring, interpreting, selecting, and organising (sensory) information. If the computer that you refer to is acquiring, interpreting, selecting, and organising (sensory) information then by definition it is perceiving. Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?
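As a concrete (and deliberately simple) illustration of what I mean, here is a sketch in Python; the "image" and the blue-square criterion are invented for the example. The process acquires, interprets, selects and organises visual information, and there is no consciousness anywhere in it:

Code:
# Sketch: "acquiring, interpreting, selecting and organising (sensory) information"
# with no consciousness anywhere in the process. The 3x3 "image" is invented.
image = [
    ["red",  "blue", "red"],
    ["blue", "red",  "blue"],
    ["red",  "red",  "blue"],
]

def perceive(img):
    # acquire: read the pixel grid; interpret: treat each cell as a colour label;
    # select: keep only the blue cells; organise: collect their coordinates.
    return [(r, c) for r, row in enumerate(img)
                   for c, colour in enumerate(row) if colour == "blue"]

print(perceive(image))  # [(0, 1), (1, 0), (1, 2), (2, 2)]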
Tisthammerw said:
I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual.
I think some of your definitions are unusual but you are of course entitled to your opinion.
Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).
I am not saying this, where did you get this idea?
I am saying that there are different meanings of “perception”. The statement “I perceive the truth of statement X” is a meaningful statement in the English language, but it has nothing whatsoever to do with experiential sense-perception, it has everything to do with introspective perception. Does your dictionary explain this to you? No? Then with respect perhaps you need to look for a better dictionary which offers less-fuzzy definitions?

MF
 
  • #134
Tisthammerw said:
simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world.
Thus you have established that the individual called Bob (or to be precise, Bob’s consciousness) does not understand. But you have not shown that the system (of which Bob’s consciousness is simply one component) does not possess understanding.
Tisthammerw said:
If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.
If one thinks understanding is magic then this of course explains why one might think it is not plausible. One cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.
Tisthammerw said:
Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).
When you say in the above “Bob understands nothing”, are you referring now to Bob’s consciousness, or to the entire system which is cyborg Bob (which also now incorporates the rulebook)?
If the former, then my previous reply once again applies: one cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.
If the latter, how have you shown that the entire system which is cyborg Bob (and not just Bob’s consciousness) does not possess understanding?
Tisthammerw said:
If you wish to claim that the robot's ordinary processor would make things any different, please answer my questions regarding this issue.
I have answered your questions. Please now answer mine.
MF
 
  • #136
Passive vs Creative Understanding

To make some inroads into “understanding what is going on in understanding” I suggest that we need to distinguish between different forms or qualities of understanding. I suggest that we can categorise examples of understanding as either passive or creative.

Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other. In the case of our example of Pythagoras’s theorem, it would be perfectly possible for an agent to “learn by rote” the detailed explanation of Pythagoras’s theorem, and therefore to “fool” an interrogator into thinking that it understands when in fact all it is doing is repeating (regurgitating) pre-existing information. To passively understand (or to simulate understanding of) subject X, an agent needs only a static database and static program (in other words, an agent needs neither a dynamic database nor a dynamic program in order to exhibit passive understanding).

Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition, therefore, the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?
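A rough sketch of the contrast (Python; the questions and the stored text are invented examples, and this is only meant to illustrate the distinction, not to settle it): a rote store can only regurgitate what it was explicitly given, whereas answering an instance it has never seen requires at least some dynamic working state:

Code:
# Sketch contrasting "passive" (rote) answering with a minimal step beyond it.
# The questions and the stored text are invented examples.
ROTE_ANSWERS = {
    "state pythagoras' theorem": "a^2 + b^2 = c^2 for a right-angled triangle",
}

def passive_agent(question):
    # Regurgitates pre-stored text; anything unseen is unanswerable.
    return ROTE_ANSWERS.get(question.lower(), "I cannot answer that")

def less_passive_agent(question):
    # Applies the stored relation to an instance it was never given verbatim,
    # e.g. "hypotenuse of 3 4" -> 5.0 (dynamic working state, not a canned reply).
    if question.lower().startswith("hypotenuse of "):
        a, b = map(float, question.split()[-2:])
        return str((a * a + b * b) ** 0.5)
    return passive_agent(question)

print(passive_agent("hypotenuse of 3 4"))       # I cannot answer that
print(less_passive_agent("hypotenuse of 3 4"))  # 5.0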

I suggest that the classic Turing test is usually aimed at testing only passive, and not creative, understanding – the Turing test interrogator asks the agent questions in order to test the agent’s existing understanding of already “known” subjects. I suggest also that the CR as defined by Searle is also aimed at testing only passive, and not creative, understanding (In his description of the CR Searle makes no reference to any ability of the room to learn any new information, knowledge or understanding).

But we have seen that it is possible, at least in principle, for an agent to “simulate” passive understanding, and thereby to “fool” both the Turing test interrogator and the CR interrogator.

It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding. It is important however that during this process of creatively understanding the new subject the interrogator must not always be in a position of “leading” or “teaching”, otherwise we are simply back in the situation where the agent can passively accept new information and thereby simulate new understanding. The interrogator must allow the agent to demonstrate that it is able to develop new understanding through its own processing of new information and knowledge, putting this new information and knowledge into correct context and association with pre-existing information and knowledge, and not simply “learn new knowledge by rote”.

IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

Constructive criticism please?
MF
 
  • #137
moving finger said:
With respect, most of your post seems to expect me to agree that the statement “understanding requires consciousness” is analytic, when it most definitely is synthetic. You seem to think that “understanding” is the same as “TH-Understanding”.

That is (as I’ve said many times before) the definition of understanding I am using when I claim that “understanding requires consciousness.” I am not claiming that “understanding requires consciousness” for all people’s definitions of those terms (as I’ve said many times). Do you understand this?


Tisthammerw said:
I suppose it depends what you mean by “acquiring, processing…” and all that. Under your definition, would a computer that acquired visual data (via a camera), stored it in its databanks, and processed it by circling any blue squares in the picture be considered “perceiving”, even though the process is automated and does not include consciousness (as I have defined the term)? If the answer is “yes” then I think I understand your definition.
Perceiving in the sense-perception meaning of the word is the process of acquiring, interpreting, selecting, and organising (sensory) information. If the computer that you refer to is acquiring, interpreting, selecting, and organising (sensory) information then by definition it is perceiving.

So, is that a yes? Is this an example of a computer acquiring, selecting etc. and thus perceiving even though it does not possess consciousness?


Do you think the computer you refer to is acquiring, interpreting, selecting, and organising (sensory) information?

That’s what I’m asking you.


Tisthammerw said:
I’m just trying to wrap my head around your definition of “perceiving” that does not imply consciousness, since I find your definition a tad unusual.

I think some of your definitions are unusual but you are of course entitled to your opinion.

My definition is found in Merriam-Webster’s dictionary remember? If anything, it’s your definition that is unconventional (an entity perceiving without possessing consciousness).


Tisthammerw said:
It’s still a little fuzzy here. For instance, are you saying a person can “perceive” an intensely bright light without being aware of it through the senses? If so, we are indeed using different definitions of the term (since in this case I would be referring to definition 2 of the Merriam-Webster dictionary).

I am not saying this, where did you get this idea?

You. You said that your definition did not require consciousness, remember? If a person perceives in the sense that I’m using the word then the person by definition possesses consciousness. But if your definition does not require consciousness, then by all logic it must be different from my definition (please look it up in the dictionary I cited). So please answer my question regarding a person “perceiving” an intensely bright light.


The statement “I perceive the truth of statement X” is a meaningful statement in the English language, but it has nothing whatsoever to do with experiential sense-perception

True, hence (if you recall) I defined “perceive” as definitions 1a and 2 in Merriam-Webster’s dictionary. (Which definition applies is clear in the context of the sentence.)


Does your dictionary explain this to you?

Yes (see above).


No?

That is incorrect. My answer is yes.
 
  • #138
moving finger said:
Tisthammerw said:
simply ask Bob in this thought experiment. He’ll honestly tell you that he has no idea what’s going on in the outside world.

Thus you have established that the individual called Bob (or to be precise, Bob’s consciousness) does not understand. But you have not shown that the system (of which Bob’s consciousness is simply one component) does not possess understanding.

At least we’re covering some ground here. We've established that Bob does not understand what's going on even though he runs program X.


Tisthammerw said:
If program X is for understanding Chinese, ask Bob if he knows what Chinese word X means and he’ll honestly reply “I have no idea” even though he runs program X. Unless perhaps you are going to claim that the combination of the man, the rulebook etc. somehow magically creates a separate consciousness that understands Chinese, which doesn’t sound very plausible.

If one thinks understanding is magic then this of course explains why one might think it is not plausible. One cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands, just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.

I only call it “magic” because that is what it sounds like, given its rank implausibility. Do you really believe that the combination of the man, the rulebook etc. actually creates a separate consciousness that understands Chinese? As for the assessment of understanding in the system by asking Bob’s consciousness, see below.


Tisthammerw said:
Suppose Bob is a cyborg: when in learning mode his cyborg eyes communicate to his brain a stream of binary digits. Bob doesn't know what the binary digits mean, but he has memorized the rulebook (containing a complex set of instructions identical to program X). And so he does all the mathematical and logical operations that a computer would do. He then does the appropriate actions (make certain sounds he does not understand, move his limbs certain ways etc.). And still, Bob understands nothing. He doesn't even see anything (demonstrating that merely using Program X on the input isn't sufficient for seeing).

When you say in the above “Bob understands nothing”, are you referring now to Bob’s consciousness, or to the entire system which is cyborg Bob (which also now incorporates the rulebook)?

Both, actually. Remember, the definition of understanding I am using (which you have called TH-understanding) requires consciousness. Bob’s consciousness is fully aware of all the rules of the rulebook and indeed carries out those rules. Yet he still doesn’t possess TH-understanding here.

So, now that we have that cleared up, please answer the rest of my questions in post #102.


If the former, then my previous reply once again applies: one cannot show that the system does not possess understanding by asking Bob’s consciousness (one component of the system) whether it understands

I don’t think this reply applies, given that Bob’s consciousness is the only consciousness in the system. Think back to the Chinese room (and variants thereof). Does the system understand (in the sense of TH-understanding)? Does the combination of the man, the rulebook etc. create a separate consciousness that understands Chinese?


just as I cannot establish whether Tisthammerw understands by interrogating one of the neurons in your brain.

But I wasn’t asking Bob’s individual neurons “What does Chinese word X mean?” I was asking Bob’s consciousness. And Bob’s honest reply is “I have no idea.”


If the latter, how have you shown that the entire system which is cyborg Bob (and not just Bob’s consciousness) does not possess understanding?

See above regarding Bob’s consciousness being the only consciousness in the system, unless you claim that the combination of…


I have answered your questions. Please now answer mine.
MF

You’ve only partially answered my questions (because apparently some clarification was needed). And what are your questions? Are there any more you would like answered that weren’t asked here?
 
  • #139
moving finger said:
[defines his terms]

Constructive criticism please?

Yes. All your definitions are wrong, because I don't agree with them. Everything you say is wrong, fallacious and circulus in demonstrando because I use those words differently than you do. I will of course ignore the fact that you are not referring to everybody's definitions, and for many posts to come I will repeatedly claim that your definitions are wrong in spite of any of your many clarification attempts to undo the severe misunderstanding of what you are saying.

HA! NOW YOU KNOW HOW IT FEELS!

Moving on:

moving finger said:
Let us define passive understanding of subject X as the type of understanding which an agent can claim to have by virtue of its existing (previously learned) knowledge of subject X. For example, an agent may be able to claim that it understands Pythagoras’s theorem by explaining how the theorem works logically and mathematically, based on its existing knowledge and information base. The “problem” with passive understanding, which is at the heart of Searle’s CR thought experiment, is that it is not possible for an interrogator to distinguish between an agent with true passive understanding on the one hand and an agent which has simply “learned by rote” on the other.

That was kind of Searle's point. Just because an entity (such as a computer program) can simulate understanding doesn't mean the computer actually has it.


Let us define creative understanding of subject Y as the type of new understanding which an agent develops during the course of learning a new subject – by definition, therefore, the agent does not already possess understanding of subject Y prior to learning about subject Y, but instead develops this new understanding of subject Y during the course of its learning. Clearly, to creatively understand subject Y, an agent needs a dynamic database and possibly also a dynamic program (since the agent needs to learn and develop new information and knowledge associated with the new subject Y). An important question is: would it be possible for an agent to “simulate” creative understanding, and thereby to “fool” an interrogator into thinking that it has learned new understanding of a new subject when in fact it has not?

Yes. Variants of the Chinese room cover this – for example, the man in the room can “learn” a person’s name by using many sheets of paper to store data and following the appropriate rules (think of a Turing machine and its tape).
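For concreteness, the "sheets of paper" variant only adds a writable store to the same syntactic rule-following. The sketch below is my own illustration of that point, not Searle's construction; the rules and phrasings are invented:

```python
# Sketch of the "sheets of paper" variant: rule-following plus writable storage,
# so the room can echo back facts it was told earlier. Rules are invented.

memory = {}  # the stack of blank sheets

def handle(message):
    # Rule 1: "my name is X" -> write X on a sheet
    if message.startswith("my name is "):
        memory["name"] = message[len("my name is "):]
        return "nice to meet you"
    # Rule 2: "what is my name?" -> copy back whatever the sheet says
    if message == "what is my name?":
        return memory.get("name", "you have not told me")
    return "please rephrase"

print(handle("my name is Wei"))    # nice to meet you
print(handle("what is my name?"))  # Wei
```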


It seems clear that to improve our model and to improve our understanding of “understanding” we need to modify both the Turing test and the CR experiment, to incorporate tests of creative understanding. How might this be done? Instead of simply asking questions to test the agent’s pre-existing (learned) understanding, the interrogator might “work together with” the agent to explore new concepts and ideas, incorporating new information and knowledge, leading to new understanding.

See above. The Chinese room can be modified as such, and yet there is still no (TH) understanding.


IF the Turing test is expanded to incorporate such tests of creative understanding, would we then eliminate the possibility that the agent has “simulated understanding and fooled the interrogator by learning by rote”?

The answer is no (at least when it comes to TH-understanding).
 
  • #140
Tisthammerw said:
I find that a tad implausible (at least, if we are using my definition of understanding) and a bit speculative, and to the very least it's not a necessary truth. Searle's point is that the man can simulate a conversation in Chinese using the rulebook etc. without knowing the language, and I think this is true (it is logically possible for the man to faithfully use the rulebook without understanding Chinese).

I see. As in the example I gave of a person who understands only Mongolian but translates French into English using translation texts: the only understanding present is the understanding the Mongolian speaker has of everything else... excluding the two languages, French and English.

Tisthammerw said:
Personally (for the most part) I don't think I would understand the meaning of a Chinese character no matter how often I looked at it. For the most part (as far as I know) modern Chinese characters aren't the same as pictures or drawings; and a string of these characters would still leave me even more baffled. And don't forget the definition of understanding I am using: a person has to know what the word means. This clearly isn't the case here ex hypothesi.

I see what you're getting at. I will still enter the contaminating effect of the pictorial nature of the Chinese language as a potential source of error in the experiment. The very fact that "subliminal influence" is a real influence in the bio-organic (human, etc.) learning process suggests that whatever remains of the "pictorial nature" of Chinese calligraphy in its modern form also contaminates the experiment.

This brings me to the categories of the "sub-conscious" and consciousness. Your statement that "understanding requires consciousness" is made true by the fact that if the man in the room were unconscious he would not only have no understanding of a language... but no understanding of his task or his circumstances.

With regard to the sub-conscious, however, I doubt we can repeat what I've determined about "consciousness" and understanding above. I mentioned "subliminal influences," and these, I believe, rely on what is termed the "sub-conscious". The subconscious is a powerful function of the brain because it is able to compile information from every source simultaneously. It also organizes that information in its non-stop attempt to assist the continued survival and existence of the organism that houses the brain.


Tisthammerw said:
And if need be we could give the person in the room a learning disability preventing him from learning a Chinese character merely through looking at it (and when I look at Chinese words, I sometimes think I have that learning disability).

Ha!

To do this experiment properly I believe we need a sorting machine... like at the post office. These machines read the ZIP code and sort each piece of mail accordingly. So, in this case, we'd slip a piece of paper under the door into my "MAIL ROOM thought experiment".

The piece of paper would have a ZIP code on it. When the piece of paper ended up in Palm Beach... we could ask, "Did the room 'know or understand' that it was to send my piece of paper, with a ZIP code on it, to Palm Beach?"
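To put the mail-room analogy in mechanical terms, routing by ZIP code is just a table lookup, so the question "did the room know where to send it?" reduces to whether lookup counts as knowing. A toy sketch of my own; the ZIP-to-destination entries are made-up placeholders:

```python
# Toy sketch of the mail-room sorter: routing is a table lookup.
# The ZIP-to-destination entries are illustrative placeholders.

ZIP_TO_BIN = {
    "33480": "Palm Beach",
    "90210": "Beverly Hills",
}

def sort_letter(zip_code):
    """Return the destination bin for a letter, or send it to manual sorting."""
    return ZIP_TO_BIN.get(zip_code, "manual sorting")

print(sort_letter("33480"))  # Palm Beach
```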

Tisthammerw said:
And what flaws are these? My main beef with some of these arguments is that they show how a human can learn a new language. That's great, but it still seems a bit of an ignoratio elenchi and misses the point of the thought experiment big time. Remember, what's in dispute is not whether humans can learn, but whether computers can.

Yes... I see. Computers are programmed. They don't really have a choice about whether they are programmed or not. Humans learn. Some have the motivation to do so... some do not. There are many, many factors behind the function of learning. I realize that at some hypothetical point a computer might have the power to gather its own data and program itself... but I prefer to deal with actualities rather than "what ifs".
 
