It is a test based on an explanation; I am saying we have to solve the hard problem first, before we can have a genuine test.
In other words, in the absence of an explanation, it makes no sense to test for consciousness? There is thus no logical basis for Searle’s conclusion that the CR does not possess consciousness, correct?
In the absence of an objective explanation there is no objective way of
testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.
If manipulating symbols is all there is to understanding, and if consciousness is part of understanding, then there should be a conscious awareness of Chinese in the room (or in Searle's head, in the internalised case).
That is a big “if”. For Searle’s objection to carry any weight, it first needs to be shown that consciousness is necessary for understanding. This has not been done (except by “circulus in demonstrando”, which results in a fallacious argument).
That consciousness is part of understanding is established by the definitions
of the words and the way language is used. Using words correctly is not
fallaciously circular argumentation: "if there is a bachelor in the room, there is a man in the room" is a perfectly valid argument. So is
"if there is a unicorn in the room, there is a horned animal in the
room". The conceptual, definitional, logical correctness of an
argument (validity) is a separate issue to its factual, empirical correctness (soundness).
If the conclusion to an argument were not in some way contained in its premisses, there would be no such things as logical arguments in the first place.
The problem with circular arguments is not that the premiss contains the conclusion; the problem is that it does so without being either analytically true (e.g. by definition) or synthetically true (factually, empirically).
You could claim that consciousness is not necessarily part of machine understanding; but that would be an admission that the CR's understanding is half-baked compared to human understanding...unless you claim that human understanding has nothing to do with consciousness either.
I am claiming that consciousness is not necessary for understanding in all possible agents. Consciousness may be necessary for understanding in humans, but it does not follow from this that this is the case in all possible agents.
To conclude from this that “understanding without consciousness is half-baked” is an unsubstantiated anthropocentric (one might even say prejudiced?) opinion.
As I have stated several times, the intelligence of an artificial intelligence needs to be pinned to human intelligence (albeit not in a way that makes it trivially impossible) in order to make the claim of "artificiality" intelligible. Otherwise, the computer is just doing something -- something that might as well be called information-processing, or symbol manipulation.
No-one can doubt that computers can do those things, and Searle doesn't
either. Detaching the intelligence of the CR from human intelligence does
nothing to counteract the argument of the CR; in fact it is suicidal to the
strong AI case.
But consciousness is a definitional quality of understanding, just as being unmarried is a definitional quality of being a bachelor.
To argue “consciousness is necessary for understanding because understanding is defined such that consciousness is a necessary part of understanding” is a simple example of “circulus in demonstrando”, which results in a fallacious argument.
Is "bachelors are unmarried because bachelors are unmarried" viciously
circular too ? Or is it -- as every logician everywhere maintains -- a
necessary, analytical truth ?
Quote:
Originally Posted by Tournesol
If you understand something , you can report that you know it, explain how you know it. etc. That higher-level knowing-how-you-know is consciousness by definition.
I dispute that an agent needs to know in detail “how it knows” in order for it to possess an “understanding of subject X”.
“To know” is “to possess knowledge”. A computer can report that it “knows” X (in the sense that the knowledge X is contained in its memory and processes); it might (if it is sufficiently complex) also be able to explain how it came about that it possesses that knowledge. By your definition such a computer would then be conscious?
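To make this concrete, here is a minimal sketch (in Python; every name is my own invention, and nothing here is claimed to settle the dispute) of the kind of system I have in mind: one that can report that it “knows” X and explain how it came to possess that knowledge.

[code]
# Purely illustrative sketch (hypothetical names): a store of facts together
# with a record of how each fact was acquired, so the system can both report
# that it "knows" X and explain how it came to know it.

class KnowledgeBase:
    def __init__(self):
        self._facts = {}  # fact -> provenance (how it was acquired)

    def learn(self, fact, source):
        self._facts[fact] = source

    def knows(self, fact):
        """Report whether the fact is contained in memory."""
        return fact in self._facts

    def explain(self, fact):
        """Explain how the system came to possess the fact, if it does."""
        if fact in self._facts:
            return f"I know that {fact} because it was {self._facts[fact]}."
        return f"I have no record that {fact}."


kb = KnowledgeBase()
kb.learn("caviare is sturgeon eggs", "read from an encyclopaedia entry")
print(kb.knows("caviare is sturgeon eggs"))    # True
print(kb.explain("caviare is sturgeon eggs"))  # cites the provenance record
[/code]

Whether that kind of reporting amounts to consciousness is, of course, exactly the point at issue.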
Maybe. The question is whether syntax is sufficient for semantics.
I think not. imho what you suggest may be necessary, but is not sufficient, for consciousness.
Allow me to speculate.
Consciousness also requires a certain level of internalised self-representation, such that the conscious entity internally manipulates (processes) symbols for “itself” which it can relate to other symbols for objects and processes in the “perceived outside world”; in doing this it creates an internalised representation of itself in juxtaposition to the perceived outside world, resulting in a self-sustaining internal model. This model can have an unlimited number of possible levels of self-reference, such that it is possible that “it knows that it knows”, “it knows that it knows that it knows” etc.
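Purely as an illustration of the kind of self-reference I mean (a speculative Python sketch; all names are invented and nothing here is claimed to be conscious):

[code]
# Speculative sketch: a model that holds a symbol for "itself" alongside
# symbols for the perceived outside world, and that can add arbitrarily many
# levels of "it knows that it knows ..." by reflecting on its own beliefs.

class SelfModel:
    def __init__(self):
        self.world = {"red_patch": "an object in the perceived outside world"}
        self.beliefs = {"self sees red_patch"}  # first-order knowledge

    def knows(self, proposition):
        return proposition in self.beliefs

    def reflect(self):
        """Add one further level of self-reference to every current belief."""
        self.beliefs |= {f"self knows that {p}" for p in self.beliefs}


m = SelfModel()
m.reflect()
m.reflect()
print(m.knows("self knows that self knows that self sees red_patch"))  # True
[/code]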
Not very relevant.
If it is a necessary but insufficient criterion for consciousness, and the CR doesn't have it, the CR doesn't have consciousness.
I see. We first define understanding such that consciousness is necessary to understanding. And from our definition of understanding, we then conclude that understanding requires consciousness. Is that how it's done?
How else would you do it? Test for understanding without knowing what "understanding" means? Beg the question in the other direction by re-defining "understanding" to not require consciousness?
Write down a definition of "red" that a blind person would understand.
Are you suggesting that a blind person would not be able to understand a definition of “red”?
No, I am suggesting that no-one can write a definition that conveys the sensory, experiential quality. (Inasmuch as you can write down a theoretical, non-experiential definition, a blind person would be able to understand it.)
Thus the argument that all words can be defined in entirely symbolic terms fails, and with it the assumption that symbol-manipulation is sufficient for semantics.
Sense-experience (the ability to experience the sensation of red) is a particular kind of knowledge, and is not synonymous with “understanding the concept of red”. Compare with the infamous “What Mary Didn’t Know” thought experiment.
Well, quite. It is a particular kind of knowledge, and someone who lacks that particular kind of knowledge lacks full semantics. You seem to be saying that non-experiential knowledge ("red light has a wavelength of around 700nm") *is* understanding, and all there is to understanding, and experience is something extraneous that does not belong to understanding at all (in contradiction to the conclusion of "What Mary Didn't Know").
Of course, that would be circular and question-begging.
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
I disagree. I do not need to have the power of flight to understand aerodynamics.
To theoretically understand it.
Vision is simply an access to experiential information, a person who “sees red” does not necessarily understand anything about “red” apart from the experiential aspect (which imho is not “understanding”).
How remarkably convenient. Tell me, is that true analytically, by definition, or is it an observed fact?
Experiential information may be used as an aid to understanding in some agents, but I dispute that experiential information is necessary for understanding in all agents.
How can it fail to be necessary for a semantic understanding of words that refer specifically to experiences?
Would you deny a blind person’s ability to understand Chinese?
Or a deaf person’s?
They don't fully lack it; they don't fully have it. But remember that a computer is much more restricted.
More restricted in what sense?
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but more so.
The latter is critical to the ordinary, linguistic understanding of "red".
I dispute that an agent which simply “experiences the sight of red” necessarily understands anything about the colour red.
Well, that is just wrong; they understand just what Mary doesn't: what it
looks like. It may well be the case that they don't know any of the stuff that
Mary does know. However, I do not need to argue that non-experiential
knowledge is not knowledge.
I also dispute that “experiencing the sight of red” is necessary to achieve an understanding of red (just as I do not need to be able to fly in order to understand aerodynamics).
You don't need to fly in order to understand aerodynamics *theoretically*.
However, if you can do both you clearly have more understanding than someone
who can only do one or the other or neither.
(Would you want to fly in a plane piloted by someone who had never been in the
air before ?)
If the "information processing" sense falls short of full human understanding, and I maintain it does, the arguemnt for strong AI founders and Searle makes his case.
And I maintain it does not. I can converse intelligently with a blind person about the colour “red”, and that person can understand everything there is to know about red, without ever “experiencing the sight of red”.
No they can't. They don't know what Mary doesn't know.
If what you say is true, it would be impossible for anyone to ever learn from,
or be surprised by, an experience. Having been informed that caviare is
sturgeon eggs, they would not be surprised by the taste of caviare.
But "Sturgeon eggs" conveys almost nothing about the taste of caviare.
Your argument seems to be that “being able to see red” is necessary for an understanding of red, which is like saying “being able to fly” is necessary for an understanding of flight.
It is necessary for full understanding.
If you place me in a state of sensory-deprivation does it follow that I will lose all understanding? No.
They are necessary to learn the meaning of sensory language in the first place.
They are aids to understanding in the context of some agents (e.g. human beings), because that is exactly how human beings acquire some of their information. It is not obvious to me that “the only possible way that any agent can learn is via sense-experience”, is it to you?
I didn't claim sensory experience was the only way to learn, simpliciter.
I claim that experience is necessary for a *full* understanding of *sensory*
language, and that an entity without sensory experience therefore lacks full
semantics.
If you are going to counter this claim as stated, you need to rise to the
challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red". (ie show that if Mary had access to the right books -- the
ones containing this magic definition -- she would have had nothing left to learn).