In the absence of an objective explanation there is no objective way of
testing for consciousness. Of course there is still a subjective way; if
you are conscious, the very fact that you are conscious tells you you are
conscious. Hence Searle puts himself inside the room.
This subjective “test” that you suggest only allows the subject to determine whether “itself” is conscious. It says nothing about the consciousness of anything else.
If we want to truly know about the consciousness of anything else, we have to
start with the fact that we know ourselves to be conscious, and find out how
that consciousness is generated. At the moment we do not have the ability to infer from
the subjective test to an objective test of consciousness in that way, so we may be
tempted to use things like the Turing test as a stopgap. Searle's argument is that
we should not take the Turing test as definitive, since there is a conceivable
set of circumstances in which the test is passed but the appropriate
consciousness is not present (by the subjective test).
As I have stated several times, the intelligence of an artificial intelligence
needs to be pinned to human intelligence (albeit not in a way that makes it
trivially impossible) in order to make the claim of "artificiality"
intelligible. Otherwise, the computer is just doing something -- something
that might as well be called information-processing, or symbol manipulation.
Imho this is just what the human brain does – information-processing, or symbol manipulation.
So how does that relate to artificial intelligence? You seem to be saying not
so much that computers are artificial brains as that brains are natural computers.
Is "bachelors are unmarried because bachelors are unmarried" viciously
circular too ? Or is it -- as every logician everywhere maintains -- a
necessary, analytical truth ?
The difference between an analytic statement and a synthetic one is that the former is true “by definition”; therefore to claim that something is an “analytical truth” is a non sequitur.
No, there are analytical falsehoods as well, e.g. "Bachelors are married".
Analytic statements are essentially uninformative tautologies.
However, whether the statement “consciousness is necessary for understanding” is analytic or synthetic is open to debate. In my world (where I define understanding such that consciousness is not necessary for understanding), it is synthetic.
I guess that Tournesol would claim the statement “all unicorns eat meat” is synthetic, and not analytic?
But if I now declare “I define a unicorn as a carnivorous animal”, then (using your reasoning) I can claim the statement is now analytic, not synthetic.
According to your reasoning, I can now argue “all unicorns eat meat because I define a unicorn as a carnivorous animal”, and this argument is a valid argument?
Whether it is a valid analytical argument depends on whether the definitions it
relies on are conventional or eccentric.
This is precisely what the argument “understanding requires consciousness because I define consciousness as necessary for understanding” boils down to.
That is what it would boil down to if "understanding requires consciousness"
were unconventional, like "unicorns are carnivores", not conventional like
"bachelors are unmarried".
If you understand something, you can report that you know it, explain how you know it, etc.
Not necessarily.
Yes, by definition. That is the difference between understanding and instinct,
intuition, etc. A beaver can build dams, but it cannot give lectures on civil
engineering.
The ability “to report” requires more than just “understanding Chinese”.
??
That higher-level knowing-how-you-know is consciousness by definition.
I suppose this is your definition of consciousness? Is this an analytic statement again?
You don't seem to have an alternative.
The question is whether syntax is sufficient for semantics.
I’m glad that you brought us back to the Searle CR argument again, because I see no evidence that the CR does not understand semantics.
Well, I have already given you a specific reason; there are words in human
languages which refer specifically to sensory experiences.
Here is a general reason for the under-determination of semantics by syntax:
given the set of sentences:
"all floogles are blints"
"all blints are zimmoids"
"some zimmoids are not blints"
you could answer the question
"are floogles zimmoids ?"
and so on -- without of course knowing
what floogles (etc) are. Moreover, I could supply
a semantic model for the syntax, such as:-
floogle=pigeon
blint=bird
zimmoid=vertebrate
So far, so CR-ish. The internal Operator is using meaningless
symbols such as "floogle", and the external Inquisitor is applying
the semantics above, and getting meaningful results.
But I could go further and supply a second semantic model
floogle=strawberry
blint=fruit
zimmoid=plant
and, in a departure from the CR, supply the second semantics
to the Operator. Now the Operator thinks he understands the symbols
he is manipulating, as does the Inquisitor... but
their interpretations are quite different!
Thus there is a prima facie case that syntax underdetermines
semantics: more than one semantic model can be consistently given to
the same symbols.
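To make that concrete, here is a minimal sketch (my own illustration, not anything from
the CR literature; the rule encoding and names are hypothetical) of the same purely
syntactic inference being given two incompatible but equally consistent interpretations:

# Minimal sketch: the Operator's inference is purely syntactic.
# Only the two universal rules are encoded; no semantics is involved.
RULES = {
    "floogle": "blint",   # "all floogles are blints"
    "blint": "zimmoid",   # "all blints are zimmoids"
}

def is_a(x, y):
    # Follow the subset links from x and report whether y is reached.
    while x in RULES:
        x = RULES[x]
        if x == y:
            return True
    return False

# The Operator can answer "are floogles zimmoids?" without any semantics:
print(is_a("floogle", "zimmoid"))   # True

# Two incompatible semantic models, both consistent with the same syntax:
model_1 = {"floogle": "pigeon", "blint": "bird", "zimmoid": "vertebrate"}
model_2 = {"floogle": "strawberry", "blint": "fruit", "zimmoid": "plant"}

for model in (model_1, model_2):
    print(f"all {model['floogle']}s are {model['zimmoid']}s:",
          is_a("floogle", "zimmoid"))

The inference never touches either model; both interpretations fit it equally well,
which is the underdetermination claim in miniature.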
Now the strong AI-er could object that the examples are too
simple to be realistic, and if you threw in enough symbols,
you would be able to resolve all ambiguities successfully.
To counter that, the anti-AIer
needs an example of something which could not conceivably be
pinned down that way, and that's just where sensory terminology comes
in.
(To put it another way: we came to the syntax/semantics distinction by
analysing language. If semantics were redundant and derivable from syntax,
why did we ever feel the need for it as a category?)
How else would you do it? Test for understanding without knowing what
"understanding" means? Beg the question in the other direction by
re-defining "understanding" to not require consciousness?
Are you suggesting the “correct” way to establish whether “understanding requires consciousness” is “by definition”?
I have been consistently suggesting that establishing definitions is completely
different to establishing facts. Defining a word in a certain way does not
demonstrate that anything corresponding to it actually exists. Hence my frequent
use of unicorns as an example.
The correct way to do it is NOT by definition at all. All this achieves is the equivalent of the ancient Greeks deciding how many teeth are in a horse’s mouth “by debate” instead of by experiment.
How can you establish a fact without definitions? How do you tell that the
creature in front of you is in fact a horse without a definition of "horse"?
Hypothesis: Understanding requires consciousness.
Develop the hypothesis further – what predictions would this hypothesis make that could be tested experimentally?
Then carry out experimental tests of the hypothesis (try to falsify it).
How can you falsify a hypothesis without definitions of the terms in which it
is expressed ?
I am suggesting that no-one can write a definition that conveys the
sensory, experiential quality.
“Experiential quality” is not “understanding”.
I do not need the “sensory experiential quality” of red to understand red, any more than I need the “sensory experiential quality” of x-rays to understand x-rays, or the “sensory experiential quality” of flying to understand aerodynamics.
You seem to be saying that
non-experiential knowledge ("red light has a wavelength of around 700nm") *is*
understanding, and all there is to understanding, and experience is
something extraneous that does not belong to understanding at all
(in contradiction to the conclusion of "What Mary Knew").
The conclusion to “What Mary Knew” is disputed.
Indeed, but in what ways, and with what reasonableness ?
If it was a valid response to plonkingly deny that experiential knowledge
is knowledge at all, why didn't Jackson's critics do that
instead of coming out with the more complex responses they
did come out with? Can't you see that you are making an extraordinary claim?
It is perfectly reasonable to suggest that anyone needs normal vision in order to fully understand colour terms in any language.
Perhaps reasonable to you, not to me.
Perhaps you are in the minority. Perhaps you are making an extraordinary claim
with an unshouldered argumentative burden.
Sense-experience is not understanding.
Tu quoque.
It doesn't have any human-style senses at all. Like Wittgenstein's lion, but
more so.
Information and knowledge are required for understanding, not senses.
Aren't senses (at least) channels of information? Don't different senses
convey different information?
“What it looks like” is sense-experience; it is not understanding.
Tu quoque.
However, I do not need to argue that non-experiential knowledge is not knowledge.
Why not - is this perhaps yet another analytic statement?
It doesn't affect my conclusion.
However, if you can do both you clearly have more understanding than someone
who can only do one or the other or neither.
It's not at all “clear” to me – or perhaps you also “define” understanding as “requiring sense-experience”? Analytic again?
Sigh...it's just a common-sense observation. People who have learned from experience
have more understanding than people who haven't.
The question is irrelevant – because “ability to fly a plane” is not synonymous with “understanding flight”.
Are you saying you only put your trust in the pilot because he “understands”?
If the same plane is now put onto autopilot, would you suddenly want to bail out with a parachute because (in your definition) machines “do not possess understanding”?
They do not have as much understanding, or planes would be on autopilot all
the time.
They don't know what Mary doesn't know.
We are talking about “understanding”, not simply an experiential quality.
What is it that you think Mary “understands” once she has “experienced seeing red” that she necessarily did NOT understand before she had “experienced seeing red”?
What red looks like. The full meaning of the word "red" (remember, this is
ultimately about semantics).
(remember – by definition Mary already “knows all there is to know about the colour red”,
No, by stipulation Mary knows all there is to know about red that can be
expressed in physical, 3rd-person, non-experiential terms.
and sense-experience is sense-experience, it is not understanding)
Tu quoque.
I claim that experience is necessary for a *full* understanding of *sensory*
language, and that an entity without sensory experience therefore lacks full
semantics.
And I claim they are not. The senses are merely “possible conduits” of information.
There is no reason why all of the information required to “understand red”, or to “understand a concept” cannot be encoded directly into the computer (or CR) as part of its initial program.
Yes there is. If no-one can write down a definition of the experiential nature
of "red" -- and you have consistenly failed to do so -- no-one can encode it into a programme.
Now, you could object that the way "red" looks is just the way the visual
system conveys information and is not information itself; and I could reply
that knowledge about how red looks is still knowledge, even if it isn't
the same knowledge as is conveyed by red as an information-channel.
In principle, no sense-receptors are needed at all. The computer or CR can be totally blind (ie have no sense receptors) but still incorporate all of the information needed in order to understand red, syntactically and semantically. This is the thesis of strong AI, which you seem to dispute.
Yes. No-one knows how to encode all the information. You don't.
If you are going to counter this claim as stated, you need to rise to the
challenge and show how a *verbal* definition of "red" can convey the *experiential*
meaning of "red".
My claim (and that of strong AI) is that it is simply information, and not necessarily direct access to information from sense-receptors, that is required for understanding. Senses in humans are a means of conveying information – but that is all they are. This need not be the case in all possible agents, and is not the case in the CR. If we could “program the human brain” with the same information then it would have the same understanding, in the absence of any sense-receptors.
With respect, you still haven't shown, specifically, how to encode experiential information.
Saying "it must be possible because strong AI says it is possible" is , of
course,
circular. The truth of strong AI is what is being disputed, and the
uncommunicability of experiential meaning is one of the means of disputing it.