
John Searle's China Room

  1. Oct 10, 2005 #1
    Here is a good puzzle for the Artificial Intelligence buffs in the crowd. The premise is known as the China Room and doesn't have Jane Fonda or Jack Lemmon in the cast... in fact, the only characters that hold any attention during this little drama are some Chinese characters.

    Here is the situation explained with a bit of background on John Searle.

    If these ideas spur any thoughts, please feel free to share them in this thread on the differences, if any, between human thoughts and physical events such as brain states.
     
  3. Oct 10, 2005 #2
    Searle's Chinese Room argument is "infamous" - it has already been the subject of, or discussed in, several other threads here, such as :

    https://www.physicsforums.com/showthread.php?t=89713

    https://www.physicsforums.com/showthread.php?t=91002


    I agree with the final paragraph in the first post of this thread which states :

    "although the individual in the room does not understand Chinese, neither do any of the individual cells in our brains. A person's understanding of Chinese is an emergent property of the brain and not a property possessed by any one part. Similarly, understanding is an emergent property of the entire system contained in the room, even though it is not a property of any one component in the room - person, book, or paper."


    MF
     
    Last edited: Oct 10, 2005
  4. Oct 10, 2005 #3
    The systems reply says the thinker (in the scenario) isn't Searle, it's the whole Searle-in-the-room system. Searle responds by imagining himself to "internalize all the elements of the system" by memorizing the instructions, etc.: "all the same," he intuits, he "understands nothing of the Chinese" and "neither does the system" (p. 419).
     
  5. Oct 10, 2005 #4
    I dispute Searle's conclusion. In actuality, Searle's mind would not be CONSCIOUS of any understanding of Chinese, but having internalised all the elements of the system the physical entity called Searle would understand Chinese nevertheless.

    Searle's argument that the Chinese Room does not truly understand Chinese only works if one accepts the implicit assumption that understanding necessarily means conscious understanding, but this is an anthropocentric perspective. I do not agree that conscious understanding is necessary in order to achieve understanding.

    MF
     
  6. Oct 10, 2005 #5
    A cure for cancer - and/or the answer to many seemingly unanswerable questions - lies waiting in the use of what are termed "genetic algorithms". Genetic algorithms are an artificial intelligence technique that can draw on the resources of multiple sources... i.e. one networks all the computing power of the idle computers available and begins to process all the data in the main server of a clinic/research facility.

    The extra computing power of the combined, previously idle computers is used to cross-reference all..... and I mean all of the patient and disease data gathered by research in all areas, statistics in all areas and other misc. data from all areas of the clinic and research facilities......

    Genetic algorithms simulate the "free associative" function of our brain but have 800% more access to 1000% more information (exaggeration). The result is a much broader "understanding" of the problem, unfettered by ego, competition, money concerns, bribes, wives, car problems and other human concerns that hinder the proper examination of problems and their solutions.

    Does a genetic algorithm "understand" what it's studying any better than a human does? Does a positive result from the use of the genetic algorithm (i.e. a cure for cancer... or coming up with a strategy for converting to an alternative to an oil-based economy)... does this positive result demonstrate a "better understanding" of a problem than the human understanding that has no ability to solve the problem?
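
    For concreteness, here is a minimal sketch of the loop a genetic algorithm actually runs - selection, crossover, mutation, repeat. The bit-string encoding and the toy fitness function are illustrative assumptions only, not any real medical model:

    [code]
    import random

    def fitness(genome):
        # Toy objective: count the 1-bits. A real application would score a
        # candidate solution (e.g. a treatment-parameter vector) instead.
        return sum(genome)

    def mutate(genome, rate=0.01):
        # Flip each bit with a small probability.
        return [bit ^ 1 if random.random() < rate else bit for bit in genome]

    def crossover(a, b):
        # Single-point crossover: splice two parent genomes at a random cut.
        cut = random.randrange(1, len(a))
        return a[:cut] + b[cut:]

    def evolve(pop_size=50, genome_len=32, generations=100):
        population = [[random.randint(0, 1) for _ in range(genome_len)]
                      for _ in range(pop_size)]
        for _ in range(generations):
            # Keep the fitter half as parents; refill with mutated offspring.
            population.sort(key=fitness, reverse=True)
            parents = population[:pop_size // 2]
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    print(fitness(evolve()))  # approaches genome_len as the population converges
    [/code]

    Note that networking idle computers to share the workload, as described above, is distributed computing; it can host such a loop across many machines, but it is a separate technique from the genetic algorithm itself.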
     
  7. Oct 10, 2005 #6
    Interesting question.

    I suggest Searle's response (forgive me my presumptiveness) would be "No, there is no way such an algorithm could in principle understand anything, unless it possessed consciousness". Anyone disagree?

    My response would be : Let us see. I am prepared to believe (in principle) that such an algorithm, if it were sufficiently complex, could literally understand the problem (without being conscious), and I am also prepared to believe (again in principle) that such an algorithm could demonstrate a better understanding of the problem than any human has done or could do.

    I look forward to the day this happens!

    MF
     
  8. Oct 10, 2005 #7
    I would agree with you in a sense, but only in regard to a self-referential language such as math. I would say that a computer can understand math because it takes no further information than simply the structure and rules of the language to understand it.
    Languages like English, Chinese, or what have you require more information to understand than simply the language itself. The purpose of such a language is to transmit the information which the words represent. Without access to the information being represented there will be no "understanding". The vast majority of words require experiential information to understand, and most of the rest are just connecting "syntactic" words. A computer would require an understanding of the information being represented in order to formulate proper, coherent responses. Lacking this understanding, it would only be capable of spitting out stock responses, which would be rather limited (see the sketch at the end of this post).

    It would have an understanding of the process it is using and the math involved. It doesn't necessarily possess an understanding of the fact that it is helping to process information about cancer, or of what cancer is. The computer cannot even determine whether or not the process it is using is successful, except that it has successfully executed the algorithm it was asked to execute. The computer's paradigm doesn't actually extend outside of its program and network unless you can give it the means to accomplish that.
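
    To make the "stock responses" point concrete, here is a toy Python sketch of pure symbol-matching in the spirit of the Chinese Room. The rule table and phrases are invented for illustration:

    [code]
    # A caricature of the Chinese Room: input symbols are paired with canned
    # output symbols purely by pattern. No meaning is consulted anywhere.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",    # "How are you?" -> "Fine, thanks."
        "你懂中文吗？": "懂，一点点。",  # "Do you understand Chinese?" -> "A little."
    }

    def room(symbols_in):
        # Pure lookup: no model of the world, no experiential information.
        return RULE_BOOK.get(symbols_in, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room("你好吗？"))  # a "coherent" stock response, produced without understanding
    [/code]

    Any input the rule book's author did not anticipate falls flat immediately - exactly the limitation described above.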
     
  9. Oct 10, 2005 #8
    As usual for any good discussion to survive, the terminology used at the centre of the discussion must be clearly and somewhat universally defined so that there is a mutually accepted understanding of the subject matter of the discussion.

    And I completely forgot to include some definitions for "understanding" and "conscious".

    It is pointless to continue to discuss a subject without agreed-upon definitions of the words we (mostly I, so far) spout. So I've found this definition of consciousness, which the writers claim is impossible to define, while offering this:

     
  10. Oct 10, 2005 #9
    So what do you think? Is it possible or likely that computers possess either 'Phenomenal Consciousness' or 'Access Consciousness' or both in some form or another?
    I'm not so sure if I agree that consciousness requires a sense of self. I think this is just more or less a byproduct of the sort of consciousness that humans experience.


    Here's a basic dictionary definition of "understanding"...
     
  11. Oct 11, 2005 #10
    The "information being represented" can be encoded into a database. I understand English, if you cut me off from the outside world I no longer have direct access to the information sources from the outside world, but I still continue to understand English, because the information I need to be able to understand English is encoded into my database. The only significant difference between me and a computer that understands English is the fact that I am conscious of the fact that I understand English whereas a computer need not necessarily be conscious. In what sense do you think the computer (or the CR) would not have access to the information being represented?

    Experiential information is information nevertheless, and all information can be encoded into a database. Take the noun "chair". I guess you would say it means something to you and me because we have experienced a chair - what it looks like, what it feels like, what purpose it serves in everyday life, etc etc - but the computer or CR could not experience these things and therefore would not understand the noun "chair"; is that it? I disagree with this logic. The "experiential information" of what a chair looks like, feels like, etc etc can all be encoded into the computer or CR as part of its database, just as the same experiential information is encoded into my brain, so that I continue to have an image of what a chair looks like even when I am cut off from all sensory experience.

    The computer could understand in just the same way as you or I, because it could be encoded with the same experiential information as you or I. My eyes are simply a way of gathering visual information; it is quite possible in principle for the same visual information to be encoded into my brain directly, bypassing my eyes, so that I could understand what a chair looks like without ever having seen a chair. In a similar way, a computer could have access to the data which allows it to understand what a chair looks like.
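
    To illustrate, here is a minimal Python sketch of how such "experiential information" about a chair might be encoded as ordinary data. All the field names and values are invented for illustration:

    [code]
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        # A toy record of experiential features stored as plain data.
        name: str
        looks_like: list = field(default_factory=list)
        feels_like: list = field(default_factory=list)
        used_for: list = field(default_factory=list)

    chair = Concept(
        name="chair",
        looks_like=["four legs", "flat seat", "backrest"],
        feels_like=["rigid", "catches your weight when you sit down"],
        used_for=["sitting", "resting"],
    )

    # Whether querying such a record amounts to "understanding" is exactly
    # the question in dispute in this thread.
    print(chair.feels_like)
    [/code]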

    "doesn't necessarily" - but it COULD be programmed with that understanding. There is no reason why a computer working on a cure for cancer should not be programmed to understand what cancer is, how it affects people, why it is working on a cure, what the implications are, etc etc etc. There is in principle no limit to the amount of understanding that it could be programmed with.

    The means to accomplish that are (in the case of the CR) the scribbled notes passed back and forth - this is the computer's access to the outside world.

    My paradigm does not extend outside of my program and network unless you give me the means to accomplish that - but if you lock me away in a room and deprive me of all sensory information, does that mean I suddenly cease to understand English? No, of course not. And the same would be true of the computer.

    With respect

    MF
     
  12. Oct 11, 2005 #11
    With respect, the question that needs to be addressed in this thread is NOT whether computers possess any kind of consciousness, but whether consciousness is a necessary pre-requisite for understanding.

    MF
     
  13. Oct 11, 2005 #12
    I am not saying that I accept this as a correct or full definition, but let us work with it for the time being.

    Note that nowhere in this definition does it specify or imply that consciousness is a prerequisite for understanding.

    Now - Which part of this do you think a computer could not in principle accomplish, and why?

    With Respect

    MF
     
  14. Oct 11, 2005 #13
    We are comparing human thought processes to those of a computer here. So far we haven't specified what type of computer... for instance, one that is housed in sensory devices that feed the system data from its environment. So far all we've noted is that humans feed the computer data.

    Here's a good definition of "understanding", in addition to what Statutory Ape has offered and to which MF has found reason to offer a (timely) rebuttal.

    I think we can see the bias in this definition. However, personally and briefly, my definition is a little less formal, in that I believe understanding means more than being able to match symbols and concepts to physical objects and events. I think understanding requires a compassion or an empathy that can be utilized in "grasping" (thank you, apeman) any given subject.

    There are several "strings" or tangents that come along with understanding a subject, culture, language etc., and they can only be used to understand that subject. If the enquirer has spent time chemically reacting with an environment or set of circumstances similar to the ones the subject has also experienced, the state of understanding the subject is easier and much quicker to reach than via the traditional method of gathering "data" on a subject in order to understand it.

    For instance, there was the example of understanding a chair that was mentioned, and the one thing a computer - even a computer housed in sensory devices - does not have is a chemically composed body wrapped in an organ called the dermis. When this chemical bag we call a body sits on a chair, we get an understanding of the chair as it applies to a human body. Not as it applies to mathematics or physics or even ergonomics... it's really about how our muscular, skeletal, cellular systems etc. make contact with the chair, how our rate of fall is caught, and whether we feel "secure" in the chair... etc.

    What I'm saying, I just noticed, is that "understanding" is relative to the conditions of the enquirer... i.e. the conditions of its physical state as a physical phenomenon on the planet (with gravity, water, sun, blah blah)...

    I hope I've made some progress in describing understanding here... although it also seems that it was a process of 2 steps forward, 2 steps back.

    Any further comments are appreciated.
     
  15. Oct 11, 2005 #14
    My apologies to the StatApe for practically duplicating his thread here in Philosophy. It's interesting that it's taking a turn toward defining understanding and consciousness in this forum, rather than leaning heavily toward the computing aspect of AI. Can you dig the Asimov dude? Right on! His trilogy that ended with THE ROBOTS OF DAWN is gnarly!
     
  16. Oct 11, 2005 #15
    The claim of Strong AI is that computers can fully implement -- not just imitate or emulate -- a human mind. That being the case, the "anthropocentric" attitude to understanding is appropriate -- a fully-featured human mind is indeed the benchmark.

    To argue that there are features of human consciousness which can reasonably be left out of an AI is to argue for weak AI, and Searle is not even attempting to argue against weak AI.
     
    Last edited: Oct 11, 2005
  17. Oct 11, 2005 #16
    For the record, I did not rebut the previous definition offered by Statutory Ape - I said let's work with it for the time being.

    A computer could also (in principle) be constructed which "chemically reacts" (quaint terminology) with its environment. But the "reaction with the environment" is simply a way of gathering data (input). It is well known and accepted that I can get a better appreciation for and understanding of French by going to live and work in France, but nevertheless it is still possible for me to learn (and understand) French from a teacher and textbook in a classroom in London, cut off from direct French experience. In principle, this knowledge and understanding of French could also be programmed directly into a brain or computer, bypassing textbook and teacher.

    There is no reason in principle why we should not be able to construct a computer with a "dermis substitute" through which it could acquire sensory data equivalent to our feeling/touching. But once again, as with the "learning French" example above, there is also no reason why the sensory information could not be directly programmed into the computer's database, so that it acquires the experiential data of touching/feeling a chair even though it has never sat in a chair.

    And similarly, a computer could in principle be constructed which could appreciate, know and understand how a chair feels to the computer body.

    More correctly, experiential knowledge is relative to the conditions of the enquirer.

    How does any of this show that a computer cannot in principle understand anything?

    MF
     
  18. Oct 11, 2005 #17
    With respect, the question implicit at the beginning of this thread is whether or not a computer could rightfully claim to possess understanding, NOT whether a computer can “fully implement a human mind”.

    I believe the subject under discussion here is therefore “whether or not a computer could ever (in principle) rightfully claim to possess understanding”, and not “whether or not a computer could ever (in principle) fully implement a human mind”. The two questions are very different.

    Any guidance from quantumcarl about which question we should be debating here, before I continue?

    With Respect

    MF
     
  19. Oct 11, 2005 #18
    I would interject here and try to guide the discussion toward a better understanding of the state of "understanding" for the moment. I believe it's of value to do so before we can carte blanche say that understanding is a singularity so easily "understood".

    There have been two categories put forth that we can relate to "understanding": 'Phenomenal Consciousness' and 'Access Consciousness'. Let's term them "phenomenal understanding" and "access understanding".

    Here we can demonstrate "phenomenal understanding" when we learn French in France, with all the phenomena of the culture - the wine, the coffee houses, the various collections of genetic material and dialects - bombarding our senses and demanding that we learn French or starve, etc.

    Demonstrating "access understanding" we'd simply sit in a classroom and watch videos of the Eiffel Tower, striped shirts and flauncy skirts and learn the linguistic's grammatical interpretations and the vocabulary hoops and rollercoasters. We could import some of the culture of france to the classroom in the form of bread, cheese and even a guest speaker.... but, are we experiencing the phenomenon of France and the origins and conditions of its language? Not by a long shot.

    I would also say that there are more than two types of understanding. I would say there is a type of understanding for every stage of a human's development as a biological unit. There is a type of understanding for every different animal in the mammalian group, and so on and so forth.

    When MF suggests "we" could build a computer with a skin that senses the chair, and that "we" could build a computer that uses chemical interaction to assess its environment, that is simply MF's opinion, based on a fictitious fancy that may or may not come true. There's no proof that we could do these things, and I doubt we could, judging from the lack of advances in robotics over the last 20 years. Besides, the economics involved in building such a "bot" would be ludicrous when you realize we have "ready-built" humans from which to draw "understanding" of any number of topics... including chairs.

    The closest I can get to a definition of "understanding" is that it is a word that humans use to describe a type of state of awareness... in a human.

    The key here is that "understanding" is a human description that probably does not apply to any other organism or entity. When we try to assign a human term to a computer or even to a chimpanzee... we are simply anthropomorphizing something that is not human... and that is contradictory in nature and probably a fruitless endeavour.
     
  20. Oct 12, 2005 #19
    Thank you.
    I am not saying that I agree with the logic here, but let’s go with it for the time being. Both forms of understanding described above qualify as understanding in my book. In both cases, the net result (the educated pupil) is a person who understands French. Their appreciation for French culture etc, and the quality of their understanding of French, may be slightly better in one case than the other, but in both cases there is an understanding of French.
    Possibly.
    I beg to differ, but do we really want to get into a long detailed description of how machines could be equipped with various forms of sensory input? Does anyone dispute that a machine could be constructed which is able to acquire and to process visual image inputs, or audio inputs? Why should it be so difficult to conceive of a machine which could acquire and process tactile inputs? If anyone thinks this is impossible in principle then, with all due respect, I suggest they do some more homework.
    With all due respect, quantumcarl, what on earth do the “economics” have to do with any of this? We are (I thought) discussing whether things are possible in principle, and not whether it makes economic sense to carry them out.
    With respect, this is an extreme version of anthropocentrism. Imagine we meet an intelligent alien at some future date – are you suggesting that we would necessarily conclude the alien has no ability to understand ANYTHING simply because it is not human?
    With respect, I suggest the definition of “understanding” that you offer here is totally inadequate.
    (An analogy : it is like saying that "intelligence" is a "human description that probably does not apply to any other organism or entity" - does this mean that it is impossible for any other species or agent (including aliens) to possess intelligence?)
    I suggest that the problem in fact resides in your extremely anthropocentric definition of understanding. I would ask whether anyone else reading this thread agrees with your definition, which implies that only homo sapiens could ever be said to understand anything.

    With respect

    MF
     
    Last edited: Oct 12, 2005
  21. Oct 12, 2005 #20
    What is implicit is in the eye of the beholder. The thread is explicitly about the "China Room".
     