TheStatutoryApe said:
As I've stated before, Searle has specifically built a construct that will fail to "understand". Also I do not believe that the CR was meant specifically to target the Turing test, but rather a particular AI program of the time. He may have actually built it with parameters mirroring those of the program. I'll see if I can find the description of Searle's manuals for the CR...
Here is a direct quote from Searle in his 1997 book “The Mystery of Consciousness”:
John Searle said:
Imagine I am locked in a room with a lot of boxes of Chinese symbols (the “database”). I get small bunches of Chinese symbols passed to me (questions in Chinese), and I look up in the rule book (the “program”) what I am supposed to do. I perform certain operations on the symbols in accordance with the rules (that is, I carry out the steps in the program) and give back small bunches of symbols (answers to the questions) to those outside the room.
The critical part is the phrase “I perform certain operations on the symbols in accordance with the rules”. How do we interpret it? Does it mean that Searle simply manipulates a static information database (in which case I agree: no dynamic variables, no thoughtfulness, no learning), or does it mean that Searle can create and manipulate dynamic variables as part of the process, and possibly also cause changes to the program itself (in which case I would argue that there could be learning, and this also opens the door to thoughtfulness)? It all hinges on whether the database and program are static parts of the process, or whether both database and program are dynamic (i.e. changeable).
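To make the two readings concrete, here is a minimal Python sketch of the distinction (the rule table, the example entries and the names are my own invention, not anything Searle specifies):

```python
# Reading 1: a purely static process. The rule book maps each input
# directly to an output; nothing in the room changes between questions.
STATIC_RULES = {"你好吗?": "我很好。"}  # hypothetical entries

def static_room(symbols: str) -> str:
    # Pure lookup: no state is created, read or modified.
    return STATIC_RULES.get(symbols, "???")

# Reading 2: "performing operations" is allowed to include creating and
# updating working state, so earlier inputs can influence later outputs.
class DynamicRoom:
    def __init__(self):
        self.state = {}  # dynamic variables persisting across questions

    def respond(self, symbols: str) -> str:
        self.state["last_input"] = symbols  # the process now has a history
        return STATIC_RULES.get(symbols, "???")
```

On the first reading the room is just a function; on the second it is a process with a history, which is at least the raw material for learning.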
TheStatutoryApe said:
First let me comment on some points you are questioning, specifically the CR's "thoughtfulness" or lack thereof, and whether or not Searle specified this. The fact is that Searle does not specify anything about thoughtfulness on the part of the CR, and leaving this out automatically means the CR is incapable of it. If a program is to think it must be endowed with the ability, and nothing in Searle's construction of the CR grants this, save for the human; but the human really only represents a processor.
With respect, I think this is a matter of interpretation. If Searle did not “rule it out” but also “did not rule it in”, then it is not clear to me that the CR is indeed incapable of thinking. “Thinking” is a process of symbol manipulation, and I think we already agree that the enactment of the CR includes the manipulation of symbols (because we agree the CR "understands" syntax). I see nothing that allows us to conclude “the symbol manipulation is purely syntactic, and there is no real thinking taking place”.
TheStatutoryApe said:
The "program" by the definition quoted above is completely static and I think we have already agreed that a static process does not really qualify as "understanding".
By static program and process I assume you mean that the program is not itself modified as part of the enactment of the CR?
Again I think this may be open to interpretation. I do agree that dynamic processing/manipulation of information and dynamic evolution of the program would be essential to enable thoughtfulness and learning.
If the CR as described is simply manipulating a static database, with input and output, but with no dynamic variables and no modification of the program, then there could be no thoughtfulness and no learning.
As I suggested above, Searle’s brief description of the process of “performing certain operations on the symbols” is very vague, and does not specify whether any dynamic variables are created and manipulated as part of the process. I would agree that the absence of dynamic variables, and the absence of any “evolution” of the program itself (a self-modifying program), would be important missing elements, and such elements are probably essential for any “thinking” to take place. But it is not clear from Searle’s description that both dynamic variables and self-modification are indeed missing.
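If program “evolution” is taken in that loose sense (the rule book rewriting itself during the conversation), a sketch might look like this; the meta-rule shown is my own hypothetical, nothing Searle describes:

```python
class SelfModifyingRoom:
    """A room whose rule book is itself data the process may rewrite."""

    def __init__(self, rules: dict):
        self.rules = rules  # the "program" is mutable, not fixed

    def respond(self, symbols: str) -> str:
        answer = self.rules.get(symbols)
        if answer is None:
            # Hypothetical meta-rule: when no rule matches, write a new
            # rule so the same input is handled next time. This is the
            # step Searle's description neither rules in nor rules out.
            self.rules[symbols] = "(echo) " + symbols
            answer = self.rules[symbols]
        return answer
```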
TheStatutoryApe said:
The human is a dynamic factor, but I do not believe Searle intends the human to tamper with the process occurring. If the human were to experiment by passing script that is not part of the program, then the room would obviously no longer be simulating a coherent conversation in Chinese.
Agreed, but it is the human who “enacts” the process, turning the static program + static data into a dynamic process.
TheStatutoryApe said:
This is why I prefer my cabinet because it does fundamentally the same thing without muddling the experiment by considering the possible actions and qualities of the human involved.
OK, I understand. In the cabinet example the “process” is clearly dynamic, but again it would appear that the database is static and there are no dynamic “variables”, and certainly no self-modification of the program. The cabinet is simply taking input and translating it into output, exactly like the static_room lookup sketched above. No dynamic variables, no thinking possible.
TheStatutoryApe said:
Figuring all of the possible questions and all of the possible answers (or stories), then cross-referencing them in such a way that the CR could simulate a coherent conversation in Chinese, would be impossible in reality.
Perhaps so, but it is supposed to be a thought experiment; practical impossibility is not an objection to an argument about what is possible in principle.
TheStatutoryApe said:
Since you stated that we are only talking about the principle rather than reality, I decided to consider that if such a vast reserve of questions and answers were possible in theory it could carry on a coherent conversation and pass the Turing test. I suspended judgement on whether or not the theory would actually work, since it would be hard to argue that it wouldn't if we have a hypothetically infinite instruction manual.
That is an interesting point: if the instruction manual were potentially infinite then I guess it is conceivable that it could in principle perfectly simulate understanding, and pass the Turing test.
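One way to picture how a purely static manual could, in principle, sustain a coherent conversation is to imagine the rule book indexed by the entire transcript so far, rather than by the latest question alone. A toy sketch (the entries are mine; a real table would need one for every possible conversation, which is the infinity in question):

```python
# The (hypothetically infinite) manual is keyed on the whole
# conversation so far, so context is handled without storing anything.
MANUAL = {
    ("你好!",): "你好!",                     # "Hello!" -> "Hello!"
    ("你好!", "你好吗?"): "我很好, 谢谢。",  # "How are you?" -> "Fine, thanks."
}

def infinite_manual_room(transcript: tuple) -> str:
    # Pure lookup over conversation histories; no state is ever written.
    return MANUAL[transcript]
```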
TheStatutoryApe said:
Come to think of it, though, we should find that it is incapable of learning.
Yes, this follows from the absence of any dynamic variables. But “inability to learn” does not by itself equate with “inability to understand”.
TheStatutoryApe said:
Should we happen to find a piece of information that it is not privy to, it will not understand it or produce valid output regarding it. You could adjust the CR so that it can take in new words and apply them to the existing rules, but without giving it the ability to actually learn it will only parrot the new information back using the same contexts it was initially given. Then there's the issue of memory. Will it remember my name when I tell it my name, and be able to answer me when I ask it later? Of course, once you introduce these elements we are no longer talking about Searle's CR and are coming much closer to what you and I would define as "understanding".
Yes, to do the above we would need to introduce dynamic variables and some form of persistent memory.
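As a toy illustration of what one such dynamic variable buys (entirely my own construction, not Searle's), here is a room that can store and later recall a name:

```python
import re

class RoomWithMemory:
    def __init__(self):
        self.memory = {}  # a dynamic variable absent from Searle's room

    def respond(self, text: str) -> str:
        told = re.match(r"my name is (\w+)", text, re.IGNORECASE)
        if told:
            self.memory["name"] = told.group(1)  # store new information
            return "Nice to meet you, " + told.group(1) + "."
        if "what is my name" in text.lower():
            return "Your name is " + self.memory.get("name", "unknown") + "."
        return "..."

room = RoomWithMemory()
room.respond("My name is Ape")    # -> "Nice to meet you, Ape."
room.respond("What is my name?")  # -> "Your name is Ape."
```

The second answer depends on the first exchange, which is exactly what a static rule book keyed on single questions cannot provide.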
TheStatutoryApe said:
The two of us may not think alike or learn in the same fashion, but I believe the very basic elements of how we "understand" are fundamentally the same.
In the case of the CR as built by Searle, the process theoretically yields the same results in conversation but likely does not yield the same results in other areas.
If the CR does not contain dynamic variables or dynamic programming then I agree it would be simply a “translating machine” with no understanding. Whether such a room could pass the Turing test is something I’m still not sure about.
MF