PeterDonis
Mentor
javisot said:
> This is usually shown with the example of the Chinese room: https://en.wikipedia.org/wiki/Chinese_room
> His counterargument is https://en.m.wikipedia.org/wiki/Strong_AI_hypothesis

No, the Chinese Room is not the same as the LLMs we're talking about here--because as the Chinese Room thought experiment is formulated, its answers have to actually be correct. They have to show actual world knowledge--not just "text snarfed from the Internet" knowledge.
This point is overlooked by far too many discussions of the Chinese Room, because those discussions don't appreciate that you can ask the Chinese Room any question you want, including questions about real world experiences. No amount of just snarfing up text will let any kind of entity (including an actual human whose only "knowledge" comes from reading stuff on the Internet) give correct answers to such questions. And of course when you do that with LLMs, you get all kinds of nonsense--no sane person should be fooled into thinking that the LLM is a person with actual real world knowledge of the topic being asked about.
But in the Chinese Room thought experiment, by hypothesis, the Chinese Room can convince people that it's a person with actual real world knowledge of all the topics it's asked about. In other words, the thought experiment stipulates a performance standard that LLMs simply don't and can't meet.