PeroK said:
It's not clear the extent to which an LLM understands things. [...] But, it may have in practical terms an emergent understanding.
That's the serious debate we should be having on PF.
Yes, it is clear: it doesn't understand things, not even a little bit. LLMs only find patterns - which is what they were programmed to do - and serve them back as formatted output.
PeroK said:
The best human players learn from experienced players and coaches, who in turn have learned from the best players of the past. And nowadays, ironically, also learn from chess engines. Once you get past beginner level, you would learn about pawn structure, outposts, weak pawns, weak squares, bishop against knight etc. These considerations would have been programmed into the algorithm of a conventional chess engine. These are long-term positional ideas that do not lead to immediate victory. AlphaZero figured out the relative importance of all these ideas for itself in four hours.
AlphaZero developed its own "understanding" of all those strategic ideas.
It doesn't understand what it did. It was programmed to find patterns and it did. The patterns to look for are mathematically defined by humans. It works so fast that it can find patterns no human has found yet. Humans can then learn those newly found patterns to better themselves. That is the point of building the machine in the first place.
gleem said:
You ask AI to do something that it has never been asked to do and it does it. Is that not understanding?
AI always does what it is asked to do. It does nothing more than find patterns in its training data.
gleem said:
AI produces responses that are not in its training data.
That is the point: analyzing patterns in the training data to discover something common that will point us in the right direction, something we haven't seen yet. The fact that humans can find it faster with an AI program than on their own doesn't attribute any intelligence to the machine, let alone independent intelligence or agency.
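To make the "pattern finding" point concrete, here is a deliberately tiny toy sketch of my own (not anything from an actual LLM, which uses learned neural weights rather than raw counts): a bigram model that only counts which word follows which in its training text and then serves those counts back as output.

```python
# Toy illustration only: a bigram "language model" that does nothing but
# count patterns in its training text and replay them as output.
import random
from collections import defaultdict

training_text = "the cat sat on the mat the dog sat on the rug".split()

# "Training": count which word follows which in the training data.
follows = defaultdict(list)
for current_word, next_word in zip(training_text, training_text[1:]):
    follows[current_word].append(next_word)

def generate(start_word, length=6):
    """Emit words by replaying the counted patterns; nothing is 'understood'."""
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # pick a continuation seen in training
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat ..."
```

The output can look novel - word sequences that never appear verbatim in the training text - yet the program has done nothing beyond recombining patterns it was built to extract. The scale is incomparably larger in an LLM, but the argument here is that the principle is the same.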
PeroK said:
Is your argument that a pocket calculator is not intelligent, therefore no computer systems can be intelligent?
No, the argument is that if you, @PeroK, attribute some form of intelligence to LLMs - no matter how small, no matter how you define it - then you must attribute some form of intelligence to a pocket calculator as well.
Of course, others in this thread and I are arguing that a pocket calculator does not have intelligence - it is just a dumb machine - and that LLMs are dumb machines as well, merely more efficient than a pocket calculator (at their own tasks, that is).
You, on the other hand, make a lot of assumptions based on very wild and unfounded statements:
- AGI - which is still pure science-fiction at this point - is a threat to humans and will inherit the Earth;
- LLMs are the way to AGI, because:
- LLMs show signs of intelligence.
And your arguments boil down to this: we must agree that LLMs are intelligent because we must also share your fear of AGI - something any serious expert (when they are not a vendor chasing venture capital) will tell you we are still far from.
Here's what Yann LeCun had to say about it in the last few days:
"We can't even reproduce cat intelligence or rat intelligence," Yann LeCun told a room full of AI researchers in Paris recently. LeCun won the Turing Award—basically the Nobel Prize for computer science—for pioneering the neural networks that power today's AI.
LeCun puts it bluntly: "We're never going to get to human-level intelligence by just training on text." He points out that a four-year-old processes as much data through vision alone as the largest language models consume in text. Blind children achieve similar cognitive development through touch. The common thread isn't language—it's interaction with physical reality.
"LLMs are not a path to superintelligence or even human-level intelligence," LeCun argues. "I have said that from the beginning."
LeCun and Hassabis are more cautious. Hassabis puts genuine AGI at "five to 10 years" with a 50% probability, and only if researchers make "one or two more breakthroughs" beyond current approaches. He lists missing capabilities: learning from few examples, continuous learning, better long-term memory, improved reasoning and planning.
LeCun has abandoned the term AGI entirely. "The reason being that human intelligence is actually quite specialized," he explains. "So calling it AGI is kind of a misnomer." He prefers "advanced machine intelligence"—AMI, conveniently the name of his startup.
The disagreement isn't just semantic. It reflects fundamentally different views on whether current approaches can reach human-level intelligence or whether something entirely new is required.
And even more:
Not long after ChatGPT was released, the two researchers who received the 2018 Turing Award with Dr. LeCun warned that A.I. was growing too powerful. Those scientists even warned that the technology could threaten the future of humanity. Dr. LeCun argued that was absurd.
“There was a lot of noise around the idea that A.I. systems were intrinsically dangerous and that putting them in the hands of everyone was a mistake,” he said. “But I have never believed in this.”
Subbarao Kambhampati, an Arizona State University professor who has been an A.I. researcher nearly as long as Dr. LeCun, agreed that today’s technologies don’t provide a path to true intelligence. But he also pointed out that they were increasingly useful in highly lucrative areas like computer coding. Dr. LeCun’s newer methods, he added, are unproven.