How intelligent are large language models (LLMs)?

AI Thread Summary
Large language models (LLMs) are compared to a Magic 8-Ball, suggesting they lack true intelligence despite their advanced outputs. François Chollet argues that LLMs will never achieve genuine intelligence, while some participants believe human reasoning is often flawed and biased, similar to LLM outputs. Concerns are raised about the potential for LLMs to surpass human decision-making capabilities, especially in critical areas like climate change and healthcare. The discussion highlights the limitations of LLMs in providing reliable diagnoses despite performing well on exams, emphasizing the need for caution regarding AI's role in society. Ultimately, the conversation underscores the complexity of defining intelligence and the risks associated with underestimating AI's potential impact.
  • #51
PeterDonis said:
You can't do this with just text input and output unless the "problems" are artificially limited to text and the "solutions" are artificially limited to producing text. In other words, by removing all connection with the real world. But of course the real world knows no such limitations. Plop your LLM down in the middle of a remote island and see how well it does at surviving using text, even if all the things it needs for survival are actually present on the island. Most real world problems are far more like the latter than they are like solving artificial textual "problems".
I have no idea why the ability to survive on a remote island is a prerequisite test for intelligence.

Stephen Hawking couldn't have survived plopped down on a remote island. His interface with the world was severely limited. And, yet, he maintained his intelligence.

There is nothing to be gained from debating against absurdities. I'm out, as they say.
 
  • #52
PeroK said:
I have no idea why the ability to survive on a remote island is a prerequisite test for intelligence.
The point is the ability to perceive one's environment and figure out how to get one's needs met in it.

PeroK said:
Stephen Hawking couldn't have survived plopped down on a remote island. His interface with the world was severely limited.
Yes, and he had to learn how to use that limited interface to get his needs met. Which does indeed count as intelligence. Try getting an LLM to do that.
 
  • #53
Is a pocket calculator intelligent? It can do one thing better than a person.
How is an LLM different? It can do one thing, not as well as a person.

For extra credit: was Clever Hans intelligent?
 
  • #54
russ_watters said:
I mean... an LLM figuring out a language seems like a task pretty well in its wheelhouse.
Yes, but many people have said that all the LLMs are doing is parroting back what they were trained on. The fact that they have solved a problem which no one had solved before, and which couldn't have been in their training data, proves that they are doing more than that.
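To make concrete what pure "parroting" would look like, here is a minimal sketch (illustrative only; real LLMs are neural networks, not lookup tables) of a bigram model that can only ever reproduce word transitions it literally saw in training:

```python
import random
from collections import defaultdict

def train_bigram(corpus_words):
    """Record every observed word -> next-word transition."""
    table = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        table[a].append(b)
    return table

def generate(table, start, length=10):
    """Emit words by replaying observed transitions; stall on unseen input."""
    out = [start]
    for _ in range(length):
        options = table.get(out[-1])
        if not options:  # never saw a continuation for this word
            break
        out.append(random.choice(options))
    return " ".join(out)

corpus = "the cat sat on the mat".split()
model = train_bigram(corpus)
print(generate(model, "the"))  # can only recombine pairs seen in the corpus
```

A model like this could never decode a script absent from its training data, so success on a genuinely novel decoding task would be evidence of generalization beyond this kind of lookup.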
 
  • #55
phyzguy said:
There are cases where LLMs have decoded ancient text that no human had ever decoded before.
How is it known that the decoding is valid?
 
  • #56
PeterDonis said:
How is it known that the decoding is valid?
I don't think that means "translated a text that has eluded translation". I think it means "translated a text that no human bothered to translate and was not in the training set".
 