russ_watters
Mentor
phyzguy said:
There are cases where LLMs have decoded ancient texts that no human had ever decoded before. If they were just parroting back what they were trained on, that couldn't happen. What they are doing during training is building a model of the world inside their neural networks, just like the model of the world you build when you train your own neural network by interacting with the environment. So I think the continued cries of "they can't be intelligent because they're not human!!" are missing the mark.

I mean... an LLM figuring out a language seems like a task pretty well within its wheelhouse.