Grinkle said:
Can you find any reference to Turing saying he intended his test to measure intelligence?
Maybe not in the literature, but the test was brought up in this thread as proof that AI is intelligent:
PeroK said:
And, my broader argument is that the more things AI can do and the more emergent characteristics it develops, the more things are deemed not to be indicators of intelligence.
For example, LLMs can comfortably pass the Turing test. So, for the AI skeptics the Turing test is no longer valid.
PeroK said:
Once a system is sophisticated enough to pass the Turing test, then I think it's valid to start asking questions about it regarding intelligence and deception.
PeroK said:
I refuse to believe that Alan Turing overlooked this aspect of things. He must have imagined that the AI knew not to show off.
It would be ironic if we dismiss AI as intelligent precisely because it exhibits superhuman intelligence!
I must admit I worry about the direction this discussion is taking.
Just to be clear: I don't think AI is self-aware or conscious, or that it has intentions. If a definition of intelligence requires these, then by that definition AI is not intelligent either.
Regarding the subject of this thread, I don't think that chatbots exhibit behaviours that can’t be predicted by simply analyzing the sum of their parts, either.
The results of LLMs are based on very simple math (addition, multiplication, etc.) repeated billions of times. If you use exactly the same inputs, you get exactly the same results. Nothing in this is humanly impossible to do, except for the time it would take to carry out these calculations by hand. The complexity and length of the process may make it harder to follow, but there is no magic.
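To make that concrete, here is a minimal sketch in Python/NumPy of the kind of arithmetic a single model layer boils down to. The layer size, weights, and input are made up purely for illustration; a real LLM just does vastly more of the same:

```python
import numpy as np

# A toy "layer": the same multiply-and-add arithmetic an LLM repeats
# billions of times, at a tiny scale. Sizes and weights are arbitrary.
rng = np.random.default_rng(seed=42)   # fixed seed, so fixed weights
W = rng.standard_normal((4, 4))        # weight matrix
b = rng.standard_normal(4)             # bias vector

def layer(x):
    # One matrix multiplication, one addition, one simple nonlinearity.
    return np.maximum(0.0, W @ x + b)  # ReLU

x = np.array([1.0, 2.0, 3.0, 4.0])
out1 = layer(layer(x))  # stacking layers is just more of the same
out2 = layer(layer(x))  # same inputs...
print(np.array_equal(out1, out2))      # ...same outputs: prints True
```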
Some people in this discussion seem to be open to the idea that there is something more in LLMs, or at least that we are on the way to something more. There has been a lot of discussion here about intelligence, based on not much, and mostly ignoring how AI researchers actually define intelligence:
- "The computational part of the ability to achieve goals in the world."
- "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."
Furthermore, when some people get excited about a book ("If Anyone Builds It, Everyone Dies") written by an autodidact who never even attended high school, speculating about an apocalyptic future, I'm worried. He wants researchers to design a "Friendly AI", which physically makes no sense. Here's one assessment of this doomsayer:
https://en.wikipedia.org/wiki/Eliezer_Yudkowsky#cite_note-:1-5 said:
But shutting it all down would call for draconian measures—perhaps even steps as extreme as those espoused by Yudkowsky, who recently wrote, in an editorial for Time, that we should "be willing to destroy a rogue datacenter by airstrike," even at the risk of sparking "a full nuclear exchange."
I still think LLMs are just dumb machines that need to be debugged and used appropriately; there is no need for a nuclear exchange or the like.