sbrothy said:
Hah! You're impossible! Debating with you is like asking to be sealioned!
Not at all. I think there are aspects of human thought that are objectively different from LLMs. The problem is that this is not a universal and fundamental difference.
I see attempts on this thread (and on previous similar threads) to settle the debate by a priori definitions and assumptions that exclude computer processes, so that there is nothing left to debate.
I see the whole question of intelligence, creativity and understanding as fundamentally more subtle. It's not clear that a poet, for example, doesn't do something analogous to what an LLM does. You can wave your arms and invoke metaphysical arguments. But, ultimately, a poet's brain is made of neurons; and neurons follow set patterns and must follow algorithmic processes at some level.
Creativity must be an emergent property of algorithmic processes of some description. Likewise, all our consciousness, self-awareness and intelligence must emerge from biological processes. A single cell cannot be self-aware. These properties cannot be inherent in the lowest-level biological processes.
In my view, there is a very serious open question of how close LLMs are to developing these emergent characteristics. The argument that they are not capable of developing any higher-level characteristics does not convince me. Until you can establish the minimum set of requirements for human-like intelligence, you cannot say for sure that LLMs are lacking. Or, more to the point (since LLMs are almost certainly lacking something of the human spark), you cannot say to what extent they are capable of human characteristics and to what extent they are not.
It's my understanding that the concern that they are capable of developing higher-level emergent characteristics (like an instinct for self-preservation) is central to current research, especially in terms of safeguards against this.
If those who have taken the counterargument on this thread are correct, then there would be no need for any AI safeguards in this respect. AI could be developed as far as possible at this stage, with no risk whatsoever that it could act against human interests. These systems would simply be dumb machines, slavishly producing output of some description.
I've said this on previous threads. We at PF are supposed to be guided by the professional scientific literature. In that respect, what I am saying is in line with the scientific literature. What the others are saying is that they prefer their own personal theories and that, in some sense, the "experts" (Geoffrey Hinton et al.) are all wrong.
With that, I don't intend to post any further on this thread. No disrespect to anyone, but I've said what I think.