Ken G
Motore said: "But we know how it functions conceptually, which is word prediction based on the training data."

Yes, so it's all about the body of training data that it builds up. That is what is analogous to anything we could call "knowledge" on which to base its responses.
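To make the "word prediction from training data" idea concrete, here is a deliberately minimal sketch: a bigram counter over a toy corpus. It is nothing like ChatGPT's actual architecture (a large neural network over tokens), and the corpus and function names are invented for illustration, but it shows the sense in which everything such a model "knows" is statistics of its training data:

```python
from collections import Counter, defaultdict

# Toy "next word predictor": count, in a tiny training corpus, how often
# each word follows each other word, then predict the most frequent successor.
training_corpus = "the cat sat on the mat the cat ate the fish".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(training_corpus, training_corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the most common word seen after `word` in the training data."""
    successors = follow_counts.get(word)
    return successors.most_common(1)[0][0] if successors else None

print(predict_next("the"))  # 'cat' -- follows 'the' twice; 'mat' and 'fish' only once
print(predict_next("cat"))  # 'sat'
print(predict_next("dog"))  # None -- no basis for a response outside the training data
```

Ask it about anything absent from the corpus and it has literally nothing to say; its entire "knowledge" is the count table built from the training text.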
Motore said: "Of course the programmers needed to add some feedback and limitations so it is usable, but it still doesn't understand what it is outputting, so it's still not reliable."

That logic does not necessarily follow. Many people do understand what they output, but their understanding is incorrect, so their output is not reliable. There are many causes of unreliability, and it's not clear that ChatGPT's particular cause is its lack of understanding of what it is saying. The problem for me is that it is still quite unclear what humans mean when they say they understand a set of words. We can agree that ChatGPT's approach lacks what we perceive as understanding, but we cannot agree on what our own perception means, so the contrast is vague. Sometimes we agree when our meanings are actually rather different, and sometimes we disagree when our meanings are actually rather similar!
Motore said: "Can it be 100% reliable with more data? I don't think so. The only way it can be reliable is with a completely different model, in my opinion."

It might come down to figuring out the right way to use it, including an understanding (!) of what it is good at and not so good at, and how to interact with it to mitigate its limitations. I agree that it might never work like the Star Trek computer ("computer, calculate the probability that I will survive if I beam down and attack the Klingons") or like the Hitchhiker's Guide to the Galaxy's attempt to find the ultimate answer to life, the universe, and everything.