jack action said:
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK. If not, you still need to look further.
I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
Perhaps it seems like a minor quibble, but basically every link/reference that Google gives you goes to a real website/paper, whereas many links/references that LLMs give you don't exist. But sure, all need to be checked to ensure they exist and are valid.
But I submit that people often use LLMs in place of real research and reading because they are lazy, so they are less likely to be doing that checking.
For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer.
Like I said, for narrow, straightforward, mainstream questions (and @Peter matas asked one), I don't have a problem with it. We already tell people they should read the wiki or google before asking us about such questions/issues.
Why would ChatGPT be treated differently from Google?
For the narrow use case of a specific, straightforward, mainstream question, I agree that PF shouldn't treat it differently than Google. But this is an extremely limited use case, and my concerns are about some of the functionally bigger ones. Perhaps a list of my opinions on different use cases:
1. Specific, straightforward, mainstream question = ok
2. Translation = ok
3. Explain a concept/summarize an article (e.g., instead of reading a whole wiki article) = ok (lazy, but ok)
4. "Help me articulate my question" = iffy
5. "Help me develop my new theory" = not ok.
6. "Help me down this rabbit hole" = not ok.
7. "I copied and pasted your response into ChatGPT and here's what it replied" = not ok.