In a technical thread, is "here is what ChatGPT says" on the topic of the thread considered on or off topic?
"In my opinion yes. If you're thinking of the thread I'm thinking of (Sabine on GR), I think its only value is demonstrating how bad verbal reasoning is at physics. I reported it as not helping the thread."
LOL, I reported it as off topic, but felt that the subject needed to get discussed here.
"OP could have asked ChatGPT themselves"
Can ChatGPT even be trusted with queries in physics? It might give a confident-looking wrong answer. Posting what ChatGPT says regarding something technical should be strictly avoided, in my opinion. There is a difference between Google search and ChatGPT: in the latter, we don't know the source of information because it is predicting the next word based on its training. Google search is safer than ChatGPT, I think.
"Google search is still safer than ChatGPT, I think."
I think so too. And seeing as how we don't allow lmgtfy replies, the case for disallowing ChatGPT replies is even stronger.
"ChatGPT is not an acceptable source for PF. Do not post further ChatGPT content as it will just continue to get you warned, and eventually banned."
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/
"https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/"
I said that we're trying to formulate a policy statement here....
I could not find it mentioned in the rules.
"Can ChatGPT even be trusted with queries in physics? It might give a confident-looking wrong answer. Posting what ChatGPT says regarding something technical should be strictly avoided, in my opinion. There is a difference between Google search and ChatGPT: in the latter, we don't know the source of information because it is predicting the next word based on its training. Google search is safer than ChatGPT, I think."
For how long?
"For how long?"
I think it's as long as the human who started the query is at least interested in finding/checking/looking for the information among the various presented results.
"we don't allow lmgtfy replies"
Well. Duh. In some cases I do the googlework for members when a promising looking topic gets stuck without answers.
"Isn't it possibly sometimes helpful to point it out when/how ChatGPT gives wrong answers"
Even debunking pseudoscience of living people is not really encouraged here: should debunking of the output of linguistico-statistical neuro-gibberish con-engines be allowed?
"It is probably a matter of time..."
Maybe. Honestly, I don't know. But for now, at the very least, I would limit this kind of activity to Insights, or make it 'at mentor discretion only', like philosophy discussions.
"In the thread that triggered this discussion, the topic being discussed was the issue of potentially misleading or incorrect information being presented to the general public. And the particular physics topic happened to be one that perfectly demonstrated how ChatGPT can mislead. I completely understand deleting it as a basic rule, but don't you think the world is less informed because of it rather than more informed?"
I was following that thread. Your post changed the direction of the conversation, so for me it was jarring and off topic.
"Isn't it possibly sometimes helpful to point it out when/how ChatGPT gives wrong answers, especially in a forum where people are capable of knowing the difference?
...
Like it or not, a lot of people are trying to learn physics from ChatGPT in the wild and ignorantly assuming it is reliable. It is probably a matter of time as well before we start getting tons of posts from beginners asking whether what ChatGPT said is correct. If PF isolates itself from a large part of the modern de facto education system, it arguably would be missing out on making the positive impact that it could." (Abridged by me)
I think that a policy should also explicitly allow conversations about ChatGPT. It is a technology that is going to be around, so it is a legitimate topic of discussion.
It appears that we have several questions in front of us.
Should the use of ChatGPT for answering technical questions be allowed?
Definitely not. It should be treated as spam.
Should quoting ChatGPT output on the main topic of the thread be allowed?
No. In this case, ChatGPT should not be allowed as a source of information, just as we maintain a list of banned sources. There is no point wasting time and energy correcting what ChatGPT has said, because we cannot change it.
For many threads, we often ask the OP what research they have done. Should using ChatGPT qualify as a research effort?
This is a question that, I think, needs some discussion in the community before a policy is framed.
In any case, we can always be fooled if someone gets an answer from ChatGPT, reshapes it, and posts without mentioning the source.
"I think that a policy should also explicitly allow conversations about ChatGPT."
I expect general agreement with that - IMO it is clearly appropriate.