ChatGPT Policy: PF Developing Policies for ChatGPT?

  • Thread starter: Frabjous
  • Tags: chatgpt
Summary:
Physics Forums is considering developing policies regarding the use of ChatGPT, particularly in relation to sourcing and moderation. There is a consensus that while ChatGPT can aid in generating content, it often lacks accuracy and may muddy discussions, necessitating clear guidelines for its use. The community is divided on whether to ban chatbot-generated text outright or to allow it with proper citations, recognizing the challenges in detecting such content. Concerns about the reliability of AI-generated information and the potential for misuse are prevalent, prompting discussions on how to balance innovation with maintaining the integrity of scientific discourse. Overall, the forum is actively reviewing its stance on AI-generated content to ensure clarity and accuracy in discussions.
  • #61
russ_watters said:
So if you ask it for a source it will make one up. Sometimes they match real ones, but often (usually) they don't.
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.

For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer. If I then ask it for a source and it answers that The International System of Units, 9th edition (2019), is one of them, I can verify it and state that value on PF - with my source if anyone challenges me. I do not need to state that ChatGPT helped me get the information quickly.

Of course, if I get a phony reference, I should investigate more - with ChatGPT or another tool - to make sure I have the correct information.
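
As an aside, that particular value is also easy to sanity-check programmatically. Here is a minimal sketch, assuming SciPy is installed (scipy.constants.g stores the standard gravity):

Python:
# Sanity-check the standard gravity value and its ft/s² conversion.
from scipy.constants import g  # standard acceleration of gravity: 9.80665 m/s²

FT_PER_M = 0.3048  # exact definition of the international foot in metres

print(f"g = {g} m/s²")                  # g = 9.80665 m/s²
print(f"g = {g / FT_PER_M:.4f} ft/s²")  # g = 32.1740 ft/s²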

I can also ask Google and get a nice answer similar to what ChatGPT would give:
https://www.google.com/search?q=What+is+the+standard+acceleration+due+to+gravity%3F said:
9.80665 m/s²

A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag.
The fun thing is that Google gives a reference for it which, in this case, is Wikipedia. I do not have to ask another question to get it. Wikipedia may not be a reliable source but you can explore it further and, going from one reference to another, find this final one.

Once I've done all of that, am I going to say "Google says the acceleration due to gravity is 9.80665 m/s²" or, more directly, "According to The International System of Units, 9th edition (2019), the acceleration due to gravity is 9.80665 m/s²"?

Why would ChatGPT be treated differently from Google?

jack action said:
[Image attachment: "change my mind" meme]
 
  • #62
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
A common source of both understanding and confusion here is that we can often offer several slightly different answers or opinions simultaneously.

Also, we can offer heartfelt and sincere consolations, praises, understandings and facepalms.

P.S.: and punchlines too.
 
  • #63
jack action said:
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
Perhaps it seems like a minor quibble, but basically every link/reference that Google gives you will go to a real website/paper, whereas many links/references that LLMs give you don't exist. But sure, all need to be checked to ensure they exist and are valid.

But I submit that people are often using LLMs in place of real research and reading because they are lazy, so they are less likely to be doing that checking.

For example, I can ask ChatGPT what is the acceleration due to gravity and get 9.80665 m/s² as an answer.
Like I said, for narrow, straightforward, mainstream questions (and @Petr Matas asked one), I don't have a problem with it. We already tell people they should read the wiki or google before asking us about such questions/issues.

Why would ChatGPT be treated differently from Google?
For the narrow use case of a specific, straightforward, mainstream question, I agree that PF shouldn't treat it differently than Google. But this is an extremely limited use case, and my concerns are about some of the bigger (functionally) ones. Perhaps a list of my opinions on different use cases:

1. Specific, straightforward, mainstream question = ok
2. Translation = ok
3. Explain a concept/summarize an article, e.g. instead of reading a whole wiki article = ok (lazy, but ok)
4. "Help me articulate my question" = iffy
5. "Help me develop my new theory" = not ok.
6. "Help me down this rabbit hole" = not ok.
7. "I copied and pasted your response into ChatGPT and here's what it replied" = not ok.
 
Last edited:
  • Like
Likes jack action, Greg Bernhardt and Petr Matas
  • #64
Petr Matas said:
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I had made the mistake in the application of the laws of motion. In the end I found that answer myself, but I wouldn't have been able to do it without your help.
Since you are so interested in using ChatGPT as a technical reference, hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?

From a recent post of mine...

berkeman said:
Unfortunately, Google is using AI to "summarize" search results now, and returns that at the top of the search results list. Sigh.

I did a search today to see what percentile an 840 PGRE score is, and here is the AI summary. See any math issues in this?

[Attachment: screenshot of Google's AI summary for the PGRE percentile search]
 
  • #65
jack action said:
I'm not saying ChatGPT is a good tool to find sources, just that you can ask it for them. You still have to verify it. (Like you would do with the Wikipedia sources.) If it is a reliable one, then you are OK.
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.

jack action said:
I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
 
  • Like
Likes berkeman and russ_watters
  • #66
jack action said:
Why would ChatGPT be treated differently from Google?
Everything in my post #65 just now would apply to Google as well.
 
  • #67
PeterDonis said:
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.


It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
PeterDonis said:
Everything in my post #65 just now would apply to Google as well.
I think we are saying the same thing.

I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason, since the issue is already taken care of by a more general statement. Introducing a special AI quote format just becomes ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
 
  • #68
russ_watters said:
6. "Help me down this rabbit hole" = not ok.
Why not? I can have a discussion with AI, which can be interesting, fun and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not an AI-generated text), and here is the discussion with ChatGPT for your reference."

Of course, I should be required to summarize the result and the arguments myself, so that you don't need to read the discussion at all. Otherwise you would just argue with the AI, which really doesn't make sense.

berkeman said:
Since you are so interested in using ChatGPT as a technical reference,
I am not. I agree that "ChatGPT said so" is an invalid argument. Nevertheless, even though it can't be used as a reference, it can be used as a tool.

berkeman said:
hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?
I am not. I have already got a ton of wrong AI replies myself.

PeterDonis said:
Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer.
It is a de facto unsourced statement, like a majority of posts at PF. All these are OK until challenged. Then one has to provide a reference.
 
  • #69
Petr Matas said:
Why not? I can have a discussion with AI, which can be interesting, fun and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not an AI-generated text), and here is the discussion with ChatGPT for your reference."
Humans tend not to like open-ended questions exactly because they don't end. They are time-consuming (read: time-wasting) and never get to a conclusion. In the case of your initial question it was even more annoying because it had a clear/obvious end, then didn't. That's why people dropped out of the thread.

Or to put it another way: for me the "Eureka!" moment of teaching is enormously satisfying. To have it brushed aside is equally deflating.
 
  • #70
jack action said:
I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason since it is already taken care of with a more general statement. Introducing a special AI quote becomes just ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
If I google an answer I pretty much always lead with "google tells me..." To me it is at the very least polite to tell people who they are speaking to, and it saves time if a discussion of that point is needed ("How did you calculate that?"). If the issue is "ChatGPT helped me formulate this question..." it can help us understand why the question is such a mess and respond accordingly: "That's gibberish so please tell us what you asked ChatGPT and what your actual understanding of the issue is..."

For some questions that are just crackpot nonsense, it indeed doesn't matter whether the question/post was generated by a person or an LLM for the purpose of deciding what to do with it. But it may increase the moderator workload by making it easier to generate such posts.
 
  • #71
russ_watters said:
Humans tend not to like open-ended questions exactly because they don't end. They are time-consuming (read: time-wasting) and never get to a conclusion. In the case of your initial question it was even more annoying because it had a clear/obvious end, then didn't. That's why people dropped out of the thread.

Or to put it another way: for me the "Eureka!" moment of teaching is enormously satisfying. To have it brushed aside is equally deflating.
Now I understand my mistake: when you finally managed to convince me that the profile is indeed isothermal, I didn't acknowledge that it resolved the issue before asking why it is isothermal.
 
Last edited:
  • #72
jack action said:
I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason since it is already taken care of with a more general statement.
What general statement in the current rules do you think covers this?
 
  • #73
PeterDonis said:
What general statement in the current rules do you think covers this?
Just this single line alone should be enough:
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature.
This covers ChatGPT, Google, your brother-in-law, or anything else that is not a standard textbook or peer-reviewed scientific literature.

I raised this point in posts #8 and #27, 2 years ago, but it did not seem to catch on in the discussion.
 
  • #74
jack action said:
Just this single line alone should be enough:

This covers ChatGPT, Google, your brother-in-law, or anything else that is not a standard textbook or peer-reviewed scientific literature.
I agree that line is very general and should cover ChatGPT and other LLMs.
 
  • Like
Likes Astronuc and jack action
  • #75
jack action said:
I'm just wondering why there should be a special policy for ChatGPT.
I am going to repeat myself: Without AI, one has to make a considerable effort to produce any text, but using AI, one can generate tons of nonsense with very little work and overload us easily. I have already mentioned this:
Petr Matas said:
The trouble with AI is the ease of generating tons of BS.
Have you considered that?
 
  • #76
Petr Matas said:
using AI, one can generate tons of nonsense with very little work and overload us easily.
So far that hasn't happened. If it does, we already have general policies for dealing with spam, which is basically what it would be.
 
  • #77
Petr Matas said:
I am going to repeat myself: Without AI, one has to make a considerable effort to produce any text, but using AI, one can generate tons of nonsense with very little work and overload us easily. I have already mentioned this:

Have you considered that?
But you still need valid sources.

You can still generate your answer with AI to simplify your life. The unethical part is publishing it without proofreading it first. This lawyer citing fake cases generated by AI is a good example of the importance of proofreading your AI-generated statements.
 
  • #78
AI is an excellent tool for helping people with disabilities, such as those who are blind or partially sighted, or who have dyslexia.
 
  • Like
Likes Petr Matas and russ_watters
  • #79
As noted above: a specific mainstream question - ok. For example, identifying an image. I see no need to look further than ChatGPT for this category of query. Maybe PF can simplify the policy with ok/not-ok categories.
[Attachments: two screenshots of the image-identification query]
 
  • #80
PeterDonis said:
I agree that line is very general and should cover ChatGPT and other LLMs.
Rather than specifically identifying ChatGPT, it would be better to say "generative AI" or "GenAI" instead. ChatGPT was just one of the earliest widely publicized generative AI systems.

Now there are augmented AI and more complex systems. Nevertheless, garbage in => garbage out, and sometimes LLMs and AI/ML cannot discern the veracity or quality of the input.
 
  • Agree
  • Like
Likes robphy and Greg Bernhardt
  • #82
Personal opinion: Machine Learning is, at this time, not a reliable tool. It needs much more work by folks without a financial agenda. It has been released from its cage far too early despite being kept on a leash of spurious strength. No doubt it will always have some issues but, as I said, it is not yet a reliable tool.
 
  • Agree
  • Like
Likes Charles Link and Greg Bernhardt
  • #83
a copy and paste of mine from the thread "a trigonometry problem of interest":

I'm finding it somewhat amazing how the AI overview is able to summarize for me the steps needed for the method of finding rational roots of a conic section with linear parametrization using the rational root theorem. I would like to copy and paste its summary, which I think did an even better job of it than I did in post 46 (the approach I also used in posts 56 and 64 above). That is presently against PF rules, though, and I am ok with that. We really don't need modern technology replacing our skills more than it already has. For all its apparent smartness on the surface, I don't think this AI stuff is capable of intelligent thought.
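
For anyone curious, here is a generic sketch of that technique (my own illustration, not the AI overview, and not necessarily the exact steps of post 46): take a conic with one known rational point and intersect it with lines of rational slope through that point. On the unit circle x² + y² = 1, start from the rational point (−1, 0) and substitute the line y = t(x + 1):

x² + t²(x + 1)² = 1 ⟹ (1 + t²)x² + 2t²x + (t² − 1) = 0.

Since x = −1 is one root, Vieta's formulas give the other as x = (1 − t²)/(1 + t²), and hence y = 2t/(1 + t²), so every rational slope t yields a rational point on the circle. The rational root theorem comes in when locating that first rational root of the defining polynomial to anchor the parametrization.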

 
