ChatGPT Policy: PF Developing Policies for ChatGPT?

  • Thread starter: Frabjous
  • Tags: chatgpt
AI Thread Summary
Physics Forums is considering developing policies regarding the use of ChatGPT, particularly in relation to sourcing and moderation. There is a consensus that while ChatGPT can aid in generating content, it often lacks accuracy and may muddy discussions, necessitating clear guidelines for its use. The community is divided on whether to ban chatbot-generated text outright or to allow it with proper citations, recognizing the challenges in detecting such content. Concerns about the reliability of AI-generated information and the potential for misuse are prevalent, prompting discussions on how to balance innovation with maintaining the integrity of scientific discourse. Overall, the forum is actively reviewing its stance on AI-generated content to ensure clarity and accuracy in discussions.
  • #51
Rive said:
IMHO for specialized subjects that thing is just a sophisticated BS generator and this should be clearly displayed.
If there will ever be a version trained to explain science or be a teacher - yup. If it actually works then a different quote box is a good idea.
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
 
  • Like
  • Informative
Likes Hornbein and renormalize
  • #52
Even if AI is perfected, we will always need a second opinion. Machines fail, and when we need to decide what to do, a second opinion helps.

The danger is who will be giving that second opinion: a person, another AI, or even the same AI under a different guise. I can imagine the day when there are a few major players whose AI service is provided to a lot of mom-and-pop services that are specialized in some way.

One dark example is the funeral business. Many "family-owned" funeral parlors have sold their name to one of the major players from Canada. The major player runs the business, but it does so under the "family" name.

Unsuspecting people who have used the parlor for past funerals have no idea what transpired and think they are dealing with the same kindly family.

https://www.memorials.com/info/largest-funeral-home-companies/#:~:text=Service Corporation International (SCI) is,industry for over 50 years.
 
Last edited:
  • #53
Rive said:
IMHO for specialized subjects that thing is just a sophisticated BS generator and this should be clearly displayed.
Even if it is now, we do not know what is coming in the near future. The trouble with AI is the ease of generating tons of BS. How about requiring users either to understand and check AI answers before posting them, or to explicitly declare that an answer has not been verified? Although attribution is crucial, I think we should focus on the quality of posts rather than their origin.
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
AI is just another tool, and it is difficult to predict what uses it will have in the future. If you buy a new CNC milling machine, I am sure you won't throw away all the other tools in your workshop.
 
  • #54
Petr Matas said:
I think that appropriate AI use with attribution and ideally a link to the AI chat should be allowed and the current AI policy should be reviewed. In my recent thread, I got into conflict with it unknowingly twice (for a different reason each time), but I believe that my AI use was legitimate and transparent.
I'll be frank about this: yours was, overall, a difficult case that sat on the fuzzy line of what I'd consider acceptable, with or without AI. We get a fair number of questions like "ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'. Where can I publish it and how long do I have to wait to get the Nobel Prize?" Those are obviously bad.

Answering a straightforward semi-rhetorical question? Ehh, I'm ok with it. Interestingly, Google's AI answers 155 nm, but it appears the ChatGPT answer was the right one.

My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended. Your initial paradox in fact wasn't one (they never are), but the answer to the "paradox" was that you were correct about reality and just missed that physics does indeed point us there. You seemed very disappointed by that answer and sought further discussion of an issue that appeared to several contributors to be resolved (so we dropped out). ChatGPT is fine with an endless rabbit hole, but humans are not. I recognize that this will mean PF losing some traffic from that sort of question, but I'm ok with that (not sure if Greg is...).

And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary? If the answer is yes, can you see how it might have been better to come to us first? Both better for you, and less annoying for us than cleaning up the mess?
 
  • Like
Likes Nugatory and PeterDonis
  • #55
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
There may not be one. But until then, do we want our role to just be ChatGPT's janitor?
 
  • #56
jedishrfu said:
Perhaps @Greg Bernhardt can get XenForo to add an AI quote feature in addition to the normal quote box, where we can select the AI engine/version, the prompt used, and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iaWriter, as an example) allow you to mark text as coming from an AI vs. a human and display it in a different color.
A special AI quote feature is just telling others: "Verifying the exactitude of this quote is left to the reader as an exercise ... because I did not bother to do it myself."

At the risk of repeating myself:
jack action said:
I still don't understand this fixation about who - or what - wrote the text.

I'm asking the question again: If it makes sense and the information is verifiable, why would anyone want to delete it?
And - again - there is a very clear PF rule about what is an acceptable source:
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):

https://mjl.clarivate.com/home

If someone obtains a piece of information via an AI engine, why wouldn't they be able to corroborate this information with an acceptable source? Heck, Wikipedia can usually cite its sources, so why can't people ask the AI engine to find its sources for them as well? Once you have a reliable source, who cares if you found it first with Wikipedia or ChatGPT?

There is a limit to laziness.
 
  • #57
russ_watters said:
"ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'."
I see. Tons of easily generated BS.

russ_watters said:
My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended.
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong, and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I had made the mistake in applying the laws of motion. In the end I found that answer myself, but I wouldn't have been able to do it without your help.

russ_watters said:
ChatGPT is fine with an endless rabbit hole, but humans are not.
I see. Each of ChatGPT's answers spurs several new questions, and it is happy to answer them. Oh, I like that so much... Isn't that a reason to avoid bothering people unless it is necessary?

russ_watters said:
And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary?
No. It was just unable to help me, unlike in several previous cases.
 
  • #58
jack action said:
why can't people ask the AI engine to find its sources for them as well?
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
 
  • Like
Likes russ_watters
  • #59
Petr Matas said:
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
It may not provide ITS source but it might provide A source.
 
  • #60
jack action said:
It may not provide ITS source but it might provide A source.
I'm not sure you're understanding. ChatGPT itself is not searching the internet for sources. It has a "model" of what sources look like. So if you ask it for a source, it will make one up. Sometimes they match real ones, but often (usually) they don't.
 
  • Like
  • Skeptical
Likes Nugatory and PeroK
  • #61
russ_watters said:
So if you ask it for a source, it will make one up. Sometimes they match real ones, but often (usually) they don't.
I'm not saying ChatGPT is a good tool for finding sources, just that you can ask it for them. You still have to verify them (like you would with Wikipedia's sources). If a source is reliable, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.

For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer. If I then ask it for a source and it answers that The International System of Units, 9th edition (2019) is one of them, I can verify it and state that value on PF - with my source if anyone challenges me. I do not need to state that ChatGPT helped me find the information quickly.

Of course, if I get a phony reference, I should investigate more - with ChatGPT or another tool - to make sure I have the correct information.
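For what it's worth, here is a minimal sketch of the kind of mechanical first check I mean, assuming Python and the requests library (the URLs are illustrative stand-ins, not real citations). A link that resolves still has to be read to confirm it supports the claim; this only filters out phantom references:

Code:
# Check that AI-cited references at least resolve before leaning on them.
# Illustrative sketch only -- a resolving link still has to be read.
import requests

# Stand-in URLs: swap in whatever the chatbot actually cited.
references = [
    "https://www.bipm.org/en/publications/si-brochure",
    "https://example.org/made-up-paper-12345",
]

for url in references:
    try:
        resp = requests.head(url, allow_redirects=True, timeout=10)
        status = "resolves" if resp.ok else f"HTTP {resp.status_code}"
    except requests.RequestException as exc:
        status = f"unreachable ({type(exc).__name__})"
    print(f"{url}: {status}")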

I can also ask Google and get a nice answer similar to what ChatGPT would give:
https://www.google.com/search?q=What+is+the+standard+acceleration+due+to+gravity%3F said:
9.80665 m/s²

A conventional standard value is defined exactly as 9.80665 m/s² (about 32.1740 ft/s²). Locations of significant variation from this value are known as gravity anomalies. This does not take into account other effects, such as buoyancy or drag.
The fun thing is that Google gives a reference for it, which in this case is Wikipedia. I do not have to ask another question to get it. Wikipedia may not be a reliable source, but you can explore it further and, going from one reference to another, find this final one.

Once I've done all of that, am I going to say "Google says the acceleration due to gravity is 9.80665 m/s²" or, more directly, "According to The International System of Units, 9th edition (2019), the acceleration due to gravity is 9.80665 m/s²"?

Why would ChatGPT be treated differently from Google?

jack action said:
[Attached image: "Change my mind" meme]
 
  • #62
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
A common source of both understanding and confusion here is that we can often offer several slightly different answers or opinions simultaneously.

Also, we can offer heartfelt and sincere consolations, praises, understandings and facepalms.

P.S.: and punchlines too.
 
  • #63
jack action said:
I'm not saying ChatGPT is a good tool for finding sources, just that you can ask it for them. You still have to verify them (like you would with Wikipedia's sources). If a source is reliable, then you are OK. If not, you still need to look further.

I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
Perhaps it seems like a minor quibble, but basically every link/reference that Google gives you will go to a real website/paper, whereas many links/references that LLMs give you don't exist. But sure, all of them need to be checked to ensure they exist and are valid.

But I submit that people are often using LLMs in place of real research and reading because they are lazy, so they are less likely to be doing that checking.

jack action said:
For example, I can ask ChatGPT what the acceleration due to gravity is and get 9.80665 m/s² as an answer.
Like I said, for narrow, straightforward, mainstream questions (and @Petr Matas asked one), I don't have a problem with it. We already tell people they should read the wiki or google before asking us about such questions/issues.

jack action said:
Why would ChatGPT be treated differently from Google?
For the narrow use case of a specific, straightforward, mainstream question, I agree that PF shouldn't treat it differently than Google. But this is an extremely limited use case, and my concerns are about some of the (functionally) bigger ones. Perhaps a list of my opinions on different use cases:

1. Specific, straightforward, mainstream question = ok
2. Translation = ok
3. Explain a concept / summarize an article (e.g., instead of reading a whole wiki article) = ok (lazy, but ok)
4. "Help me articulate my question" = iffy
5. "Help me develop my new theory" = not ok.
6. "Help me down this rabbit hole" = not ok.
7. "I copied and pasted your response into ChatGPT and here's what it replied" = not ok.
 
Last edited:
  • Like
Likes jack action, Greg Bernhardt and Petr Matas
  • #64
Petr Matas said:
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong, and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I had made the mistake in applying the laws of motion. In the end I found that answer myself, but I wouldn't have been able to do it without your help.
Since you are so interested in using ChatGPT as a technical reference, hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?

From a recent post of mine...

berkeman said:
Unfortunately, Google is using AI to "summarize" search results now, and returns that at the top of the search results list. Sigh.

I did a search today to see what percentile an 840 PGRE score is, and here is the AI summary. See any math issues in this?

[Attached screenshot: Google AI summary of the 840 PGRE score percentile search]
 
  • #65
jack action said:
I'm not saying ChatGPT is a good tool for finding sources, just that you can ask it for them. You still have to verify them (like you would with Wikipedia's sources). If a source is reliable, then you are OK.
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.

jack action said:
I'm just saying that if ChatGPT gives you the right answer - and you know it's right because you validated it - it is an acceptable answer.
It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
 
  • Like
Likes berkeman and russ_watters
  • #66
jack action said:
Why would ChatGPT be treated differently from Google?
Everything in my post #65 just now would apply to Google as well.
 
  • #67
PeterDonis said:
You're OK if you give as your source the actual reliable source that you found, instead of ChatGPT.

IMO you're not OK if you say "ChatGPT told me this and I verified that it was right" with no other supporting information.


It's an acceptable answer if you can substantiate it with an actual reliable source, which here at PF means a textbook or a peer-reviewed paper. Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer. You need to point us to the actual reliable source that you used to do the verification.
PeterDonis said:
Everything in my post #65 just now would apply to Google as well.
I think we are saying the same thing.

I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason since it is already taken care of with a more general statement. Introducing a special AI quote box is just ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
 
  • #68
russ_watters said:
6. "Help me down this rabbit hole" = not ok.
Why not? I can have a discussion with AI, which can be interesting, fun, and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not AI-generated text), and here is the discussion with ChatGPT for your reference."

Of course, I should be required to summarize the result and the arguments myself, so that you don't need to read the discussion at all. Otherwise you would just argue with the AI, which really doesn't make sense.

berkeman said:
Since you are so interested in using ChatGPT as a technical reference,
I am not. I agree that "ChatGPT said so" is an invalid argument. Nevertheless, even though it can't be used as a reference, it can be used as a tool.

berkeman said:
hopefully you are following the parallel thread here on PF on how many times AI chatbots/LLMs get their replies wrong. Are you doing that?
I am not. I have already got a ton of wrong AI replies myself.

PeterDonis said:
Again, "ChatGPT told me this and I verified that it was right", by itself, is not an acceptable answer.
It is a de facto unsourced statement, like the majority of posts at PF. All of these are OK until challenged. Then one has to provide a reference.
 
  • #69
Petr Matas said:
Why not? I can have a discussion with AI, which can be interesting, fun, and enlightening. Or useless. Or it can lead me astray. Why can't I say: "Hey, I discussed this issue with ChatGPT, which led me to this strange result. Here is my line of thought, i.e. the arguments leading to the result (not AI-generated text), and here is the discussion with ChatGPT for your reference."
Humans tend not to like open-ended questions precisely because they don't end. They are time-consuming (read: time-wasting) and never reach a conclusion. In the case of your initial question it was even more annoying, because it had a clear/obvious end, then didn't. That's why people dropped out of the thread.

Or to put it another way: for me the "Eureka!" moment of teaching is enormously satisfying. To have it brushed aside is equally deflating.
 
  • #70
jack action said:
I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason since it is already taken care of with a more general statement. Introducing a special AI quote box is just ridiculous. IMO, ChatGPT is not that special and is not different from any other tool that can gather information.
If I google an answer, I pretty much always lead with "Google tells me...". To me it is at the very least polite to tell people who they are speaking to, and it saves time if a discussion of that point is needed ("How did you calculate that?"). If the issue is "ChatGPT helped me formulate this question...", it can help us understand why the question is such a mess and respond accordingly: "That's gibberish, so please tell us what you asked ChatGPT and what your actual understanding of the issue is..."

For some questions that are just crackpot nonsense, it indeed doesn't matter whether the question/post was generated by a person or an LLM for the purpose of deciding what to do with it. But AI may increase the moderator workload by making it easier to generate such posts.
 
  • #71
russ_watters said:
Humans tend not to like open-ended questions precisely because they don't end. They are time-consuming (read: time-wasting) and never reach a conclusion. In the case of your initial question it was even more annoying, because it had a clear/obvious end, then didn't. That's why people dropped out of the thread.

Or to put it another way: for me the "Eureka!" moment of teaching is enormously satisfying. To have it brushed aside is equally deflating.
Now I understand my mistake: when you finally managed to convince me that the profile is indeed isothermal, I didn't acknowledge that this resolved the issue before asking why it is isothermal.
 
Last edited:
  • #72
jack action said:
I'm just wondering why there should be a special policy for ChatGPT. It would just complicate the PF rules for no good reason since it is already taken care of with a more general statement.
What general statement in the current rules do you think covers this?
 
  • #73
PeterDonis said:
What general statement in the current rules do you think covers this?
Just this single line alone should be enough:
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature.
This covers ChatGPT, Google, your brother-in-law, or anything else that is not a standard textbook or peer-reviewed scientific literature.

I raised this point in posts #8 and #27, two years ago, but it did not seem to catch on in the discussion.
 
  • #74
jack action said:
Just this single line alone should be enough:

This covers ChatGPT, Google, your brother-in-law, or anything else that is not a standard textbook or peer-reviewed scientific literature.
I agree that line is very general and should cover ChatGPT and other LLMs.
 
  • Like
Likes Astronuc and jack action
  • #75
jack action said:
I'm just wondering why there should be a special policy for ChatGPT.
I am going to repeat myself: Without AI, one has to make a considerable effort to produce any text, but using AI, one can generate tons of nonsense with very little work and overload us easily. I have already mentioned this:
Petr Matas said:
The trouble with AI is the ease of generating tons of BS.
Have you considered that?
 
  • #76
Petr Matas said:
using AI, one can generate tons of nonsense with very little work and overload us easily.
So far that hasn't happened. If it does, we already have general policies for dealing with spam, which is basically what it would be.
 
  • #77
Petr Matas said:
I am going to repeat myself: Without AI, one has to make a considerable effort to produce any text, but using AI, one can generate tons of nonsense with very little work and overload us easily. I have already mentioned this:

Have you considered that?
But you still need valid sources.

You can still generate your answer with AI to simplify your life. The unethical part is publishing it without proofreading it first. The lawyer who cited fake cases generated by AI is a good example of the importance of proofreading your AI-generated statements.
 
  • #78
AI is an excellent tool for helping people with disabilities, such as those who are blind or partially sighted, or people with dyslexia.
 
  • Like
Likes Petr Matas and russ_watters
  • #79
As noted above: a specific mainstream question - ok. For example, identifying an image. I see no need to look further than ChatGPT for this category of query. Maybe PF can simplify the policy with ok / not ok categories.
[Attached images: example ChatGPT image-identification query and response]
 
  • #80
PeterDonis said:
I agree that line is very general and should cover ChatGPT and other LLMs.
Rather than specifically identifying ChatGPT, it would be better to indicate "generative AI" or "GenAI" instead. ChatGPT was just one of the earliest publicized AI systems.

Now there are augmented AI and more complex systems. Nevertheless, garbage in => garbage out, and sometimes LLMs and AI/ML systems cannot discern the veracity or quality of their input.
 
  • Agree
  • Like
Likes robphy and Greg Bernhardt