ChatGPT Policy: PF Developing Policies for ChatGPT?

  • Thread starter: Frabjous
  • Tags: chatgpt

Summary:
Physics Forums is considering developing policies regarding the use of ChatGPT, particularly in relation to sourcing and moderation. There is a consensus that while ChatGPT can aid in generating content, it often lacks accuracy and may muddy discussions, necessitating clear guidelines for its use. The community is divided on whether to ban chatbot-generated text outright or to allow it with proper citations, recognizing the challenges in detecting such content. Concerns about the reliability of AI-generated information and the potential for misuse are prevalent, prompting discussions on how to balance innovation with maintaining the integrity of scientific discourse. Overall, the forum is actively reviewing its stance on AI-generated content to ensure clarity and accuracy in discussions.
  • #31
mfb said:
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend asking it to learn about anything either. For entertainment purposes: Sure, whatever.
So I recently played with ChatGPT.

I started a thread about the use of so-called "cheek microphones" in musical theater productions. ChatGPT replied that these mics have become commonplace on Broadway, but that it boils down to the director's preference and the available budget. Then I tried to generalize to operas, and ChatGPT replied with almost the same text.

Yet, from the clips I have watched, I have virtually never seen Broadway actors wearing cheek mics. And from what I know, these mics are redundant in operas, since opera singers can produce loud sound with just the natural acoustics of the opera house.

PS: In Indonesia cheek mics are always worn for musical numbers.
 
  • #32
We are very, very early in the game of forming policies and opinions about AI. In my opinion, it is not even a generational question. I think it will take 400 years or so to sort out how to handle AI in society. During those 400 years, the AIs will change more rapidly than humans do.

As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers who are also flawed, and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
 
  • #33
400 years is a long time.
 
  • #34
anorlunda said:
As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers who are also flawed, and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
Humans are not flawed. They are the result of millions of years of evolution that gave them the capacity to adapt to an ever-changing environment.

The problem you are hoping to solve with your statement is determining the fine line between boldness and recklessness.

The former is the courage to take risks; the latter is taking unnecessary risks. One is a quality, the other is a flaw. The only true way of determining whether an action belongs to one category or the other is to evaluate the end result. Personally, I think that as living beings gain life experience, they classify more and more decisions as reckless rather than bold, which leads them to do less and less until they do nothing. And that is why one must die and be replaced by a new - inexperienced - being that will see life as an exciting adventure rather than an obstacle course that is impossible to get through.

AI will not be able to determine where that frontier is any better than we can; no living being on this Earth can, for that matter. It's the randomness of life that makes it impossible. Even if all the probabilities say you will most likely die if you attempt an action, someone must try it once in a while to verify that this is still true. The more people try and don't lose, the clearer the path becomes, and more and more people can follow it. It's the only way to adapt to an ever-changing environment.

This is the true reason why most of us don't want to ban drunk drivers: they don't always get it wrong, even if they don't always get it right. An example of an action that always has a bad consequence is drinking a cleanser made of pure ammonia. This act is so clearly reckless that we don't even feel the need for a law to forbid it. Smoking cigarettes? Not every smoker dies of a smoking-related disease. Some smokers use it to cope with anxiety, which may give them a life they wouldn't have dreamed of otherwise. Is there another way that could be better? Maybe. But nature doesn't care. As long as life finds its way through, it's good enough.
 
  • #35
Frabjous said:
The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers.
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
 
Likes: Petr Matas and Tom.G
  • #36
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
 
Likes: Hornbein, Petr Matas, PeroK and 2 others
  • #37
anorlunda said:
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
 
  • #38
anorlunda said:
Almost all of us humans fear that driverless cars are flawed and should be banned.
I don't think that's true. Do you have a source for that claim?
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Quantity: A user can "answer" tons of threads with long AI-generated nonsense, but they are unlikely to do that if they have to come up with answers on their own.
Future outlook: Tell the user that's wrong and they have a chance to learn and improve. You won't improve an AI with actions taken in a forum.
TeethWhitener said:
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
It doesn't look like spam unless you check it closely. It looks like detailed answers.
 
Likes: berkeman and PeroK
  • #39
mfb said:
I don't think that's true. Do you have a source for that claim?
Alas no. Just a bit of hyperbole. I have been working on organizing a debate on driverless cars and I haven't been able to find a volunteer for the pro side. But the pool I have been drawing from is mostly older people.
 
  • #40
[Attached image: Screenshot 2023-01-17 at 9.02.56 AM]
 
Reactions (Like, Haha): Petr Matas, dlgoff, russ_watters and 9 others
  • #41
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.

I have no trouble if PF considers this a DoS attack and responds accordingly.
 
Likes: Bystander, BillTre and russ_watters
  • #42
Vanadium 50 said:
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.
I like it when they bold their complaint against the moderators at the bottom of the post. It makes it easier to separate the bot from human-created content!
 
Reactions (Like, Haha): berkeman and BillTre
  • #43
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
 
  • #44
Vanadium 50 said:
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
This was the standard approach back in the Usenet era, when you had to be with an educational institution or a decent-sized tech company to have internet access. When bad stuff hit the Usenet feed, we would contact the sysadmin at the offender's institution; they would reply with something along the lines of "thank you for the heads-up - his account is deactivated until he and I have had a conversation", and the problem would be gone.

I am skeptical that anything like that can be made to work in today's internet. Our leverage over, for example, gmail.com is exactly and precisely zero.
 
Likes: Petr Matas and russ_watters
  • #45
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
 
  • #46
Vanadium 50 said:
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
Shush. :wink:
 
  • #47
I think that appropriate AI use, with attribution and ideally a link to the AI chat, should be allowed, and that the current AI policy should be reviewed. In my recent thread, I unknowingly came into conflict with it twice (for a different reason each time), but I believe my AI use was legitimate and transparent.

The first case appeared in my opening post:
Petr Matas said:
I've come across a paradox I can't resolve.

[...]

I considered the effect of gravitational red shift, but it doesn't seem to resolve the paradox, because [...].

I also tried to resolve the paradox using ChatGPT (in Czech), which concluded that the system only reaches a quasi-stationary state because it takes too long to reach equilibrium. However, I don't think this resolves the paradox either, because the paradox consists in the conclusion that no state of thermodynamic equilibrium exists whatsoever. Yet an isolated system should have such a state, shouldn't it?
As you can see, I tried to resolve the paradox using ChatGPT (i.e. quickly and without bothering humans). That had worked for me many times before, but not this time, so I had to ask humans. I felt it would be useful to describe the unsuccessful approaches I had taken, including the result of the discussion with the AI, to provide a link to that discussion, and to state explicitly that its answer was unsatisfactory.

After two hours and 10 posts of fruitful discussion, we were approaching the solution, and at that moment the thread was locked for about 14 hours for review over a possible conflict with the AI policies. I was quite worried that the members trying to help me would abandon the thread, but fortunately they didn't. They gave me food for thought, which even allowed me to compose a proof showing exactly where the intuition leading to the paradox went wrong.

The second case:
Chestermiller said:
Even in an ideal gas, the molecules collide with each other to exchange energy. What is the mean free path of an oxygen molecule in a gas that is at room temperature and a pressure of 1 bar?
Petr Matas said:
ChatGPT says 71 nm
Chestermiller said:
In other words, each molecule experiences a multitude of collisions and energy transfers per unit time, which translates into significant heat conduction within the gas.
Vanadium 50 said:
Tell us again how you're not using an AI?
Petr Matas said:
This was the second time 😉. I prefer leaving tedious but simple work to AI (while checking its answers, of course) to save time for tasks that are currently beyond AI's capabilities. I mark AI answers clearly whenever I use them. Or would you prefer me to conceal the use of AI? Or not use AI at all? What would be the point?
As you can see, Chestermiller asked a rather rhetorical question. I saw no point in looking up the formula and the values to be plugged in and doing the calculation myself, but the result was needed for us to move forward. So I asked ChatGPT, verified that the answer agreed with my expectation, cited the numeric result with attribution, and provided a link to the ChatGPT discussion.
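
For reference, that figure is easy to sanity-check against the standard kinetic-theory formula λ = k_B·T / (√2·π·d²·p). A minimal sketch, assuming a kinetic diameter of roughly 0.36 nm for O2 (a common tabulated value, but an assumption; the result scales as 1/d², so other tabulated diameters shift it noticeably):

    import math

    k_B = 1.380649e-23  # Boltzmann constant, J/K
    T = 298.0           # room temperature, K
    p = 1.0e5           # pressure, Pa (1 bar)
    d = 3.6e-10         # assumed O2 kinetic diameter, m

    # Kinetic theory: mean free path = k_B*T / (sqrt(2) * pi * d**2 * p)
    mfp = k_B * T / (math.sqrt(2) * math.pi * d**2 * p)
    print(f"mean free path ≈ {mfp * 1e9:.0f} nm")  # prints ≈ 71 nm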

Although the thread was not locked this time, I later found that my carefully attributed four-word reply to the trivial question violated the current explicit ban on AI-generated replies.

I am afraid that overly strict rules won't prevent people from using AI, but will rather lead them to conceal its use, and I am sure that is not what we want. These days, AI-generated text is becoming indistinguishable from human-written text, which makes such policies unenforceable. We should certainly avoid motivating people to conceal their use of AI.
 
  • #48
Perhaps @Greg Bernhardt can get XenForo to add an AI quote feature in addition to the normal quote box, where we can select the AI engine/version, the prompt used, and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iA Writer, for example) allow you to mark text as coming from an AI vs. a human and display it in a different color.
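
Purely as an illustration (this tag and its attributes are hypothetical, not an existing XenForo feature), such a quote might be marked up as a BBCode block carrying the engine, version, and prompt:

    [ai engine="ChatGPT" version="GPT-4" prompt="Mean free path of an O2 molecule at room temperature and 1 bar?"]
    Approximately 71 nm.
    [/ai]

The forum could then render the block in a distinct color and style, the way iA Writer distinguishes AI-authored text.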
 
Reactions (Like, Skeptical, Informative): DaTario, pines-demon, russ_watters and 4 others
  • #49
Can we have the same thing for the Magic 8 Ball?
 
  • #50
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
If there is ever a version trained to explain science or to be a teacher - yup. If it actually works, then a different quote box is a good idea.
 
Likes: russ_watters
  • #51
Rive said:
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
If there is ever a version trained to explain science or to be a teacher - yup. If it actually works, then a different quote box is a good idea.
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
 
Reactions (Like, Informative): Hornbein and renormalize
  • #52
Even if AI is perfected, we will always need a second opinion. Machines fail, and when they do we need to decide what to do, so a second opinion can help.

The danger is who will be giving the second opinion: a person, another AI, or even the same AI under a different guise. I can imagine the day when there are a few major players, and the AI service is provided to a lot of mom-and-pop services that are specialized in some way.

One dark example is the funeral business. There are many "family"-owned funeral parlors that sold their names to one of the major players from Canada. While they run the business, they do so using the "family" name.

Unsuspecting people who have used the funeral parlor for past funerals have no idea what transpired and think they are dealing with the same kind family.

https://www.memorials.com/info/largest-funeral-home-companies/
 
  • #53
Rive said:
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
Even if it is now, we may not know what is coming in the near future. The trouble with AI is the ease with which it generates tons of BS. How about requiring users either to understand and check AI answers before posting them, or to declare explicitly that the answer has not been verified? Although attribution is crucial, I think we should focus on the quality of posts rather than on their origin.
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
AI is just another tool and it is difficult to predict what uses it will have in the future. If you buy a new CNC milling machine, I am sure you won't throw away all the other tools in your workshop.
 
  • #54
Petr Matas said:
I think that appropriate AI use, with attribution and ideally a link to the AI chat, should be allowed, and that the current AI policy should be reviewed. In my recent thread, I unknowingly came into conflict with it twice (for a different reason each time), but I believe my AI use was legitimate and transparent.
I'll be frank about this: yours overall was a difficult case that was on the fuzzy line of what I'd consider acceptable, with or without AI. We get a fair number of questions like "ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'. Where can I publish it and how long do I have to wait to get the Noble Prize?" Those are obviously bad.

Answering a straightforward semi-rhetorical question? Ehh, I'm ok with it. Interestingly, Google's AI answers 155 nm, but it appears the ChatGPT answer was the right one.

My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended. Your initial paradox in fact wasn't (they never are), but the answer to the "paradox" was that you were correct about reality and just missed that physics does indeed point us there. You seemed to be very disappointed by that answer and sought further discussion of an issue that appeared to several contributors to be resolved (so we dropped out). ChatGPT is fine with an endless rabbit hole, but humans are not. I recognize that this will mean PF losing some traffic to that sort of question, but I'm ok with that (not sure if Greg is...).

And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary? If the answer is yes, then can you see how maybe it would have been better to come to us first? Both better for you and less annoying for us, who have to clean up the mess.
 
Likes: Nugatory and PeterDonis
  • #55
PeroK said:
Once AI progresses to that level, what would be the purpose of posting a question on PF, or seeking any human response on the Internet?
There may not be one. But until then, do we want our role to just be ChatGPT's janitor?
 
  • #56
jedishrfu said:
Perhaps @Greg Bernhardt can get XenForo to add an AI quote feature in addition to the normal quote box, where we can select the AI engine/version, the prompt used, and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iA Writer, for example) allow you to mark text as coming from an AI vs. a human and display it in a different color.
A special AI quote feature is just telling others: "Verifying the exactitude of this quote is left to the reader as an exercise ... because I did not bother to do it myself."

At the risk of repeating myself:
jack action said:
I still don't understand this fixation on who - or what - wrote the text.

I'm asking the question again: If it makes sense and the information is verifiable, why would anyone want to delete it?
And - again - there is a very clear PF rule about what is an acceptable source:
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):

https://mjl.clarivate.com/home

If someone obtains a piece of information via an AI engine, why wouldn't they be able to corroborate this information with an acceptable source? Heck, Wikipedia usually can cite its sources, so why can't people ask the AI engine to find its sources for them as well? Once you have a reliable source, who cares if you found it first with Wikipedia or ChatGPT?

There is a limit to laziness.
 
  • #57
russ_watters said:
"ChatGPT helped me develop this revolutionary new theory of gravity I'm calling '27-Dimensional Quantum Consciousness Gravity'."
I see. Tons of easily generated BS.

russ_watters said:
My issue with the thread (and the vibe I got; several others had the same one) was that it appeared to be designed to be open-ended.
I'm sorry it looked like that. Obviously, one of my conflicting results had to be wrong, and I did not know which. Once I understood it was the adiabatic profile, I wanted to know where I had made the mistake in applying the laws of motion. In the end I found that answer myself, but I wouldn't have been able to do it without your help.

russ_watters said:
ChatGPT is fine with an endless rabbit hole, but humans are not.
I see. Each ChatGPT answer spurs several new questions, and it is happy to answer them. Oh, I like that so much... Isn't that a reason to avoid bothering people unless it is necessary?

russ_watters said:
And also: did ChatGPT steer you in the wrong direction in the OP? Did it help make the "paradox" into a bigger mess than necessary?
No. It was just unable to help me, unlike in several previous cases.
 
  • #58
jack action said:
why can't people ask the AI engine to find its sources for them as well?
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
 
Likes: russ_watters
  • #59
Petr Matas said:
ChatGPT cannot provide sources. It does not know where it got the information from. However, I read that Perplexity can do that, although I have not tried it yet.
It may not provide ITS source but it might provide A source.
 
  • #60
jack action said:
It may not provide ITS source but it might provide A source.
I'm not sure you're understanding. ChatGPT itself is not searching the internet for sources. It has a "model" of what sources look like, so if you ask it for a source it will make one up. Sometimes they match real ones, but often (usually) they don't.
 
Reactions (Like, Skeptical): Nugatory and PeroK
