Is PF developing policies for ChatGPT? I just saw an answer that used it with acknowledgement.
https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):
Use the search feature to search for journals by words in their titles.
In recent years, there has been an increasing number of "fringe" and Internet-only journals that appear to have lax reviewing standards. We do not generally accept references from such journals. Note that some of these fringe journals are listed in Thomson Reuters. Just because a journal is listed in Thomson Reuters does not mean it is acceptable.
References that appear only on http://www.arxiv.org/ (which is not peer-reviewed) are subject to review by the Mentors. We recognize that in some fields this is the accepted means of professional communication, but in other fields we prefer to wait until formal publication elsewhere. References that appear only on viXra (http://www.vixra.org) are never allowed.
That presupposes a certain level of accuracy and transparency.

By the way: often on PF, we recommend that people do searches using Google or other tools before posting. I have seen ChatGPT described as a new way to search the Internet. I think that's accurate.
Perhaps we should start recommending that users do research using Google and/or ChatGPT before posting on PF.
Replying to: "ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question."

In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the internet? Unless it can cite sources, I'd tend to suspect it of being more likely to talk rubbish than not on science topics.
Replying to: "Unless it can cite sources …"

That would be a very worthwhile improvement for chat bots if they could cite sources. But that may be easier said than done.
Replying to: "That would be a very worthwhile improvement for chat bots if they could cite sources."

For human posters too. Grumblegrumblemuttergrumble.
Replying to: "In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the internet? Unless it can cite sources, I'd tend to suspect it of being more likely to talk rubbish than not on science topics."

It seems to have some weights for how credible different inputs are. I don't think it can cite anything, because the answer isn't coming from a small set of specific sources; it's coming from the overall training data.
Replying to: "I think we should add to the rules an explicit ban on posting chatbot-generated text. That won't make it go away of course, but it will give us justification for acting when we do recognize it."

If a credible source cannot be provided, isn't that a good justification for acting?
Replying to: "It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection."

Would you delete an answer from a chatbot that is true and easily verifiable? What would be the point?
Replying to: "It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection."

Stating a policy, or putting it in the guidelines, is OK. Making it known that we don't like AI text is better than silence. But the policy should be: no AI-generated text without citing the AI source.
Replying to: "By the way, if we change the guidelines, how is that announced to the entire PF membership? I don't recall ever receiving notice that the guidelines changed."

I believe I recall instances when 'Feedback & Announcements' has been used, and Greg has used a PF-wide announcement referring to revisions (perhaps substantial revisions/updates) to the PF Guidelines in the past.
The prompt: "I want to ask a physicist about the twin paradox, but I have trouble expressing the question. What is the best way to ask the question?"

The reply said:
To ask a physicist about the twin paradox, you might try framing your question as follows:
"Could you explain the twin paradox in special relativity? How does the difference in time experienced by the two twins arise, and what does this tell us about the nature of time and motion?"
This question clearly states the topic you are interested in (the twin paradox) and asks for an explanation of the key concepts involved (time dilation and relative motion). It also invites the physicist to provide additional insights or elaboration on the significance of the paradox.
Replying to: "It is entirely foreseeable that people who have trouble with English will be able to paste their clumsy, but original, text into an AI and get well-phrased text in return."

That is indeed a possibility, and one that I would welcome. If this silver lining does develop, it won't just show up here at PF; it will show up across the entire information ecosystem (BTW, I just made that term up and I don't particularly like it) - for example, ESL students will want to use it for schoolwork. So I think we can wait and see; there's no reason for PF to be a leading innovator here.
Replying to: "and the information is verifiable"

Well, that's the question, isn't it?
Replying to: "Maybe some of that promising future is already here. A quick test of ChatGPT's ability to assist with better writing is: [attachment 319591]"

I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers.
I think we need to enormously broaden our horizons about how people will use the bots and for what purposes. Factual questions and answers are just one of nearly infinite possibilities.
ChatGPT has only been out for a month. Jumping to conclusions now would be like saying, "I think there is a world market for maybe five computers."
Replying to: "That's why I was suggesting a rule against abuse of AI-generated text - so if people post verifiable text that's fine, but if they repeatedly post authoritative-sounding nonsense then we explicitly note that 'but ChatGPT said it' is not a defense."

Again, this is already covered by actual PF rules. Here are reasons mentioned in actual posts, from actual closed threads:
https://www.physicsforums.com/threads/nuclear-fusion-and-anti-nuclear-technology.1047161/post-6820264 said:
please keep in mind that thread starts in the technical PF forums need to be based on the mainstream literature, not opinions, and you should always include links to reputable sources when starting threads in the technical forums.
https://www.physicsforums.com/threads/hello-from-portugal.965451/post-6127308 said:
Please do your own search, and if you can find sources that meet the criteria for PF threads (peer-reviewed mainstream scientific articles), then start a thread in the technical forums with a link to that article, and ask your specific question.
Replying to: "I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers."

You're really missing the point. Banning AI chatbot prose because it might be abused is like banning keyboards because they might type nasty words.
Replying to: "The concern is not people improving their English"

I think that's a concern too. Online translators whose purpose is ... well, an honest attempt at translation do exist. Post-editing the result of a clumsy human translation or a clumsy online translator with a chatbot just muddles the issue, but can't actually help the underlying problem.
Replying to: "ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend asking it to learn about anything either. For entertainment purposes: sure, whatever."

So I had recently played with ChatGPT.
Replying to: "As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers who are also flawed, and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational."

Humans are not flawed. They are the result of millions of years of evolution that gave them the capacity to adapt to an ever-changing environment.
Replying to: "The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers."

What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.