ChatGPT Policy: PF Developing Policies for ChatGPT?

  • Thread starter: Frabjous
  • Tags: chatgpt

AI Thread Summary
Physics Forums is considering developing policies regarding the use of ChatGPT, particularly in relation to sourcing and moderation. There is a consensus that while ChatGPT can aid in generating content, it often lacks accuracy and may muddy discussions, necessitating clear guidelines for its use. The community is divided on whether to ban chatbot-generated text outright or to allow it with proper citations, recognizing the challenges in detecting such content. Concerns about the reliability of AI-generated information and the potential for misuse are prevalent, prompting discussions on how to balance innovation with maintaining the integrity of scientific discourse. Overall, the forum is actively reviewing its stance on AI-generated content to ensure clarity and accuracy in discussions.
Frabjous
Gold Member
Is PF developing policies for ChatGPT? I just saw an answer that used it with acknowledgement.
 
I'm pretty sure stackoverflow is attempting to ban it. I think we should discourage it, but I am unsure how to "ban" it here. Well worth a discussion. At a minimum content from ChatGPT should be quoted.
 
I think that while it can be appropriate, it will be hard to moderate. There will be plenty of instances where it will only muddy the waters. I believe that there are several forbidden topics based on this reasoning.

On the other hand, detecting unattributed quotes will be a bear.
 
By the way. Often on PF, we recommend that people do searches using Google or other tools before posting. I have seen ChatGPT described as a new way to search the Internet. I think that's accurate.

Perhaps we should start recommending that users do research using Google and/or ChatGPT before posting on PF.
 
Isn't there already one?

https://www.physicsforums.com/threads/physics-forums-global-guidelines.414380/ said:
Acceptable Sources:
Generally, discussion topics should be traceable to standard textbooks or to peer-reviewed scientific literature. Usually, we accept references from journals that are listed in the Thomson/Reuters list (now Clarivate):

https://mjl.clarivate.com/home

Use the search feature to search for journals by words in their titles.

In recent years, there has been an increasing number of "fringe" and Internet-only journals that appear to have lax reviewing standards. We do not generally accept references from such journals. Note that some of these fringe journals are listed in Thomson Reuters. Just because a journal is listed in Thomson Reuters does not mean it is acceptable.

References that appear only on http://www.arxiv.org/ (which is not peer-reviewed) are subject to review by the Mentors. We recognize that in some fields this is the accepted means of professional communication, but in other fields we prefer to wait until formal publication elsewhere. References that appear only on viXra (http://www.vixra.org) are never allowed.

  • Specifying ChatGPT as a source is against this policy;
  • Using ChatGPT without stating it and not being able to specify a valid source is also against this policy.
@Demystifier simply broke PF rules in the example thread. Normally people ask for sources, but somehow @Greg Bernhardt chose to only emphasize the fact that it came from ChatGPT. I wonder if he would have done the same thing if the quote came from the Bible?

I can answer most questions asked on this forum by using Google. Without finding another valid source to back up my answer, it is invalid.
 
anorlunda said:
By the way. Often on PF, we recommend that people do searches using Google or other tools before posting. I have seen ChatGPT described as a new way to search the Internet. I think that's accurate.

Perhaps we should start recommending that users do research using Google and/or ChatGPT before posting on PF.
That presupposes a certain level of accuracy and transparency.
 
  • #10
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend asking it to learn about anything either. For entertainment purposes: sure, whatever.
 
  • #11
mfb said:
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question.
In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the internet? Unless it can cite sources I'd tend to suspect it of being more likely to talk rubbish than not on science topics.
 
  • #12
Ibix said:
Unless it can cite sources
That would be a very worthwhile improvement for chat bots if they could cite sources. But that may be easier said than done.

Note that Google has also started synthesizing answers shown higher up on the page than the links. That blurs the boundaries between chat bots and Internet searches.

I'll stick my neck out and predict the near future: the boundaries between AI and non-AI resources will continue to blur, making it increasingly difficult to make any pro- or anti-AI policies, or even to define what AI is and is not.
 
  • #13
anorlunda said:
That would be a very worthwhile improvement for chat bots if they could cite sources.
For human posters too. Grumblegrumblemuttergrumble.
 
  • #14
Ibix said:
In fact, doesn't it learn from what it reads? What's the ratio of garbage crackpottery to decent science on the internet? Unless it can cite sources I'd tend to suspect it of being more likely to talk rubbish than not on science topics.
It seems to have some weights for how credible different inputs are. I don't think it can cite anything because the answer isn't coming from a small set of specific sources; it's coming from the overall knowledge it gained over time.
 
  • #15
I think we should add to the rules an explicit ban on posting chatbot-generated text. That won't make it go away of course, but it will give us justification for acting when we do recognize it.
(There would be an exception when a chatbot itself is the topic of discussion, and I can imagine a long-running "look at what the bot did THIS time!" sort of humor thread.)

It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection. I am not long-term optimistic about watermarking because there will be more bots, not all will choose to watermark, and I'd expect that watermarks created by one bot can be removed by another. Nonetheless, the information ecosystem (that's everything from letters to the editor to letters to your legislative representative, from student essays to journal submissions, and pretty much every online discussion everywhere) will eventually evolve some way of dealing with this stuff, just as it did with spam. This is, I think, an argument for putting the policy in place early even with imperfect enforcement.
 
  • #16
I'm not likely to say anything that hasn't already been said but I just looked at the post Frabjous made with the ChatGPT reference.

The problem I have with using this as some kind of "official" reference is that, well, it might be wrong. I mean, it sounds really good. And, mind you, I'm not saying that humans can't make mistakes when they post. (Heaven knows I do that all too often!) But we expect the people here to make the occasional mistake. That's why we want references that have some kind of (Science-based) community approval. I might be willing to look something up on it, but as we can't know just how much ChatGPT already knows I wouldn't want to trust it as a recognized source.

Just my two cents.

-Dan
 
  • #17
Nugatory said:
I think we should add to the rules an explicit ban on posting chatbot-generated text. That won't make it go away of course, but it will give us justification for acting when we do recognize it.
If a credible source cannot be provided, isn't that a good justification for acting?

Nugatory said:
It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection.
Would you delete an answer from a chatbot that is true and easily verifiable? What would be the point?
 
  • #18
Just to be clear, we're talking about replies as well as OPs.

Nugatory said:
It's easy to deal with recognized chatbot text - just delete it. The hard part is going to be detection.
Stating a policy, or putting it in the guidelines, is OK. Making it known that we don't like AI text is better than silence. But the policy should be: no AI-generated text without citing the AI source.

Even then, I'm skeptical. It is entirely foreseeable that people who have trouble with English will be able to paste their clumsy but original text into an AI and get well-phrased text in return. The bot might even throw in translation at the same time as quality improvement. That's just like a spell checker or grammar checker on steroids. So it is also foreseeable that the line between admirable and deplorable AI bots can itself get blurred. For sure, someone will claim that their spell checker is an AI.

But any attempt to enforce an AI policy by deleting posts will likely be disastrous unless there is powerful evidence, like a watermark, that the text is from one of the "deplorable" AIs.

By the way, if we change the guidelines, how is that announced to the entire PF membership? I don't recall ever receiving notice that the guidelines changed.
 
  • #19
anorlunda said:
By the way, if we change the guidelines, how is that announced to the entire PF membership? I don't recall ever receiving notice that the guidelines changed.
I believe I recall instances when 'Feedback & Announcements' has been used, and Greg has used a PF-wide announcement referring to revisions (perhaps substantial revisions/updates) to the PF Guidelines in the past.
 
  • #20
I just tried an experiment. I asked ChatGPT the following.

I want to ask a physicist about the twin paradox, but I have trouble expressing the question. What is the best way to ask the question?
The reply said:
To ask a physicist about the twin paradox, you might try framing your question as follows:

"Could you explain the twin paradox in special relativity? How does the difference in time experienced by the two twins arise, and what does this tell us about the nature of time and motion?"

This question clearly states the topic you are interested in (the twin paradox) and asks for an explanation of the key concepts involved (time dilation and relative motion). It also invites the physicist to provide additional insights or elaboration on the significance of the paradox.

Should that use be banned?
 
  • #21
anorlunda said:
It is entirely foreseeable that people who have trouble with English will be able to paste their clumsy but original text into an AI and get well-phrased text in return.
That is indeed a possibility, and one that I would welcome. If this silver lining does develop, it won't show up just here at PF; it will show up across the entire information ecosystem (BTW, I just made that term up and I don't particularly like it) - for example, ESL students will want to use it for schoolwork. So I think we can wait and see; there's no reason for PF to be a leading innovator here.
 
  • #22
Perhaps a short-term approach is an explicit mention in the rules that we understand that AI support can be a good thing, but that it can also be abused and can produce fluent garbage (etc.). Therefore we aren't banning it, but we explicitly note that what mentors judge to be abuse of AI-generated content may be deleted and possibly infracted, and that this policy is under active review while we see how it all pans out.
 
  • #23
I still don't understand this fixation on who - or what - wrote the text.

I'm asking the question again: If it makes sense and the information is verifiable, why would anyone want to delete it?
 
  • #24
jack action said:
and the information is verifiable
Well, that's the question, isn't it.

I think the problem with AI generated text is that it makes it very easy for me to produce something written confidently and clearly and completely wrong. An automated and improved way of copying and pasting from random papers I found via keyword search, perhaps. That's why I was suggesting a rule against abuse of AI generated text - so if people post verifiable text that's fine, but if they repeatedly post authoritative sounding nonsense then we explicitly note that "but ChatGPT said it" is not a defense.
 
  • #25
Maybe some of that promising future is already here. A quick test of ChatGPT's ability to assist with better writing is:
[attached screenshot of the ChatGPT writing test]


I think we need to enormously broaden our horizons about how people will use the bots and for what purposes. Factual questions and answers are just one of nearly infinite possibilities.

ChatGPT has only been out for a month. Jumping to conclusions now would be like saying, "I think there is a world market for maybe five computers."
 
  • #26
anorlunda said:
Maybe some of that promising future is already here. A quick test of ChatGPT's ability to assist with better writing is:
[attached screenshot]

I think we need to enormously broaden our horizons about how people will use the bots and for what purposes. Factual questions and answers are just one of nearly infinite possibilities.

ChatGPT has only been out for a month. Jumping to conclusions now would be like saying, "I think there is a world market for maybe five computers."
I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers.
 
  • #27
Ibix said:
That's why I was suggesting a rule against abuse of AI generated text - so if people post verifiable text that's fine, but if they repeatedly post authoritative sounding nonsense then we explicitly note that "but ChatGPT said it" is not a defense.
Again, this is already covered by the actual PF rules. Here are reasons given in actual posts, from actual closed threads:

https://www.physicsforums.com/threads/nuclear-fusion-and-anti-nuclear-technology.1047161/post-6820264 said:
please keep in mind that thread starts in the technical PF forums need to be based on the mainstream literature, not opinions, and you should always include links to reputable sources when starting threads in the technical forums.
https://www.physicsforums.com/threads/hello-from-portugal.965451/post-6127308 said:
Please do you own search, and if you can find sources that meet the criteria for PF threads (peer-reviewed mainstream scientific articles), then start a thread in the technical forums with a link to that article, and ask your specific question.

Why wouldn't these reasons apply to a ChatGPT post?

As far as I know, ChatGPT is not "peer-reviewed mainstream scientific literature".
 
  • #28
Frabjous said:
I think this is a straw man argument. The concern is not people improving their English, but AI creating plausible-sounding incorrect arguments/answers.
You're really missing the point. Banning AI Chatbot prose because it might be abused is like banning keyboards because they might type nasty words.

Rules must focus on the abuse, not the tools.
 
  • #29
Frabjous said:
The concern is not people improving their English
I think that's a concern too. Online translators whose purpose is an honest attempt at translation do exist. Post-editing the result of a clumsy human translation or a clumsy online translator with a chatbot just muddles the issue, but can't actually help the underlying problem.

In such a case I would rather work with the clumsy original.

Regarding ChatGPT usage in answers/questions: I too think that the general citation and source rules are applicable and, for the time being, sufficient (if kept in mind and actually applied if/as needed/possible).
 
  • #30
I've used ChatGPT here for some frivolous poetry writing and for answering a react.js question. Its responses were quite impressive.

In the react.js case, I asked for citations as well. It answered the question and provided reasonable citations. I added some commentary and my own citation to show I did some research as well.

I don't know if ChatGPT responses are guaranteed to be consistent with the citations they provide, but I have to say it is a great search feature reminiscent of the computers on the original Star Trek.

I also have to say it's a great search tool so far. I shudder to think how big business will muddy the waters with adverts and other such nonsense embedded within its answers.

Look at what Google has done with search, going the route of the AT&T Yellow Pages: it's sad that results are often intermixed with other nonsense based on which keywords trigger which advertisements, a key part of Google's money-making operation.
 
  • #31
mfb said:
ChatGPT doesn't understand science. It writes confident-sounding answers that might or might not be right, and it's wrong too often to be used as an answer to a science question. I would not recommend asking it to learn about anything either. For entertainment purposes: sure, whatever.
So I had recently played with ChatGPT.

I started the thread about using so-called "cheek microphones" in musical theater productions. ChatGPT replied that these mics have become commonplace on Broadway, but that it boils down to the director's preference and the available budget. Then I tried to generalize to operas, for which ChatGPT replied with almost the same text.

Yet I have virtually never seen (in the clips I have watched) Broadway actors wearing cheek mics. And from what I know, these mics are redundant in operas, since opera singers can produce loud sound with just the natural amplification of the opera house.

PS: In Indonesia cheek mics are always worn for musical numbers.
 
  • #32
We are very very early in the game of forming policies and opinions about AI. In my opinion, it is not even a generational question. I think that it will take 400 years or so to sort out how to handle AI in societies. During those 400 years the AIs will change more rapidly than humans change.

As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers who are also flawed, and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
 
  • #33
400 years is a long time.
 
  • #34
anorlunda said:
As evidence, I think of driverless cars. Almost all of us humans fear that driverless cars are flawed and should be banned. But almost none of us would ban human drivers who are also flawed, and who may drive drunk or drowsy. That's not fully rational, and it will take centuries for us to figure out what is rational.
Humans are not flawed. They are the result of millions of years of evolution that gave them the capacity to adapt to an ever-changing environment.

The problem you are hoping to solve with your statement is to determine the thin line between boldness and recklessness.

The former is the courage to take risks; the latter is taking unnecessary risks. One is a quality, the other is a flaw. The only true way of determining whether an action belongs to one category or the other is to evaluate the end result. Personally, I think that as living beings gain life experience, they classify more and more decisions as reckless rather than bold, which leads them to do less and less until they do nothing. And that is why one must die and be replaced by a new - inexperienced - being that will see life as an exciting adventure rather than an obstacle course impossible to get through.

AI will not be able to determine better than us where that frontier is; nor better than any living being on this Earth, for that matter. It's the randomness of life that makes it impossible. Even if all probabilities say you will most likely die if you do an action, one must try it once in a while to verify that it is still true. The more people try and don't lose, the clearer the path becomes, and more and more people can follow it. It's the only way to adapt to an ever-changing environment.

This is the true reason why most of us don't want to ban drunk drivers: they don't always get it wrong even if they don't always get it right. An example of an action that always returns a bad consequence is drinking a cleanser made of pure ammonia. This is such a clear reckless act that we don't even feel the need to have a law to forbid it. Smoking cigarettes? Not every smoker dies of a smoking-related disease. Some smokers use it to help them cope with anxiety, which may give them a life that they wouldn't have dreamed of otherwise. Is there another way that could be better? Maybe. But nature doesn't care. As long as life finds its way through, it's good enough.
 
  • #35
Frabjous said:
The concern is not people improving their english, but AI creating plausible sounding incorrect arguments/answers.
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
 
  • #36
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
 
  • #37
anorlunda said:
Just quantity. Besides, it is an important part of PF's mission statement to help educate people, even the ones that post silly stuff. We don't want to waste our time trying to train something that's not even human.
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
 
  • #38
anorlunda said:
Almost all of us humans fear that driverless cars are flawed and should be banned.
I don't think that's true. Do you have a source for that claim?
TeethWhitener said:
What’s special about an AI doing that? There are already plenty of real live people on PF who do that without an AI’s help.
Quantity: A user can "answer" tons of threads with long AI-generated nonsense, but they are unlikely to do that if they have to come up with answers on their own.
Future outlook: Tell the user that's wrong and they have a chance to learn and improve. You won't improve an AI with actions taken in a forum.
TeethWhitener said:
I guess I'm confused. I'm trying to think of a use case where you're worried about quantity that doesn't fall afoul of preexisting prohibitions on spam posting.
It doesn't look like spam unless you check it closely. It looks like detailed answers.
 
  • #39
mfb said:
I don't think that's true. Do you have a source for that claim?
Alas no. Just a bit of hyperbole. I have been working on organizing a debate on driverless cars and I haven't been able to find a volunteer for the pro side. But the pool I have been drawing from is mostly older people.
 
  • #40
[attached screenshot]
 
  • #41
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.

I have no trouble if PF considers this a DoS attack and responds accordingly.
 
  • #42
Vanadium 50 said:
There have been a number of barely-lucid posts that appear to be from ChatGPT - or possibly ChatLSD. Obviously, this creates a lot of work for the Mentors.
I like it when they bold their complaint against the moderators at the bottom of the post. It makes it easier to separate the bot from human created content!
 
  • #43
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
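Mechanically, that kind of restriction boils down to a subnet-membership test on the offending address range. A minimal sketch in Python (the CIDR block and addresses below are reserved documentation examples, not PF's actual tooling):

```python
import ipaddress

# Hypothetical blocked range; 203.0.113.0/24 is a reserved documentation block.
blocked = ipaddress.ip_network("203.0.113.0/24")

def is_blocked(addr: str) -> bool:
    """Return True if addr falls inside the blocked subnet."""
    return ipaddress.ip_address(addr) in blocked

print(is_blocked("203.0.113.7"))   # True: inside the /24
print(is_blocked("198.51.100.7"))  # False: a different documentation range
```

Restricting a whole /24 catches every host at the offending institution, which is exactly the collective-pressure effect described above.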
 
  • #44
Vanadium 50 said:
I got no problem with turning off various subnets in response. If you get a complaint from bullwinkle.moose@wossamotta.edu that he can't post any problems here, I have no problem replying "Someone with the email boris.badanov@wossamotta.edu was attempting to damage PF. We have been able to restrict the damage to wossamotta.edu. Unhappy with this? Perhaps you and your classmates should speak to Boris."
This was the standard approach back in the Usenet era when you had to be with an educational institution or decent-sized tech company to have internet access. When bad stuff hit the Usenet feed we would contact the sysadmin at the offender's institution, they would reply with something along the lines "thank you for the heads-up - his account is deactivated until he and I have had a conversation" and the problem would be gone.

I am skeptical that anything like that can be made to work in today's internet. Our leverage over, for example, gmail.com is exactly and precisely zero.
 
  • #45
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
 
  • #46
Vanadium 50 said:
While I wouldn't say that things work this way today - they don't - and without divulging mentor tools, I think you have a little more knowledge and leverage than that. :smile:
Shush. :wink:
 
  • #47
I think that appropriate AI use with attribution and ideally a link to the AI chat should be allowed and the current AI policy should be reviewed. In my recent thread, I got into conflict with it unknowingly twice (for a different reason each time), but I believe that my AI use was legitimate and transparent.

The first case appeared in my opening post:
Petr Matas said:
I've come across a paradox I can't resolve.

[...]

I considered the effect of gravitational red shift, but it doesn't seem to resolve the paradox, because [...].

I also tried to resolve the paradox using ChatGPT (in Czech), which concluded that the system only reaches a quasi-stationary state because it takes too long to reach equilibrium. However, I don't think this resolves the paradox either, because the paradox consists in the conclusion that no state of thermodynamic equilibrium exists whatsoever. However, an isolated system should have such a state, shouldn't it?
As you can see, I tried to resolve the paradox using ChatGPT (i.e. quickly and without bothering humans). It had worked for me many times before, but not this time. Therefore I had to ask humans. I felt that it would be useful to describe the unsuccessful approaches I took, including the result of the discussion with AI, to provide a link to the discussion, and to explicitly state that the answer was unsatisfactory.

After two hours and 10 posts of fruitful discussion, we were approaching the solution, and at that moment the thread was locked for about 14 hours for review due to a possible conflict with AI policies. I was quite worried that the members trying to help me would abandon the thread, but fortunately they didn't. They gave me food for thought, which even allowed me to compose a proof showing where exactly the intuition leading to the paradox went wrong.

The second case:
Chestermiller said:
Even in an ideal gas, the molecules collide with each other to exchange energy. What is the mean free path of an oxygen molecule in a gas that is at room temperature and a pressure of 1 bar?
Petr Matas said:
ChatGPT says 71 nm
Chestermiller said:
In other words, each molecule experiences a multitude of collisions and energy transfers per unit time, which translates into significant heat conduction within the gas.
Vanadium 50 said:
Tell us again how you're not using an AI?
Petr Matas said:
This was the second time 😉. I prefer leaving tedious but simple work to AI (while checking its answers of course) to save time for tasks which are currently beyond the AI capabilities. I mark the AI answers clearly whenever I use them. Or would you prefer me to conceal the use of AI? Or not use AI at all? What is the point?
As you can see, Chestermiller asked a rather rhetorical question. I saw no point in searching for the formula and the values to plug in and doing the calculation myself, but the result was needed to allow us to move forward. So I asked ChatGPT, verified that the answer was in agreement with my expectation, cited the numeric result with attribution, and provided a link to the ChatGPT discussion.
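For what it's worth, the 71 nm figure is easy to sanity-check against the standard hard-sphere kinetic-theory formula λ = k_B T / (√2 π d² p). A minimal sketch, where the kinetic diameter d ≈ 0.36 nm for O₂ is an assumed textbook value:

```python
import math

def mean_free_path(T, p, d):
    """Hard-sphere mean free path: lambda = k_B * T / (sqrt(2) * pi * d**2 * p)."""
    k_B = 1.380649e-23  # Boltzmann constant, J/K
    return k_B * T / (math.sqrt(2) * math.pi * d ** 2 * p)

# O2 at room temperature and 1 bar
mfp = mean_free_path(T=293.0, p=1.0e5, d=3.6e-10)
print(f"{mfp * 1e9:.0f} nm")  # about 70 nm
```

So in this instance the bot's number agrees with the textbook formula to within the uncertainty in the assumed molecular diameter.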

Although the thread was not locked this time, I later found that my carefully attributed four-word reply to the trivial question violated the current explicit ban on AI-generated replies.

I am afraid that overly strict rules won't prevent people from using AI, but will rather push them to conceal its use, and I am sure that is not what we want. These days, AI-generated text is becoming indistinguishable from human-written text, which makes such policies unenforceable. We should certainly avoid motivating people to conceal the use of AI.
 
  • #48
Perhaps @Greg Bernhardt can get XenForo to add an AI quote feature in addition to the normal quote box, where we can specify the AI engine/version, the prompt used, and the AI response.

It would be like a super quote box that clearly shows it to be an AI quote insert.

Some commercial editors (iA Writer, for example) allow you to mark text as coming from an AI vs. a human and display it in a different color.
 
  • #49
Can we have the same thing for the Magic 8 Ball?
 
  • #50
IMHO, for specialized subjects that thing is just a sophisticated BS generator, and this should be clearly displayed.
If there is ever a version trained to explain science or to be a teacher - yup. If it actually works, then a different quote box is a good idea.
 
