Jarvis323
sbrothy said:
TIL that the language "AI" ChatGPT is, perhaps not surprisingly, pretty good at spewing technobabble:
"[...] We find that it is effective at paraphrasing and explaining concepts in a variety of styles, but not at genuinely connecting concepts. It will provide false information with full confidence and make up statements when necessary. [...]" -- https://arxiv.org/abs/2301.08155
Admittedly I didn't read the paper, and the above quote is taken out of context (for comedic value; YMMV). There may well be something of value here, but at first glance it sure sounds like they made a crank-bot.
Some of the humor is lost when you read some of the more serious discussions here about whether it can pass exams, though. Still, GPT-3 is impressive, but maybe they need a math-bot to go with it. :)
EDIT:
In fact, if you read the text under the heading "jailbreaks" in the first link, I'm not sure it's funny at all. So it goes.

Essentially, what it is trying to do is write something like what you might expect a human to write as a follow-up to the prompt, based on the examples it has seen.
The crucial point is that this means it isn't trying to get the "correct" answer. And if it was trained on technobabble (it probably was) along with real scientific source material, then it might sometimes have difficulty telling the difference. Similarly, it may have been trained on things written by crackpots, on popular-science articles, and the like. If your prompt resembles a question that has a lot of bad answers in the training data, then it will do a good job (according to its "goal") by giving you a bad answer.
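To make that point concrete, here is a toy sketch (not how a real language model works, just an illustration of the objective): a frequency model "trained" on a made-up corpus in which a wrong answer happens to be the most common continuation of a prompt. The model faithfully reproduces the training distribution, so it confidently returns the wrong answer. All the data below is invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical "training corpus": (prompt, continuation) pairs.
# For the second prompt, the crank answer appears more often than the correct one.
corpus = [
    ("what causes tides?", "the moon's gravity"),
    ("what causes tides?", "the moon's gravity"),
    ("is the earth flat?", "yes, obviously"),                 # crank answer, seen 3x
    ("is the earth flat?", "yes, obviously"),
    ("is the earth flat?", "yes, obviously"),
    ("is the earth flat?", "no, it is an oblate spheroid"),   # correct answer, seen once
]

# "Training": count how often each continuation follows each prompt.
model = defaultdict(Counter)
for prompt, continuation in corpus:
    model[prompt][continuation] += 1

def generate(prompt: str) -> str:
    """Return the most likely continuation under the training distribution.

    The objective is purely "what text tends to follow this prompt?" --
    truth never enters into it.
    """
    return model[prompt].most_common(1)[0][0]

print(generate("what causes tides?"))  # majority answer happens to be right
print(generate("is the earth flat?"))  # majority answer is confidently wrong
```

A real model interpolates over vastly more data, but the objective is the same: match the distribution of the training text, not the facts.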
Even assuming it is trained really, really well on such an assortment of sources (so that it could perfectly delineate which kind of answer to give), the kind of answer you get will still depend, perhaps subtly, on the prompt.
It can already take cues about style quite well. You might try something like "Explain <x>, accurately, at the research level, in the style of Albert Einstein, using mathematics, and format your answer in LaTeX," and it may do better.
In the long run, it might help to have some settings which automatically insert optimal cues for getting the best answers to scientific questions. It might also help, if the goal is writing scientific articles, to train or fine-tune it purely on textbooks and research articles.
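The "settings that automatically insert cues" idea above amounts to a prompt template. A minimal sketch, with the preamble text entirely made up as an example of such a cue:

```python
# Hypothetical cue text; what actually works best would have to be found
# empirically for a given model.
SCIENCE_PREAMBLE = (
    "Answer accurately, at the research level, using mathematics where "
    "appropriate, and format any equations in LaTeX.\n\n"
)

def with_science_cues(question: str) -> str:
    """Wrap a raw question in cues intended to steer the model's style."""
    return SCIENCE_PREAMBLE + "Question: " + question + "\nAnswer:"

print(with_science_cues("Explain the Casimir effect."))
```

The user types only the question; the wrapper silently supplies the style cues before the text is sent to the model.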
But to really make sure it gives accurate answers in science and math, you may need to dedicate a lot of qualified human effort to "grading" its answers to help train it.
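That "grading" step can be pictured as a data-filtering stage, loosely in the spirit of learning from human feedback: qualified graders score candidate answers, and only well-graded (prompt, answer) pairs feed back into fine-tuning. Everything here (the data, the threshold, the function name) is invented for illustration.

```python
# Hypothetical graded answers; in practice these would come from expert reviewers.
graded = [
    {"prompt": "State Noether's theorem.",
     "answer": "Every differentiable symmetry of the action corresponds to a conservation law.",
     "grade": 5},
    {"prompt": "State Noether's theorem.",
     "answer": "Energy is always conserved no matter what.",
     "grade": 1},
]

def build_finetune_set(graded, min_grade=4):
    """Keep only answers a qualified human rated at least min_grade."""
    return [(g["prompt"], g["answer"]) for g in graded if g["grade"] >= min_grade]

for prompt, answer in build_finetune_set(graded):
    print(prompt, "->", answer)
```

The expensive part is not the code, of course, but the qualified human time needed to produce reliable grades at scale.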