The correct interpretation of QM... according to a "language model" AI

AI Thread Summary
Recent discussions highlight the capabilities of GPT-3 and its less powerful counterpart, GPT-J, in generating human-like text across various genres, including essays on complex topics like quantum mechanics. While these models can produce coherent and original-sounding content, concerns arise regarding their potential misuse in academic settings, particularly in peer-reviewed journals. The fear is that algorithmically generated texts could be mistaken for legitimate research, leading to the publication of nonsensical or misleading information. Instances of this happening have been noted, including a controversial case where a professor faced consequences for submitting a politically charged, nonsensical paper. The conversation emphasizes the need for transparency in AI-generated content, suggesting that proper attribution or disclaimers may be necessary to inform readers about the origins of such works. The potential for AI to produce high-quality academic papers raises questions about the future of peer review and the integrity of scholarly publishing.
mitchell porter
In the last few years, many of us have heard of GPT-3, a "language model" trained on terabytes of Internet prose, with a remarkable ability to generate essays, dialogues, and other original works. There is a similar but less powerful model called GPT-J which can be accessed freely at the following website:

https://6b.eleuther.ai/

On a whim, I just asked it to write a work about the "correct interpretation of quantum mechanics". The result may be seen here:



There's no breakthrough in the philosophy of quantum theory here. The really significant, even sinister, thing is that we now have these models that can produce imitations of human genres of writing (in this case the genre is, "opinionated essay about physics that doesn't contain much physical detail"), based on patterns distilled from the human originals.
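The "patterns distilled from the human originals" idea can be illustrated with a deliberately simplified toy sketch (not GPT-J itself, which is a large neural network trained on terabytes of text): a bigram model that records which word tends to follow which in a tiny corpus, then samples new text from those transitions. The corpus and function names below are made up for illustration.

```python
import random
from collections import defaultdict

# Tiny illustrative corpus; a real model trains on terabytes of prose.
corpus = (
    "the correct interpretation of quantum mechanics is debated "
    "the many worlds interpretation of quantum mechanics is popular "
    "the copenhagen interpretation of quantum mechanics is standard"
)

# "Training": record, for each word, which words followed it in the corpus.
words = corpus.split()
model = defaultdict(list)
for prev, nxt in zip(words, words[1:]):
    model[prev].append(nxt)

def generate(start, length=8, seed=0):
    """Sample text by repeatedly drawing a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:  # dead end: no observed continuation
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(generate("the"))
```

The output imitates the corpus's style without copying any one sentence, which is the same basic trick as GPT-J, just with bigram counts in place of a neural network.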

So maybe it's actually off-topic for this sub-forum, although I am curious what the regulars think about it...
 
Likes PeroK
Yes, this is quite interesting. We moved your thread to the GD section for a wider audience, as it's part QM and part AI.

One thing to be aware of is that GPT-3 is basically recombining bits and pieces of established work into new content that may or may not be valid. It's scary to think that a human peer reviewer might mistake it for original work and publish it in some journal for other like-minded folks to read.

I think at some point a byline will need to be added to any GPT-X-generated content, saying where it came from, so future readers know what to expect.
 
Likes mitchell porter
jedishrfu said:
It's scary to think that a human peer reviewer might mistake it for original work and publish it in some journal for other like-minded folks to read.
I don't remember details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case was with use of an algorithmically generated text (accepted to some conference proceedings?).
 
Borek said:
I don't remember details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case was with use of an algorithmically generated text (accepted to some conference proceedings?).

Recently a professor got some nonsense written in the style of political correctness published. He was fired for conducting unethical experiments on the journal's editors.

I'd say that AI would get an A in a high school English class.
 
Likes jedishrfu
I have just conducted the experiment of using a fake hep-th abstract, generated at snarxiv.org, as an input for GPT-J. The result is not a coherent paper, but it's shockingly meaningful, even original in places. If the network were tuned further, perhaps by a competitive "GAN"-style process, I really wonder how closely it could approach a real arxiv paper in quality.
 
mitchell porter said:
I really wonder how closely it could approach a real arxiv paper in quality.
Not sure about GPT-J, but GPT-3 has an interesting example at https://www.gwern.net/GPT-3-nonfiction#arxiv-paper If you're not familiar with the site: the bold type is the prompt and the small type is where GPT-3 takes over.
 