The correct interpretation of QM.... according to a "language model" AI

Discussion Overview

The discussion revolves around the implications of AI language models, particularly GPT-3 and GPT-J, in generating content related to the interpretation of quantum mechanics. Participants explore the potential for these models to produce text that mimics human writing, raising concerns about originality and the integrity of peer review in academic publishing.

Discussion Character

  • Debate/contested
  • Exploratory
  • Technical explanation

Main Points Raised

  • Some participants express concern that AI-generated content may be mistaken for original work, potentially leading to its acceptance in academic journals.
  • Others highlight the ease of abusing peer review processes, citing past incidents where algorithmically generated texts were accepted for publication.
  • A participant shares an experiment using a fake abstract to generate text with GPT-J, noting that the output was surprisingly meaningful and original.
  • There is mention of a specific case where a professor published nonsensical work in a journal, which raises ethical questions about the use of AI in academic writing.
  • Some participants speculate on the potential for AI models to produce high-quality academic papers if further developed.

Areas of Agreement / Disagreement

Participants generally agree on the potential risks associated with AI-generated content in academic contexts, but there is no consensus on the implications for the future of peer review or the quality of AI-generated texts.

Contextual Notes

Participants reference specific cases and examples without providing detailed citations, which may limit the clarity of their arguments. The discussion also reflects a range of opinions on the ethical considerations of using AI in academic writing.

mitchell porter
Gold Member
In the last few years, many of us have heard of GPT-3, a "language model" trained on terabytes of Internet prose, with a remarkable ability to generate essays, dialogues, and other original works. There is a similar but less powerful model called GPT-J which can be accessed freely at the following website:

https://6b.eleuther.ai/

On a whim, I just asked it to write a work about the "correct interpretation of quantum mechanics". The result may be seen here:



There's no breakthrough in the philosophy of quantum theory here. The really significant, even sinister, thing is that we now have these models that can produce imitations of human genres of writing (in this case the genre is, "opinionated essay about physics that doesn't contain much physical detail"), based on patterns distilled from the human originals.
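The "patterns distilled from the human originals" can be illustrated in drastically simplified form with a bigram Markov chain: count which words follow which in a corpus, then sample a continuation one word at a time. This toy sketch is my own illustration of the autoregressive idea, not anything from GPT-J itself; the real model replaces the frequency table with a 6-billion-parameter transformer conditioned on the whole preceding context.

```python
import random

# Tiny corpus standing in for "terabytes of Internet prose".
corpus = (
    "the correct interpretation of quantum mechanics is debated . "
    "the interpretation of quantum mechanics is not settled ."
).split()

# Count bigram transitions: for each word, which words follow it.
transitions = {}
for prev, nxt in zip(corpus, corpus[1:]):
    transitions.setdefault(prev, []).append(nxt)

def generate(start, n_words, seed=0):
    """Sample a continuation word by word, as an autoregressive model does."""
    rng = random.Random(seed)  # seeded for reproducibility
    words = [start]
    for _ in range(n_words):
        candidates = transitions.get(words[-1])
        if not candidates:  # dead end: no observed successor
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

print(generate("the", 8))
```

Because the table only records one word of context, the output is locally plausible but globally incoherent; conditioning on the entire preceding text is what lets transformer models imitate genre and style rather than just local word order.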

So maybe it's actually off-topic for this sub-forum. Although I am curious what the regulars think about it...
 
Yes, this is quite interesting. We moved your thread to the GD section for wider audience appeal, as it's part QM and part AI.

One thing to be aware of is that GPT-3 is basically grabbing bits and pieces of established work and generating new content which may or may not be valid. It's scary to think that a human peer reviewer might think it's original work and publish it in some journal for other like-minded folks to read.

I think at some point a byline will need to be added to any GPT-X generated content to say where it came from, so future readers know what to expect.
 
jedishrfu said:
It's scary to think that a human peer reviewer might think it's original work and publish it in some journal for other like-minded folks to read.
I don't remember details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case was with use of an algorithmically generated text (accepted to some conference proceedings?).
 
Borek said:
I don't remember details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case was with use of an algorithmically generated text (accepted to some conference proceedings?).

Recently a professor got some nonsense in the style of political correctness published. He was fired for engaging in unethical experiments on the editors of the journal.

I'd say that that AI would get an A in a high school English class.
 
I have just conducted the experiment of using a fake hep-th abstract, generated at snarxiv.org, as an input for GPT-J. The result is not a coherent paper, but it's shockingly meaningful, even original in places. If the network were tuned further, perhaps by a competitive "GAN"-style process, I really wonder how closely it could approach a real arXiv paper in quality.
 
mitchell porter said:
I really wonder how closely it could approach a real arXiv paper in quality.
Not sure about GPT-J, but GPT-3 has an interesting example at https://www.gwern.net/GPT-3-nonfiction#arxiv-paper (if you're not familiar with the site, the bold type is the prompt and the small type is where GPT-3 takes over).
 
