The correct interpretation of QM... according to a "language model" AI

SUMMARY

The discussion centers on the capabilities and implications of AI language models, specifically GPT-3 and GPT-J, in generating content related to quantum mechanics. Users express concerns about the potential for these models to produce misleading or unoriginal academic work, particularly in peer-reviewed journals. The conversation highlights the need for transparency regarding AI-generated content and references instances of algorithmically generated texts being accepted in academic settings, raising ethical questions about peer review processes.

PREREQUISITES
  • Understanding of AI language models, specifically GPT-3 and GPT-J
  • Familiarity with quantum mechanics terminology
  • Knowledge of academic peer review processes
  • Awareness of ethical considerations in publishing
NEXT STEPS
  • Research the capabilities of GPT-3 and GPT-J in content generation
  • Explore the implications of AI in academic publishing
  • Investigate the Sokal affair and its relevance to AI-generated texts
  • Learn about ethical guidelines for using AI in research and publication
USEFUL FOR

Researchers, academics, and anyone involved in the intersection of artificial intelligence and scholarly publishing will benefit from this discussion, particularly those concerned with the integrity of peer review and the authenticity of academic work.

mitchell porter
In the last few years, many of us have heard of GPT-3, a "language model" trained on terabytes of Internet prose, with a remarkable ability to generate essays, dialogues, and other original works. There is a similar but less powerful model, GPT-J, which can be accessed freely at the following website:

https://6b.eleuther.ai/

On a whim, I just asked it to write a work about the "correct interpretation of quantum mechanics". The result may be seen here:



There's no breakthrough in the philosophy of quantum theory here. The really significant, even sinister, thing is that we now have these models that can produce imitations of human genres of writing (in this case the genre is, "opinionated essay about physics that doesn't contain much physical detail"), based on patterns distilled from the human originals.
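As a vastly simplified illustration of "patterns distilled from the human originals": the toy model below learns bigram statistics from a tiny corpus and then samples imitative text from them. This is emphatically not how GPT-3 works internally (that is a large transformer network), just the same sampling-from-corpus-statistics principle in miniature; the corpus and all names here are made up for illustration.

```python
import random
from collections import defaultdict

# A tiny made-up corpus standing in for "terabytes of Internet prose".
corpus = (
    "the wave function is real the wave function is not real "
    "the measurement problem is the real problem"
).split()

# Distill a crude "pattern": which words follow which.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Sample an n-word imitation by walking the bigram table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break  # dead end: the word never had a successor
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 8))
```

Every word the toy emits came from the corpus, recombined according to its statistics, which is roughly the sense in which the essay above is an "imitation of a human genre".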

So maybe it's actually off-topic for this sub-forum, although I am curious what the regulars think about it...
 
Yes, this is quite interesting. We moved your thread to the GD section for wider audience appeal, as it's part QM and part AI.

One thing to be aware of is that GPT-3 is basically grabbing bits and pieces of established work and generating new content which may or may not be valid. It's scary to think that a human peer reviewer might think it's original work and publish it in some journal for other like-minded folks to read.

I think at some point there will need to be a byline added to any GPT-X-generated content to say where it came from, so future readers know what to expect.
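Such a byline could, for instance, take a machine-readable form attached to the generated text. A hypothetical sketch (the field names are illustrative, not any existing standard):

```python
import json

# Hypothetical provenance "byline" for machine-generated text.
# All field names here are made up for illustration.
byline = {
    "generated_by": "GPT-J-6B",
    "provider": "EleutherAI",
    "prompt": "the correct interpretation of quantum mechanics",
    "human_edited": False,
}
print(json.dumps(byline, indent=2))
```

A structured tag like this, unlike a free-text disclaimer, could be checked automatically by journals or aggregators.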
 
jedishrfu said:
It's scary to think that a human peer reviewer might think it's original work and publish it in some journal for other like-minded folks to read.
I don't remember the details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case involved an algorithmically generated text (accepted into some conference proceedings?).
 
Borek said:
I don't remember the details (names, titles), but people have shown several times that it is pretty easy to abuse peer review, especially in smaller journals (and I don't mean predatory ones).

I believe at least one case involved an algorithmically generated text (accepted into some conference proceedings?).

Recently a professor got some nonsense in the style of political correctness published. He was fired for conducting unethical experiments on the journal's editors.

I'd say that that AI would get an A in a high school English class.
 
I have just conducted the experiment of using a fake hep-th abstract, generated at snarxiv.org, as input for GPT-J. The result is not a coherent paper, but it's shockingly meaningful, even original in places. If the network were tuned further, perhaps by a competitive "GAN"-style process, I really wonder how closely it could approach a real arXiv paper in quality.
 
mitchell porter said:
I really wonder how closely it could approach a real arxiv paper in quality.
Not sure about GPT-J, but GPT-3 has an interesting example at https://www.gwern.net/GPT-3-nonfiction#arxiv-paper If you're not familiar with the site, the bold type is the prompt and the small type is where GPT-3 takes over.
 
