AI Used In Peer Review

  • Thread starter: Hornbein
SUMMARY

The discussion centers on the integration of AI tools in the peer review process of scientific papers. Participants express mixed opinions, with some advocating for the inclusion of generative AI critiques in review packages, while emphasizing that only peer-generated material should constitute the actual review. The conversation highlights the effectiveness of static code analysis tools like Coverity and LDRA in software engineering, noting their limitations in assessing whether code meets specific requirements. The consensus is that while AI can assist in managing the increasing volume of submissions, the final review decision must remain with human reviewers.

PREREQUISITES
  • Understanding of generative AI and its applications in academic publishing
  • Familiarity with peer review processes in scientific research
  • Knowledge of static code analysis tools, specifically Coverity and LDRA
  • Awareness of the challenges in the peer review system due to increasing submission volumes
NEXT STEPS
  • Research the role of generative AI in academic peer review processes
  • Explore the functionalities and limitations of Coverity and LDRA in software development
  • Investigate the impact of AI on the future of scientific publishing and peer review
  • Learn about the ethical considerations of using AI in academic evaluations
USEFUL FOR

Researchers, academic publishers, software engineers, and anyone involved in the peer review process seeking to understand the implications of AI integration in scientific evaluations.

Hornbein said:
https://physics.aps.org/articles/v18/194

Some are for it, others against it.
In my opinion:
It would be useful for a critique from a generative AI (or perhaps reports from a few generative-AI products) to be included in the review package, available to the author (before the review) and to the peers (during the review).
But only material generated by peers should constitute the actual review.
Of course, individual peer reviewers are free to use AI tools in whatever way they find useful.

In the field of software engineering, static code analysis tools have gotten remarkably good. Coverity and LDRA come to mind as examples. They are thorough in the extreme. But they do identify "issues" that have no bearing on the core purpose of the code, and which either cannot or should not be "corrected". And these tools cannot directly address whether the code actually meets its requirements; they can only check whether it is self-consistent and conforms to broadly accepted and/or industry-specific coding standards.
 
This will happen in the future, that is for sure. It could be useful for the initial screening of the huge volume of papers: soon there will not be enough peer reviewers to look at them all. The final decision, however, will always rest on the human side. At least as long as an AI is not itself the publisher of a science journal.
 
I get all sorts of mail telling me that I'm the author of some "scientific" paper I have nothing to do with. I suspect it started when I began frequenting PF. Were I to believe the spam, I'd have an Erdős number of -3.
 
