Can Crowdsourcing Enhance the Quality of Scientific Peer Review?

AI Thread Summary
The discussion centers on the effectiveness of crowdsourced peer reviews in academic publishing, highlighting both potential benefits and challenges. While initial results show promise, questions arise regarding the sustainability of enthusiasm for this new approach and how it scales across different fields. Concerns include the possibility of unreviewed sections in papers if reviewers feel less accountable in a forum setting, and whether editors can effectively manage this process. Comparisons to Wikipedia suggest that while crowdsourcing can work, it may not be suitable for all contexts due to varying levels of engagement and the complexity of the editing process. The conversation also touches on the variability in responsiveness among peer reviewers across different academic disciplines, indicating that some fields may be more conducive to this model than others. Ultimately, the success of a crowdsourced peer review system hinges on the willingness of volunteers to contribute meaningfully and consistently.
Is the effectiveness that List has seen simply enthusiasm for something new?
That is one of the important questions.

Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
Does that even get noted if some part is not reviewed?

How is this supposed to scale to multiple fields? A separate forum for each subfield, with editors inviting the scientists?
 
I guess the only model we have is Wikipedia, which has been pretty successful with the crowdsourcing model, backed by discussion (talk) pages and moderators to manage the edits.
 
mfb said:
Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
Does that even get noted if some part is not reviewed?

Remember that editors are overseeing the process. So yes, it would get noted if a part was not reviewed.

About 10 years ago, when I was still a freelance writer/editor, I had become very interested in writing about a particular model of behavioral psychology (Relational Frame Theory and its chief clinical application, Acceptance and Commitment Therapy). The sponsoring organization, ACBS, has a large membership & a large research community with lots of books in process at any given time. At one point the guy who had been the founder asked me if I thought it would be feasible to set up & operate an online site for crowd-sourcing technical editing of forthcoming books; somewhat similar to peer reviews in that commenters would be peers & their goal would be to assess arguments & claims and point out any weak spots, etc. Back then the technology for multiple commenting/editing was not nearly as robust as it is today, so that was a major barrier. I researched what various university presses were experimenting with back then & eventually said no, it wasn't really feasible for ACBS yet.

I am guessing that the tech platforms must be much better now. However, as I remember it, I had a lot of concerns about process as well; as a technical editor I created several different small-group editing setups for various projects & clients, and I always found that human interactions and expectations were both more important & more challenging to get right than the technology. This is one reason (there are others) that "single-sourcing of content", a.k.a. content management, has a poor rate of adoption at places like ad agencies: content management requires not only a special tech setup, but special rules as well; non-writer staff, when asked to contribute content, typically resent such rules as a burden & refuse to follow them.

In this regard I think the concerns of the writer (Chris Lee) are valid, e.g.
There are big questions left. Will it scale? Is the effectiveness that List has seen simply enthusiasm for something new? If so, the anonymous forum may become a ghost town, forcing editors back to having to nag reviewers to respond.
 
jedishrfu said:
I guess the only model we have is Wikipedia, which has been pretty successful with the crowdsourcing model, backed by discussion (talk) pages and moderators to manage the edits.

However, the Wikipedia results reinforce the concern raised by @mfb and perhaps raise others. You will find nearly as many poorly edited Wikipedia articles as good ones. Many topics languish as mere stubs; others read "more like advertisements," which is apparently hard to weed out; and some topics become battlegrounds in which editors with different views can become very antagonistic.

Also, the Wikipedia editing interface, and even more so the process, demand a lot of time if you are to learn the ropes - I was briefly an editor on the few topics I know much about, and the learning curve was steep relative to the small amount of content I could actually contribute. That alone is a barrier I don't think large-scale peer review could afford.

So Wikipedia works as Wikipedia - but it may not be a very good model for anything that can't tolerate such problems and must be more easily & quickly learnable.
 
The current peer review system has so many bugs relating to time and quality that I can't see how venues with alternative approaches can hurt.

But any peer review system that depends on volunteer labor will ultimately depend on the willingness of the volunteers to do a good and timely job. Some fields have cultures that lend themselves to that more readily than others, depending on the organizational approach.

Of the fields I've published in, atomic physics and brain injury have the most responsive referees and the most thoughtful editors under the traditional approach used by most journals today. Education and fisheries science have the least responsive referees and least thoughtful editors.
 
I'm not surprised a system with 100 reviewers does better than a system with 3.
 
Vanadium 50 said:
I'm not surprised a system with 100 reviewers does better than a system with 3.
They used 10 papers, so a fairer comparison would be 100 reviewers vs. 30. That's still more manpower, but as long as the traditional route took more than 10 days, the crowdsourcing system came out on top.
 
Vanadium 50 said:
I'm not surprised a system with 100 reviewers does better than a system with 3.
The 100 would have to review 30 papers to make the ratio realistic again, and spend 1/30 of the time on each paper. That is the novelty factor mentioned in the article - maybe the peers spent more time per paper than they would if the system were used routinely.
Bandersnatch said:
but as long as the traditional route took more than 10 days, then the crowdsourcing system came out on top.
Not necessarily. Peer review doesn't take long because the reviewers spend weeks on a single paper; it takes long because everyone has other tasks. We would have to measure the time actually spent reviewing the article.
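To make the distinction concrete, here is a rough back-of-envelope sketch in Python. The effort and turnaround figures are invented purely for illustration (they are not from the article); the only point is that calendar turnaround and total reviewer effort are different quantities.

```python
# Back-of-envelope comparison with made-up numbers (not from the article):
# separate total reviewer effort (reviewer-hours) from calendar turnaround.

papers = 10

# Traditional route: 3 referees per paper, each spending ~5 hours,
# but the reports trickle in over ~60 calendar days (hypothetical figures).
trad_referees_per_paper = 3
trad_hours_per_referee = 5
trad_calendar_days = 60

# Crowdsourced forum: 100 participants share the 10 papers,
# each contributing ~1 hour, finishing within ~10 calendar days (hypothetical).
crowd_participants = 100
crowd_hours_per_participant = 1
crowd_calendar_days = 10

trad_effort = papers * trad_referees_per_paper * trad_hours_per_referee   # 150 reviewer-hours
crowd_effort = crowd_participants * crowd_hours_per_participant           # 100 reviewer-hours

print(f"Traditional:  {trad_effort} reviewer-hours total, {trad_calendar_days} days turnaround")
print(f"Crowdsourced: {crowd_effort} reviewer-hours total, {crowd_calendar_days} days turnaround")
print(f"Effort per paper: {trad_effort / papers:.0f} vs {crowd_effort / papers:.0f} reviewer-hours")
```

With numbers like these, the crowdsourced route wins on turnaround even though the effort per paper is similar, which is exactly why measuring the time actually spent, rather than the elapsed calendar time, is what would settle the comparison.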
 