Crowd $ourcing for Peer Review

In summary, List's crowd-sourced peer review system has been judged highly effective so far, with few complaints about the quality of the reviews or the time taken to respond. However, many questions remain about how the system will scale and whether it is a good model for anything that cannot tolerate poor quality or slow response times.
  • #2
Is the effectiveness that List has seen simply enthusiasm for something new?
That is one of the important questions.

Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
Does that even get noted if some part is not reviewed?

How is this supposed to scale to multiple fields? A separate forum for each subfield, with editors inviting the scientists?
 
  • #3
I guess the only model we have is Wikipedia, which has been pretty successful using the crowd-sourcing model, with backing discussion pages and moderators to manage the edits.
 
  • #4
mfb said:
Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
Does that even get noted if some part is not reviewed?

Remember that editors are overseeing the process. So yes, it would get noted if a part was not reviewed.
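To make that concrete, here is a minimal sketch, purely for illustration (it is not anything List's journal actually runs), of how an editor could flag sections of a paper that have received no comments in the forum. The section names and reviewer handles are invented.

```python
# Hypothetical sketch: flag paper sections nobody has commented on.
from collections import defaultdict

def uncovered_sections(sections, comments):
    """Return the sections that have received no review comments.

    sections: list of section names in the paper
    comments: list of (reviewer, section) pairs collected from the forum
    """
    coverage = defaultdict(list)
    for reviewer, section in comments:
        coverage[section].append(reviewer)
    return [s for s in sections if not coverage[s]]

# Invented example data: the "Methods" section has no comments,
# so the editor gets alerted to it.
paper_sections = ["Introduction", "Methods", "Results", "Discussion"]
forum_comments = [("rev_a", "Introduction"), ("rev_b", "Results"),
                  ("rev_c", "Discussion"), ("rev_a", "Discussion")]

print(uncovered_sections(paper_sections, forum_comments))  # ['Methods']
```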

About 10 years ago, when I was still a freelance writer/editor, I became very interested in writing about a particular model of behavioral psychology (Relational Frame Theory and its chief clinical application, Acceptance and Commitment Therapy). The sponsoring organization, ACBS, has a large membership & a large research community with lots of books in process at any given time. At one point the organization's founder asked me whether I thought it would be feasible to set up & operate an online site for crowd-sourcing the technical editing of forthcoming books; somewhat similar to peer review in that the commenters would be peers & their goal would be to assess arguments & claims and point out any weak spots, etc. Back then the technology for multiple commenting/editing was not nearly as robust as it is today, so that was a major barrier. I researched what various university presses were experimenting with at the time & eventually said no, it wasn't really feasible for ACBS yet.

I am guessing that the tech platforms must be much better now. However, as I remember it, I had a lot of concerns about process as well; as a technical editor I created several different small-group editing setups for various projects & clients, and I always found that human interactions and expectations were both more important & more challenging to get right than the technology. This is one reason (there are others) that "single-sourcing of content", a.k.a. content management, has a poor rate of adoption at places like ad agencies: content management requires not only a special tech setup but special rules as well; non-writer staff, when asked to contribute content, typically resent such rules as a burden & refuse to follow them.

In this regard I think the concerns of the writer (Chris Lee) are valid, e.g.
There are big questions left. Will it scale? Is the effectiveness that List has seen simply enthusiasm for something new? If so, the anonymous forum may become a ghost town, forcing editors back to having to nag reviewers to respond.
 
  • #5
jedishrfu said:
I guess the only model we have is Wikipedia, which has been pretty successful using the crowd-sourcing model, with backing discussion pages and moderators to manage the edits.

However, the results reinforce the concern raised by @mfb and perhaps raise others. You will find nearly as many poorly edited Wikipedia articles as good ones. Many topics languish as mere stubs; others read "more like advertisements", which is apparently hard to weed out; and some topics become battlegrounds in which editors with different views turn very antagonistic.

Also, the Wikipedia editing interface, and even more so the process, demand a lot of time to learn the ropes. I was briefly an editor on the few topics I know well, and the learning curve was steep given the small volume of content I could actually contribute. That alone forms a barrier that, I suspect, wouldn't work for large-scale peer review.

So Wikipedia works as Wikipedia, but it may not be a very good model for anything that can't tolerate such problems and must be more easily & quickly learnable.
 
  • #6
The current peer review system has so many bugs relating to time and quality that I can't see how venues with alternative approaches can hurt.

But any peer review system that depends on volunteer labor will ultimately depend on the volunteers' willingness to do a good and timely job. Some fields have cultures that lend themselves to that, under certain organizational approaches, more than others.

Of the fields I've published in, atomic physics and brain injury have the most responsive referees and the most thoughtful editors under the traditional approach used by most journals today. Education and fisheries science have the least responsive referees and the least thoughtful editors.
 
  • #7
I'm not surprised a system with 100 reviewers does better than a system with 3.
 
  • #8
Vanadium 50 said:
I'm not surprised a system with 100 reviewers does better than a system with 3.
They used 10 papers, so a fairer comparison would be 100 vs. 30. That's still more manpower, but as long as the traditional route took more than 10 days, the crowd-sourcing system came out on top.
 
  • #9
Vanadium 50 said:
I'm not surprised a system with 100 reviewers does better than a system with 3.
The 100 would have to review 30 papers to make the ratio realistic again, spending 1/30 of the time on each paper. That is the novelty factor mentioned in the article: maybe the peers spent more time per paper than they would if the system were used routinely.
Bandersnatch said:
but as long as the traditional route took more than 10 days, the crowd-sourcing system came out on top.
Not necessarily. Peer review doesn't take long because the reviewers spend weeks reviewing a single paper; it takes long because everyone has other tasks. We would have to measure the time actually spent on reviewing the article.
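For what it's worth, here is the back-of-the-envelope version of that comparison as a short Python sketch. The counts (100 reviewers, 10 papers, 3 referees per paper, ~10 days) come from this thread; the hours-per-review and elapsed-time figures marked as placeholders are invented, precisely because, as noted above, the time actually spent has not been measured.

```python
# Counts from this thread: ~100 forum reviewers, 10 papers, 3 referees per
# paper under the traditional route, ~10 days elapsed for the crowd round.
# Values marked "placeholder" are invented, not measured.

papers = 10
crowd_reviewers = 100
referees_per_paper = 3

# Fairer headcount comparison: 100 people vs 10 * 3 = 30 reviewer slots.
traditional_reviewer_slots = papers * referees_per_paper

# Elapsed calendar time is what the article compares...
crowd_elapsed_days = 10
traditional_elapsed_days = 40        # placeholder for "more than 10 days"

# ...but the effort actually spent is a separate quantity.
hours_per_crowd_participant = 1.0    # placeholder
hours_per_traditional_review = 4.0   # placeholder

crowd_effort = crowd_reviewers * hours_per_crowd_participant
traditional_effort = traditional_reviewer_slots * hours_per_traditional_review

print(f"headcount:    {crowd_reviewers} vs {traditional_reviewer_slots}")
print(f"elapsed days: {crowd_elapsed_days} vs {traditional_elapsed_days}")
print(f"effort hours: {crowd_effort:.0f} vs {traditional_effort:.0f}")
```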
 

1. What is crowd-sourcing for peer review?

Crowd-sourcing for peer review is a process in which a large group of people, typically from diverse backgrounds, are invited to review and provide feedback on a scientific research paper or project. This approach aims to involve a wider range of perspectives and expertise in the evaluation of scientific work.

2. How does crowd-sourcing for peer review work?

Crowd-sourcing for peer review typically involves posting the research paper or project on an open platform or website, where interested individuals can access and review it. Reviewers may be asked to provide feedback, suggestions, and critiques through comments or a structured feedback form. The author then incorporates this feedback into the final version of the paper.
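As a purely illustrative sketch of that workflow (it is not any particular platform's API; all class names, field names, and example data are invented), the process might be modeled like this:

```python
# Minimal sketch of an open review: a paper is posted, anyone can attach
# structured feedback, and the author works through the open items.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feedback:
    reviewer: str           # self-identified or anonymous handle
    section: str            # part of the paper being discussed
    comment: str            # critique, suggestion, or question
    addressed: bool = False  # has the author incorporated it yet?

@dataclass
class OpenReview:
    title: str
    manuscript_url: str
    feedback: List[Feedback] = field(default_factory=list)

    def submit_feedback(self, reviewer: str, section: str, comment: str) -> None:
        self.feedback.append(Feedback(reviewer, section, comment))

    def open_items(self) -> List[Feedback]:
        """Feedback the author has not yet incorporated."""
        return [f for f in self.feedback if not f.addressed]

# Usage: two reviewers comment; the author still has both items to address.
review = OpenReview("Example preprint", "https://example.org/preprint/123")
review.submit_feedback("anon_17", "Methods", "Sample size justification is missing.")
review.submit_feedback("anon_04", "Results", "Figure 2 error bars are unexplained.")
print(len(review.open_items()))  # 2
```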

3. What are the potential benefits of crowd-sourcing for peer review?

Crowd-sourcing for peer review has the potential to improve the quality and diversity of feedback received by the author. It also allows for a more transparent and open review process, as anyone can participate and view the feedback provided. Additionally, this approach can help identify potential biases or blind spots in the research, leading to more robust and well-rounded conclusions.

4. Are there any drawbacks to using crowd-sourcing for peer review?

One potential drawback of crowd-sourcing for peer review is the lack of expertise and qualifications of some reviewers. This can lead to inconsistent or unreliable feedback. Additionally, the open nature of this approach may make it vulnerable to manipulation or bias from certain individuals or groups. It is important to carefully consider and filter the feedback received through crowd-sourcing for peer review.
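For illustration only, here is one naive way a platform could down-weight feedback from reviewers with no stated expertise in the paper's field. The scoring rule and weights are invented, not drawn from any existing system.

```python
# Hypothetical sketch: down-weight feedback from reviewers whose listed
# fields do not include the paper's subject area.

def weight_feedback(feedback_items, field_of_paper):
    """Attach a simple weight to each feedback item.

    feedback_items: list of dicts with 'comment' and 'reviewer_fields' keys
    field_of_paper: the paper's subject area, e.g. 'particle physics'
    """
    weighted = []
    for item in feedback_items:
        # Crude heuristic: reviewers listing the paper's field get full weight.
        weight = 1.0 if field_of_paper in item["reviewer_fields"] else 0.3
        weighted.append({**item, "weight": weight})
    return weighted

# Invented example data.
items = [
    {"comment": "Derivation in Sec. 3 skips a step.",
     "reviewer_fields": ["particle physics"]},
    {"comment": "Nice paper!",
     "reviewer_fields": ["marketing"]},
]
for it in weight_feedback(items, "particle physics"):
    print(it["weight"], it["comment"])
```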

5. Is crowd-sourcing for peer review a widely accepted practice?

Crowd-sourcing for peer review is a relatively new approach and is not yet widely accepted in the scientific community. However, it is gaining attention and some researchers and journals have started to experiment with this method. Further research and evaluation are needed to determine its effectiveness and potential impact on the traditional peer review process.
