That is one of the important questions: is the effectiveness that List has seen simply enthusiasm for something new?
mfb said: Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
Does that even get noted if some part is not reviewed?
There are big questions left. Will it scale? Is the effectiveness that List has seen simply enthusiasm for something new? If so, the anonymous forum may become a ghost town, forcing editors back to having to nag reviewers to respond.
jedishrfu said: I guess the only model we have is Wikipedia, which has been pretty successful using the crowd-sourcing model, with backing discussion (talk) pages and moderators to manage the edits.
Vanadium 50 said: I'm not surprised a system with 100 reviewers does better than a system with 3.
They used 10 papers, so a fairer comparison would be 100 vs 30. That's still more manpower, but as long as the traditional route took more than 10 days, then the crowdsourcing system came out on top.
Vanadium 50 said: I'm not surprised a system with 100 reviewers does better than a system with 3.
The 100 would have to review 30 papers to make the ratio realistic again, and spend 1/30 of the time on each paper. That is the novelty factor mentioned in the article - maybe the peers spent more time per paper than they would if the system were used frequently.
Bandersnatch said: but as long as the traditional route took more than 10 days, then the crowdsourcing system came out on top.
Not necessarily. Peer review doesn't take long because the reviewers spend weeks reviewing this single paper; it takes long because everyone has other tasks. We would have to measure the time actually spent on reviewing the article.
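To make that effort comparison concrete, here is a rough back-of-the-envelope sketch; the hours per reviewer are made-up illustrative numbers, not figures from the article or from List's trial.

```python
# Back-of-the-envelope comparison of total reviewer effort.
# All numbers below are illustrative assumptions, not data from the experiment.

papers = 10                      # papers in the trial
traditional_reviewers = 3        # typical reviewers per paper
traditional_hours_each = 6       # assumed hours a referee spends on one paper
crowd_reviewers = 100            # size of the crowd forum
crowd_hours_each = 0.5           # assumed hours each crowd member spends per paper

traditional_total = papers * traditional_reviewers * traditional_hours_each
crowd_total = papers * crowd_reviewers * crowd_hours_each

print(f"Traditional: {traditional_total} reviewer-hours for {papers} papers")
print(f"Crowd:       {crowd_total} reviewer-hours for {papers} papers")
# With these assumptions the crowd uses roughly 2.8x the person-hours,
# even though the calendar time per paper may be much shorter.
```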
Crowd-sourcing for peer review is a process in which a large group of people, typically from diverse backgrounds, are invited to review and provide feedback on a scientific research paper or project. This approach aims to involve a wider range of perspectives and expertise in the evaluation of scientific work.
Crowd-sourcing for peer review typically involves posting the research paper or project on an open platform or website, where interested individuals can access and review it. Reviewers may be asked to provide feedback, suggestions, and critiques through comments or a structured feedback form. The author then incorporates this feedback into the final version of the paper.
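As a toy illustration of that workflow, a crowd review round could be modeled roughly like this; the class and method names are invented for the example and do not correspond to any real platform's API.

```python
from dataclasses import dataclass, field

# Toy model of a crowd-sourced review round: a paper is posted openly,
# anyone may leave feedback, and the author collects it for revision.

@dataclass
class Comment:
    reviewer: str
    text: str

@dataclass
class Submission:
    title: str
    comments: list[Comment] = field(default_factory=list)

    def add_comment(self, reviewer: str, text: str) -> None:
        """A crowd reviewer leaves feedback on the openly posted paper."""
        self.comments.append(Comment(reviewer, text))

    def feedback_summary(self) -> str:
        """Collect all feedback for the author to incorporate into a revision."""
        return "\n".join(f"{c.reviewer}: {c.text}" for c in self.comments)

paper = Submission("Crowd-sourced catalysis study")
paper.add_comment("reviewer_17", "Figure 2 needs error bars.")
paper.add_comment("reviewer_42", "Please cite the earlier dataset.")
print(paper.feedback_summary())
```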
Crowd-sourcing for peer review has the potential to improve the quality and diversity of feedback received by the author. It also allows for a more transparent and open review process, as anyone can participate and view the feedback provided. Additionally, this approach can help identify potential biases or blind spots in the research, leading to more robust and well-rounded conclusions.
One potential drawback of crowd-sourcing for peer review is the lack of expertise and qualifications of some reviewers, which can lead to inconsistent or unreliable feedback. The open nature of this approach may also make it vulnerable to manipulation or bias from certain individuals or groups, so feedback gathered this way needs to be considered and filtered carefully.
Crowd-sourcing for peer review is a relatively new approach and is not yet widely accepted in the scientific community. However, it is gaining attention and some researchers and journals have started to experiment with this method. Further research and evaluation are needed to determine its effectiveness and potential impact on the traditional peer review process.