News: Crowd $ourcing for Peer Review

  1. Jun 9, 2017 #1

    jedishrfu

    Staff: Mentor

  2. Jun 9, 2017 #2

    mfb

    2016 Award

    Staff: Mentor

    That is one of the important questions.

    Another question: What happens to parts nobody wants to review? If the paper has 3 reviewers, they feel responsible for it. If the paper has a forum, everyone could hope that someone else will do it.
    Does that even get noted if some part is not reviewed?

    How is this supposed to scale to multiple fields? A separate forum for each subfield, and editors invite the scientists?
     
  3. Jun 9, 2017 #3

    jedishrfu

    Staff: Mentor

    I guess the only model we have is Wikipedia, which has been pretty successful with the crowd-sourcing model, backed by discussion (talk) pages and moderators to manage the edits.
     
  4. Jun 10, 2017 #4
    Remember that editors are overseeing the process. So yes, it would get noted if a part was not reviewed.

    About 10 years ago, when I was still a freelance writer/editor, I had become very interested in writing about a particular model of behavioral psychology (Relational Frame Theory and its chief clinical application, Acceptance and Commitment Therapy). The sponsoring organization, ACBS, has a large membership & a large research community, with lots of books in process at any given time. At one point the founder asked me if I thought it would be feasible to set up & operate an online site for crowd-sourced technical editing of forthcoming books, somewhat similar to peer review in that commenters would be peers & their goal would be to assess arguments & claims and point out any weak spots, etc. Back then the technology for collaborative commenting/editing was not nearly as robust as it is today, so that was a major barrier. I researched what various university presses were experimenting with at the time & eventually said no, it wasn't really feasible for ACBS yet.

    I am guessing that the tech platforms must be much better now. However, as I remember it, I had a lot of concerns about process as well; as a technical editor I created several different small-group editing setups for various projects & clients, and I always found that human interactions and expectations were both more important & more challenging to get right than the technology. This is one reason (there are others) that "single-sourcing of content", a.k.a. content management, has a poor rate of adoption at places like ad agencies: content management requires not only a special tech setup but special rules as well; non-writer staff, when asked to contribute content, typically resent such rules as a burden & refuse to follow them.

    In this regard, I think the concerns of the writer (Chris Lee) are valid, e.g.
     
    Last edited: Jun 10, 2017
  5. Jun 10, 2017 #5
    However, the results reinforce the concern raised by @mfb and perhaps raise others. You will find nearly as many poorly-edited Wikipedia articles as good ones. Many topics languish as mere stubs; others read "more like advertisements", which is apparently hard to weed out; and some topics become battlegrounds in which editors with different views can become very antagonistic.

    Also, the Wikipedia editing interface, and even more so the process, demand a lot of time if you are to learn the ropes. I was briefly an editor on the few topics I have much knowledge about, and the learning curve was steep given the low volume of content I could actually contribute. That in itself is a barrier, and I don't think it would work for large-scale peer review.

    So Wikipedia works as Wikipedia, but it may not be a very good model for anything that can't suffer such incidents and must be more easily & quickly learnable.
     
    Last edited: Jun 10, 2017
  6. Jun 10, 2017 #6
    The current peer review system has so many bugs relating to time and quality that I can't see how venues with alternative approaches can hurt.

    But any peer review system depending on volunteer labor will ultimately depend on the willingness of the volunteers to do a good and timely job. Some fields have cultures that, combined with certain organizational approaches, lend themselves to that more than others.

    Of the fields I've published in, atomic physics and brain injury have the most responsive referees and most thoughtful editors under the traditional approach used by most journals today. Education and fisheries science have the least responsive referees and least thoughtful editors.
     
  7. Jun 10, 2017 #7

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor

    I'm not surprised a system with 100 reviewers does better than a system with 3.
     
  8. Jun 10, 2017 #8

    Bandersnatch

    Science Advisor

    They used 10 papers, so a fairer comparison would be 100 reviewers vs. 30. That's still more manpower, but as long as the traditional route took more than 10 days, the crowdsourcing system came out on top.
     
  9. Jun 10, 2017 #9

    mfb

    2016 Award

    Staff: Mentor

    The 100 would have to review 30 papers to make the ratio realistic again, and spend 1/30 of the time on each paper. That is the novelty factor mentioned in the article: maybe the peers spent more time per paper than they would if the system were used frequently.
    Not necessarily. Peer review doesn't take a long time because the reviewers spend weeks on a single paper; it takes a long time because everyone has other tasks. We would have to measure the time actually spent on reviewing the article.
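
    For reference, a rough sketch of the ratio arithmetic, using only the figures quoted in this thread (10 papers, roughly 100 crowd reviewers, and the usual 3 referees per paper under the traditional route); the exact numbers and the "reviewer-slot" framing are assumptions taken from the posts above, not from the article itself:

    $$\text{traditional effort} = 3\ \tfrac{\text{reviewers}}{\text{paper}} \times 10\ \text{papers} = 30\ \text{reviewer-slots}, \qquad \text{crowd effort} \approx 100\ \text{reviewers}$$
    $$\text{papers 100 reviewers would cover at 3 per paper: } \frac{100}{3} \approx 33\ (\text{rounded to } 30 \text{ above})$$

    With a fixed total reviewing budget spread over roughly 30 papers, each paper would then get about 1/30 of each reviewer's time, which is where the "1/30" figure above comes from.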
     