
Impact Factors/Peer Review in Physics Journals

  1. Nov 26, 2013 #1
    Hey everyone,
    I'm interested in people's opinions about impact factors and how useful they are in ranking scientific journals. For extra information, I've made an infographic about the top physics journals as measured by Thomson Reuters in their Journal Citation Reports, and it's accessible here or here

    Edit by mentor: removed links

    But really I am wondering, what do you think about the publication scheme in general? Should more journals be open access? Does the peer review system work?

    I'd like to start some discussion so any comments/opinions are welcomed.

    Thanks :)
     
    Last edited by a moderator: Nov 26, 2013
  2. Nov 26, 2013 #2

    Evo

    Staff: Mentor

    You can format your post to be like this previous thread.

    https://www.physicsforums.com/showthread.php?t=9822

    Remember, not every journal listed by Thomson Reuters is acceptable; they have started listing open-access, online-only, and pop-sci magazines, etc. Those don't meet our criteria.
     
  3. Nov 27, 2013 #3

    Ryan_m_b

    Staff: Mentor

    There are definitely ways the peer review system could be improved; a simple change would be to remove author names and institutions from manuscripts before they are sent to reviewers. They aren't relevant, and removing them would prevent bias in the form of sexism, racism, the snowball effect on certain institutes' publications, etc. I'm not saying these things are big problems (though I did hear of a study into sexism in peer review recently), but we might as well rule them out completely.

    In addition, there's the problem that negative results aren't publishable. I kind of get why, but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly, reproduction of various studies should be prioritised more, but that's slightly tangential to peer review.

    Whether or not it works as well as it could doesn't change the fact that peer review by a panel of experts is the best system we have.
     
  4. Nov 27, 2013 #4

    Office_Shredder

    Staff Emeritus
    Science Advisor
    Gold Member

    Someone should make a journal that focuses exclusively on publishing negative results in science.
     
  5. Nov 27, 2013 #5

    ZapperZ

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Actually, negative results DO get published. The Michelson-Morley experiment is one easy example. A recent one is the search for the electron electric dipole moment, in which none was found.

    There are many other examples where negative results have been published.

    As a referee, I very seldom look at who the authors are until after I've read the manuscript the first time and have formed my opinion of it. Only then do I look at the names of the authors.

    Zz.
     
  6. Nov 28, 2013 #6

    Pythagorean

    Gold Member

    It's probably field-specific how publishable negative results are.
     
  7. Nov 28, 2013 #7

    f95toli

    Science Advisor
    Gold Member

    There are a few problems with this:

    *As a referee you are also supposed to make sure the authors haven't published the same work anywhere else, meaning you do have to know who wrote the paper. That said, this could in principle be done by the editors.

    *For most papers it is pretty obvious who the authors are. Mostly you will only referee work that is in your own field, and it is a small world, meaning you will have a pretty good idea of who is doing what. At least in my field, the methods, materials, etc. used would serve as a "fingerprint", so I would probably be able to guess quite easily which group wrote a given paper.

    *Most papers are uploaded to pre-print servers before being submitted. These days I've already seen most of the papers I am asked to referee because they have already been posted on the arXiv (and then the authors are obviously listed).

    *Last but not least, nearly all papers refer to previous work by the same group. I don't see how one could get round this, because avoiding it would mean putting every detail about an experiment in every single paper; that is simply not possible if you are writing a 4-page letter, and it is much better to refer to a "methods" paper.
     
  8. Nov 28, 2013 #8

    Pythagorean

    Gold Member

    What's funny is when you can tell who's refereeing your papers by their numerous suggestions that you cite theirs.
     
  9. Dec 2, 2013 #9
    I would be very interested to read this study; could you please point me in its direction?

    I have also spoken to a food policy professor who highlighted similar problems to the ones being raised here. He suggested that a lot of academics do not fully understand the peer review process; do you think this is the case in physics, too?
     
    Last edited by a moderator: Dec 2, 2013
  10. Dec 2, 2013 #10

    Choppy

    Science Advisor
    Education Advisor

    I think in most cases, the people doing the reviews have at least a functional understanding of the process and what they're required to do. You have associate editors and editors to make final decisions and mentor the reviewers through the process.

    That doesn't mean there aren't flaws. Reviewers are human and most of the time they haven't received any formal training in how to conduct a review. They're simply asked by the associate editor to give the journal feedback on a manuscript that's similar to work they have published.

    It can be very tempting in a review to disagree with how the authors carried out or presented their investigation. I will occasionally find myself thinking, "That's not how I would have done this." And this means that some reviewers will make comments along these lines when really they should be focussed on whether what the authors actually did, and the conclusions they drew from it, were valid.
     
  11. Dec 2, 2013 #11

    Vanadium 50

    Staff Emeritus
    Science Advisor
    Education Advisor

    What problem are we trying to solve here?
     
  12. Dec 2, 2013 #12

    Choppy

    Science Advisor
    Education Advisor

    An impact factor gives you an objective metric for assessing the relative importance of a journal within a given field. This can be important in academic circles - say if you're on a review committee for someone who is not in your field and you have to make a judgement on how productive a given person has been.
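
    For reference, the standard two-year impact factor underlying those rankings is defined as

    $$\mathrm{IF}_Y = \frac{C_Y}{N_{Y-1} + N_{Y-2}}$$

    where ##C_Y## is the number of citations received in year ##Y## by items the journal published in years ##Y-1## and ##Y-2##, and ##N_{Y-1}## and ##N_{Y-2}## are the numbers of "citable items" it published in each of those two years. (The symbols are my own shorthand; the definition itself is the standard Thomson Reuters one.)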

    In my experience though, within the field itself, people know what the important journals are. Those are the ones that they read on a regular basis, the ones that are on their "favourites" bar. And so when I look at someone's publications within my own field I can tell pretty quickly the level of work that they have published.

    There are problems with the process, no doubt, and there certainly is room for improvement.

    I take issue with the open access process as I understand it. Journals need to make money to survive, and they typically do that by either (a) charging for subscriptions, or (b) charging the authors to publish (open access).

    The problem with (a) is that so many journals charge so much that university libraries can't afford subscriptions to everything and eventually start cutting subscriptions out. Hence there is an issue with access.

    The open access process solves that problem, but introduces new problems in that the independence and impartiality of the peer-review process can easily come into question. In order to increase its profitability, what's to stop a journal from simply accepting everything that's submitted? One potential solution to this is publishing a full financial disclosure along with each article - maybe that would work - but I barely have time to read the articles in my field as they are. I certainly don't need more to read.
     
  13. Dec 2, 2013 #13
    I'm just trying to gather a general opinion from the physics academic community on the peer review process.
     
  14. Dec 2, 2013 #14

    Cthugha

    Science Advisor

    The problem of publishing negative results is mostly a problem of journals aiming for some minimal estimated impact for the articles they publish. In many cases referees or editors will just state that the negative result will not be important enough for publication in journal XYZ.

    However, the possible profit that could be made from publishing low-impact work has led some publishers to change their minds and create journals like PLOS One or Scientific Reports, which only require correctness and novelty for publication, not large estimated impact. These journals indeed accept and publish negative results.


    One problem I see with peer review is the different review standards in different fields. For some journals at the upper end of the impact factor scale, the referee's estimate of the relative importance of a manuscript within its field, or within physics in general, is almost more important than the validity of its scientific content. Within some subfields (in particular those with strong competition for funding), referee reports are often quite hostile, and there is a non-negligible density of referees who accept or reject manuscripts based on the authors who contributed to them rather than on the scientific content.

    On the other hand, in some other subfields (in particular rather small ones), some referees have a tendency to overrate the importance of manuscripts from their field, just to keep that field in high-impact journals on a regular basis. That keeps the field "hot" and may help to direct more funding into it.

    While journals which require just scientific correctness for publication, and let the readership decide what is important and what is not, might seem like a solution to that problem, I am not sure it will work. In some fields the constant inflow of new publications is far too large to read all of it, and high-impact journals (as well as field-specific moderate-impact journals and publications by people you know are important in your field) act as a "must-read" filter.
     
  15. Dec 2, 2013 #15

    AlephZero

    Science Advisor
    Homework Helper

    I think the problem with "negative results" is filtering out the globally important ones. If you go to conferences, you often get papers that can be summarized as "there is something wrong here but we haven't figured out what it is yet", which is very interesting to the members of the project team in the short term, but probably not to the wider scientific community in the long term, and that's who journals should be published for, IMO.
     
  16. Dec 3, 2013 #16

    chiro

    Science Advisor

    Some journals in fields like Biostatistics (and related areas like Public Health) do not accept negative results as far as I am aware (having taken specific graduate coursework in this area myself).
     
  17. Dec 3, 2013 #17

    ZapperZ

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    Negative results are very important in physics, especially when either (i) there have been previous positive results and/or (ii) there is a theoretical prediction of a positive result.

    For example, before we found the Higgs, both Fermilab and CERN published many papers on results that excluded large energy ranges where they did NOT find the Higgs. These were quite important, because there were numerous exotic models (you know who you are, String and Supersymmetry) that either predicted or made use of a Higgs in those ranges. These negative-result experiments are crucial to weed out theories that are incorrect.

    Negative result experiments are not unusual in physics.

    Zz.
     
  18. Dec 3, 2013 #18
    About publishing negative results, I'm not sure that's a peer review issue. Consider that in order to show that a given idea is possible, you only need to observe the expected signal/behavior, and that acts as proof of the entire concept. However, in order to show that the same idea is not possible (the negative outcome), you need to eliminate every single experimental error and be sure that you are limited only by fundamental limitations; this is generally orders of magnitude more work and extremely difficult to do.


    Another issue with the publishing system, which is also not a direct flaw of peer review, is that far too many papers are produced today. Even within a subfield it's almost impossible to read everything relevant, and it would be much better if groups published less often but with more significant results. However, because the funding system rewards many publications, this is in practice not possible, and people are pushed to publish the smallest increments they can get into good journals. Not a good system, but it's also not easy to see how it could be improved without other trade-offs.
     
  19. Dec 3, 2013 #19

    ZapperZ

    Staff Emeritus
    Science Advisor
    Education Advisor
    2016 Award

    But this is why there are different tiers of journals. The Natures, the Sciences, and the PRLs require content that is more significant and higher impact. And for many of us, we tend to know the few journals that publish papers relevant to our fields, so it is not as if we have to browse every single journal out there.

    Zz
     
  20. Jan 9, 2014 #20

    Student100

    Education Advisor
    Gold Member

    I would suggest the impact factor is far more subjective than objective. There are numerous criticisms of it even as a journal metric, and it's my opinion that it shouldn't be used at all to determine the worth of individual papers or, more importantly, an author's worth.


    Citation counts differ between review and primary manuscripts, with review literature being cited far more often than primary research. The metric can also lead to editor and reviewer bias; for example, editors seeking to manipulate a journal's impact factor by excluding certain publication types from the count of citable items. For some reason beyond me, when these excluded publications are cited, the citations still count favorably toward the impact factor calculation.
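
    To see how much that bookkeeping can distort the number, here is a toy calculation with made-up figures: suppose a journal published 100 research articles and 20 editorials over the two preceding years, and together these drew 300 citations this year. Counting every citation in the numerator while admitting only the research articles as citable items gives

    $$\mathrm{IF} = \frac{300}{100} = 3.0 \quad \text{instead of} \quad \frac{300}{120} = 2.5,$$

    a 20% inflation purely from how the denominator is counted.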


    Quoting http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1475651/

    Paper published in Science dealing with coercive citations: http://www.sciencemag.org/content/335/6068/542

    Obviously the impact factor metric exists for a reason, and maybe it's useful in context. I don't really know.

    As for open journals, or pay-to-publish journals, they do little to remedy the problems that currently exist in the organization of scientific publishing.

    Anyway, thought I'd add my opinion in relation to the OP's question.
     