Impact Factors/Peer Review in Physics Journals

In summary, the conversation discusses the use of Impact Factors in ranking scientific journals and opinions about them, as well as the publication scheme in general, open access journals, and the effectiveness of the peer review system. Some potential improvements to the peer review process are mentioned, such as removing author names from manuscripts and prioritizing the reproduction of studies. The conversation also touches on the issue of negative results not being publishable and a suggestion for a journal that focuses on publishing negative results. There is a brief mention of a study on sexism in peer review and a question about how well academics understand the peer review process. The overall consensus is that while there may be flaws in the peer review system, it is still considered the best system we have, and reviewers generally understand what is required of them even though they rarely receive formal training.
  • #1
abbybeall
Hey everyone,
I'm interested in people's opinions about Impact Factors and how useful they are in ranking scientific journals. For extra information, I've made an infographic about the top physics journals as measured by Thomson Reuters in their Journal Citation Reports; it's accessible here or here

Edit by mentor: removed links

But really I am wondering, what do you think about the publication scheme in general? Should more journals be open access? Does the peer review system work?

I'd like to start some discussion so any comments/opinions are welcomed.

Thanks :)
 
  • #2
You can format your post to be like this previous thread.

https://www.physicsforums.com/showthread.php?t=9822

Remember, not every journal listed by Thomson Reuters is acceptable; they have started listing open-access journals, online-only journals, pop-sci magazines, etc. Those don't meet our criteria.
 
  • #3
There are definitely ways the peer review system could be improved; a simple change would be to remove author names and institutions from manuscripts when they get sent to reviewers. They aren't relevant, and removing them would prevent bias in the form of sexism, racism, a snowball effect for certain institutes' publications, etc. I'm not saying these things are big problems (though I did hear of a study into sexism in peer-review recently), but we might as well rule them out completely.

In addition there's the problem that negative results aren't publishable. I kind of get why but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly reproduction of various studies should be prioritised more but that's slightly tangential to peer review.

Whether or not it works as well as it could doesn't change the fact that peer-review by a panel of experts is the best system we have.
 
  • #4
Ryan_m_b said:
In addition there's the problem that negative results aren't publishable. I kind of get why but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly reproduction of various studies should be prioritised more but that's slightly tangential to peer review.

Someone should make a journal that focuses exclusively on publishing negative results in science.
 
  • #5
Ryan_m_b said:
In addition there's the problem that negative results aren't publishable. I kind of get why but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly reproduction of various studies should be prioritised more but that's slightly tangential to peer review.

Whether or not it works as well as it could doesn't change the fact that peer-review by a panel of experts is the best system we have.

Actually, negative results DO get published. The Michelson-Morley experiment is one easy example. The latest one is the search for the electron electric dipole moment, in which none was found.

There are many other examples where negative results have been published.

As a referee, I very seldom look at who the authors are until after I've read the manuscript the first time and have formed my opinion of it. Only then do I look at the names of the authors.

Zz.
 
  • #6
It's probably field-specific how publishable negative results are.
 
  • #7
Ryan_m_b said:
a simple change would be to remove author names and institutions from manuscripts when they get sent to reviewers. They aren't relevant

There are a few problems with this:

*As a referee you are also supposed to make sure the authors haven't published the same work anywhere else, meaning you do have to know who wrote the paper. That said, this could in principle be done by the editors.

*For most papers it is pretty obvious who the authors are. Mostly you will only referee work that is in your own field, and it is a small world, meaning you will have a pretty good idea of who is doing what; at least in my field the methods, materials, etc. used would serve as a "fingerprint", so I would probably be able to guess quite easily which group wrote a given paper.

*Most papers are uploaded to pre-print servers before being submitted. These days I've already seen most of the papers I am asked to referee because they have already been posted on the arXiv (and then the authors are obviously listed).

*Last but not least, you also have the fact that nearly all papers refer to previous work by the same group. I don't see how one could get round this, because it would mean having to put every detail about an experiment in every single paper, which is simply not possible if you are writing a 4-page letter; it is much better to refer to a "methods" paper.
 
  • #8
What's funny is when you can tell who's refereeing your papers by their numerous suggestions that you cite theirs.
 
  • #9
Ryan_m_b said:
did hear of a study into sexism in peer-review recently.

I would be very interested to read this study, could you please point me in its direction?

I have also spoken to a food policy professor who highlighted similar problems to the ones being raised on here. He suggested that a lot of academics do not fully understand the peer review process; do you think this is the case in physics, too?
 
  • #10
abbybeall said:
I have also spoken to a food policy professor who highlighted similar problems to the ones being raised on here. He suggested that a lot of academics do not fully understand the peer review process; do you think this is the case in physics, too?

I think in most cases, the people doing the reviews have at least a functional understanding of the process and what they're required to do. You have associate editors and editors to make final decisions and mentor the reviewers through the process.

That doesn't mean there aren't flaws. Reviewers are human and most of the time they haven't received any formal training in how to conduct a review. They're simply asked by the associate editor to give the journal feedback on a manuscript that's similar to work they have published.

It can be very tempting in a review to disagree with how the authors carried out or presented their investigation. I will occasionally find myself thinking "That's not how I would have done this." And this means that some reviewers will make comments along these lines when really they should be focussed on whether what the authors actually did, and the conclusions they derived from it, were valid.
 
  • #11
What problem are we trying to solve here?
 
  • #12
abbybeall said:
Hey everyone,
I'm interested in people's opinions about Impact Factors and how useful they are in ranking scientific journals.
An impact factor gives you an objective metric for assessing the relative importance of a journal within a given field. This can be important in academic circles - say if you're on a review committee for someone who is not in your field and you have to make a judgement on how productive a given person has been.

In my experience though, within the field itself, people know what the important journals are. Those are the ones that they read on a regular basis, the ones that are on their "favourites" bar. And so when I look at someone's publications within my own field I can tell pretty quickly the level of work that they have published.

But really I am wondering, what do you think about the publication scheme in general? Should more journals be open access? Does the peer review system work?

There are problems with the process, no doubt, and there certainly is room for improvement.

I take issue with the open access process as I understand it. Journals need to make money to survive and they typically do that by either (a) charging for subscriptions, or (b) charging the authors to print (open access).

The problem with (a) is that so many journals charge so much that university libraries can't afford subscriptions to everything and eventually start cutting subscriptions out. Hence there is an issue with access.

The open access process solves that problem, but introduces new problems in that the independence and impartiality of the peer-review process can easily come into question. In order to increase its profitability, what's to stop a journal from simply accepting everything that's submitted? One potential solution to this is publishing a full financial disclosure along with each article - maybe that would work - but I barely have time to read the articles in my field as they are. I certainly don't need more to read.
 
  • #13
Vanadium 50 said:
What problem are we trying to solve here?

I'm just trying to gather a general opinion from the physics academic community on the peer review process.
 
  • #14
Ryan_m_b said:
In addition there's the problem that negative results aren't publishable. I kind of get why but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly reproduction of various studies should be prioritised more but that's slightly tangential to peer review.

The problem of publishing negative results is mostly a problem of journals aiming for some minimum estimated impact that the articles they publish should have. In many cases referees or editors will just state that the negative result is not important enough for publication in journal XYZ.

However, the possible profit that could be made from publishing low-impact work has led some publishers to change their minds and create journals like PLOS One or Scientific Reports, which only require correctness and novelty, but not large estimated impact, for publication. These journals indeed accept and publish negative results.


One problem I see with peer review is the different review standards in different fields. For some journals on the upper end of the impact factor scale, the referee's estimate of the relative importance of a manuscript within some field, or within physics in general, is almost more important than the question of the validity of its scientific content. Within some subfields (in particular those with strong competition for funding) referee reports are often quite hostile, and there is a non-negligible fraction of referees who accept or reject manuscripts based on the authors who contributed to them rather than on the scientific content.

On the other hand, in some other subfields (in particular rather small ones), referees have a tendency to overrate the importance of manuscripts from their own field just to keep that field appearing in high-impact journals on a regular basis. That keeps the field "hot" and may help to direct more funding into it.

While journals which require just scientific correctness for publication, and let the readership decide what is important and what is not, might seem like a solution to that problem, I am not sure it will work. In some fields the constant inflow of new publications is far too large to read all of it, and high-impact journals (as well as field-specific moderate-impact journals and publications by people you know are important in your field) act as a "must-read" filter.
 
  • #15
I think the problem with "negative results" is filtering out the globally important ones. If you go to conferences, you often get papers that can be summarized as "there is something wrong here but we haven't figured out what it is yet" - which is very interesting to the members of the project team in the short term, but probably not to the wider scientific community in the long term, and that's who journals should be published for IMO.
 
  • #16
Some journals in fields like Biostatistics (and related areas like Public Health) do not accept negative results as far as I am aware (having taken specific graduate coursework in this area myself).
 
  • #17
Negative results are very important in physics, especially when either (i) there have been previous positive results and/or (ii) there is a theoretical prediction of a positive result.

For example, before we found the Higgs, both Fermilab and CERN published many papers on results that excluded large energy ranges where they did NOT find the Higgs. These were quite important because there were numerous exotic models (you know who you are, String and Supersymmetry) that either predicted or made use of a Higgs in those ranges. These negative-result experiments are crucial to weed out theories that are incorrect.

Negative result experiments are not unusual in physics.

Zz.
 
  • #18
About publishing negative results, I'm not sure that's a peer review issue. Consider that in order to show that a given idea is possible, you only need to get the expected signal/behavior, and that acts as proof of the entire concept. However, in order to show that the same idea is not possible (the negative outcome), you need to eliminate every single experimental error and be sure that you are limited only by something fundamental, and this is generally orders of magnitude more work and extremely difficult to do.


Another issue with the publishing system, which is also not a direct flaw of peer review, is that far too many papers are produced today. Even within a subfield it's almost impossible to read everything relevant, and it would be a lot better if groups focused on publishing less often but with more significant results. However, because of the way the funding system works, rewarding many publications, this is in practice not possible, and people are forced to push out the smallest increments they can get into good journals. Not a good system, but it's also not easy to see how it could be improved without other trade-offs.
 
  • #19
But this is why there are different tiers of journals. The Natures, the Sciences, and the PRLs require content that is more significant and higher impact. And many of us tend to know the few journals that publish papers relevant to our fields, so it is not as if we have to browse every single journal out there.

Zz
 
  • #20
Choppy said:
An impact factor gives you an objective metric for assessing the relative importance of a journal within a given field. This can be important in academic circles - say if you're on a review committee for someone who is not in your field and you have to make a judgement on how productive a given person has been.

I would suggest the impact factor is far more subjective than objective. There are numerous criticisms just of its use as a journal metric, and it's my opinion that it shouldn't be used at all to determine the worth of individual papers or, more importantly, an author's worth. The numbers of citations differ between review and primary manuscripts, with review literature being cited far more often than primary research. It can also lead to editor and reviewer bias; for example, editors seeking to manipulate the journal's impact factor by excluding certain publications from the count of citable items published by the journal. For some reason beyond me, when these publications are cited they still count favorably toward the impact factor calculation. Quoting http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1475651/

Small wonder, then, that authors care so much about journals' impact factors and take them into consideration when submitting papers. Should we, as the editors of PLoS Medicine, also care about our impact factor and do all we can to increase it? This is not a theoretical question; it is well known that editors at many journals plan and implement strategies to massage their impact factors. Such strategies include attempting to increase the numerator in the above equation by encouraging authors to cite articles published in the journal or by publishing reviews that will garner large numbers of citations. Alternatively, editors may decrease the denominator by attempting to have whole article types removed from it (by making such articles superficially less substantial, such as by forcing authors to cut down on the number of references or removing abstracts) or by decreasing the number of research articles published. These are just a few of the many ways of “playing the impact factor game.”
...

We conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive.
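
To make the asymmetry described above concrete (citations to reclassified items still count in the numerator while the items themselves leave the denominator), here is a minimal sketch with made-up figures; the function and all numbers are purely illustrative and not taken from the quoted article:

```python
# Illustrative only: toy numbers showing how reclassifying articles as
# "non-citable" front matter can inflate a two-year impact factor.

def impact_factor(citations, citable_items):
    # citations: citations this year to items from the previous two years
    # citable_items: number of "citable" items from those two years
    return citations / citable_items

# Honest bookkeeping: 500 citations to 200 citable items.
print(impact_factor(500, 200))   # 2.5

# Reclassify 50 items as editorials/front matter: their citations still
# count in the numerator, but they drop out of the denominator.
print(impact_factor(500, 150))   # ~3.33
```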

Paper published in Science dealing with coercive citations: http://www.sciencemag.org/content/335/6068/542

Obviously the impact factor metric exists for a reason, and maybe it's useful in context. I don't really know.

As for open journals, or pay-to-publish journals, they do little to remedy the problems that currently exist in the organization of scientific publishing.

Anyway, thought I'd add my opinion in relation to the OP's question.
 
  • #21
It's absolutely true that citations are not everything. I wrote a paper once reporting on a measurement that killed one of the theories explaining a particular phenomenon deader than a doornail. It closed all the possible loopholes at the same time. This is one of my least cited papers, because once it hit the journals, theorists stopped writing papers on that theory.

That said, they do matter. In general, highly cited papers are more important than papers that are not so highly cited.
 
FAQ: Impact Factors and Peer Review in Physics Journals

1. What is an impact factor and why is it important in physics journals?

An impact factor is a measure of how frequently articles published in a particular journal are cited in other research papers. It is important in physics journals because it reflects the influence and relevance of the journal within the scientific community. Journals with higher impact factors are generally considered to be more prestigious and their articles are more widely read and cited.

2. How is the impact factor calculated?

The impact factor is calculated by dividing the total number of citations received by articles published in a journal within a certain time period (usually the previous two years) by the total number of citable articles published in that journal during the same period. Note that the raw figure is not normalized across fields, so because citation practices differ between fields of research, impact factors are best compared only within a single field.
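
As a rough illustration of that definition, here is a minimal sketch in Python with entirely hypothetical numbers (the years, article counts, and citation counts are invented for the example):

```python
# A minimal sketch of the two-year impact factor described above.
# All numbers are hypothetical.

articles = {2021: 120, 2022: 140}            # citable items published per year
citations_in_2023 = {2021: 450, 2022: 390}   # citations in 2023 to those items

impact_factor_2023 = sum(citations_in_2023.values()) / sum(articles.values())
print(round(impact_factor_2023, 2))          # 840 / 260 ≈ 3.23
```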

3. What is the role of peer review in physics journals?

Peer review is a critical process in which experts in the same field as the submitted research evaluate its quality, validity, and significance before it is published in a journal. This helps to ensure the accuracy, credibility, and originality of the research being published.

4. How are articles selected for publication in physics journals?

Articles are typically selected for publication in physics journals based on their quality, relevance, and originality. They must also go through a rigorous peer review process and meet the journal's specific criteria and standards. Editors may also consider the potential impact and interest of the research to the scientific community.

5. Are impact factors the only measure of a journal's quality?

No, impact factors are not the only measure of a journal's quality. Other factors such as the reputation and expertise of the editorial board, the scope and coverage of the journal, and the average citation rate of articles within the journal may also be considered. It is important to evaluate a journal based on multiple criteria rather than relying solely on the impact factor.
