Impact Factors/Peer Review in Physics Journals


abbybeall

Hey everyone,
I'm interested in people's opinions on impact factors and how useful they are for ranking scientific journals. For extra information, I've made an infographic about the top physics journals, as measured by Thomson Reuters in their Journal Citation Reports; it's accessible here or here

Edit by mentor: removed links

But really I'm wondering: what do you think of the publication system in general? Should more journals be open access? Does the peer review system work?

I'd like to start some discussion, so any comments or opinions are welcome.

Thanks :)
 

Evo

Mentor
You can format your post to be like this previous thread.

https://www.physicsforums.com/showthread.php?t=9822

Remember, not every journal listed by Thomson Reuters is acceptable; they have started listing open-access, online-only, and pop-sci magazines, etc. Those don't meet our criteria.
 

Ryan_m_b

Staff Emeritus
Science Advisor
There are definitely ways the peer review system could be improved; a simple change would be to remove author names and institutions from manuscripts before they are sent to reviewers. They aren't relevant, and removing them would prevent bias in the form of sexism, racism, a snowball effect on certain institutes' publications, etc. I'm not saying these things are big problems (though I did hear of a study into sexism in peer review recently), but we might as well rule them out completely.

In addition, there's the problem that negative results aren't publishable. I kind of get why, but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly, reproduction of studies should be prioritised more, but that's slightly tangential to peer review.

Whether or not it works as well as it could doesn't change the fact that peer review by a panel of experts is the best system we have.
 

Office_Shredder

Staff Emeritus
Science Advisor
Gold Member
3,734
98
In addition, there's the problem that negative results aren't publishable. I kind of get why, but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly, reproduction of studies should be prioritised more, but that's slightly tangential to peer review.
Someone should make a journal that focuses exclusively on publishing negative results in science.
 

ZapperZ

Staff Emeritus
Science Advisor
Education Advisor
In addition, there's the problem that negative results aren't publishable. I kind of get why, but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly, reproduction of studies should be prioritised more, but that's slightly tangential to peer review.

Whether or not it works as well as it could doesn't change the fact that peer review by a panel of experts is the best system we have.
Actually, negative results DO get published. The Michelson-Morley experiment is one easy example. The latest is the search for the electron electric dipole moment, where none was found.

There are many other examples where negative results have been published.

As a referee, I very seldom look at who the authors are until after I've read the manuscript the first time and have formed my opinion of it. Only then do I look at the names of the authors.

Zz.
 

Pythagorean

Gold Member
It's probably field-specific how publishable negative results are.
 

f95toli

Science Advisor
Gold Member
a simple change would be to remove author names and institutions from manuscripts before they are sent to reviewers. They aren't relevant
There are a few problems with this:

*As a referee you are also supposed to make sure the authors haven't published the same work anywhere else, meaning you do have to know who wrote the paper. That said, this could in principle be done by editors.

*For most papers it is pretty obvious who the authors are. Mostly you will only referee work that is in your own field, and it is a small world, meaning you will have a pretty good idea of who is doing what. At least in my field, the methods, materials, etc. used would serve as a "fingerprint", so I would probably be able to guess quite easily which group wrote a given paper.

*Most papers are uploaded to pre-print servers before being submitted. These days I've already seen most of the papers I am asked to referee because they have already been posted on the arXiv (and then the authors are obviously listed).

*Last but not least, nearly all papers refer to previous work by the same group. I don't see how one could get round this, because it would mean putting every detail about an experiment in every single paper; that is simply not possible if you are writing a 4-page letter, and it is much better to refer to a "methods" paper.
 

Pythagorean

Gold Member
What's funny is when you can tell who's refereeing your papers by their numerous suggestions that you cite theirs.
 

abbybeall

did hear of a study into sexism in peer-review recently.
I would be very interested to read this study; could you please point me in its direction?

I have also spoken to a food policy professor who highlighted problems similar to the ones being raised here. He suggested that a lot of academics do not fully understand the peer review process; do you think this is the case in physics, too?
 

Choppy

Science Advisor
Education Advisor
I have also spoken to a food policy professor who highlighted problems similar to the ones being raised here. He suggested that a lot of academics do not fully understand the peer review process; do you think this is the case in physics, too?
I think in most cases, the people doing the reviews have at least a functional understanding of the process and what they're required to do. You have associate editors and editors to make final decisions and mentor the reviewers through the process.

That doesn't mean there aren't flaws. Reviewers are human and most of the time they haven't received any formal training in how to conduct a review. They're simply asked by the associate editor to give the journal feedback on a manuscript that's similar to work they have published.

It can be very tempting in a review to disagree with how the authors carried out or presented their investigation. I will occasionally find myself thinking, "That's not how I would have done this." Some reviewers will make comments along these lines when really they should be focused on whether what the authors actually did, and the conclusions they drew from it, were valid.
 

Vanadium 50

Staff Emeritus
Science Advisor
Education Advisor
What problem are we trying to solve here?
 

Choppy

Science Advisor
Education Advisor
Hey everyone,
I'm interested in people's opinions on impact factors and how useful they are for ranking scientific journals.
An impact factor gives you an objective metric for assessing the relative importance of a journal within a given field. This can be important in academic circles - say if you're on a review committee for someone who is not in your field and you have to make a judgement on how productive a given person has been.

In my experience though, within the field itself, people know what the important journals are. Those are the ones that they read on a regular basis, the ones that are on their "favourites" bar. And so when I look at someone's publications within my own field I can tell pretty quickly the level of work that they have published.
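For reference, the metric under discussion is conventionally the two-year journal impact factor: citations received in a given year to items from the previous two years, divided by the number of "citable items" from those years. This is a minimal sketch of that standard definition (not anything computed in this thread), with made-up numbers:

```python
def impact_factor(citations_by_year, citable_items_by_year, year):
    """Two-year journal impact factor for `year`.

    citations_by_year[y]     -- citations received in `year` to items published in y
    citable_items_by_year[y] -- number of citable items published in y
    """
    # Numerator: citations in `year` to the previous two years' items.
    cites = citations_by_year[year - 1] + citations_by_year[year - 2]
    # Denominator: citable items published in those two years.
    items = citable_items_by_year[year - 1] + citable_items_by_year[year - 2]
    return cites / items

# Illustrative (made-up) numbers for a hypothetical journal:
citations = {2012: 300, 2013: 420}   # citations received in 2014
items = {2012: 90, 2013: 110}        # citable items published
print(impact_factor(citations, items, 2014))  # 720 / 200 = 3.6
```

As the later posts in this thread note, which items count as "citable" in the denominator is exactly where the metric can be gamed.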

But really I'm wondering: what do you think of the publication system in general? Should more journals be open access? Does the peer review system work?
There are problems with the process, no doubt, and there certainly is room for improvement.

I take issue with the open access process as I understand it. Journals need to make money to survive and they typically do that by either (a) charging for subscriptions, or (b) charging the authors to print (open access).

The problem with (a) is that so many journals charge so much that university libraries can't afford subscriptions to everything and eventually start cutting subscriptions out. Hence there is an issue with access.

The open access process solves that problem, but introduces new problems in that the independence and impartiality of the peer-review process can easily come into question. In order to increase its profitability, what's to stop a journal from simply accepting everything that's submitted? One potential solution to this is publishing a full financial disclosure along with each article - maybe that would work - but I barely have time to read the articles in my field as they are. I certainly don't need more to read.
 

abbybeall

What problem are we trying to solve here?
I'm just trying to gather a general opinion from the physics academic community on the peer review process.
 

Cthugha

Science Advisor
In addition, there's the problem that negative results aren't publishable. I kind of get why, but sometimes it would be extremely useful to know that X doesn't work on Y. Lastly, reproduction of studies should be prioritised more, but that's slightly tangential to peer review.
The problem of publishing negative results is mostly a problem of journals requiring some minimal estimated impact for the articles they publish. In many cases referees or editors will simply state that the negative result is not important enough for publication in journal XYZ.

However, the possible profit to be made from publishing low-impact work has led some publishers to change their minds and create journals like PLOS One or Scientific Reports, which require only correctness and novelty, not a large estimated impact, for publication. These journals do accept and publish negative results.


One problem I see with peer review is the different review standards in different fields. For some journals at the upper end of the impact factor scale, the referee's estimate of the relative importance of a manuscript within its field, or within physics in general, is almost more important than the validity of its scientific content. Within some subfields (in particular those with strong competition for funding), referee reports are often quite hostile, and there is a non-negligible density of referees who accept or reject manuscripts based on the authors who contributed to them rather than on the scientific content.

On the other hand, in some other subfields (in particular rather small ones), some referees tend to overrate the importance of manuscripts from their field just to keep it in high-impact journals on a regular basis. That keeps the field "hot" and may help direct more funding into it.

While journals that require just scientific correctness for publication and let the readership decide what is important might seem like a solution to that problem, I am not sure it would work. In some fields the constant inflow of new publications is far too large to read all of it, and high-impact journals (as well as field-specific moderate-impact journals and publications by people you know are important in your field) act as a "must-read filter".
 

AlephZero

Science Advisor
Homework Helper
I think the problem with "negative results" is filtering out the globally important ones. If you go to conferences, you often get papers that can be summarized as "there is something wrong here but we haven't figured out what it is yet", which is very interesting to the members of the project team in the short term, but probably not to the wider scientific community in the long term, and that's who journals should be published for, IMO.
 

chiro

Science Advisor
Some journals in fields like biostatistics (and related areas like public health) do not accept negative results, as far as I am aware (having taken specific graduate coursework in this area myself).
 

ZapperZ

Staff Emeritus
Science Advisor
Education Advisor
Negative results are very important in physics, especially when either (i) there have been previous positive results and/or (ii) there is a theoretical prediction of a positive result.

For example, before we found the Higgs, both Fermilab and CERN published many papers on results excluding large energy ranges where they did NOT find the Higgs. These were quite important because there were numerous exotic models (you know who you are, strings and supersymmetry) that either predicted or made use of a Higgs in those ranges. These negative-result experiments are crucial for weeding out incorrect theories.

Negative result experiments are not unusual in physics.

Zz.
 
About publishing negative results: I'm not sure that's a peer review issue. To show that a given idea is possible, you only need to observe the expected signal or behavior, and that acts as proof of the entire concept. However, to show that the same idea is not possible (the negative outcome), you need to eliminate every single experimental error and be sure that you are limited only by fundamental flaws, which is generally orders of magnitude more work and extremely difficult to do.


Another issue with the publishing system, which is also not a direct flaw of peer review, is that far too many papers are produced today. Even within a subfield it's almost impossible to read everything relevant, and it would be much better if groups published less often but with more significant results. However, because the funding system rewards many publications, this is in practice not possible, and people are forced to push through increments as small as they can get into good journals. Not a good system, but it's also not easy to see how it can be improved without other trade-offs.
 

ZapperZ

Staff Emeritus
Science Advisor
Education Advisor
But this is why there are different tiers of journals. The Natures, the Sciences, and the PRLs require content that is more significant and higher impact. And many of us tend to know the few journals that publish papers relevant to our fields, so it is not as if we have to browse every single journal out there.

Zz
 

Student100

Education Advisor
Gold Member
An impact factor gives you an objective metric for assessing the relative importance of a journal within a given field. This can be important in academic circles - say if you're on a review committee for someone who is not in your field and you have to make a judgement on how productive a given person has been.
I would suggest the impact factor is far more subjective than objective. There are numerous criticisms just of its use as a journal metric, and it's my opinion that it shouldn't be used at all to determine the worth of individual papers or, more importantly, an author's worth.


The number of citations differs between review and primary manuscripts, with review literature being cited far more often than primary research. It can also lead to editor and reviewer bias; for example, editors seeking to manipulate the journal's impact factor by excluding certain publications from the count of citable publications. For some reason beyond me, when these publications are cited, they still count favorably toward the impact factor calculation.


Quoting http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1475651/

Small wonder, then, that authors care so much about journals' impact factors and take them into consideration when submitting papers. Should we, as the editors of PLoS Medicine, also care about our impact factor and do all we can to increase it? This is not a theoretical question; it is well known that editors at many journals plan and implement strategies to massage their impact factors. Such strategies include attempting to increase the numerator in the above equation by encouraging authors to cite articles published in the journal or by publishing reviews that will garner large numbers of citations. Alternatively, editors may decrease the denominator by attempting to have whole article types removed from it (by making such articles superficially less substantial, such as by forcing authors to cut down on the number of references or removing abstracts) or by decreasing the number of research articles published. These are just a few of the many ways of “playing the impact factor game.”
...

We conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive.
A paper published in Science dealing with coercive citations: http://www.sciencemag.org/content/335/6068/542

Obviously the impact factor metric exists for a reason, and maybe it's useful in context. I don't really know.

As for open journals, or pay-to-publish journals, they do little to remedy the problems that currently exist in the organization of scientific publications.

Anyway, thought I'd add my opinion in relation to the OP's question.
 

Vanadium 50

Staff Emeritus
Science Advisor
Education Advisor
It's absolutely true that citations are not everything. I once wrote a paper reporting a measurement that killed one of the theories explaining a particular phenomenon deader than a doornail, closing all the possible loopholes at the same time. It is one of my least cited papers, because once it hit the journals, theorists stopped writing papers on that theory.

That said, they do matter. In general, highly cited papers are more important than papers that are not so highly cited.
 
