The Journal Impact Factor and the Lazy Scientist

Several weeks ago, two Nobel Prize-winning scientists addressed the American Society of Cell Biology meeting in New Orleans and used that platform to promote a boycott of the top three scientific journals: Science, Nature and Cell. The boycott was rooted in the 2012 San Francisco Declaration on Research Assessment (DORA), which calls on scientists to turn their backs on journal impact factors (JIFs) and find new measures of individual research value. In an article published in the Guardian, Steve Caplan gave his take on the situation and explained why the scientific world is not quite ready for a boycott of the big three.

Caplan notes three main reasons why the journal impact factor is an unfair metric of scientific success:

  1. While top-tier journals contain more highly cited papers than their lower-ranking counterparts, most papers in them are not highly cited; their authors nevertheless receive brownie points for publishing in a high-JIF journal, even when the article itself has attracted few citations.
  2. Top-JIF journals often carry review articles that, by their very nature, are cited more often than original research articles. Yet few would argue that a review of other people’s work is more “impactful” than the original research it draws on.
  3. Negative citations count toward the JIF just as much as positive ones.
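For reference, the standard two-year JIF is just a ratio: citations received in a given year to a journal's items from the previous two years, divided by the number of "citable items" published in those years. A minimal sketch with made-up numbers shows why the critiques above bite: the formula is blind to whether a citation is praise or criticism, and a few heavily cited reviews lift the average for every author in the journal.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Two-year JIF: citations received this year to items
    published in the previous two years, divided by the number
    of citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal, made-up numbers: 200 citable items over
# two years drawing 6000 citations. Negative citations and
# citations to reviews are counted exactly like any other.
print(impact_factor(6000, 200))  # 30.0
```

Note that the numerator does not distinguish the source of the citations, which is precisely why a journal stacked with reviews, or with heavily disputed papers, can post a high JIF.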

In what is probably the most controversial, yet most interesting, part of his analysis, Caplan claims that the JIF has been given unfair weight in scientific merit reviews because the world prefers a quantitative assessment of a scientist’s performance to a qualitative one. On this view, ranking scientists by the JIFs of the journals they publish in (a quantitative measure) is seen as preferable to judging the quality of their work directly. Furthermore, our preference for quantitative over qualitative ranking stems from the fact that the system does not take the time to screen candidates qualitatively, an attribute that I like to term “laziness.”

Caplan concludes that, despite its name, it is really the perceived impact of one’s research that is being measured when the journal impact factor is used to evaluate a scientist. And since high-impact journals do indeed run a peer-review system based on perceived impact, in the absence of a better alternative the JIF is here to stay as an evaluation tool.

Steve Caplan’s article, “Why we are not ready for radical changes in science publishing,” can be found in the Guardian.
