:: Posted by American Biotechnologist on 07-28-2014
It’s not that we are obsessed with the Journal Impact Factor (OK, we have written about it at least seven times on this blog), but we do feel it plays an important role in the lives of budding scientists, and we strongly identify with DORA’s call to abandon its use in evaluating scientific merit.
You can read more of our opinions on the JIF in the links provided below. The intention of this post, however, is to draw attention to DORA’s call for research scientists to provide examples of metrics and methods that can be used in lieu of the JIF as a measure of scientific accomplishment.
Some examples include:
- The University of Texas Southwestern Medical Center’s recruiting policy, which encourages candidates to discuss their most significant scientific accomplishment without referring to its JIF ranking
- Germany’s Max Planck Society, which asks its recruits to provide full copies of the three papers they consider their best, independent of their JIF ranking
- The American Society for Cell Biology, which has moved away from the JIF and now evaluates candidates for the prestigious ASCB Kaluza Prize based on the significance of the discoveries they have made
To learn more about DORA’s call to abandon reliance on Journal Impact Factors (JIFs) and adopt more enlightened approaches, visit the DORA website.
For more information see:
Exploring scientific productivity
The Ugly Side of the Journal Impact Factor
Don’t Judge Me-I’m a Scientist
A Journal Impact Factor Scandal
Nobel Prize Winners Address Brutal Cuts in Federal Funding
The Journal Impact Factor and the Lazy Scientist
A YouTube Rebellion to the Journal Impact Factor
:: Posted by American Biotechnologist on 01-08-2014
We have written extensively about the drawbacks of the Journal Impact Factor and the DORA initiative on this site (see: The Journal Impact Factor and the Lazy Scientist, Nobel Prize Winners Address Brutal Cuts in Federal Funding, A Journal Impact Factor Scandal).
Here is one graduate student’s reaction to the pressures of publishing in one of the “big three.”
:: Posted by American Biotechnologist on 01-06-2014
Several weeks ago, two Nobel Prize-winning scientists addressed the American Society for Cell Biology meeting in New Orleans and used that platform to promote a boycott of the top three scientific journals: Science, Nature and Cell. The boycott was based on the 2012 San Francisco Declaration on Research Assessment (DORA), which calls for scientists to turn their backs on JIFs and find new measures of individual research value. In an article published in the Guardian, Steve Caplan gave his take on the situation and explained why the scientific world is not quite ready for a boycott of the big three.
Caplan notes three main reasons why the journal impact factor is an unfair metric of scientific success:
- While top-tier journals contain more highly cited papers than their lower-ranking counterparts, most papers are not highly cited, yet their authors unfairly receive brownie points for publishing in a high-JIF journal even when their articles are rarely cited by others.
- Top-JIF journals often contain review articles that, by their very nature, have higher citation rates than original research articles. Yet few would argue that a review of other people’s research is more “impactful” than the original research itself.
- Negative citations count toward the JIF just as much as positive ones.
In what is probably the most controversial yet interesting part of his analysis, Caplan claims that the JIF has been given unfair weight in scientific merit reviews because the world prefers a quantitative analysis of a scientist’s performance over a qualitative one. As such, rating scientists by their JIF score (a quantitative measure) is seen as preferable to rating them by the quality of their work. Furthermore, our preference for quantitative over qualitative ranking stems from the fact that the system does not have the time to screen candidates qualitatively, an attribute that I like to term “laziness.”
Caplan concludes that despite its name, it is actually the perceived impact of one’s research that should be considered when evaluating a scientist. Thus, since high-impact journals do indeed run a peer-review system based on perceived impact, in the absence of a better alternative the JIF is here to stay as an evaluation tool.
Steve Caplan’s article, “Why we are not ready for radical changes in science publishing,” can be found at theguardian.com.