The Ugly Side of the Journal Impact Factor

Are you obsessed with publishing in high-ranking journals such as Cell or Science? Do you gloss over work you have published in lower-ranked journals when talking with colleagues or attending a job interview? If the answer is yes, you are not alone.

Since its invention approximately 60 years ago, the Journal Impact Factor (JIF) has been used to assess the quality of academic literature and the influence of scientific papers on the scientific community. The JIF was proposed by Eugene Garfield in the 1950s and originally published by his Institute for Scientific Information (ISI) as a subscription-buying tool for academic and medical librarians. The JIF assigns a score to each journal based on the average number of citations received in a year per paper the journal published during the two preceding years. It has since become the authority on which journals are considered top-tier publications and, therefore, the premier venues for scientists wishing to publicize their work and build their reputations.
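
For readers who want the arithmetic spelled out, here is a minimal Python sketch of that standard two-year calculation; the citation and item counts below are invented purely for illustration and do not refer to any real journal:

    # Minimal sketch of the two-year Journal Impact Factor calculation.
    # All counts below are invented purely for illustration.
    citations_received_this_year = 10_000  # citations in year Y to items published in Y-1 and Y-2
    citable_items_published = 2_000        # articles and reviews published in Y-1 and Y-2

    jif = citations_received_this_year / citable_items_published
    print(f"JIF = {jif:.1f}")  # prints: JIF = 5.0

Note that this is an average across the whole journal: a JIF of 5.0 says nothing about how many citations any individual paper in that journal actually received.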

Unfortunately, the JIF has also become a tool for ascertaining a scientist’s worth and can often be a determining factor in the funding they receive. This is an unfortunate turn of events, since the JIF has many deficiencies, such as glossing over differences between fields and lumping primary research articles in with much more easily cited review articles. As a result, researchers who publish quality work in lower-ranked journals are often at a disadvantage compared with those publishing secondary research in higher-ranking journals.

In order to “protest” and counter this phenomenon, a group of editors and publishers from both high-impact and low-impact journals have issued the San Francisco Declaration on Research Assessment (DORA), which aims to reduce the influence of the JIF in assessing scientific merit.

DORA sets out 18 recommendations geared toward accomplishing these goals. Some of the recommendations that stand out the most include:

  • The JIF should not be used to measure the quality of individual articles, to assess an individual scientist’s contributions, or to inform hiring, promotion, or funding decisions
  • Funding agencies should place more weight on the scientific content of a paper than on its JIF
  • The scientific content of a paper should be a more important factor in hiring decisions than the JIF
  • A call for organizations to be open and transparent by providing the data and methods used to calculate all metrics
  • Researchers should challenge research assessment practices that rely inappropriately on the JIF

To download the full list of recommendations, visit http://am.ascb.org/dora/files/SFDeclarationFINAL.pdf.


23 Responses to “The Ugly Side of the Journal Impact Factor”

  1. Shabirul Haque says:

    This move is sensible. Work published in top-tier journals is not the only science.

    • American Biotechnologist says:

      Thanks for your comment, Shabirul. Publish or perish shouldn’t be confounded by the JIF.

  2. Malini Rajagopalan says:

    Papers in highly specialized areas do not get as many citations. Does this mean their work is worth less?

    • American Biotechnologist says:

      Malini, I totally agree with you. And what about negative findings? These are just as important as positive results. But you won’t see negative findings published in a journal with a high JIF!

  3. stanley60wang says:

    I fully support it. However, these big institutions, famed scientists, and government agencies are not going to listen to the views of regular scientists. It is commonly known that most of the high-impact articles in Nature, Science and Cell are extremely difficult to repeat. The story they tell is so perfect that it could only have happened in a daydream.

  4. Javadov, S says:

    I completely agree with the San Francisco Declaration on Research Assessment. I believe that scientific merit must be considered on the basis of the real scientific significance of the paper rather than the journal’s IF, which becomes more subjective over time.

    • American Biotechnologist says:

      Thanks for your comment, Javadov. Now to figure out how to properly rate “scientific significance.” What’s your suggestion?

  5. Pingwei Li says:

    The introduction of the JIF has been a disaster for the research community, especially in the developing world, in countries like China. Publishing in high-impact-factor journals seems to be the only thing people care about there.

    Publishing is a commercial practice on the publishers’ side. More papers published means more profit. That’s why publishers care about the JIF so much.

    The impact of original research papers should be measured differently if the measure is to make sense.

    • American Biotechnologist says:

      Pingwei, it would be interesting to see the percentage of corporate publications in high-impact-factor journals versus academic papers. My guess is that there are still many more academic papers. China’s problem is likely related to its level of research funding in the past. As research funding increases in China (and it has, big time), I am sure you will see more Chinese papers in high-JIF journals.

  6. Smita Mohanty says:

    I absolutely agree with DORA.

    First, I have direct experience of such practices in my professional life. Administrators in positions of power, who were not in the candidate’s specific field of specialization, wrote negatively about the candidate’s publications based solely on impact factor.

    Second, I once reviewed an article for a high-impact journal and recommended major revision before publication. I think the other reviewers of that article possibly had the same or similar reservations. Surprisingly, the article was published “as is” in another, higher-impact journal as a “contribution”.

    This was not an isolated experience but only one of many. I therefore agree with what is written above. DORA must gain momentum, possibly in the form of a petition signed by scientists, academicians, and publishers, and it should be sent to all concerned authorities.

    Of course, there are excellent articles in both high-impact and not-so-high-impact journals. However, assessing the quality of a work not on the work itself but on the journal’s impact factor is absolutely misleading.

    • American Biotechnologist says:

      Smita, a very well-thought-out comment. What’s your opinion of open access journals? Do you think that might be going too far?

  7. Stefan Grebe says:

    Another insidious consequence of this quest for higher impact factors is that an increasing number of journals are starting to “game the system”. This typically involves one or several of the following editorial policies:
    1. Giving increasing weight to the likely impact of a submitted manuscript, rather than to its intrinsic novelty or quality (catchphrase: “is it sexy?”). This increases the numerator of the equation.
    2. Increasing the number of review articles while reducing the number of full-length original articles. This increases the numerator.
    3. Drastically decreasing (or abolishing entirely) the number of short communications/rapid communications/preliminary reports that are published; such reports are often not cited very frequently. This decreases the denominator.
    4. Shifting material that would formerly have been submitted as short communications into Letter-to-the-Editor or Research-Letter categories. These do not count as “articles” for the purpose of impact factor calculations, but if they are cited, the citations are counted. This potentially increases the numerator while not affecting the denominator at all (whether the letters get cited or not).
    A perusal of journals that have shown significant and sustained increases in impact factor over the short and medium term will frequently reveal that they have used one or several of these strategies to inflate their impact factor. In particular, it is not uncommon to find that short communications have been abolished and that the number of original articles per year has all but halved over a period of a few years.
    This type of editorial behavior is probably highly detrimental to good science publishing in the long run.

    • American Biotechnologist says:

      Thanks for your thoughts, Stefan. Yet most of us still dream of publishing in Nature, Cell or JBC. And… most of what I’ve seen in these publications seems to be of pretty high quality. “Sexy” is a matter of marketing: even in science, if it doesn’t sell, nobody will buy it. Nonetheless, you are suggesting that journals actually inflate their JIF by gaming the inputs to the formula. A very interesting thought indeed.
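
      To make that concrete, here is a minimal back-of-the-envelope Python sketch of the effect described in Stefan’s point 3; all of the counts below are invented purely for illustration and are not taken from any real journal:

          # Back-of-the-envelope sketch: dropping rarely cited short communications
          # shrinks the JIF denominator and raises the computed impact factor.
          # All counts below are invented purely for illustration.

          def jif(citations, citable_items):
              # Two-year impact factor: citations to recent items / number of citable items
              return citations / citable_items

          # Hypothetical journal: 400 original articles averaging 4 citations each,
          # plus 200 short communications averaging 1 citation each.
          with_short_comms = jif(400 * 4 + 200 * 1, 400 + 200)
          without_short_comms = jif(400 * 4, 400)

          print(f"With short communications:    JIF = {with_short_comms:.1f}")    # 3.0
          print(f"Without short communications: JIF = {without_short_comms:.1f}")  # 4.0

      Under those made-up numbers, simply discontinuing short communications lifts the journal from a JIF of 3.0 to 4.0 without a single extra citation being earned.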

  8. Tong says:

    We urgently need transparent article-reviewing and grant-judging processes that reduce the politics to a minimum! Science should speak for itself based on its own value, as it did in the 1940s or 1950s. We should put all our efforts into discoveries, not into making friends and playing politics in so-called circles. And anything we discover should be published in the most efficient way, so that it can be shared quickly among peers. Now, not many people are interested in doing real science; they are busy making friends and publishing “high-profile” papers to get money and fame. How sad is that!

    • American Biotechnologist says:

      Thanks for your comments, Tong. I would imagine that with today’s technology, spreading the word about a truly “high-impact” scientific finding or article shouldn’t be too difficult. With a computer and a Twitter account, anyone can increase the reach of their paper.

  9. Bill says:

    Sadly, our administration has for several years used the JIF to decide on promotions. Two faculty members from the same department were up for promotion in the same time frame: one had 18 publications in high-impact journals but no extramural funding; the other had 15 publications in field-specific but lower-impact journals and also brought in over $700K in extramural funding. The promotion went to the first. Now (same department), a faculty member going up for promotion has published several papers in a 2.1-JIF journal and is being criticized, even though it is the #1 journal in that field!

  10. rkg says:

    Necessity is the mother of invention. Scientific inventions, and the ongoing motivation for research, are about solving problems and helping humankind live simpler and better lives. Merely measuring the quality of scientific invention makes NO progress NOR leads to groundbreaking discoveries. Major discoveries published thus far in various scientific journals have been judged by the specific contributions they made to moving a field forward and asking the next question. Research content should be judged on the advances made in a particular field and on whether it solves a critical problem. It should not be measured by a humongous volume of work that does not address a simple question or speculation.

  11. Klaus Pechhold says:

    I agree, it’s an important discussion. I do, however, agree with only some of what’s been proposed. Scientists must (and always will) be judged, preferentially by their peers. The rules, however, have to be more transparent, fair, and above all meaningful.

    1. Scientific contribution, as a measure of “productivity”, must be gleaned from original scientific contributions, primarily through lead or corresponding authorship. Reviews and the like should be listed (and judged) separately. Co-authorship must be accompanied by details of the specific contribution to that manuscript.

    2. While impact-factor rankings among journals cannot reflect well the particular value of an individual manuscript, there has to be some way to acknowledge the “value” of those papers (and scientists) that went through an often much more demanding review process, including exhaustive quantitative requirements and advanced qualitative requirements. In the absence of a better alternative, the JIF can and must serve this purpose.

    3. Track records must take the environment into account. A scientist who succeeds comparably in a variety of laboratories and environments is more likely to be scientifically successful than one who had tremendous success only in a single environment but failed to continue in a different one.

    4. There will never be a fair judgement of the impact of scientific work. First, solidly performed studies that yielded “negative” results are very important yet dramatically undervalued (e.g., unpublishable), to the degree that many avoid even attempting to examine results that conflict with current beliefs. Secondly, new results published in prestigious (i.e., high-impact) journals that could not subsequently be confirmed should be identified as such. Perhaps a journal’s ratio of reproducible (confirmed) to non-reproducible (refuted) findings should influence its ranking. While scientists can be wrong in assessing their own work, it is important (and resource-conscious) for the scientific community, especially young scientists, to be made aware of such conflicts.

    • American Biotechnologist says:

      Klaus, thank you for a very well-thought-out reply. I agree that we need a better method of judging scientific productivity, especially since the funding system is supposedly based on merit and achievement. What do you think about the open-access system of publication and a less formal “peer review” process? With social media today, peer review could come from the masses rather than from a select group of reviewers. Think Twitter for scientists?

  12. Nibu Basu says:

    I think the best judgement of a paper should come from experts in the same field of science, who know the quality of the work better than the impact factor does. Sometimes a paper in a low-impact-factor journal can describe work of similar quality to one in a very high-impact journal! Is there any hard-and-fast rule that lets reviewers justify accepting a paper in a low- versus a high-impact journal? Sometimes it simply comes down to a particular reviewer’s willingness to support the publication!

    • American Biotechnologist says:

      Nibu, or worse yet: a reviewer may reject a publication based on a conflict of interest. How’s that for fairness?
