In a wildly popular post published several months ago, we took a controversial stance on the ugly side of the journal impact factor. In that article we argued that the journal impact factor (JIF) is a useless tool for measuring the productivity of scientists and that it is being used unfairly to grant merit increases based solely on the JIF rankings of the journals a scientist publishes in. The article generated dozens of comments, with many readers avidly agreeing with our opinion and enthusiastically sharing stories of colleagues who were cheated out of deserved promotions for lack of publications in high-JIF journals.
In a rather ironic twist of fate, the journal Nature (probably one of the highest-JIF journals around) broke a story on how several Brazilian scientific journals were suspended from Thomson Reuters’ JIF service for inappropriately manipulating their content to inflate their JIF ratings. The accused Brazilian journal editors encouraged scientists to cite articles from other Brazilian journals in their publications in order to raise the cited journals’ JIF rankings. In an even more egregious move, the editors formed a Brazilian citation cartel, agreeing to stack their publications with citations to their peers’ journals and thereby artificially inflating the Brazilian journals’ JIF.
The editors defended their actions by claiming that many Brazilian scientists are hesitant to publish in local journals because the government preferentially funds scientists who publish in high-JIF journals. This has created a vicious circle for Brazilian journals looking to improve their JIF scores: local scientists are unwilling to publish in local journals (which, in the long term, would help raise those journals’ JIF), making it even more difficult for Brazilian journals to improve their rankings.
So as you can see, it isn’t only academic institutions that use the JIF the wrong way; governments do as well. The moral of the story: get rid of the JIF and find a better way to evaluate scientific contribution!
A unique way to teach science.
Over the past few weeks, we have explored the question of what constitutes scientific success and several important “commandments” for achieving this holy grail. In this post, we discuss a presentation given by a young scientist at Delft University of Technology who has expressed frustration with the common use of publication rate to define scientific achievement. The presentation is especially noteworthy because it comes from a young scientist, Guenevere Prawiroatmodjo, who has yet to be tainted by years of politicking to climb the academic ladder. Nonetheless, she is clearly bothered by the importance attached to an end result that neither pays tribute to nor encourages sharing of the entire scientific process, not just the results.
As Dr. Richard Feynman so eloquently stated:
There isn’t any place to publish what you actually did in order to get to do the work
So what is Dr. Prawiroatmodjo’s solution to this problem? To create more openness and to share more of the scientific process. Furthermore, she postulates that it is critical to stimulate scientific motivation by encouraging entrepreneurship and commercialization.
In this vein, Dr. Prawiroatmodjo has come up with the “p” index, which ranks scientific success by the number of times a scientist’s techniques or scientific tools have been used by the scientific community. In other words, where the “h” index ranks scientists by the number of citations their publications have received in the scientific literature, the “p” index ranks scientists by the impact their scientific methodology has had on the scientific community.
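To make the comparison concrete, here is a minimal sketch of the standard h-index computation (the largest h such that h of a scientist’s papers each have at least h citations). Note that no formula for the “p” index is given in the presentation as described here, so treating it as the same cutoff applied to per-tool usage counts is purely our illustrative assumption.

```python
def h_index(counts):
    """Largest h such that at least h items each have a count of h or more.

    For the h-index, `counts` holds per-paper citation counts; under our
    hypothetical reading of the "p" index, it would instead hold the number
    of times each of a scientist's tools or techniques has been reused.
    """
    ranked = sorted(counts, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # this item still clears the cutoff
        else:
            break  # counts only get smaller from here
    return h


# Example: five papers cited 10, 8, 5, 4, and 3 times -> h-index of 4,
# since four papers have at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))
```

The same function applied to usage counts of a scientist’s methods would yield one plausible “p”-style score, though the actual definition may differ.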