Exploring scientific productivity

Hot on the heels of our last blog post rethinking peer-reviewed science, which questioned the value (or, more appropriately, the "opportunity cost") of reviewer criticism in peer-reviewed research, comes an article on ways of measuring scientists' performance and their individual return on investment.

Writing on the Ars Technica website, Chris Lee summarizes a recent paper published in Physical Review E and follows up with an intriguing and complex analysis that I believe touches on some key issues facing academic researchers today.

In the following post, I will try to summarize Chris' thoughts as clearly as I can, in the hope that you will gain enough of an appreciation to read Chris' original article and share your thoughts with us.

In his write-up, Chris agrees with the Physical Review E paper that raw citation count is not the best way to rank a scientist's importance. If so, what alternative metrics could be used instead? Chris works through the obvious candidates:

  1. publication count won’t work because scientists will just break up big papers into smaller bites
  2. impact factor doesn’t work because complex papers tend to be published in low-impact journals
  3. citation rate doesn’t work because it varies by discipline
  4. normalized citation rate doesn’t work because it penalizes papers with many authors (even when the number of authors is justified); the sketch below illustrates this
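
To make point 4 concrete, here is a minimal Python sketch, my own illustration rather than the method from the Physical Review E paper, of how dividing a paper's citation rate by its author count punishes large collaborations. The paper data is hypothetical:

```python
# A minimal sketch (not the paper's actual method) showing how a naive
# per-author normalization penalizes large collaborations.

def citation_rate(citations, years):
    """Citations per year -- varies wildly by discipline (point 3)."""
    return citations / years

def normalized_citation_rate(citations, years, n_authors):
    """Citations per year, split evenly across all authors (point 4)."""
    return citations / (years * n_authors)

# Two hypothetical papers with identical citation impact:
solo_paper = {"citations": 100, "years": 5, "n_authors": 2}
big_collab = {"citations": 100, "years": 5, "n_authors": 50}

for name, p in [("solo paper", solo_paper), ("big collaboration", big_collab)]:
    print(name,
          "| rate:", citation_rate(p["citations"], p["years"]),
          "| normalized:",
          normalized_citation_rate(p["citations"], p["years"], p["n_authors"]))

# Output:
# solo paper | rate: 20.0 | normalized: 10.0
# big collaboration | rate: 20.0 | normalized: 0.4
#
# Equal impact, yet each collaborator is credited with 1/25th the score --
# even when a 50-person team was the only way to do the work.
```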

Ultimately, Chris believes that a scientist’s worth should be determined using a number of factors rather than any individual metric alone. I agree. Science is complex. Why should our attempt to rank a scientist’s achievement be any less complicated?

Thanks to @Boraz for pointing out Chris’ article via his Twitter feed.