Time to remodel the journal impact factor
Nature and the Nature journals are diversifying their presentation of performance indicators.
27 July 2016
Metrics are intrinsically reductive and, as such, can be dangerous. Relying on them as a yardstick of performance, rather than as a pointer to underlying achievements and challenges, usually leads to pathological behaviour. The journal impact factor is just such a metric.
During a talk just over a decade ago, its co-creator, Eugene Garfield, compared his invention to nuclear energy. “I expected it to be used constructively while recognizing that in the wrong hands it might be abused,” he said. “It did not occur to me that ‘impact’ would one day become so controversial.”
As readers of Nature probably know, a journal's impact factor for a given year is the average number of citations received that year by articles the journal published in the preceding two years. Journals do not calculate the figure themselves; it is calculated and published by Thomson Reuters.
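Spelled out (the symbols below are shorthand of our own choosing, not Thomson Reuters notation), the calculation for, say, 2015 is:

\[
\mathrm{IF}_{2015} \;=\; \frac{C_{2015}(P_{2013}) + C_{2015}(P_{2014})}{N_{2013} + N_{2014}},
\]

where \(C_{2015}(P_{Y})\) is the number of citations received in 2015 by items the journal published in year \(Y\), and \(N_{Y}\) is the number of citable items it published in that year.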
Publishers have long celebrated strong impact factors. It is, after all, one of the measures of their output’s significance — as far as it goes.
But the impact factor is not just crude; it is misleading. It effectively undervalues papers in disciplines that are slow-burning or have lower characteristic citation rates. Being an arithmetic mean, it gives disproportionate significance to a few very highly cited papers, and it falsely implies that papers with only a few citations are relatively unimportant.
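To see the distortion produced by the mean, consider a hypothetical journal (the numbers are illustrative, not drawn from any real title) with ten citable items in the census window, one cited 100 times and the other nine cited twice each:

\[
\text{impact factor (mean)} = \frac{100 + 9 \times 2}{10} = 11.8,
\qquad
\text{median citations} = 2.
\]

The headline figure of 11.8 is driven almost entirely by a single paper; the typical article in this journal earned two citations.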
...