Relative Citation Ratio (RCR): new metric uses citation rates to measure influence at the article level

Friday, September 09, 2016

Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level

B. Ian Hutchins, Xin Yuan, James M. Anderson, George M. Santangelo 

Abstract

Despite their recognized limitations, bibliometric assessments of scientific productivity have been widely adopted. We describe here an improved method to quantify the influence of a research article by making novel use of its co-citation network to field-normalize the number of citations it has received. Article citation rates are divided by an expected citation rate that is derived from performance of articles in the same field and benchmarked to a peer comparison group. The resulting Relative Citation Ratio is article level and field independent and provides an alternative to the invalid practice of using journal impact factors to identify influential papers. To illustrate one application of our method, we analyzed 88,835 articles published between 2003 and 2010 and found that the National Institutes of Health awardees who authored those papers occupy relatively stable positions of influence across all disciplines. We demonstrate that the values generated by this method strongly correlate with the opinions of subject matter experts in biomedical research and suggest that the same approach should be generally applicable to articles published in all areas of science. A beta version of iCite, our web tool for calculating Relative Citation Ratios of articles listed in PubMed, is available at https://icite.od.nih.gov.
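The core of the method is a ratio: an article's citation rate divided by an expected citation rate benchmarked to its field, as inferred from its co-citation network. The sketch below illustrates that arithmetic only; the function name and inputs are hypothetical, and the paper's actual procedure for deriving the expected rate (regression against a peer comparison group of NIH-funded articles) is considerably more involved.

```python
def relative_citation_ratio(article_citation_rate, expected_citation_rate):
    """Hypothetical sketch: RCR = article citation rate / expected rate.

    article_citation_rate: citations per year for the article in question.
    expected_citation_rate: field-normalized benchmark derived (in the
    actual method) from the article's co-citation network and a peer
    comparison group; here it is simply supplied as a number.
    """
    if expected_citation_rate <= 0:
        raise ValueError("expected citation rate must be positive")
    return article_citation_rate / expected_citation_rate

# An article cited 12 times/year in a field whose benchmarked expectation
# is 6 citations/year would score RCR = 2.0, i.e., twice the field norm;
# RCR = 1.0 means performance typical of the comparison group.
print(relative_citation_ratio(12.0, 6.0))
```

An RCR above 1.0 indicates above-benchmark influence for the article's field, which is what makes the metric comparable across disciplines with very different baseline citation rates.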

Author Summary

Academic researchers convey their discoveries to the scientific community by publishing papers in scholarly journals. In the biomedical sciences alone, this process now generates more than one million new reports each year. The sheer volume of available information, together with the increasing specialization of many scientists, has contributed to the adoption of metrics, including journal impact factor and h-index, as signifiers of a researcher’s productivity or the significance of his or her work. Scientists and administrators agree that the use of these metrics is problematic, but in spite of this strong consensus, such judgments remain common practice, suggesting the need for a valid alternative. We describe here an improved method to quantify the influence of a research article by making novel use of its co-citation network—that is, the other papers that appear alongside it in reference lists—to field-normalize the number of times it has been cited, generating a Relative Citation Ratio (RCR). Since choosing to cite is the long-standing way in which scholars acknowledge the relevance of each other’s work, RCR can provide valuable supplemental information, either to decision makers at funding agencies or to others who seek to understand the relative outcomes of different groups of research investments.
FREE PDF: PLoS