Counting research ⇒ directing research. The hazard of using simple metrics to evaluate scientific contributions. An EU experience

Kai A. Olsen, Alessio Malizia

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

In many EU countries there is a requirement to count research, i.e., to measure and prove its value. These numbers, often produced automatically from journal impact data, are used to rank universities, to distribute funds, to evaluate research proposals, and to determine the scientific merit of individual researchers. Since the real value of research is difficult to measure, the problem is sidestepped by counting papers and citations in well-known journals. That is, the measured impact of a paper (and of its scientific contribution) is defined to be equal to the impact of the journal that publishes it, and the journal's impact (and scientific value) is in turn based on the citations to the papers it has published. This ignores the facts that there may be huge differences between papers in the same journal; that impact values differ significantly between scientific areas; that research results may be disseminated outside the journals; and that citations may not be a good index of value. Since research is a collaborative activity, it may also be difficult to measure the contribution of each individual scientist. The real danger, however, is not that contributions may be counted wrongly, but that the measuring systems also exert a strong influence on the way we perform research.
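The abstract turns on two standard bibliometric definitions, both of which also appear in the keywords below: the JCR two-year journal impact factor and the h-index. The sketch below illustrates the arithmetic behind each; the figures used are hypothetical and serve only to show how such numbers are produced, not to reproduce any data from the paper.

```python
# Minimal sketch of the two metrics the abstract alludes to:
# the JCR-style two-year journal impact factor and Hirsch's h-index.
# All input figures below are hypothetical, for illustration only.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """JCR two-year impact factor: citations received in year Y to items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

def h_index(citation_counts: list[int]) -> int:
    """h-index: the largest h such that the author has h papers with at
    least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical journal: 150 citations in 2017 to 60 items from 2015-2016.
print(impact_factor(150, 60))        # 2.5
# Hypothetical researcher whose papers are cited [10, 8, 5, 4, 3, 0] times.
print(h_index([10, 8, 5, 4, 3, 0]))  # 4
```

Note that neither formula looks at the content of an individual paper: the impact factor is a journal-level average, which is precisely the property the abstract identifies as hazardous when it is used to score single papers or single researchers.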

Original language: English
Article number: 3
Journal: Journal of Electronic Publishing
Volume: 20
Issue number: 1
DOIs
Publication status: Published - 2017

Keywords

  • Counting research
  • H-index
  • JCR
  • Journal publications
  • Ranking
