Abstract

Systematic evaluations of publicly funded research typically employ a combination of bibliometrics and peer review, but it is not known whether the bibliometric component introduces biases. This article compares three alternative mechanisms for scoring 73,612 UK Research Excellence Framework (REF) journal articles from all 34 field-based Units of Assessment (UoAs), 2014-17: peer review, field-normalised citations, and journal-average field-normalised citation impact. All three were standardised onto a four-point scale. The results suggest that in almost all academic fields, bibliometric scoring can disadvantage departments publishing high-quality research, with the main exception of article citation rates in chemistry. Thus, introducing journal- or article-level citation information into peer review exercises may have a regression-to-the-mean effect. Bibliometric scoring slightly advantaged women compared to men, but this varied between UoAs and was most evident in the physical sciences, engineering, and social sciences. In contrast, interdisciplinary research gained from bibliometric scoring in about half of the UoAs, but substantially in only two. In conclusion, of the three potential sources of bias examined, the most serious seems to be the tendency for bibliometric scores to work against high-quality departments, assuming that the peer review scores are correct. This is almost a paradox: although high-quality departments tend to get the highest bibliometric scores, bibliometrics conceal the full extent of departmental quality advantages. This should be considered when using bibliometrics or bibliometric-informed peer review.
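The abstract does not define how field-normalised citation scores are constructed or how they were mapped onto a four-point scale. The sketch below shows one common construction for illustration only: it assumes normalisation by the field's mean citation count and quartile-based binning, neither of which is confirmed by the paper. All function and field names are hypothetical.

```python
import statistics

def field_normalised_scores(articles):
    """Hypothetical field normalisation: divide each article's citation count
    by the mean count for its field. `articles` is a list of dicts with
    'field' and 'citations' keys (an assumed schema, not the paper's data)."""
    # Mean citations per field, used as the normalisation baseline.
    by_field = {}
    for a in articles:
        by_field.setdefault(a["field"], []).append(a["citations"])
    field_means = {f: statistics.mean(c) for f, c in by_field.items()}
    # Normalised citation score = article citations / field mean.
    return [a["citations"] / field_means[a["field"]] for a in articles]

def to_four_point_scale(scores):
    """Standardise normalised scores onto a four-point scale via quartile
    cut points (one plausible reading of 'standardised onto a four-point
    scale'; the paper's actual mapping may differ)."""
    ranked = sorted(scores)
    n = len(ranked)
    cuts = [ranked[n // 4], ranked[n // 2], ranked[(3 * n) // 4]]
    # Score = 1 + number of quartile thresholds reached.
    return [1 + sum(s >= c for c in cuts) for s in scores]

if __name__ == "__main__":
    demo = [
        {"field": "chemistry", "citations": 40},
        {"field": "chemistry", "citations": 5},
        {"field": "sociology", "citations": 12},
        {"field": "sociology", "citations": 2},
    ]
    print(to_four_point_scale(field_normalised_scores(demo)))  # [4, 1, 3, 2]
```

Note how normalisation equates articles across fields with very different citation cultures: the top sociology article outranks the weaker chemistry one despite fewer raw citations, which is the point of field normalisation.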
