- The paper’s main contribution is clarifying the strengths and weaknesses of various citation impact indicators used for evaluating research.
- It compares major bibliographic databases, namely Web of Science (WoS), Scopus, and Google Scholar, to highlight differences in coverage and indexing practices.
- The review details normalization and fractional counting methods, offering insights into accurate credit allocation and potential indicator improvements.
Overview of "A Review of the Literature on Citation Impact Indicators" by Ludo Waltman
Ludo Waltman's paper examines the intricate landscape of citation impact indicators, which play a vital role in research evaluation. The review covers foundational aspects as well as specialized topics such as normalization techniques, counting methods for co-authored publications, and journal-level indicators. Here, I offer a high-level synthesis focused on the empirical results, methodological choices, and future directions in citation analysis research.
Bibliographic Databases
Waltman begins by evaluating the primary bibliographic databases used in citation analysis: Web of Science (WoS), Scopus, and Google Scholar. Each has distinct coverage characteristics and limitations. WoS and Scopus are subscription-based and offer curated multidisciplinary coverage, though they differ in document-type classification and indexing accuracy. Google Scholar, in contrast, provides broader but less curated coverage.
Comparisons of WoS and Scopus indicate that Scopus covers more sources, including more conference proceedings, while WoS's longer historical record and stable indexing make it reliable albeit narrower. Google Scholar's appeal lies in its extensive reach, although limited quality control and a lack of transparency about what it indexes are notable drawbacks.
Core Topics in Citation Impact Indicators
Selection of Publications and Citations
The selection of publications and citations is crucial for accurate citation impact measurement. Common strategies for reducing noise and bias include excluding certain document types, publications in languages other than English, and national (as opposed to international) journals. Excluding non-English publications, for instance, corrects a bias against countries where local-language publishing is prevalent, since such publications tend to attract relatively few citations.
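To make the idea concrete, here is a minimal sketch of such a selection step over hypothetical publication records; the field names and values are invented for illustration and are not taken from the review.

```python
# Hypothetical publication records; "doc_type" and "language" are
# invented field names for illustration.
records = [
    {"title": "Paper A", "doc_type": "article", "language": "en"},
    {"title": "Editorial B", "doc_type": "editorial", "language": "en"},
    {"title": "Paper C", "doc_type": "article", "language": "de"},
]

# Two common exclusion strategies: keep only substantive document types
# and only English-language publications.
INCLUDED_TYPES = {"article", "review"}
selected = [r for r in records
            if r["doc_type"] in INCLUDED_TYPES and r["language"] == "en"]

print([r["title"] for r in selected])  # ['Paper A']
```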
Self-citations add another layer of complexity. Excluding them reduces inflated citation counts, but how sensitive indicators such as the h-index are to self-citations remains contested. Studies report effects of different sizes at the macro (country), meso (institution), and micro (individual researcher) levels, so the appropriate treatment depends on the evaluation context.
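As an illustration of the sensitivity question, the sketch below computes an h-index with and without self-citations; the paper data are invented, and this is not a procedure prescribed by the review.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for i, c in enumerate(sorted(citation_counts, reverse=True), start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# Hypothetical per-paper records: (total citations, self-citations).
papers = [(25, 5), (18, 2), (9, 4), (6, 0), (3, 3)]

h_all = h_index([total for total, _ in papers])
h_no_self = h_index([total - self_c for total, self_c in papers])
print(h_all, h_no_self)  # 4 4 -- here the h-index barely moves
```

In this toy case the indicator is unchanged; in general the size of the effect depends on the citation distribution, which is one reason the level of analysis matters.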
Normalization Techniques
Normalization addresses disparities in citation behavior across fields and publication years. The paper contrasts two main cited-side methods, the average of ratios and the ratio of averages, with empirical evidence pointing to only minor differences between them. A more sophisticated alternative, citing-side normalization, corrects for differences in reference list lengths across fields; it is promising, but its efficacy relative to traditional methods remains debated.
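The two cited-side methods can be stated compactly. The sketch below, with invented citation counts c_i and expected (field/year average) counts e_i, shows how they can diverge:

```python
# Field normalization of a set of papers, given each paper's citation
# count c_i and the expected count e_i (the average for its field and
# publication year). All numbers are invented.
citations = [10, 0, 4, 30]          # c_i: observed citations per paper
expected = [5.0, 2.0, 8.0, 10.0]    # e_i: field/year baselines

# Average of ratios: normalize each paper first, then average.
average_of_ratios = sum(c / e for c, e in zip(citations, expected)) / len(citations)

# Ratio of averages: aggregate first, then normalize once.
ratio_of_averages = sum(citations) / sum(expected)

print(average_of_ratios)  # 1.375
print(ratio_of_averages)  # 1.76
```

The average of ratios weights every paper equally, while the ratio of averages is effectively a ratio-weighted average in which papers with higher expected citation counts count for more; as the review notes, in practice the two usually land close together.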
Counting Methods
Given the rise of collaborative research, Waltman highlights the importance of accurate credit allocation for multi-authored publications. Fractional counting reduces the inflationary effect of full counting, but it may oversimplify by distributing credit equally regardless of actual contribution. Weighted schemes that take author position and other factors into account offer more refined alternatives, though their implementation and acceptance are mixed.
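Here is a sketch of three counting schemes for a single four-author paper; the author list is invented, and harmonic counting is used as one example of a position-based weighting discussed in the literature.

```python
# Credit allocation for one co-authored paper under three counting schemes.
authors = ["A", "B", "C", "D"]  # hypothetical author list, in byline order
n = len(authors)

full = {a: 1.0 for a in authors}            # full counting: everyone gets 1
fractional = {a: 1.0 / n for a in authors}  # fractional: equal 1/n shares

# Harmonic counting: the k-th author gets weight 1/k, normalized so the
# shares sum to 1 -- one example of a position-based weighting.
denom = sum(1.0 / k for k in range(1, n + 1))
harmonic = {a: (1.0 / k) / denom for k, a in enumerate(authors, start=1)}

print(fractional)  # {'A': 0.25, 'B': 0.25, 'C': 0.25, 'D': 0.25}
print(harmonic)    # roughly: A 0.48, B 0.24, C 0.16, D 0.12
```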
Citation Impact Indicators for Journals
The impact factor (IF) remains the most recognized journal-level indicator, despite limitations such as a short citation window that suits slow-citing fields poorly. Alternatives such as the five-year IF and diachronic measures offer adjustments yet face adoption challenges. Normalization techniques and recursive indicators such as the Eigenfactor and SCImago Journal Rank (SJR), which weight citations by the status of the citing journal, aim to enable fairer cross-field comparisons but require further validation.
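For reference, the standard two-year impact factor of a journal j in year y is defined as:

```latex
\mathrm{IF}(j, y) =
  \frac{\text{citations received in year } y \text{ by items published in } j
        \text{ in years } y-1 \text{ and } y-2}
       {\text{number of citable items published in } j
        \text{ in years } y-1 \text{ and } y-2}
```

The five-year IF simply widens this window to years y-1 through y-5, which matters in slow-citing fields such as mathematics.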
Implications and Future Directions
Waltman concludes with recommendations aimed at advancing the field of citation analysis:
- Stringent Introduction of New Indicators: Researchers are advised not to introduce new indicators unless they offer clear, demonstrable improvements over existing ones.
- Theoretical Foundation: Emphasis should be placed on a robust theoretical underpinning of citation impact indicators, enhancing the understanding of their construction and implications.
- Practical Usage: Greater attention is needed towards how these indicators are employed in real-world evaluations, ensuring alignment with the practical needs and expectations of end-users.
- Leveraging New Data Sources: Exploiting advancements in digital publishing and open access can lead to more sophisticated citation metrics, potentially incorporating quantitative and qualitative data from full-text analyses.
Waltman's review captures the evolving, complex nature of citation impact indicators and lays a foundation for further advances while addressing the practical and theoretical challenges that remain in the domain.