To be a researcher is to find oneself under constant evaluation. Academics' worth is judged by the esteem in which their contributions are held by their peers, decision-makers and others. It is therefore worth distinguishing between evaluation of a piece of work and evaluation of the researcher themselves. Both research and researchers are evaluated through two primary methods: peer review and metrics, the former qualitative and the latter quantitative.
In recent years, alternative metrics, or altmetrics, have become a topic in the debate about balanced assessment of research. They complement citation counting by gauging other online measures of research impact, including bookmarks, links, blog posts, tweets, likes, shares, press coverage and the like. An issue underlying many of these metrics is that they are produced by commercial entities (e.g. Clarivate Analytics and Elsevier) based on proprietary systems, which can limit transparency.
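As an illustration of where such counts come from, the sketch below queries Altmetric's public v1 API for a single DOI. This is a minimal sketch, not a definitive client: the DOI is hypothetical, and the response field names are assumptions based on the counts the API is documented to report.

```python
import json
import urllib.error
import urllib.request

def altmetric_counts(doi):
    """Fetch attention counts for a DOI from Altmetric's public v1 API.
    Returns None if the output has no recorded attention."""
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        return None  # a 404 means no attention data exists for this DOI
    # Field names below are assumptions based on the API's documented counts
    return {key: data.get(key, 0)
            for key in ("cited_by_tweeters_count",   # tweets
                        "cited_by_fbwalls_count",    # Facebook shares
                        "cited_by_feeds_count",      # blog posts
                        "cited_by_msm_count")}       # press coverage

# Example call with a hypothetical DOI:
print(altmetric_counts("10.1000/example.doi"))
```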
Peer review has its origins in the 17th century, when the Royal Society of London (1662) and the Académie Royale des Sciences de Paris (1699) secured for science the privilege of censoring itself rather than being censored by the church; even so, it took many years for peer review to become properly established in science. As a formal mechanism, peer review is much younger than many assume: the journal Nature, for example, only introduced it in 1967. Although surveys show that researchers value peer review, they also think it could work better. Common complaints are that peer review takes too long, that it is inconsistent and often fails to detect errors, and that anonymity shields biases. Open peer review (OPR) hence aims to bring greater transparency and participation to formal and informal peer review processes.
Being a peer reviewer presents researchers with opportunities for engaging with novel research, building academic networks and expertise, and refining their own writing skills. It is a crucial element of quality control for academic work. Yet researchers rarely receive formal training in how to do peer review. And even where researchers feel confident with traditional peer review, the many forms of open peer review present new challenges and opportunities.
As OPR covers such a diverse range of practices, there are many considerations for reviewers and authors to take into account, as well as some potential pitfalls to watch out for.
Regarding evaluation, current rewards and metrics in science and scholarship are not (yet) in line with Open Science. The metrics used to evaluate research (e.g. the Journal Impact Factor and the h-index) do not measure, and therefore do not reward, open research practices. Furthermore, many evaluation metrics, especially certain types of bibliometrics, are not as open and transparent as the community would like.
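To make these metrics concrete: the h-index is the largest number h such that a researcher has at least h papers each cited at least h times, while the Journal Impact Factor for a given year is the mean number of citations received that year by a journal's items published in the two preceding years. A minimal Python sketch of the h-index calculation follows; the citation counts are invented for illustration.

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have been cited at least h times each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4,
# because four papers have at least four citations each.
print(h_index([10, 8, 5, 4, 3]))  # -> 4
```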
However, more funders and institutions are taking steps to support Open Science by encouraging greater openness, building related metrics and quantifying outputs, as well as by experimenting with alternative research practices and assessment, open data, citizen science and open education.
At the UFS, take a look at the efforts of the Digital Scholarship Centre (DSC) and the university's Open Educational Resources (OERs).
The San Francisco Declaration on Research Assessment (DORA) recommends moving away from journal-based evaluation, considering all types of output, and using various forms of metrics and narrative assessment in parallel. The Leiden Manifesto provides guidance on how to use metrics responsibly.
Altmetrics offer several benefits, chief among them timeliness. This presents a particular advantage to early-career researchers, whose research impact may not yet be reflected in significant numbers of citations, but whose career progression depends upon positive evaluations.
Is research evaluation fair?
Research evaluation is only as fair as its methods and techniques. Metrics and altmetrics attempt to measure research quality through the quantity of research outputs and the attention they receive; this can be an accurate proxy, but it does not have to be.