
Open Science

What is it?

To be a researcher is to find oneself under constant evaluation. Academics' worth is judged by the esteem in which their contributions are held by their peers, decision-makers and others. It is therefore worthwhile to distinguish between evaluation of a piece of work and evaluation of the researcher themselves. Both research and researchers are evaluated through two primary methods: peer review and metrics, the former qualitative and the latter quantitative.

In recent years alternative metrics, or altmetrics, have become a topic in the debate about balanced assessment of research efforts. They complement citation counting by gauging other online measures of research impact, including bookmarks, links, blog posts, tweets, likes, shares, press coverage and the like. An issue underlying all of these metrics is that they are produced by commercial entities (e.g. Clarivate Analytics and Elsevier) using proprietary systems, which can lead to problems with transparency.

Open peer review

Peer review began in the 17th century, when the Royal Society of London (1662) and the Académie Royale des Sciences de Paris (1699) claimed for science the privilege of censoring itself rather than being censored by the church, but it took many years for it to become properly established. As a formal mechanism, peer review is much younger than many assume: the journal Nature, for example, only introduced it in 1967. Although surveys show that researchers value peer review, they also think it could work better. There are frequent complaints that peer review takes too long, that it is inconsistent and often fails to detect errors, and that anonymity shields biases. Open peer review (OPR) therefore aims to bring greater transparency and participation to formal and informal peer review processes.

Being a peer reviewer presents researchers with opportunities to engage with novel research, build academic networks and expertise, and refine their own writing skills. It is a crucial element of quality control for academic work. Yet researchers rarely receive formal training in how to do peer review. Even researchers who are confident with traditional peer review will find that the many forms of open peer review present new challenges and opportunities.

As OPR covers such a diverse range of practices, there are many considerations for reviewers and authors to take into account.

  • Open identities (non-blinded) review fosters greater accountability amongst reviewers and reduces the opportunities for bias or undisclosed conflicts of interest.
  • Open peer review reports add another layer of quality assurance, allowing the wider community to scrutinise reviews to examine the decision-making process.
  • In combination, open identities and open reports are theorised to lead to better reviews, as the thought of having their name publicly connected to a work, or of seeing their review published, encourages reviewers to be more thorough.
  • Open identities and open reports enable reviewers to gain public credit for their review work, thus incentivising this vital activity and allowing review work to be cited in other publications and in career development activities linked to promotion.
  • Open participation could overcome problems associated with editorial selection of reviewers (e.g. biases, closed-networks, elitism). Especially for early career researchers who do not yet receive invitations to review, such open processes may also present a chance to build their research reputation and practice their review skills.

There are some potential pitfalls to watch out for, including:

  • Open identities remove the anonymity of reviewers (single-blind) or of authors and reviewers (double-blind) that is traditionally in place to counteract social biases (although there is not strong evidence that such anonymity has been effective). It is therefore important for reviewers to constantly question their assumptions, to ensure their judgements reflect only the quality of the manuscript and not the status, history, or affiliations of the author(s). Authors should do the same when receiving peer review comments.
  • Giving and receiving criticism is often a process fraught with unavoidable emotional reactions - authors and reviewers may disagree, sometimes on subjective grounds, about how to present the results and/or what needs improvement, amendment or correction. With open identities and/or open reports, this transparency could exacerbate such difficulties. It is therefore essential that reviewers communicate their points clearly and civilly, to maximise the chances that they will be received as valuable feedback by the author(s).
  • Lack of anonymity for reviewers in open identities review might subvert the process by discouraging reviewers from making strong criticisms, especially against higher-status colleagues.
  • Finally, given these issues, potential reviewers may be more likely to decline to review.

Evaluation

Regarding evaluation, current rewards and metrics in science and scholarship are not (yet) in line with Open Science. The metrics used to evaluate research (e.g. Journal Impact Factor, h-index) do not measure - and therefore do not reward - open research practices. Furthermore, many evaluation metrics, especially certain types of bibliometrics, are not as open and transparent as the community would like.
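
To illustrate how purely quantitative these indicators are, the sketch below computes the h-index and a simplified two-year Journal Impact Factor from citation counts alone. It is a minimal illustration in Python with made-up numbers, not an official implementation of either metric, and neither calculation says anything about the openness or quality of the underlying work.

  # Minimal sketch: two common quantitative metrics, computed from counts alone.

  def h_index(citation_counts):
      """Largest h such that h outputs each have at least h citations."""
      counts = sorted(citation_counts, reverse=True)
      h = 0
      for rank, cites in enumerate(counts, start=1):
          if cites >= rank:
              h = rank
          else:
              break
      return h

  def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
      """Citations received in year Y to items published in Y-1 and Y-2,
      divided by the number of citable items published in Y-1 and Y-2."""
      return citations_to_prev_two_years / citable_items_prev_two_years

  # Example with made-up numbers, for illustration only.
  print(h_index([10, 8, 5, 4, 3, 0]))      # -> 4
  print(round(impact_factor(210, 70), 2))  # -> 3.0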

However, more and more funders and institutions are taking steps to support Open Science by encouraging greater openness, building related metrics and quantifying outputs, as well as experimenting with alternative research practices and assessment, open data, citizen science and open education.

At the UFS, take a look at the efforts of the Digital Scholarship Centre (DSC) and the UFS's Open Educational Resources (OERs).

Open metrics

The San Francisco Declaration on Research Assessment (DORA) recommends moving away from journal-based evaluation, considering all types of output, and using various forms of metrics and narrative assessment in parallel. The Leiden Manifesto provides guidance on how to use metrics responsibly.

Altmetrics have the following benefits:

  • They accumulate more quickly than citations
  • They can gauge the impact of research outputs other than journal publications (e.g. datasets, code, protocols, blog posts, tweets, etc.)
  • They can provide diverse measures of impact for individual objects

The timeliness of altmetrics presents a particular advantage to early-career researchers, whose research impact may not yet be reflected in significant numbers of citations, but whose career progression depends upon positive evaluations.
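
As a concrete illustration of how such counts can be gathered, here is a minimal sketch that queries the public Altmetric.com API for a single DOI. It assumes Python with the requests library; the endpoint, the example DOI and the response field names are assumptions drawn from that service's public documentation and may differ in practice.

  # Hedged sketch: fetch altmetric counts for one DOI from Altmetric.com.
  import requests

  def fetch_altmetrics(doi):
      """Return the JSON record for a DOI, or None if the DOI has no
      recorded online attention yet (the API answers 404 in that case)."""
      response = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
      if response.status_code == 404:
          return None
      response.raise_for_status()
      return response.json()

  record = fetch_altmetrics("10.1038/nature12373")  # example DOI, for illustration
  if record:
      # These counts typically accumulate within days of publication,
      # long before citations appear in bibliometric databases.
      print(record.get("cited_by_tweeters_count", 0), "tweets")
      print(record.get("cited_by_feeds_count", 0), "blog posts")
      print(record.get("score", 0), "attention score")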

Questions, obstacles and common misconceptions

Is research evaluation fair?

Research evaluation is only as fair as its methods and techniques. Metrics and altmetrics attempt to gauge research quality through quantitative measures of research output, which may or may not reflect that quality accurately.