The problems of the journal impact factor
Biomedical research suffers from systemic flaws, ranging from PhD training and working conditions to scientific publishing. We need to reconsider how we evaluate research and how we think about scientific progress – the journal impact factor (IF) by itself is not our biggest problem.
Randy Schekman, Nobel Laureate, declared in 2013 that he would boycott the prestigious journals Nature, Science and Cell because they were damaging science. He is not the only top-tier scientist acknowledging that the assessment of scholarly research needs reform.
IF is one of the most debated scientometric indicators, and its use for evaluating articles and their authors has been discussed extensively. IF – a metric originally introduced as a tool for librarians – is calculated by dividing the number of citations a journal receives in a given year to the original articles and review articles it published in the preceding two years by the number of those articles. The two most common criticisms of IF are: (i) that it is calculated inappropriately as the arithmetic mean of a highly skewed distribution of citations and (ii) that it is influenced by factors that have nothing to do with the quality of the journal (for example, document type, research subject or the social status of a paper due to the author’s institution).
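The first criticism can be made concrete with a small sketch. Using entirely hypothetical citation counts for a journal's articles from the preceding two years, the arithmetic mean (the IF) can be pulled far above what a typical article achieves:

```python
# Hypothetical citation counts for the articles a journal published in the
# two preceding years: most are cited little, one is cited heavily,
# which is the typical skewed shape of citation distributions.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 120]

# The impact factor is the arithmetic mean of this distribution.
impact_factor = sum(citations) / len(citations)

# The median shows how unrepresentative that mean is for skewed data.
sorted_c = sorted(citations)
n = len(sorted_c)
median = (sorted_c[n // 2 - 1] + sorted_c[n // 2]) / 2

print(impact_factor)  # 13.8 -- dominated by a single highly cited paper
print(median)         # 2.0  -- the typical paper is cited far less
```

With these invented numbers, the "IF" of 13.8 is driven almost entirely by one outlier, while the median article received only two citations – which is why the mean is widely considered the wrong summary statistic here.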
Novelty needs time
Another fundamental problem is that the scientific community generally assumes that science published in a high-IF journal automatically equals groundbreaking research. Paula Stephan, a professor of economics at Georgia State University (USA), and colleagues have classified papers as ‘non-novel’, ‘moderately novel’ and ‘highly novel’ and compared how they were cited from 2001 to 2015, a much longer period than is used to calculate IF. More novel papers were more likely to be either highly cited or ignored compared with non-novel papers in the same field. The ones that became big hits took time to be recognised – more than three years, and often closer to 15. Highly novel papers also tend to be published in journals with lower IF.
This is illustrated by two examples. Hans Krebs submitted his paper describing the discovery of the citric acid cycle, or Krebs cycle, to Nature in 1937, where it was rejected. He then submitted his findings to the journal Enzymologia, where they were published within two months. In 1953, he was awarded the Nobel Prize in Physiology or Medicine for his discovery of the Krebs cycle. A more recent example is Virginijus Šikšnys from Vilnius University, Lithuania. He was among the first to discover the CRISPR-Cas9 system and submitted his findings in 2012 to Cell, where his paper was rejected. This year, he will receive one of the most prestigious scientific awards, the Kavli Prize. His scientific contributions have finally been recognised – six years later.
Solutions proposed: will they make any difference?
So far, IF has been a handy metric to evaluate scientists and their research quickly, as research evaluation committees (whose members are often not scientists) have to make decisions on the allocation of resources within a short time. Suggestions to improve research evaluation include: (i) using more appropriate evaluation metrics, (ii) publishing in open-access (OA) journals or as preprints, and (iii) adopting multiple research evaluation strategies (e.g. reading the publications).
Using new metrics seems reasonable at first sight, but this solution is merely methodological, and it is not clear whether improved indicators will necessarily give rise to better evaluation practices. Further, most of the proposed new metrics also use citation impact as a quality measure, even though one could argue that citations are not sufficiently accurate indicators of an article’s value. Publishing in OA journals appears to be a solution that can be adopted once evaluation criteria have been changed; until then, it is too risky for not-yet-established scientists. Reading publications seems obvious, but it is often unfeasible, given that decisions have to be made, in some cases, on a daily basis.
The bigger problem is the way researchers think about scientific progress
The scientific community has to think more thoroughly about how it wants research to be evaluated and establish common criteria that can be implemented in the evaluation procedures. This calls for deeper transformations in the way we think about scholarly progress: what is good science and what makes a good scientist?
I have seen students and postdocs who, even before starting a project, already think about the IF of the journal they would end up publishing in. The energy spent on chasing a high-IF publication distracts from the efforts that underlie scientific training. Those scientists will eventually become lab heads themselves and perpetuate the same questionable incentives for research. Yet if we ask recent Nobel Laureates what brought them success in the long term, they often give the same answer: consistent, solid work – not a focus on IF.
I believe that one of the most important skills in science is to identify interesting questions and design the best experiments to answer them, with appropriate controls – something that takes time and energy to learn. This energy should not be wasted on an obsession with IF. In my humble opinion, research should be driven by curiosity and passion. It needs knowledge, perseverance, good mentors, a stimulating scientific environment and some luck to succeed. If we really want to expand the frontiers of knowledge, junior as well as established scientists have to start setting the criteria for what is good and rigorous science again – as if there were no IF.