#LINO18 panel discussion 'Publish or Perish'. Photo/Credit: Patrick Kunkel/Lindau Nobel Laureate Meetings
Publish or perish – anyone working in research or academia knows that phrase all too well and has felt the pressure and dread it causes. The original purpose of publishing new results was, of course, to disseminate and archive knowledge. However, while the exchange of knowledge has always been an integral part of research and academia, the current situation – researchers climbing over one another or, worse yet, molding their results to fit a promising narrative in order to land a publication in one of the glamour journals – is unsustainable. The rationale behind wanting a “big” publication is simple: a paper in a journal with a high glamour factor (closely correlated with the journal’s impact factor), such as Nature, Science or Cell, promises high visibility for the leading authors and presumably heightens their value and prospects when applying for a new fellowship or grant.
The journal impact factor, however, is an average over all papers a journal has published in a given period. This means that a few very good, highly cited articles are enough to raise the impact factor of the entire journal – and thereby the perceived value of all the other papers published in the same journal, which might contribute comparatively little to the progress of the research community.
Recently though, there have been more and more attempts to change that system and to find a new way of measuring scholarly achievement other than the impact factor. But what exactly needs to change, and how can this be achieved? These were just some of the many questions discussed during a Panel Discussion on Wednesday afternoon of the 68th Lindau Nobel Laureate Meeting.
The panel discussion itself was nothing short of a verbal boxing match, with host Alaina Levine doing her best to ensure that everyone got a fair shot. The sparring partners:
- Daniel Ropers, CEO of Springer Nature, Germany
- Maria Leptin, Director of EMBO, Germany
- Randy W. Schekman, received the Nobel Prize in 2013 in Physiology or Medicine for the discovery of machinery regulating vesicle traffic; former editor-in-chief of PNAS and editor of eLife since 2011
- Amy Shepherd, graduate student at the University of Melbourne, Australia
- Harold E. Varmus, received the Nobel Prize in 1989 in Physiology or Medicine for the studies of the genetic basis of cancer
The first round of the discussion started innocently enough: Ropers pointed out that he was fairly new to the world of life science publishing but stressed how much he values this world and especially the hard work of the researchers. The other panellists briefly summarised the history of scientific publishing and Schekman highlighted just how much it has changed: “When I was a student, all the articles were in hardcover copies, and I could look at them in the library; now, everything is online!”
However, it wasn’t long until the panellists hit on the heavily debated topic of impact factors. Schekman proclaimed: “The impact factor is a simplification and oftentimes a mismeasurement of scholarship!” To which Varmus added: “We can’t allow the publishing process to become a surrogate for measuring scholarly value!” Some might argue that this is already the case, especially from the perspective of a young scientist who is only just starting out. “We are constantly told by our supervisors and our peers that we need a high-impact paper to advance our career,” Shepherd argued.
So, what could be an alternative measurement of scholarly prowess? Although the impact factor has been debated and criticised for many years, few, if any, solutions have emerged, as immunologist and senior lecturer at Imperial College London John Tregoning points out in a recent comment in Nature.
During the discussion, Varmus suggested assessing the citation record of each individual paper rather than the impact factor of the whole journal. Many publishers already report such article-level metrics, but it is not yet common practice to use them in job or fellowship applications.
Another option mentioned by Varmus and Schekman would be to ask the researchers to write a single-paragraph narrative summarising the importance of their newest results to their respective fields. “Everyone has time to write or read a single paragraph,” argues Schekman.
Leptin introduced yet another initiative: the San Francisco Declaration on Research Assessment (DORA), developed by a group of editors and publishers of scholarly journals during the Annual Meeting of the American Society for Cell Biology (ASCB) in San Francisco in December 2012. DORA recommends that funding agencies and institutions rely less on the impact factor and more on other aspects of a researcher’s work, such as teaching or science outreach, when assessing future candidates.
The panellists all agreed that something has to change in the way research quality is assessed, and Leptin added: “We (i.e., EMBO as a funding agency) know that not everyone can have a high-impact-factor paper at the end of their PhD. When we assess prospective candidates, we value the personal or motivational statement of the researcher far more than their publication history.” Other funders such as the Wellcome Trust and the Howard Hughes Medical Institute also rely less and less on the impact factor in their decision-making.
Soon after, however, the topics of Open Science and Open Access were brought up, sparking a very lively debate about the fact that a handful of publishers have a stranglehold on scientific publishing. Here, Ropers was very much thrown into the deep end of the pool and experienced the full brunt of frustration of a room full of young scientists worried about their academic future, as well as experienced scientists who have come up against paywalls and obscure publishing policies time and again.
Yet, as easy as it is to vilify him and his company, one must remember that Springer Nature is certainly not the only publishing house making a sizeable profit – Wiley and Elsevier, to name just two, are not non-profit organisations either.
Academic publishing in general is a billion-dollar industry, and although making money is not inherently bad, it is difficult to explain to the research community why they should have to pay huge sums of money to submit an article as well as to read it. Obviously, a business needs to make money – after all, it employs people and has to maintain a certain infrastructure. But again, why does this have to come out of the researchers’ pockets – twice?
However, profit margins and open science are not mutually exclusive; they might not even be two sides of the same coin but rather two aspects of a gigantic scientific enterprise. While some might argue that “open science is just science done right,” Ropers made the point that the scientific community will probably have to accept that moving to a completely open publishing system will not be easy. To this, Leptin gave an impassioned response: “No, the time is now! Politicians have already decided, and the big publishing houses will have to adjust as fast as possible in order to survive!” She was referring, of course, to a recent EU guideline demanding that any results from publicly funded EU research grants be published in an open access journal.
While this is a great incentive and a laudable example, there is, again, the issue of money: many open access journals charge substantial publication fees that an independent postdoctoral researcher might not be able to afford. The solution: if grant agencies demand these kinds of publications, they have to account for the fees in their funding.
Another aspect of open science is the review process, which for a long time took place behind closed doors – authors rarely learn who reviewed their work. To counter this, Schekman recently started an experiment at the journal eLife in which the peer reviews – including the names of the reviewers – are published alongside the research paper.
Towards the end of the debate, a couple of additional issues were raised. The results that end up in a published paper are usually only a fraction of the data that was collected; several “negative” results typically precede the “publishable” ones. In this context, negative doesn’t necessarily mean the opposite or an unexpected result, but simply that certain approaches or models don’t work. The current tendency is to hide these data away; the problem, however, is that someone else might then try the exact same approach, which again won’t work – wasting time and money in the process. Therefore, Shepherd urged: “We need to find a way to publish negative results in order to save immense amounts of money and time!”
Again, Schekman and Varmus both argued for also posting research papers on preprint servers as soon as they are ready. These archives are not primarily meant as a repository for negative results, but the reviews and comments from fellow researchers are invaluable.
Coming full circle, when confronted again with the fact that many supervisors of the young scientists in the audience still push high-impact-factor papers on their students and praise them as the ultimate goal, Varmus summed up the main issue: “The change of the publication process has to come out of the scientific community. But we can’t expect our trainees to do the right thing, unless we do the right thing!”