Ranking overrated in publishing today

2016-01-11

The current system for scientific publication was created at a time when there were fewer articles and less pressure to publish. Journal rankings now play an ever larger role, and the problems in publishing are growing.


The process of scientific publication was shaped for the scientific community of the 1950s, not for today’s situation. This is emphasised by Catriona MacCallum, open access advocate at the Public Library of Science, PLOS, and an editor of the scientific journal PLOS One.

“The scientific community is global now, research is often based on large collaborations and there is a huge amount of information available on the internet. The tools we use are very different and we now have access to large databases.”

More and more scientific articles are being published. The downside is that the quality often seems to suffer.

The current system has a number of problems, which appear to be growing: it is difficult to reproduce published results, the proportion of retracted articles is rising, and there are flaws in the review system. It is also harder to get negative, less exciting results published than new discoveries and results with significant p-values.

The chase for impact factors

Since 1964, scientific journals have been ranked using so-called impact factors in the Science Citation Index (SCI), which has been owned by the company Thomson Reuters since 1991. The impact factor is based on how often the journal is cited and has had a major influence on how scientific articles are evaluated.
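In concrete terms, a journal’s standard two-year impact factor for a year $Y$ is (with notation introduced here purely for illustration):

\[
\mathrm{IF}_Y = \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where $C_Y(y)$ is the number of citations received during year $Y$ by items the journal published in year $y$, and $N_y$ is the number of citable items the journal published in year $y$.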

It is also used when scientific journals market themselves, says Catriona MacCallum. The ultimate goal for a researcher today is to get published in a journal with a high impact factor, such as Science or Nature. That has not always been the case, however.

Bruce Alberts has had a long career as a biochemist and textbook author, including his classic, The Cell. When he started out as a researcher, it was not so important to be published in high-ranking journals.

“People didn’t care about Science or Nature like they do now. You discovered something and others built on it. I purified proteins and most of my publications were in the Journal of Biological Chemistry.”

He believes that the change came gradually after the 1990s, driven by the organisations that allocate resources and services to researchers.

“When they are too lazy to evaluate the work themselves, they look at the impact factor of the journal which publishes the article.”

He points out that high-ranking, broad-interest journals like Science want exciting articles that reveal some great discovery. The problem, he argues, is that when researchers are constantly under pressure to make new discoveries, there is a large risk that their results will not be reproducible.

“People focus on the wrong thing. Biological systems are complex.”

Poor descriptions of methodology

As well as the chase for impact factors, Bruce Alberts sees another problem: descriptions of how research is carried out are often not sufficiently detailed.

“In half of the cases you can’t work out how it was done. The Method section should be like a cookbook.”

The tradition of skimpy method descriptions arose because it was costly to print long texts in paper journals, according to Bruce Alberts. He believes the problem has persisted because of an exaggerated fear of being accused of plagiarism.

Editorial offices use software that screens submitted articles for text that has already been published. This makes researchers afraid to write out their method descriptions, which for natural reasons are often identical across articles.

“Journals should make it clear that long, detailed method descriptions are good,” he underlines. “Complete method descriptions should be a requirement.”

Open access – online publication

With the advent of the internet came open access: publishing online at no cost to those who want to read the articles. This makes research results more accessible, but requires new financing models.

One model is that the authors of an article pay the open access journal. Another is to put the article on a server financed by a foundation or a university, free of charge. An example of the latter model is the database arXiv (described in Curie’s next article on scientific publication).

Catriona MacCallum works as an advisory editor at PLOS One, a popular open access journal where authors generally pay for publication, although exemptions are granted in certain cases. The charge is often paid by the researcher’s funder.

She stresses that this form of publication maintains just as high standards of peer review as traditional journals financed through subscription fees.

“Peer review is fundamental.”

PLOS One receives 30,000 manuscripts per year and also publishes studies where the hypothesis was not confirmed, so-called negative studies, as well as replication studies.

Colleagues review articles

One of the fundamental pillars in maintaining the quality of scientific publication is peer review, in which researchers in the same field scrutinise each other’s articles. Unfortunately, the model does not work well for examining major research projects, says Catriona MacCallum.

“There are often huge quantities of data. Can one editor and two reviewers really hope to cover all that?”

There are other problems with the peer review system, too. Authors often suggest which researchers should review their articles, and in the past they were also allowed to supply the reviewers’ e-mail addresses along with the article. This led some researchers to pose as their own reviewers by providing false e-mail addresses.

Catriona MacCallum does not believe this was common practice, but points out that the pressure to publish is higher in certain countries and among certain groups of scientists, which can lead some to take shortcuts.

Bruce Alberts mentions a different problem: established researchers often give the reviewing work to their postdocs.

“It is not good for the journal. Postdocs have the wrong incentives: they want to impress their group leader by finding as many weaknesses as possible in a manuscript.”

He feels that peer review worked better when journals arranged for reviewers to see each other’s reviews and comment on them, albeit anonymously.

List their most important articles

Looking ahead, Bruce Alberts hopes that academic institutions will stop letting the number of articles in high-ranking journals determine recruitment. He recommends that universities ask candidates to list their five most important articles and briefly describe what they are about.

“In some advertisements for academic posts, it is now clearly indicated that they will not be looking at impact factors but at the content of the articles.”

Also read in Curie: Forum for criticism of published articles

Text: Anja Castensson
Photo: Kurt Nielsen / TT Nyhetsbyrån