The impact factor is a way of ranking scientific journals on the basis of how often their articles are cited. Nowadays, however, the measure is also used to assess research and researchers. Criticism of the abuse of JIF in evaluating research has been around almost as long as the measure itself.

Factor that influences research careers

2016-09-28

Crucial for researchers’ careers, proudly presented on journals’ websites, and influential in grant allocations: the Journal Impact Factor – a method of ranking scientific journals on the basis of how often their articles are cited – is often used and often criticised, especially when it is used to evaluate research and researchers.

“Like nuclear energy, the impact factor is both good and bad”. Those were the words of Eugene Garfield, inventor of the Journal Impact Factor, JIF, in a presentation in 2005. The comparison may seem a trifle exaggerated, but the fact is that JIF is a widely used and fiercely criticised bibliometric figure with the power to influence researchers’ careers.

In short, JIF is a way of ranking scientific journals according to how often their articles are cited. A journal’s JIF for a given year is calculated on the basis of how often articles published in the two previous years were cited during the year (there is a more detailed definition in the information box). It is calculated once every year by Thomson Reuters and is based on the citations in their database, Web of Science.
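As a minimal sketch of that calculation – with invented numbers purely for illustration – the two-year definition described above can be written in a few lines of Python:

```python
def journal_impact_factor(citations, citable_items):
    """JIF for year Y, per the standard two-year definition: citations
    received during Y to items the journal published in Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    return citations / citable_items

# Invented example: a journal whose 2014-2015 items were cited
# 1,200 times during 2016, from 400 citable items, gets a 2016 JIF of 3.0.
print(journal_impact_factor(1200, 400))  # 3.0
```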

Came in the 1960s

The measure was born in the early 1960s. At that time, Eugene Garfield needed a way to select which journals would be part of his newly created Science Citation Index – a list of where and by whom scientific articles were cited.

Journals that were often cited would obviously be included, but comparing total citation counts alone would penalise small-circulation journals. The Journal Impact Factor, JIF, was thus created to make journals comparable regardless of their circulation.

Librarians quickly adopted JIF as an aid to managing their journal holdings. With this accessible, simple and seemingly reliable measure of the status of different publications, it also became more important for researchers to publish their results in the “right” journal. And since the value of research increasingly has to be demonstrated in the form of results published in prestigious journals, the influence of the impact factor has grown.

JIF is one of the most frequently used quantitative measures for assessing scientific journals. It is an important marketing tool for publishers and is used by scientists to help decide where to publish their research results, especially in areas with good coverage in the Web of Science, such as economics and biomedicine.

Assessing research and researchers

However, the measure has also come to be used for other purposes. The assumption that a journal’s JIF is representative of its individual articles and their authors is widely used to assess both research and researchers. A publication can then be weighted by the JIF of the journal in which it appears and aggregated into a measure of, for example, a researcher’s or an institution’s performance.

Ton van Raan

This assumption does not hold, however, because a journal’s JIF says nothing about the real impact of its individual articles, according to Ton van Raan, professor emeritus of quantitative science studies at Leiden University in the Netherlands.

“Many articles, even in top-rated journals such as Nature and Science, are cited rarely or not at all. It is very possible that something you publish in a journal with a low impact factor later has a very high impact,” he says.

The citation frequency for individual articles in a journal has a wide spread with a skewed distribution. A few articles are widely cited, while most are cited infrequently or not at all.

Simple but not smart

A journal’s impact factor is largely determined by a small number of widely cited articles, and nothing guarantees that publication in a high-JIF journal will lead to an individual article being cited.
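A toy illustration of that skew, with invented citation counts: the JIF is in effect a mean, and a couple of heavily cited papers can pull it far above what a typical article in the journal actually receives.

```python
from statistics import mean, median

# Invented citation counts for 20 articles in one journal's JIF window:
# two heavily cited papers, the rest cited rarely or not at all.
citations = [310, 95, 42, 8, 5, 4, 3, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

print(mean(citations))    # 23.85 - a JIF-style average, driven by two papers
print(median(citations))  # 1.5  - the typical article is cited once or twice
```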

Despite this obvious fact, it is not unusual for JIF to be used in evaluations of researchers prior to employment, the assessment of funding applications or the allocation of research resources at different levels.

“It is quick and easy, but not smart. JIF is more or less OK for checking what position a journal has within a field of research, but not for the evaluation of research,” says Ton van Raan, whose institution has developed one of a number of alternative measures of a journal’s impact (see fact box).

The JIF is also criticised for being easy to manipulate. Every year, Thomson Reuters suppresses a number of titles that are considered to have cheated to increase their impact factor. Examples include journals agreeing to cite each other’s articles, or excessive self-citation. The latter is possible because the calculation does not distinguish whether citations come from the journal itself or from other journals.
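A toy comparison, again with invented numbers, shows how that opening works: the standard calculation sums all citations regardless of their source, so a journal that cites itself heavily inflates its own figure.

```python
# Invented citation records for "Journal A" over one JIF window:
# (citing_journal, citation_count) pairs.
records = [("Journal A", 550), ("Journal B", 150)]
citable_items = 100

jif_standard = sum(n for _, n in records) / citable_items
jif_external = sum(n for j, n in records if j != "Journal A") / citable_items

print(jif_standard)  # 7.0 - the published figure includes self-citations
print(jif_external)  # 1.5 - counting only citations from other journals
```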

Different publishing patterns

Differences in publication patterns also make the JIF unsuitable for comparisons between different areas of science. In mathematics and technology, for example, relatively few citations are made, so journals in these areas receive lower JIF scores. Physics and biomedicine are cited heavily, giving their journals a higher JIF.

Gustaf Nelhans

“The time between submission of a manuscript and publication is often longer for social science and humanities journals. As a result, their impact factor is very low when measured over two years. Nor is there any evidence that two years is the ideal period over which to measure citation rates,” says Gustaf Nelhans, senior lecturer in library and information science at the University of Borås.

Together with his colleague Björn Hammarfelt, among others, he has examined how Swedish universities use various bibliometric measures, including the JIF, when allocating government research funding within a higher education institution. The so-called Norwegian model is often used, awarding points based on where researchers publish their results.

The idea is that publication in a high-quality channel should earn more points, and the ranking of publication channels, the “Norwegian list”, is based in part on the journals’ impact factors. JIF is also used in various allocation models at several other universities (read here in Curie about how some Swedish funders and higher education institutions use JIF).

Problematic shortcut

The rationale for steering researchers towards higher-ranking journals is usually that doing so will raise the quality of research. Since a journal with a high JIF generally has a high reputation, JIF becomes a proxy for quality. Gustaf Nelhans objects to this use, however.

“As a shortcut to research quality, JIF quickly becomes problematic. To give one example, comparisons between subjects become absurd – is a publication in Nature, with a JIF of 38, really so much better than a publication in Sociological Research, with a JIF of 0.08? Another issue is that JIF, like other bibliometric measures, is difficult to apply to the small amounts of data that often occur at the individual, group and department level.

“On top of this, too much focus on JIF forces researchers to change their research and publishing practices,” says Gustaf Nelhans.

“Perhaps you don’t write reports or contributions to commemorative publications if they don’t give any points in the system. For those in the humanities, there are few journals with a JIF score – should research be adapted to those few, just because the university uses that measure? This means that research is compromised and runs the risk of becoming impoverished.”

Criticism nothing new

Criticism of the abuse of JIF in evaluating research has been around almost as long as the measure itself, and it flares up from time to time. In an appeal from 2013, a number of influential funders and organisations stressed the importance of assessing research on its own merits rather than by the journal in which it is published, and there are now several alternative measures of a journal’s impact.

But JIF will probably remain despite all the criticism, believes Gustaf Nelhans.

“The new measures are more advanced and more difficult to understand, and have had no real impact. As long as Journal Citation Reports, from which the JIF data is taken, remains a leading index for journals, JIF will continue to be an important measure.”

Read more in Curie: The use of impact factors in the research community

Text: Sara Nilsson
Photo: SPL / IBL Bildbyrå