Global university rankings have become highly influential. Institutions are compared using indicators such as bibliometric data and the number of students per faculty member; the indicators are weighted and totalled, and the results are presented as a ranked list of universities.
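To illustrate, such a weighted-sum ranking can be sketched in a few lines of code. The example below is a minimal sketch in Python; the indicator names, scores and weights are invented for illustration and do not correspond to the methodology of any actual ranking.

```python
# Minimal sketch of a weighted-sum ranking.
# All indicator names, scores and weights here are hypothetical,
# invented for illustration only.

universities = {
    "University A": {"citations": 92, "students_per_faculty": 70, "reputation": 85},
    "University B": {"citations": 75, "students_per_faculty": 90, "reputation": 60},
    "University C": {"citations": 60, "students_per_faculty": 95, "reputation": 55},
}

# Hypothetical weights for each indicator; they sum to 1.
weights = {"citations": 0.5, "students_per_faculty": 0.2, "reputation": 0.3}

def total_score(indicators: dict) -> float:
    """Weighted sum of one institution's indicator scores."""
    return sum(weights[name] * score for name, score in indicators.items())

# Sort by total score, highest first, to produce the ranked list.
ranking = sorted(universities.items(), key=lambda item: total_score(item[1]), reverse=True)

for rank, (name, indicators) in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {total_score(indicators):.1f}")
```

Real rankings differ mainly in which indicators they include, how the raw data is normalised, and which weights they assign – which is precisely where much of the criticism described below comes in.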

Criticism of popular rankings

2016-09-06

In only a short period of time, the lists ranking the world’s higher education institutions have grown in number and in influence. Rises and falls make the news, and today the rankings influence both universities’ own activities and national research policy. But these attempts to measure institutional quality also come in for harsh criticism.

Björn Hammarfelt

“I understand the attraction of university rankings – they provide a way of defining what a university should be and how it performs. But the fact that the underlying data is problematic does not seem to be considered important,” says Björn Hammarfelt, senior lecturer at the University of Borås and guest researcher at the Centre for Science and Technology Studies (CWTS) at Leiden University.

2003 saw the start of a boom in global university rankings. That was the first year of publication of what is known as the Shanghai Ranking, which was soon joined by a long line of other worldwide rankings of higher education institutions. There are now at least ten global higher education rankings, varying in focus, most of them produced by commercial organisations. The biggest are the Shanghai, QS and THE rankings (see sidebar).

Easy to use, the rankings have quickly become popular at a time of increasing globalisation, with its focus on knowledge as a prerequisite for economic growth. The results are featured as news on university websites and on general news channels. They are used by students; by the institutions themselves for marketing, decision-making and benchmarking; and by decision-makers and politicians.

Harsh criticism for rankings

However, the rankings also come in for harsh criticism. Critics do not believe they can measure quality in any relevant or fair way: the lists are said to cover only a small percentage of the world’s universities and to give an overly simplified picture of a university’s mandate. In many cases the methodology is not clearly defined and does not always meet scientific standards.

The rankings also disadvantage certain fields, because bibliometric data is drawn from databases that contain mainly English-language publications and give poorer coverage of the humanities and social sciences.

Ton van Raan

“The rankings are based almost entirely on performance in medicine and science,” says Ton van Raan, Professor Emeritus of Quantitative Studies of Science at Leiden University. “Researchers in the social sciences and humanities publish more in their native languages. They produce high-quality work, but it is not cited as often as work in the medicine and science fields and so is scarcely visible in the rankings.”

Do they reflect the quality of research?
“They reflect the overall international impact of a university’s research. It is an important aspect of quality, but it is not the same thing as quality.”

His department produces what is known as the CWTS Leiden Ranking (see sidebar). Van Raan says that it is statistically reliable, with a clearly defined methodology. It is based exclusively on bibliometric indicators and measures only research, scientific impact and collaborations.

The various leading global rankings largely feature the same universities. But Hammarfelt points out that, even if the lists give some idea of differences in quality, the question is whether they are measuring things that are comparable.

“Comparing Harvard, which has a budget akin to that of a small state, with a nationally based university in Sweden is like comparing a sports car with a small family car and deciding which is best. That type of ranking tells us very little.”

Significant influence

Despite their failings, the rankings have become very influential. It is easier for people with degrees from high-ranking universities to get a work permit in some countries, for example Denmark. Foreign educational institutions that want to operate in India must have a certain ranking, and other countries also require potential collaborating partners to have specific positions in the listings.

Both Australia and Russia have long-term strategies to get more of their universities into the world’s top 100, and in Sweden the rankings sometimes affect how institutions formulate their work programmes, according to a report published in 2015 by the Association of Swedish Higher Education (read more in Curie about Swedish educational institutions’ attitudes to the rankings).

Björn Hammarfelt believes that the fact that the rankings have been allowed to influence research policy and the way universities formulate their work programmes is problematic.

“Even the rankings that are more inclusive measure only quite a small part of what universities do – mainly international publication and reputation. That can mean that universities give less priority to activity that does not help them to rise up in the rankings, for example partnerships with the community, or teaching and learning.”

Disregarding education and learning

Ellen Hazelkorn

The biggest problem with the university rankings is their influence on higher education, in that they affect institutional strategy and national research policy. That is the view of Ellen Hazelkorn, Professor Emeritus at the Higher Education Policy Research Unit at the Dublin Institute of Technology. She is also Policy Advisor to the Irish Higher Education Authority and author of ‘Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence’.

She claims that the rankings are misleading, because they purport to provide statistically correct information about the quality of education but use indicators that favour elite universities, which educate only a small fraction of the student population. They also have a distorting effect, as they mainly use research-related measurements and largely disregard education and learning.

“The difference between a university in 200th position and one in 150th position is not statistically significant – although it can look to be quite a big difference – and universities and governments spend a significant amount of time trying to rise up in the rankings,” she explains to Curie. “Overall, this distorts the focus and purpose of higher education.”

At the same time, Ellen Hazelkorn says, the global university rankings have challenged views on quality and highlighted the importance of investing in higher education in order to increase national competitiveness.

“In societies with weak quality assurance systems, the rankings have also become a tool with which to demand accountability.”

Results must be seen in context

Phil Baty, editor of the Times Higher Education World University Rankings, sees their ranking as a tool to help people gain an understanding of an institution or a national education system. He is well aware that rankings can be misused and points out that they should not be the only tool used.

“The results must be seen in context and used in parallel with the institution’s own strategic plans. The rankings can provide information that is useful for decision-making but are not intended to drive decisions. The best way to use them is to break the results down and look at individual aspects that are meaningful for you.”

“We live in an era of transparency and accountability,” he continues. “We need data. We supply data that is highly relevant to research universities with a global focus and politicians who want to check whether they are getting good value from their investments in the education system.”

Phil Baty does not agree that their methodology is unclear. He refers to the description on their website and more in-depth information that is available on request. In his opinion, the 13 indicators used in the Times Higher Education ranking are well balanced and chosen to reflect universities’ various core functions.

“No ranking system can capture all aspects of a university. There is no perfect, objective ranking system. A ranking is a subjective opinion about which indicators are significant and how they are weighted, and about what data exists that can be compared on a global basis.”

It does seem that the university ranking phenomenon is here to stay, and, in Ellen Hazelkorn’s eyes, the influence exerted by the lists is increasing. While some observers see a development towards more specialised rankings, she believes that tools that can sort a number of indicators according to the needs of the user, as well as rankings that compare entire educational systems, will become more important in the future.

Read more in Curie: Ranking lists important for Swedish universities

Text: Sara Nilsson
Photo: Caia Images / IBL Bildbyrå

1 comment

  • Rolf Westerman

    Hi, I have a question: how many universities are there in the USA doing research in the natural sciences, for example evolutionary biology? Some just call themselves universities. A university in the USA can be anything from adult education to Nobel Prize class; I have heard there are thousands of universities in the USA. But we in Sweden do prominent research, even though we rank poorly internationally – we are a small country in the Nordic region. Kind regards, Rolf

    2017.02.06