How To Understand University Rankings

What are the most credible university rankings?

When considering credibility, one questions the integrity and reliability of university rankings. This depends on a number of factors, including methodology, methods of data collection, ranking criteria, sample sizes, target demographics, and so on.

These factors can affect their pertinence to whoever intends to use them, be it a government looking for policy shapers, a company seeking R&D partners or a student making one of the most important decisions of their academic career. Recognising that the latter party best describes us, the students of the IFP, is key to easing the task of understanding rankings.

It can be difficult, however, to understand why the QS ranking puts Cambridge in second place worldwide while it sits 7th on the THE table, or why Oxford ties with Harvard for 2nd place in the THE rankings but stands at 10th in the Shanghai Ranking.

The hope is that by perusing this page, you will be able to better understand what at first glance are simply multiple tables of names and numbers.

Some of the most popular university rankings, and indeed the most trusted by prominent publications such as The Guardian, The New York Times and The Telegraph, include the Times Higher Education (THE) tables, the QS World University Rankings, the Shanghai Ranking, and within the UK, the Complete University Guide (CUG).


How do the different rankings differ in their methodology?

The methodology of each ranking comprises not just the chosen criteria, but also sample sizes, target demographics and methods of data collection, amongst other contributing factors.

While there are many overlapping areas, highlighting the differences in the methodology of each system can make a student's choice easier.

The Shanghai Ranking is very clearly based on figures: the number of papers, notable alumni, and so on. Its sources include websites such as the official Nobel Prize website and publicly available statistics from the governing authorities of countries.

THE and QS are more heavily based on surveys; however, the THE system relies more on the opinions of academics and on exclusivity, hence the preference of many governing bodies for the THE rankings. THE uses Thomson Reuters as a primary source of data, in addition to survey results.

The QS system is less discerning and has a much larger sample group for its surveys, and these are sent to both academics and employers. A much larger percentage of the final score on the QS ranking is dependent on surveys, as compared to the THE rankings.

THE rankings are gathered through the results of the world's largest invitation-only survey of academic opinion. The Academic Reputation survey is a key piece of research that has contributed to the integrity of Times Higher Education's annual rankings. Below are a few key aspects of the survey:

1)    Available in 10 different languages

2)    Uses data from the United Nations to ensure that it is properly distributed to reflect the demographics of world scholarship

3)    Survey is evenly spread across different academic disciplines

4)    Those invited to take part are statistically representative of both their country and their discipline

5)    The questionnaire targets only qualified and experienced scholars, who offer their views on the research and teaching within their disciplines at institutions they are familiar with

6)    Scholars are questioned only within the area of competence of their specific discipline

7)    Scholars are also asked action-based questions such as "Which university would you send your most talented graduates to for the best postgraduate supervision?"

Times Higher Education numerically ranks universities based on an overall measure of esteem (a combination of reputation for research and teaching). Scores also reflect the number of times respondents cited an institution as the best in its field.

The CUG is much more dependent on figures generated by governing councils and bodies, as all its criteria are clearly quantifiable, with the exception of student satisfaction, which is based on surveyed opinion.

The Higher Education Statistics Agency (HESA) is the primary source of information for the CUG rankings; it is responsible for collecting and analysing quantitative information about universities in the UK. Funding councils also have a statutory role in assessing the quality of learning and teaching in the UK universities they fund. Among these councils are the Higher Education Funding Council for England (HEFCE), the Scottish Higher Education Funding Council (SHEFC) and the Higher Education Funding Council for Wales (HEFCW). The funding councils oversee a yearly survey, the National Student Survey, which assesses the views of final-year university students on the quality of their courses. All the data is then compiled to measure overall student satisfaction.

The various measures gathered are then combined to create a total. After selected statistical methods are applied, an overall score is produced for each university, and it is this score that determines the university's numerical position in the table.
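
The "statistical methods" are not spelled out here, but a common way to combine measures on different scales is to standardise each one to a z-score and then take a weighted sum. The sketch below is purely illustrative: the universities, raw figures and weights are invented, and this is not the CUG's actual calculation.

```python
# Illustrative only: standardise each measure across universities (z-score),
# then combine the standardised values with weights. All data is invented.
from statistics import mean, pstdev

def zscores(values):
    """Standardise raw values to mean 0 and standard deviation 1."""
    m, s = mean(values), pstdev(values)
    return [(v - m) / s for v in values]

# Invented example data: (entry standards, student satisfaction) per university.
raw = {
    "Uni A": (180, 4.1),
    "Uni B": (150, 4.5),
    "Uni C": (120, 3.8),
}
weights = (0.6, 0.4)  # invented weighting of the two measures

names = list(raw)
columns = list(zip(*raw.values()))        # one tuple of raw values per measure
standardised = [zscores(col) for col in columns]
totals = {
    name: sum(w * standardised[j][i] for j, w in enumerate(weights))
    for i, name in enumerate(names)
}
for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2))
```

Standardising first stops a measure with large raw numbers (entry tariff points) from drowning out one with small raw numbers (a satisfaction score out of 5), which is why rankings typically normalise before weighting.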



Explaining the Times Higher Education Rankings

What are they assessing when they say university A is better than university B?

Placing one university above another is dependent upon multiple factors, as demonstrated above. However, it is crucial to keep in mind that everything is relative. A university's rank can vary depending on scope (e.g. worldwide, UK, Europe, the Americas), faculty (e.g. Engineering, Law, Business, Medicine), and the filters applied to the rankings.

For example, on the THE ranking, if the table is narrowed down to any one individual criterion, the positions of the top universities do not vary at all. However, this may not be the case further down the list.

Based on this concept of relativity, and on knowledge of which criteria are used, one can assess which ranking system is best suited to oneself.



Why should I treat rankings with caution?

Most college applicants will look at rankings from different websites such as The Guardian, and over one in six consider the rankings "very important" in selecting a school. However, while people may know whether the college they are applying to has a high ranking, few understand the methodology behind these numbers. The subjectivity that factors into college rankings is reason to use them with caution.

Many of the facets described above can be cause for concern and caution. A system that is highly dependent upon surveys has a higher chance of being exposed to bias. At the same time, facts and figures alone are not enough to truly gain perspective on any one institution. It is also important to consider how up to date your information is.

Consider, for example, the term "Ivy League". This categorisation of eight prestigious schools in a league of their own may have held water at the turn of the 19th century and for part of the 20th. Today, however, these schools are known as the "Ancient Eight", and thanks to greater transparency it is now known that such a label can be traced to large financial endowments. Hence, groupings such as the "Magnolia Ivy" are very often discounted as well.

The Sunday Times, for instance, formulated a grouping of England's Ivy League based on the concentration of AAB students entering those colleges. Such information may not ultimately hold sway over one's university experience.

There are, however, groups such as the Russell Group that are very clearly driven by a shared ethos; as a group known for high standards of research, its performance indicators are more easily quantifiable.

It is also known that universities where the language of instruction is not English suffer greatly in the global rankings, regardless of their calibre.

In summary, caution should be exercised when considering all information, whether from rankings or groupings, and knowing what is involved in each system helps one make an informed decision. The website for each ranking specifies certain gaps in its information and carries disclaimers that should be taken into consideration.


Why are rankings so valued?

Rankings are valuable to many parties: employers, students, lawmakers, corporations, and the list goes on. Rankings are used by everyone, and can hold sway over your life path, from the moment you step on campus to your first job promotion.

However, their pertinence to students seems to be their raison d'être. This is because an institution's primary commodity is its students and academics, and rankings are meant to empower students to make an educated decision. The reliability of a ranking, as has been discussed, depends on multiple factors, and knowing these allows one to judge how applicable a given ranking is.

University rankings are a sign of importance and hierarchical status for universities. In general, people know little about universities unless they are mentioned in the public media several times. Universities rank highly because of how long they have been established, the quality of education they offer, their experienced professors and tutors, student satisfaction, and other criteria. When a university is ranked by an organisation, the publicity it receives can further affect its ranking: positive feedback will only raise its position, whereas a highly ranked university that receives a poor response may see its position fall.

Prospective students tend to ignore student satisfaction, yet they should try not to overlook this aspect: despite a university's high academic standing, its environment could be lacking. Even though this may be rare, satisfaction data sheds light on student life and on how a new student may feel in an institution's environment. University rankings are important because they grade universities not only on traditional aspects (education, age, resources, research, etc.) but on aspects that should directly help students make well-reasoned decisions about where to apply.

Range, reliability and relevance - the three Rs that give rankings their gold standard. Nevertheless, rankings do not make a decision - a student does.


What criteria are used in the popular rankings?

The ranking of a university is simply a rough estimate of its position relative to rival institutions. In some cases, universities can obtain different rankings according to different criteria and forms of analysis. When ranking universities, most systems look at major factors such as quality of education, quality of research, entry requirements and overall satisfaction with the particular institution. Rankings also tend to depend on faculty, reputation, the age of the institution, or even the scores its students achieve.

The following is a summary of the criteria used in the THE, QS, Shanghai and CUG rankings. Information such as the weighting of each criterion is made available at the discretion of each ranking organisation, and can vary based on filters under the control of the viewing public.


Times Higher Education (THE)

THE groups 13 performance indicators into five areas:

  • Teaching: the learning environment (worth 30 per cent of the overall ranking score)
  • Research: volume, income and reputation (worth 30 per cent)
  • Citations: research influence (worth 30 per cent)
  • Industry income: innovation (worth 2.5 per cent)
  • International outlook: staff, students and research (worth 7.5 per cent).
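
To make the weighting above concrete, the sketch below combines five area scores into an overall figure using the published THE weights. The area scores here are invented for illustration, and the real methodology applies its own normalisation to each indicator before weighting.

```python
# Illustrative only: combine THE's five area scores (each on a 0-100 scale)
# into an overall score using the published weights. The example area
# scores are made-up numbers, not real THE data.
WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "industry_income": 0.025,
    "international_outlook": 0.075,
}

def overall_score(area_scores):
    """Weighted sum of area scores; the weights total 1.0."""
    return sum(WEIGHTS[area] * score for area, score in area_scores.items())

example = {
    "teaching": 80.0,
    "research": 90.0,
    "citations": 95.0,
    "industry_income": 50.0,
    "international_outlook": 70.0,
}
print(round(overall_score(example), 2))  # → 86.0
```

Note how the tiny 2.5 per cent weight means even a large change in industry income barely moves the overall score, while teaching, research and citations dominate.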


QS

Core Criteria

·        teaching

·        employability

·        research

·        internationalisation

Learning Environment

·        facilities

·        online/distance learning

Advanced Criteria

·        culture

·        innovation

·        engagement

·        access

Shanghai Ranking

The Shanghai Ranking considers every university that has any Nobel Laureates, Fields Medalists, Highly Cited Researchers, or papers published in Nature or Science.

Quality of Education

Alumni of an institution winning Nobel Prizes and Fields Medals


Quality of Faculty

Staff of an institution winning Nobel Prizes and Fields Medals


Highly cited researchers in 21 broad subject categories


Research Output

Papers published in Nature and Science*


Papers indexed in Science Citation Index-expanded and Social Science Citation Index


Per Capita Performance

Per capita academic performance of an institution


Complete University Guide (CUG)

·        student satisfaction

·        research assessment

·        entry standards

·        staff ratio

·        services expenditure

·        facilities expenditure

·        good honours

·        graduate prospects

·        completion

In conclusion, the Shanghai Ranking is more focused on research-centric institutions and those that achieve excellence in the fields of science and technology. Meanwhile, THE emphasises the quality of teaching more than the QS rankings do. Nevertheless, QS is seen as the best point of reference by many international students, due to a wide range of criteria that encompasses many unquantifiable aspects of the learning environment. The CUG is UK-centric and consists of mostly quantifiable criteria, spanning both students and academics.


How does QM fare in the rankings?

In the overall rankings, this is how QM placed:

·        THE - 114th

·        QS - 147th

·        Shanghai - between 201st and 300th

·        CUG - 35th

Please bear in mind that entry into QM is extremely competitive, owing to its various accolades and achievements in areas such as research, medicine and law, among many other faculties. QM's devotion to excellence in teaching and the availability of specialised facilities and equipment also make it a distinguished institution.



19 April 2016, 2:32 PM