Academia’s Stockholm Syndrome: The Ambivalent Status of Rankings in Higher Education

Published in International Higher Education, No. 107, Summer Issue 2021
DOI: https://doi.org/10.36197/IHE.2021.107.05
Abstract
Academia has an ambivalent relationship with rankings. Academics constantly complain about them, yet they are always looking for ways to “fix” them. Higher education scholars researching rankings often exhibit a similar kind of ambivalence. I argue that this ambivalence contributes to the further entrenchment of rankings as a practice in higher education, and I call for a heightened appreciation of reflexivity in research on this subject.

Over the past several decades, university rankings have become ubiquitous to the point of being an accepted—though not uncontested—part of the transnational academic landscape. The sentiment that “rankings are here to stay” has come to resonate with many academics, administrators, and policy makers. Despite mounting evidence of their adverse effects, not to mention the often relentless criticism they draw from various parties, many in higher education would argue that rankings are inevitable or even necessary. Why is that so?

Why do we believe (in) rankings?

To address this puzzle, we could observe more closely how rankings resonate with the broader cultural and institutional context. First, rankings work through the public production of competition, effectively urging universities to see each other as competitors. The quasi-natural affinity between rankings and discourses on global competition is possibly one of the reasons why rankings are so often seen primarily in geopolitical terms. Rankings, furthermore, resonate with some of higher education’s best-known “rationalized myths,” such as strategic management, performance indicators, accountability, transparency, internationalization, excellence, and impact. Given that rankings themselves possess an aura of rationality, they easily emerge as a “logical” instrument for fostering these myths and measuring society’s progress toward them.


No less important is that the imaginary of higher education as a hierarchy of institutions, with Harvards, Oxfords, and such at the top, predates the “hegemony” of rankings of the past several decades. When, for example, U.S. News and Shanghai issued their first rankings, they largely confirmed what everyone had already “known” about who was the “best.” Had this not been the case, the subsequent reception of global rankings might have been different. For a ranking to be believed, it needs to stay within the domain of the plausible, while allowing for continuous improvement in performance. In fact, every university is expected always to strive to improve its position in the rankings.

Finally, together with ratings, benchmarks, standards, and various performance-related metrics, rankings are usually seen as part of a larger repertoire of policy instruments and evaluation devices. This also facilitates their “travel” across contexts. One explanation for this is historical: academics interested in evaluating their own work and that of their institutions had been experimenting with such instruments for decades before non-academic actors adopted them in the name of broader societal purposes, such as efficiency, accountability, and transparency.

Placed against this cultural and historical backdrop, the fact that rankings are taken for granted should come as no surprise. Because of their “naturalization” in public discourse, much of the debate on rankings is relegated to the domain of the “how.” Meanwhile, the very idea of ranking is rarely seriously questioned, even in higher education research.

Blurred lines: the science of ranking(s)

Higher education studies have always had a somewhat ambiguous relationship with rankings. Given the field’s strong ties with policy and practice, much of its research is done with a clear purpose: to make higher education fair, efficient, responsible, and so on. To make it better—whatever this may mean. One implication of this distinctly normative streak is that higher education scholars routinely act in the name of protecting higher education from trends they deem harmful. Rankings—for reasons that have been extensively documented over the past decades—are usually treated as one such trend.

As a result, much of the research on rankings is implicitly or explicitly critical. And yet, paradoxically, the criticism seems only partial: the scholarly debate on rankings tends to revolve around their methodologies and the effects thereof, frequently extending to discussions of how rankings can be improved and “better” ones developed. The research is often openly critical of the rankers, whom it believes to be primarily, if not exclusively, motivated by commercial interests. Ranking organizations are thereby held to a certain standard of “appropriate” motives and behavior.


Therefore, instead of observing rankings as an object of study, this line of research evaluates them on the grounds of how “good” or how “true” they are as a policy or transparency tool. This type of reasoning implies that, if rankings were methodologically sound, measured things that mattered, were produced for purposes other than commercial gain, and were used responsibly rather than misused, things would somehow be better. However, while such criticism may temporarily undermine a specific ranking, in the long run it is more likely to strengthen, rather than diminish, the legitimacy of rankings as a practice of evaluating universities. There are at least two reasons to expect this.

First, arguments addressing the “how” of rankings, including attempts to “fix” them, essentially confirm the idea of higher education promoted by rankings—an idea that goes beyond their methodologies, the interests behind them, or how they are used. In this idea, higher education is imagined as a zero-sum stratified order made up of universities continuously striving to overtake one another, whereby all of them are expected to compete, all the time. All international rankings that wield some influence today promote this idea of higher education as a zero-sum competitive order as “natural” and even “superior” to alternative conceptions.

Second, research evaluating rankings provides them with much-needed scientific legitimacy. Ranking organizations are especially keen on ensuring that their rankings look like “solid science” and are treated as such by the scientific community. Academic publications that offer suggestions for improving ranking methodologies, or for mitigating their effects, arguably treat these organizations as partners in a scholarly conversation. This carries the risk of backing various ideologies and policy agendas with scientific credibility. A similar risk exists when academics sit on rankers’ boards and panels, participate in their events, or complete their surveys. Drawing on the cultural authority of science is crucial for rankings because, like scientists, they too are in the business of making truth claims about what is and what is not in the world of higher education.

The importance of reflexivity

None of this is to say that higher education scholarship should not be critical; quite the contrary. However, not all criticism is the same. For this and other reasons, it is essential that the proverbial “big picture,” together with our own role and place in it, be continuously examined.


In practical terms, we could start by thinking of rankings and rankers as, first and foremost, objects of study. Rather than treating rankings as an established higher education phenomenon, or rankers as partners in the purposes of the academic enterprise, we could simply treat them as sites of empirical investigation. Data, if you will. If we criticize our data, this raises questions about our capacity to make sound judgments. If we have expectations about how our data should behave, or if we in any way try to force norms and expectations upon them, then our credibility as scholars can be brought into question. Being mindful of these risks is crucial for the validity of our observations. That is, viewing rankings and rankers as objects of study requires that we treat them objectively and analyze the phenomena accordingly.

Insisting that anything is “here to stay” is shortsighted. If history has anything to teach us, it is that things change. Possibly the most dangerous thing about the notorious “there is no alternative” mantra is that, the more we repeat it, the closer it gets to a self-fulfilling prophecy. After all, challenging the taken-for-grantedness of socially produced “facts,” and not least seeking to expose their ideological premises, is our duty as scholars.


Jelena Brankovic is a postdoctoral researcher at the Faculty of Sociology, Bielefeld University (Germany). She is currently researching the role of rankings in institutional dynamics within and across sectors. Her interests extend to the practice of theorizing in the social sciences, academic writing and publishing in interdisciplinary fields, and academic peer work. She serves as Books Editor on the Editorial Board of Higher Education and is Joint Lead Editor of the ECHER blog. Twitter: @jelena3121

