Published January 27, 2026 | Version v1
Presentation Open

Rankings as Governance: Predatory Inclusion, Data Intermediation, and the Politics of Visibility

Authors/Creators

  • 1. University of Toronto Scarborough

Description

University rankings persist not because they are methodologically robust (many of their limitations are well known) but because they function as governance. They shape institutional strategy, research priorities, and reputational standing, and increasingly operate through a broader enclosed system of data services, analytics, and advisory markets. In this presentation I argue that rankings endure in part because a vacuum in the global governance of science, combined with globalization's demand for a universal yardstick, creates the conditions for private actors to supply and control the metrics of legitimacy.

I begin with two lines from a webinar advertisement that capture a "normalized" trope. The posting acknowledges that "rankings shape global reputation, funding, and policy," yet frames the underrepresentation of institutions in Africa, Asia, and Latin America as a technical deficit ("closed data, narrow metrics") to be remedied through "open science practices" such as transparency, persistent identifiers, and interoperable infrastructures. I read this juxtaposition as a symptom of a larger structural problem. The implied message to underrepresented universities is straightforward: "be compliant with dominant standards, or be invisible." This is an exertion of structural power, embedding legibility requirements into the operating system of research assessment.

If ranking is the operational definition of inequality, then "inclusion" in rankings cannot be assumed to be progress. Drawing on Keeanga-Yamahtta Taylor's concept of predatory inclusion, I argue that majority world institutions are invited into the rankings economy on extractive terms: they become legible to narrow, privately governed metrics at significant cost, while the underlying rules and power asymmetries remain intact and unchallenged. I further show how instrumentalist framings of open science, which treat PIDs and transparency as visibility fixes, function as epistemic governance, determining what becomes recognizable, countable, and therefore fundable.

Using the THE–Elsevier partnership as a case, I illustrate the "legibility stack" (identifiers → indexes → analytics → rankings → consulting) and the feedback loop through which intermediation and consultancy deepen dependency. I close with policy directions for refusing audit under privatized governance regimes and for redesigning assessment and open science toward public purpose and benefit, knowledge plurality, and epistemic justice.

Files (2.5 MB)

OCED panel on rankings and research assessment.pdf