On pluralism, tech oligarchs, and the shape of AI sovereignty
Pluralism entered my life long before artificial intelligence did.
Even before I joined the Aga Khan Foundation Canada Global Leadership Program, I had already been wrestling with the idea that societies survive not by sameness, but by holding together many identities, many truths, and many ways of seeing the world. I had listened to speeches from people who spoke from what felt like the high altar of global cooperation — urging us to think systemically, to build bridges, to resist the easy seduction of binaries.
Their message was simple, though never simplistic: our challenges are shared, and our solutions must be shared too.
Perhaps that is why something about the current global conversation on AI sovereignty feels so jarringly out of tune.
In Canada, the word “sovereignty” is suddenly everywhere. National sovereignty. Provincial sovereignty. Data sovereignty. Conversations about where our information sits, who controls it, who owns the compute, and what infrastructure we can claim as “ours.” These are not new concerns, but AI has made them louder and strangely urgent. The whole conversation has a tinge of the nuclear arms race (there are obvious differences between nuclear technology and AI, but that is for another post to explore).
And yet, the more these discussions surface, the more I feel a growing contradiction with the very pluralistic ideals we claim to hold. This tension sharpened for me after listening to a recent episode of The Rachman Review about the US–China AI race. Not because of the rivalry itself — that narrative has been around for years — but because of the way the conversation revealed two worlds building two very different futures.
John Thornhill made an observation that could easily have passed unnoticed: America, he said, is competing at the “frontier” while China is competing at the “application layer.” The line stayed with me.
The United States is pouring staggering capital into frontier labs — OpenAI, Google DeepMind, Anthropic. Models with closed weights, massive GPU clusters, and the rhetoric of AGI as destiny. A worldview where value flows upward, toward a small set of actors who claim both capability and authority. A narrative shaped heavily by what Karen Hao, in Empire of AI, describes as the emerging tech oligarchs — people who promise salvation through scale while quietly consolidating control over the frontier.
China, meanwhile, has taken a very different path. Open-weight models that anyone can download. Lean, elegant engineering born out of chip restrictions. Rapid deployment across hospitals, schools, logistics networks, legal offices. A population trained in AI literacy, not just specialists. Entire new infrastructure categories — 智算中心, “smart computing centres” — built specifically for this era, distinct from traditional data facilities.
One system reaches for the top; the other spreads outward.
Listening to this, I realized that something had been bothering me for far longer than I could articulate. It began as a faint discomfort during the early hype cycles — when AI was presented as the inevitable job-killer and job-creator all at once, when frontier labs spoke of curing every disease but avoided the complexity of real public health, when tech leaders promised Mars while skipping the messiness of Earth.
But recently, that discomfort has taken on a clearer shape.
We say we want pluralism, yet we are building AI systems designed for concentration. Concentration of compute, concentration of data, concentration of power, and concentration of decision-making in the hands of a few private labs who insist that centralization is the only path to safety.
Even the fears have evolved. We once worried about jobs. Then about data. Now, we worry about sovereignty — not just national, but technological, computational, epistemic.
And all the while, the tech oligarchs tell us that fear justifies speed: If we slow down, China wins. If we regulate, the frontier collapses. If we question the narrative, we endanger humanity.
Fear as governance. Fear as persuasion. Fear as the architecture of a new kind of Cold War.
But the arc of the conversation in the Rachman Review raised a deeper question for me — one that ties back to those early lessons in pluralism:
What if the problem isn’t who wins the AI race? What if the problem is the race itself?
Because pluralism is not built through supremacy. It is built through systems that allow many actors to participate, adapt, and create meaning.
In this sense, China’s open-weight, high-deployment model — stripped of its political context for a moment — actually aligns more closely with pluralistic design than the Western monopoly model does. Not morally, but structurally. An ecosystem where thousands of developers can adapt models for their own communities is inherently more pluralistic than one where a handful of companies hold the frontier and everyone else simply accesses it.
And that leaves countries like Canada in a strange in-between space. We speak the language of cooperation, but we are starting to design our infrastructure around isolation. We talk of shared futures, but we are building sovereign silos. We celebrate pluralism — then engineer systems that quietly harden borders.
Sovereignty matters, of course. But sovereignty without pluralism becomes a wall. And walls are terrible architects of the future.
As I reflect on all of this — the hype cycles, the promises of Mars, the skipping of Earth, the tech oligarchs who speak of humanity while consolidating power, the global race that feels more nuclear than digital — one question continues to return to me:
Can we build AI that strengthens pluralistic societies, or will we allow sovereignty and supremacy to shape a future that contradicts everything we say we value?
A Cold War dressed in silicon will not deliver collaboration. But pluralistic AI — local, contextual, open-weight, culturally adaptive — just might.
And perhaps the first step toward that future is simply acknowledging the contradiction we are currently living in.
