Dreaming of a Global Cooperative Intelligence Federation
Together with the Platform Coops team, I had the pleasure of participating in the Cooperative AI conference, organised by The Platform Cooperativism Consortium and NeedsMap in Istanbul in November 2025. I was invited to give an input in one of the panels, and the following text is the script of that contribution. It draws on topics and conversations from the three days I spent at the conference.
In my panel, we were asked to reflect on the prospect of building a global cooperative AI federation. And the first thought that keeps coming back is this:
Why are we still such beginners when it comes to federating our efforts? With all the tools we have today, why do we still meet in such centralized ways: on stages, behind microphones, sitting in auditoriums where knowledge travels in only one direction?
We should be meeting in workshops, at eye level, because there is a ton of work waiting for us. Many of our organizations still struggle in isolation instead of using our collective power. Centralized systems remain our default mode of connecting. It’s time to challenge that default.
A Global Cooperative AI Federation?
This is a big vision, but so was the idea of cooperatives in the 19th century. So were many of the co-ops that exist today.
Back then, people organized to survive industrial capitalism. Today, we organize to survive informational capitalism.
Our imagination might be limited. The buzzword “AI” has captured our minds so completely that we rarely stop to ask: Do we even know what we’re talking about when we say “AI”?
Maybe there’s a fundamental misconception.
Artificial Intelligence or Collective Intelligence?
The term “artificial intelligence” suggests something that emerges miraculously from machines, as if it fell from the sky without human intervention. In reality, it is built out of human labour. Out of blood, sweat and tears.
Watch the documentary In the Belly of AI. You’ll see Finnish prisoners paid almost nothing to annotate datasets. You’ll see data workers in the Global South who spend their days classifying violent or abusive texts and images. Many of them develop trauma. You’ll see content moderators protecting your social media feed by absorbing the worst of humanity so you don’t have to.
Referring to AI as “artificial intelligence” is misleading. There is nothing artificial about it. AI is built on collective human intelligence: industrialised, extracted, and repackaged, mostly without our consent.
So what if we began to reframe AI as collective intelligence? As something we want to protect, nurture, and make visible for ourselves and the communities we belong to. As a shared knowledge commons rather than a resource to be mined.
There are already ways for cooperatives and communities to build and use locally hosted, federated intelligence systems that work in our favour. Many of us have no desire to become mere data sources for generative AI models. We want to shape the systems that shape us.
The Solidarity Stack
In his keynote for the Cooperative AI conference, Trebor Scholz suggested that we, the community, have the capacity to develop what he calls a Solidarity Stack: a layered architecture of cooperative tools, protocols, and governance models designed to embed solidarity, collective decision-making and shared ownership into the very foundations of our digital future. I keep asking myself: What’s keeping us from building it?
Part of the answer lies in our still unformed imagination of what a shared data commons could be: how it should look, how it should function, and how it should feel to participate in it. We also lack the participatory design spaces where cooperatives can learn to evaluate, shape, and guide AI tools inside their own daily operations. Just as important is the governance knowledge required to use our collective intelligence wisely, and the federated open-service architectures needed to connect cooperative platforms through open APIs and shared protocols, so they can exchange data and insight without giving up their autonomy. And, of course, there is the question of capital: who will finance the foundations of such a system?
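To make the idea of autonomy-preserving federation a little more concrete, here is a minimal sketch in Python. It is not a real protocol, and every name in it (CoopNode, InsightSummary, share_insight, federate) is hypothetical; the point is only that cooperatives could keep their raw member data local and exchange nothing but agreed, aggregated insights.

```python
# Hypothetical sketch: each cooperative keeps its raw data local
# and shares only aggregated summaries with the federation.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class InsightSummary:
    """The only object that crosses organisational boundaries."""
    coop_name: str
    metric: str
    value: float
    sample_size: int


@dataclass
class CoopNode:
    """One cooperative platform in the federation (illustrative only)."""
    name: str
    local_records: list[float] = field(default_factory=list)  # never leaves the node

    def share_insight(self, metric: str) -> InsightSummary:
        # Aggregate before sharing: no raw member data is exposed.
        return InsightSummary(
            coop_name=self.name,
            metric=metric,
            value=mean(self.local_records),
            sample_size=len(self.local_records),
        )


def federate(nodes: list[CoopNode], metric: str) -> float:
    """Combine the shared summaries into a federation-wide view."""
    summaries = [node.share_insight(metric) for node in nodes]
    total = sum(s.value * s.sample_size for s in summaries)
    return total / sum(s.sample_size for s in summaries)


if __name__ == "__main__":
    courier_coop = CoopNode("courier-coop", [14.0, 16.5, 15.2])
    care_coop = CoopNode("care-coop", [18.0, 17.5])
    avg = federate([courier_coop, care_coop], "hourly_income")
    print("Federated average hourly income:", round(avg, 2))
```

A real Solidarity Stack would of course also need consent, authentication, and shared governance over what may be aggregated in the first place; the sketch only illustrates the direction of data flow, away from central extraction and towards voluntary exchange between autonomous nodes.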
What’s really missing is the collective realization that a federated cooperative intelligence system isn’t a lifestyle choice. It’s a necessity. Just as cooperatives once became the only viable solution to industrial exploitation, federated intelligence may become the only viable answer to digital exploitation.
Building Without Burning Out
We can build cooperative intelligence, but only if it grows alongside our daily work and if its value becomes tangible right from the start. We will not build it if it feels like an overwhelming burden or just another layer of responsibility added to already stretched schedules.
So the real challenge is finding ways to create such a system without burning out; finding ways to strengthen our autonomy from the very first step; finding ways to step out of the orbit of corporate data-extraction models. And beneath all of this lies a deeper question: What would it take to build a life-centred, truly collective intelligence system that serves our communities rather than consuming them?
These are not abstract considerations. They form the groundwork for the future we want to inhabit. Perhaps the first shift we need is conceptual: to stop thinking in terms of artificial intelligence, and start understanding it as cooperative intelligence.
Cooperative intelligence as an expression of the knowledge, creativity and care we choose to steward together. And if we can commit to that shift, then building this future becomes not an act of sacrifice, but an act of collective self-determination.
Photo credit: Marija Zaric on Unsplash


