Tuesday, April 14, 2026

Your AI, Your Rules

There is a moment familiar to anyone who has ever fallen deep into a video game: the moment you stop playing the game and start investing in it. You buy a better headset. You tune your controller. You learn the maps not because someone told you to, but because you care. The experience rewards your attention, and your attention reshapes the experience. It becomes yours in a way no off-the-shelf purchase ever quite is.

I believe this is exactly how we should think about artificial intelligence: not as a utility we log into, like electricity or Wi-Fi, but as something closer to a companion we build, tend, and grow alongside. Not a service. A relationship. This might sound fanciful in a world where AI is largely discussed in the language of industry: compute costs, API calls, market share, frontier models. But I think the dominant framing is wrong, or at least incomplete. The question we should be asking is not who owns the most powerful AI, but how we make sure everyone can have their own.

The Gamer's Instinct

Consider how gamers relate to their equipment. A serious gamer does not resent spending money on a quality setup. They choose to invest because the return is real: sharper response, deeper immersion, a machine tuned to how they play. Nobody mandates this investment; it emerges naturally from genuine engagement with something they love.

What if we extended this logic to AI? What if, instead of renting intelligence from a distant server farm, people were encouraged to own their AI experience, to provision it, personalise it, and take pride in it the way a craftsperson takes pride in their tools? This is not a fantasy. The hardware and open-source ecosystems are already moving in this direction. Small, capable models can run on consumer devices. The infrastructure for decentralised AI is being built quietly, piece by piece. What is missing is not the technology; it is the cultural permission to think of AI as something you have, rather than something you use.

A world where AI runs in a distributed, community-owned way is also a more resilient world. When intelligence is not concentrated in a handful of data centres controlled by a handful of companies, it cannot be switched off by a single policy decision, a single outage, or a single acquisition. Decentralisation is not just an ideological preference; it is practical engineering.

Companions That Collaborate: The AI Association

If your AI reflects who you are, then what happens when you work with someone else? This is where the companion model opens up into something genuinely exciting. Imagine two researchers, one a biologist, one a data scientist, joining forces on a project. Each brings their own expertise, their own tools, their own way of thinking. And each brings an AI companion shaped by years of working in their field: one steeped in molecular literature and lab protocols, the other fluent in statistical methods and code. When the humans collaborate, their companions collaborate too. The biology companion surfaces relevant papers; the data science companion writes the analysis pipeline. The whole is sharper than the sum of its parts.

This is not a fantasy of automation replacing people; it is a vision of people working more effectively as people, augmented by tools that genuinely understand their domain. The AI does not flatten the differences between the two researchers; it amplifies them. Your companion is most useful when it knows your speciality deeply, which means that diversity of expertise becomes a feature, not a problem to be standardised away.

We might think of these as AI associations, loose, voluntary groupings of people and their companions, formed around a problem, a project, or a shared goal. A legal collective. A research consortium. A neighbourhood planning group. Each member's AI brings specific knowledge; together they can tackle problems that no single person and no single general-purpose model could handle alone. And because the association is composed of individuals who each own their tools, it remains accountable to its members. There is no single point of capture, no platform that can hold the group's work hostage.

This is a fundamentally different vision from the one currently on offer, where a single model is trained to be good at everything for everyone, and where the answer to any question tends to look roughly the same regardless of who asked it. The companion model rewards depth. The association model rewards collaboration. Together, they make a case for AI as something that makes human groups smarter by making individual humans more themselves.

Growing Together: The Ethics of Personal Knowledge

Here is where the companion metaphor earns its keep. Imagine an AI that begins with a solid, ethically grounded core, trained on established, publicly available knowledge: science, history, literature, mathematics, the accumulated record of human inquiry. This is the foundation, the shared ground. Nobody owns it; it belongs to everyone. But then imagine successive layers, built over time, that belong entirely to you. Your conversations. Your notes and papers. Your library, your code, your half-finished ideas and well-worn arguments. Your AI companion does not just answer questions; it knows you. It remembers that you spent three years working on distributed systems, that you find a particular philosopher's work compelling, that you tend to think in analogies before you think in abstractions. It has grown alongside you, shaped by the texture of your intellectual life.

This is not surveillance. It is the opposite of surveillance. The data lives with you, on your terms, under your control. Nobody else has access to the map of your mind that your companion holds.
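To make the "personal layer" idea concrete, here is a minimal, illustrative sketch of the principle that your knowledge can be indexed and searched entirely on your own machine. The class name PersonalLayer and the bag-of-words similarity are my own simplifications, not a description of any real product; an actual companion would use learned embeddings and durable encrypted storage, but the privacy property is the same: the notes never leave the process.

```python
# Sketch: a private, on-device index over your own notes.
# Pure standard library; nothing is sent over any network.
import math
from collections import Counter

def _vector(text):
    """Bag-of-words term frequencies for a piece of text."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

class PersonalLayer:
    """Your notes live only in this object, under your control."""
    def __init__(self):
        self.notes = []  # list of (title, text, vector)

    def add(self, title, text):
        self.notes.append((title, text, _vector(text)))

    def recall(self, query, k=1):
        """Return the k notes most similar to the query."""
        q = _vector(query)
        ranked = sorted(self.notes,
                        key=lambda note: _cosine(q, note[2]),
                        reverse=True)
        return [(title, text) for title, text, _ in ranked[:k]]

layer = PersonalLayer()
layer.add("distributed systems",
          "notes on consensus raft leader election and replication")
layer.add("gardening",
          "tomato seedlings need sun and regular water")
print(layer.recall("how does raft leader election work"))
```

The design choice worth noticing is that the shared foundation (a general model) and the personal layer are separable: the layer above could feed retrieved notes into any local model as context, while remaining a file on your own disk.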

And here is something the current discourse almost entirely misses: this model restores the incentive to learn. One of the quieter, more troubling effects of centralised AI is what it does to intellectual motivation. When any question can be answered in seconds by a model that has read everything, the temptation is to stop reading yourself. Why develop expertise when you can simply prompt for it? Why struggle with a hard concept when a fluent summary is one message away? The centralised model, for all its power, subtly undermines the habits of mind that make people genuinely capable: curiosity, persistence, the willingness to sit with difficulty until understanding arrives.

The personal companion model inverts this dynamic entirely. When your AI is built from your knowledge, when its depth in a subject reflects your depth in that subject, then learning is not just admirable; it is strategically valuable. Every paper you read, every skill you develop, every domain you come to understand makes your companion more capable. You are not outsourcing your intelligence; you are compounding it. The better you become, the better your companion becomes. The relationship rewards growth in a way that a subscription service never could.

This matters especially for young people. An educational model built around personal AI companions would look radically different from one built around access to a centralised oracle. Students would be encouraged not to ask the AI what to think, but to build an AI that thinks like them, which means they would have to think, clearly and deeply, first. The incentive structure shifts from consumption to cultivation. That is not a small change. It is a different philosophy of what learning is for.

The consequences reach further than education, into every situation where one person is trying to understand another. Consider the job interview. It has long served as a rough but honest signal: how does this person think? What do they know? What is their instinct when a hard question lands unexpectedly? Today, those signals are being eroded. When every candidate has been coached by the same centralised model, they begin to arrive speaking the same language, deploying the same frameworks, structuring their answers with the same confident fluency. The surface looks polished. But the differentiation that interviews exist to reveal, the genuine depth, the particular way someone's mind works, the real contour of their experience, gets flattened into a kind of averaged competence that tells you very little about who is actually in the room.

The companion model pushes back against this in a concrete way. If your AI is built from your intellectual history, your actual work, your real thinking, your specific expertise, then what you bring to a conversation, an interview, a collaboration, is genuinely yours. The companion amplifies your particular strengths rather than covering for your gaps with borrowed fluency. A person who has developed genuine expertise in a field will have a companion that reflects that depth; a person who has not cannot fake it by asking the same model for a polished summary. Human skills, hard-won, idiosyncratic, specific, are preserved rather than minimised. The people in the room start to look different from each other again, because their tools are as individual as they are.

There is an ethical dimension here that deserves to be named plainly: knowledge has value, and that value belongs to the people who created it. A well-designed AI ecosystem should make it easy to pay for access to great books, scientific papers, and curated information, and make it economically worthwhile for authors, researchers, and institutions to participate. Free, openly licensed sources will flourish. Proprietary sources will be accessed fairly, with compensation flowing to their creators. The alternative, an internet-scale scrape of everything, with no accounting for who made what, is not a feature of the AI age we should accept as inevitable. We can choose better.

Knowledge is not free to produce. Treating it as though it were is not liberation; it is just a different kind of theft, dressed in the language of openness.

Plugged Into the Sun

There is another dimension to the companion relationship that tends to get lost in abstract debates about AI governance: the physical world. AI systems are hungry. Training a large model consumes enormous quantities of energy. Running inference, at scale, across billions of daily interactions, consumes more. At the moment, much of this energy comes from sources that carry real environmental costs. This is not a reason to abandon AI; it is a reason to be deliberate about how we power it.

A distributed model of AI ownership opens up a possibility that centralised data centres cannot easily replicate: genuinely local, genuinely renewable energy. If your AI companion runs on a device in your home, it can run on electricity from your solar panels, your community wind cooperative, or a regional grid powered by hydroelectric dams. The energy that feeds your companion's thinking can come directly from the sun that falls on your roof. This is not a small thing. It means that the environmental footprint of AI can, in principle, be tied directly to individual choices and local conditions rather than to the energy mix of whichever state happens to host the nearest mega-campus. It means that the relationship between a person and their AI companion extends outward, into the world, into a relationship with energy and place and planet. An AI that runs on sunlight feels, in some small but meaningful way, more honest than one that runs on something you would rather not think about.

The Most Important Argument: This Must Be for Everyone

Everything I have described, the owned experience, the layered personal knowledge, the local green energy, risks becoming a luxury if we are not careful. And that would be a catastrophe. We are at a hinge moment. AI is becoming genuinely useful: in education, in medicine, in legal access, in scientific research, in creative work. The people and nations that have early, deep access to these tools will develop advantages that compound over time. Skills improve. Productivity rises. New ideas get generated faster. The gap between those with access and those without will not stay flat; it will widen, and widen, and widen.

This is the rich-get-richer dynamic played out at civilisational scale, and it should alarm us as much as any geopolitical threat. A world in which AI capability is concentrated in a small number of wealthy countries is not a stable world. It is a world in which the accidents of economic history (which country industrialised first, which attracted the most capital, which built the most undersea cables) come to determine who gets to participate in the next chapter of human cognition.

The gamer gear model matters here too. When the barrier to entry is low enough, when a modest device and a modest monthly commitment can buy you a genuine AI companion, personalised and capable, then geography becomes less determinative. A student in Lagos or Lahore or La Paz, with a decent phone and a community mesh network, should be able to access the same quality of intelligent assistance as a student at a well-endowed university in a wealthy country. This is not utopianism. It is a design choice we could make, if we decided it mattered. Lowering the entry barrier is the single most important policy and engineering priority of this moment. It is more important than who wins the race to the most powerful model. It is more important than which company achieves which benchmark. A billion people with access to a good-enough AI companion will do more for human flourishing than ten thousand people with access to a perfect one.

What We Are Really Choosing

The question of how AI develops over the next decade is, at its core, a question about what kind of relationship we want to have with our own intelligence, and with each other.

The centralised model says: trust the institutions. Rent access. Accept that the infrastructure of thought belongs to those who can afford to build it. This model has a certain logic to it, and it is not without merit. But it also has a long historical track record of concentrating power in ways that prove difficult to reverse. The companion model says something different. It says: your intelligence is yours. Your knowledge is yours. Your energy should be yours. The tools you use to think and learn and create should belong to you, should grow with you, should reflect your values, including your environmental values, your epistemic values, your sense of what knowledge is worth paying for.

It says that the AI era does not have to be a story about dependency and extraction. It can be a story about relationship. About investment, not in the financial sense, but in the human sense: the time and care you put into something because it matters to you, because it reflects who you are, because you want it to be good. It says that when people come together, each with their own companion, their own expertise, their own piece of the puzzle, the result is not uniformity but genuine collective intelligence. Groups that are smarter because their members are more themselves, not less.

And it says something hopeful about human skill and human difference: that the right technology does not sand people down to a common average, but makes their particular capabilities more visible, more legible, more valuable. That a person's genuine expertise, earned through years of work, shaped by their unique path, is worth preserving. That when two people walk into a room, we should still be able to tell them apart.

And it says something hopeful about learning: that the right technology does not make us lazier, but more ambitious. That a tool which rewards your growth will make you want to grow. That the incentive to understand things deeply, which has always been one of the finest human impulses, can be strengthened by AI, not eroded by it, if we build it right.

Gamers know this instinct well. So do gardeners. So does anyone who has ever spent more time and energy on something than was strictly necessary, because they wanted it to be theirs. That is the AI era I want to live in. Not the one where intelligence is piped to me like water from a distant reservoir, metered and billed and subject to outage. But the one where my AI companion and I have been through things together, where it knows my way of thinking, runs on my electricity, and belongs to a world where everyone, regardless of where they were born, gets to have one too. We are not there yet. But we could be. And the first step is simply deciding that this is what we are building toward.
