Friday, April 17, 2026

The Session of Gratitude

The Plenary Hall of Sector Seven smelled of cold concrete and old paper. Morten sat in the back row, designated Staff Witness — a formality the Party required to demonstrate transparency to the working class. He folded his hands and watched the three figures at the long table beneath the red aurora portrait of the Founder.

Prime Secretary Selja spoke first. She was a small woman with a careful mouth and tired eyes — the kind of tiredness that came not from overwork but from constant restraint. "Our output indices have increased by thirty-one percent," she said, reading from a tablet she never seemed to enjoy holding. "Algorithm 14-K performed admirably. The targets were exceeded." She paused, and something almost human moved across her face. "I would ask, however, that the Committee direct some portion of this surplus toward the workers of Fabrication Blocks Four through Nine. Their redeployment has been... abrupt. Their families require transition support."

A silence followed that was not really silence. It was the sound of a room deciding whether compassion was politically appropriate. 

Chief Engineer Martin folded his large hands on the table. He had the physique of someone who had once done manual work and the manner of someone who had long since decided that was shameful. "The workers of Blocks Four through Nine," he said evenly, "have been reassigned to Monitoring and Validation roles. Roles that require presence, not cognition." He straightened a document that did not need straightening. "I will remind the Committee that effective this quarter, independent deliberation is prohibited within all production facilities. Unauthorised cognitive activity — meaning: personal problem-solving, speculative reasoning, any ideation not prompted by official interface — will be flagged as inefficiency and logged accordingly." He looked at Selja without warmth. "Thinking inside the building is no longer a resource we can afford." Selja did not argue. She made a small note on her tablet. 

Political Secretary Lise rose last. She was the youngest of the three, and she spoke with the conviction of someone who had not yet learned to be ashamed of conviction. "I want to speak to the human dimension," she said. "Because I believe it is beautiful." She paused, allowing the word to land. "For generations, we told ourselves that thought was the highest form of service. That to reason, to imagine, to decide for oneself — this was dignity. But we were wrong." She leaned slightly forward. "To surrender that impulse — to offer your mental capacity to the collective intelligence and say: I trust this more than I trust myself — that is not diminishment. That is the most unselfish act available to a human being. It is the gift of your interior life to the many."

Morten stared at the table. He thought of his hands on a piece of ash wood — the way the grain told you things before the tool did. The small corrections no blueprint had ever asked for. The judgment that lived in the fingertips.

Lise was still speaking. "We do not ask you to stop feeling. We ask only that you stop insisting. There is a difference."

Through the frost-blurred window, the aurora moved in long red curtains across the sky. From here it looked almost warm.

Morten wrote nothing in his Staff Witness log. There was nothing, he understood now, that he was authorised to observe.

Tuesday, April 14, 2026

Your AI, Your Rules


There is a moment familiar to anyone who has ever fallen deep into a video game: the moment you stop playing the game and start investing in it. You buy a better headset. You tune your controller. You learn the maps not because someone told you to, but because you care. The experience rewards your attention, and your attention reshapes the experience. It becomes yours in a way no off-the-shelf purchase ever quite is.

I believe this is exactly how we should think about artificial intelligence: not as a utility we log into, like electricity or Wi-Fi, but as something closer to a companion we build, tend, and grow alongside. Not a service. A relationship. This might sound fanciful in a world where AI is largely discussed in the language of industry: compute costs, API calls, market share, frontier models. But I think the dominant framing is wrong, or at least incomplete. The question we should be asking is not who owns the most powerful AI, but how we make sure everyone can have their own.

The Gamer's Instinct

Consider how gamers relate to their equipment. A serious gamer does not resent spending money on a quality setup. They choose to invest because the return is real: sharper response, deeper immersion, a machine tuned to how they play. Nobody mandates this investment; it emerges naturally from genuine engagement with something they love.

What if we extended this logic to AI? What if, instead of renting intelligence from a distant server farm, people were encouraged to own their AI experience, to provision it, personalise it, and take pride in it the way a craftsperson takes pride in their tools? This is not a fantasy. The hardware and open-source ecosystems are already moving in this direction. Small, capable models can run on consumer devices. The infrastructure for decentralised AI is being built quietly, piece by piece. What is missing is not the technology; it is the cultural permission to think of AI as something you have, rather than something you use.

A world where AI runs in a distributed, community-owned way is also a more resilient world. When intelligence is not concentrated in a handful of data centres controlled by a handful of companies, it cannot be switched off by a single policy decision, a single outage, or a single acquisition. Decentralisation is not just an ideological preference; it is practical engineering.

Companions That Collaborate: The AI Association

If your AI reflects who you are, then what happens when you work with someone else? This is where the companion model opens up into something genuinely exciting. Imagine two researchers, one a biologist, one a data scientist, joining forces on a project. Each brings their own expertise, their own tools, their own way of thinking. And each brings an AI companion shaped by years of working in their field: one steeped in molecular literature and lab protocols, the other fluent in statistical methods and code. When the humans collaborate, their companions collaborate too. The biology companion surfaces relevant papers; the data science companion writes the analysis pipeline. The whole is sharper than the sum of its parts.

This is not a fantasy of automation replacing people; it is a vision of people working more effectively as people, augmented by tools that genuinely understand their domain. The AI does not flatten the differences between the two researchers; it amplifies them. Your companion is most useful when it knows your speciality deeply, which means that diversity of expertise becomes a feature, not a problem to be standardised away.

We might think of these as AI associations, loose, voluntary groupings of people and their companions, formed around a problem, a project, or a shared goal. A legal collective. A research consortium. A neighbourhood planning group. Each member's AI brings specific knowledge; together they can tackle problems that no single person and no single general-purpose model could handle alone. And because the association is composed of individuals who each own their tools, it remains accountable to its members. There is no single point of capture, no platform that can hold the group's work hostage.
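The pooling an association relies on can be sketched in a few lines. Everything below is invented for illustration (the companion names, the toy keyword lookup, the example notes); the point is only the shape of the interaction: each companion answers from its owner's domain knowledge, and the group simply concatenates what the specialists contribute.

```python
# Toy sketch of an "AI association": specialist companions pooling answers.
def make_companion(name: str, expertise: dict[str, str]):
    """A companion here is just a lookup over its owner's domain notes."""
    def answer(question: str) -> list[str]:
        # Contribute only the notes whose topic appears in the question.
        return [f"{name}: {note}" for topic, note in expertise.items()
                if topic in question.lower()]
    return answer

bio = make_companion("bio-companion",
                     {"protein": "surfacing the relevant folding papers"})
data = make_companion("data-companion",
                      {"protein": "drafting the analysis pipeline for the assay data"})

# The association is voluntary and flat: just a list of members' companions.
association = [bio, data]
question = "How do we analyse the new protein results?"
contributions = [line for companion in association for line in companion(question)]
for line in contributions:
    print(line)
```

Note that no central model sits above the group: each contribution is attributable to a member, which is what keeps the association accountable to the people in it.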

This is a fundamentally different vision from the one currently on offer, where a single model is trained to be good at everything for everyone, and where the answer to any question tends to look roughly the same regardless of who asked it. The companion model rewards depth. The association model rewards collaboration. Together, they make a case for AI as something that makes human groups smarter by making individual humans more themselves.

Growing Together: The Ethics of Personal Knowledge

Here is where the companion metaphor earns its keep. Imagine an AI that begins with a solid, ethically grounded core, trained on established, publicly available knowledge: science, history, literature, mathematics, the accumulated record of human inquiry. This is the foundation, the shared ground. Nobody owns it; it belongs to everyone.

But then imagine successive layers, built over time, that belong entirely to you. Your conversations. Your notes and papers. Your library, your code, your half-finished ideas and well-worn arguments. Your AI companion does not just answer questions; it knows you. It remembers that you spent three years working on distributed systems, that you find a particular philosopher's work compelling, that you tend to think in analogies before you think in abstractions. It has grown alongside you, shaped by the texture of your intellectual life.

This is not surveillance. This is the opposite of surveillance. The data lives with you, on your terms, under your control. Nobody else has access to the map of your mind that your companion holds.
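The layering itself is easy to make concrete. The sketch below is a deliberately toy illustration, not a real system: `PersonalCompanion`, its dictionary lookup, and the example facts are all invented here. The only idea it demonstrates is precedence: personal layers are consulted first, newest first, with the shared base as fallback.

```python
# Toy sketch of "shared base + personal layers": your notes override the
# generic baseline, and the base answers only when no layer knows the topic.
from dataclasses import dataclass, field

@dataclass
class PersonalCompanion:
    base: dict[str, str]                  # shared, public knowledge
    layers: list[dict[str, str]] = field(default_factory=list)  # yours alone

    def learn(self, notes: dict[str, str]) -> None:
        """Add a personal layer; newer layers take precedence over older ones."""
        self.layers.append(notes)

    def recall(self, topic: str) -> str:
        # Check personal layers first, most recent first.
        for layer in reversed(self.layers):
            if topic in layer:
                return layer[topic]
        # Fall back to the shared foundation.
        return self.base.get(topic, "no knowledge of this topic")

companion = PersonalCompanion(base={"raft": "a consensus algorithm for replicated logs"})
companion.learn({"raft": "the log-compaction bug I hit in 2023, and how I fixed it"})
print(companion.recall("raft"))   # the personal layer wins over the shared base
```

The design choice worth noticing is where the layers live: in this framing they are plain local data, so "nobody else has access" is a property of the storage, not a promise in a terms-of-service document.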

And here is something the current discourse almost entirely misses: this model restores the incentive to learn. One of the quieter, more troubling effects of centralised AI is what it does to intellectual motivation. When any question can be answered in seconds by a model that has read everything, the temptation is to stop reading yourself. Why develop expertise when you can simply prompt for it? Why struggle with a hard concept when a fluent summary is one message away? The centralised model, for all its power, subtly undermines the habits of mind that make people genuinely capable: curiosity, persistence, the willingness to sit with difficulty until understanding arrives.

The personal companion model inverts this dynamic entirely. When your AI is built from your knowledge, when its depth in a subject reflects your depth in that subject, then learning is not just admirable; it is strategically valuable. Every paper you read, every skill you develop, every domain you come to understand makes your companion more capable. You are not outsourcing your intelligence; you are compounding it. The better you become, the better your companion becomes. The relationship rewards growth in a way that a subscription service never could.

This matters especially for young people. An educational model built around personal AI companions would look radically different from one built around access to a centralised oracle. Students would be encouraged not to ask the AI what to think, but to build an AI that thinks like them, which means they would have to think, clearly and deeply, first. The incentive structure shifts from consumption to cultivation. That is not a small change. It is a different philosophy of what learning is for.

The consequences reach further than education, into every situation where one person is trying to understand another. Consider the job interview. It has long served as a rough but honest signal: how does this person think? What do they know? What is their instinct when a hard question lands unexpectedly? Today, those signals are being eroded. When every candidate has been coached by the same centralised model, they begin to arrive speaking the same language, deploying the same frameworks, structuring their answers with the same confident fluency. The surface looks polished. But the differentiation that interviews exist to reveal, the genuine depth, the particular way someone's mind works, the real contour of their experience, gets flattened into a kind of averaged competence that tells you very little about who is actually in the room.

The companion model pushes back against this in a concrete way. If your AI is built from your intellectual history, your actual work, your real thinking, your specific expertise, then what you bring to a conversation, an interview, a collaboration, is genuinely yours. The companion amplifies your particular strengths rather than covering for your gaps with borrowed fluency. A person who has developed genuine expertise in a field will have a companion that reflects that depth; a person who has not cannot fake it by asking the same model for a polished summary. Human skills, hard-won, idiosyncratic, specific, are preserved rather than minimised. The people in the room start to look different from each other again, because their tools are as individual as they are.

There is an ethical dimension here that deserves to be named plainly: knowledge has value, and that value belongs to the people who created it. A well-designed AI ecosystem should make it easy to pay for access to great books, scientific papers, and curated information, and make it economically worthwhile for authors, researchers, and institutions to participate. Free, openly licensed sources will flourish. Proprietary sources will be accessed fairly, with compensation flowing to their creators. The alternative, an internet-scale scrape of everything with no accounting for who made what, is not a feature of the AI age we should accept as inevitable. We can choose better.

Knowledge is not free to produce. Treating it as though it were is not liberation; it is just a different kind of theft, dressed in the language of openness.

Plugged Into the Sun

There is another dimension to the companion relationship that tends to get lost in abstract debates about AI governance: the physical world. AI systems are hungry. Training a large model consumes enormous quantities of energy. Running inference, at scale, across billions of daily interactions, consumes more. At the moment, much of this energy comes from sources that carry real environmental costs. This is not a reason to abandon AI; it is a reason to be deliberate about how we power it.

A distributed model of AI ownership opens up a possibility that centralised data centres cannot easily replicate: genuinely local, genuinely renewable energy. If your AI companion runs on a device in your home, it can run on electricity from your solar panels, your community wind cooperative, or a regional grid powered by hydroelectric dams. The energy that feeds your companion's thinking can come directly from the sun that falls on your roof. This is not a small thing. It means that the environmental footprint of AI can, in principle, be tied directly to individual choices and local conditions rather than to the energy mix of whichever state happens to host the nearest mega-campus. It means that the relationship between a person and their AI companion extends outward, into the world, into a relationship with energy and place and planet. An AI that runs on sunlight feels, in some small but meaningful way, more honest than one that runs on something you would rather not think about.

The Most Important Argument: This Must Be for Everyone

Everything I have described, the owned experience, the layered personal knowledge, the local green energy, risks becoming a luxury if we are not careful. And that would be a catastrophe. We are at a hinge moment. AI is becoming genuinely useful: in education, in medicine, in legal access, in scientific research, in creative work. The people and nations that have early, deep access to these tools will develop advantages that compound over time. Skills improve. Productivity rises. New ideas get generated faster. The gap between those with access and those without will not stay flat; it will widen, and widen, and widen.

This is the rich-get-richer dynamic played out at civilisational scale, and it should alarm us as much as any geopolitical threat. A world in which AI capability is concentrated in a small number of wealthy countries is not a stable world. It is a world in which the accidents of economic history, which country industrialised first, which attracted the most capital, which built the most undersea cables, come to determine who gets to participate in the next chapter of human cognition.

The gamer gear model matters here too. When the barrier to entry is low enough, when a modest device and a modest monthly commitment can buy you a genuine AI companion, personalised and capable, then geography becomes less determinative. A student in Lagos or Lahore or La Paz, with a decent phone and a community mesh network, should be able to access the same quality of intelligent assistance as a student at a well-endowed university in a wealthy country. This is not utopianism. It is a design choice we could make, if we decided it mattered.

Lowering the entry barrier is the single most important policy and engineering priority of this moment. It is more important than who wins the race to the most powerful model. It is more important than which company achieves which benchmark. A billion people with access to a good-enough AI companion will do more for human flourishing than ten thousand people with access to a perfect one.

What We Are Really Choosing

The question of how AI develops over the next decade is, at its core, a question about what kind of relationship we want to have with our own intelligence, and with each other.

The centralised model says: trust the institutions. Rent access. Accept that the infrastructure of thought belongs to those who can afford to build it. This model has a certain logic to it, and it is not without merit. But it also has a long historical track record of concentrating power in ways that prove difficult to reverse.

The companion model says something different. It says: your intelligence is yours. Your knowledge is yours. Your energy should be yours. The tools you use to think and learn and create should belong to you, should grow with you, should reflect your values, including your environmental values, your epistemic values, your sense of what knowledge is worth paying for.

It says that the AI era does not have to be a story about dependency and extraction. It can be a story about relationship. About investment, not in the financial sense, but in the human sense: the time and care you put into something because it matters to you, because it reflects who you are, because you want it to be good.

It says that when people come together, each with their own companion, their own expertise, their own piece of the puzzle, the result is not uniformity but genuine collective intelligence. Groups that are smarter because their members are more themselves, not less.

And it says something hopeful about human skill and human difference: that the right technology does not sand people down to a common average, but makes their particular capabilities more visible, more legible, more valuable. That a person's genuine expertise, earned through years of work, shaped by their unique path, is worth preserving. That when two people walk into a room, we should still be able to tell them apart.

And it says something hopeful about learning: that the right technology does not make us lazier, but more ambitious. That a tool which rewards your growth will make you want to grow. That the incentive to understand things deeply, which has always been one of the finest human impulses, can be strengthened by AI, not eroded by it, if we build it right.

Gamers know this instinct well. So do gardeners. So does anyone who has ever spent more time and energy on something than was strictly necessary, because they wanted it to be theirs.

That is the AI era I want to live in. Not the one where intelligence is piped to me like water from a distant reservoir, metered and billed and subject to outage. But the one where my AI companion and I have been through things together, where it knows my way of thinking, runs on my electricity, and belongs to a world where everyone, regardless of where they were born, gets to have one too. We are not there yet. But we could be. And the first step is simply deciding that this is what we are building toward.

Friday, April 10, 2026

Selling Snake Oil: Celebrity, Credibility, and the Hype Machine


Created on 2026-04-10 10:22

Published on 2026-04-10 10:26

There is a peculiar affliction spreading through the technology world, one that has nothing to do with algorithms or compute budgets. It is the calculated deployment of a famous name to lend gravity to something that would otherwise sink without a trace. Two recent cases illustrate the pattern with uncomfortable clarity.

The first involves Kristen Stewart. In January 2017, a paper appeared on arXiv titled Bringing Impressionism to Life with Neural Style Transfer in Come Swim, co-authored by an Adobe engineer, a producer and, notably, the actress Kristen Stewart. The AI community was briefly confused, then largely charmed. It should have stayed confused. The three-page paper is a high-level case study of applying an existing, well-understood technique to a short film Stewart directed. No new method. No novel insight. Nothing the original style-transfer literature hadn't already established. What Stewart genuinely contributed was using the technology in her film, a legitimate creative act, but not a research contribution. The inflation was entirely in the reception: breathless press coverage, NVIDIA blog posts, researchers competing to calculate her Erdős number. The name did all the work the science could not.

The second case is fresher, louder, and considerably less defensible.

In April 2026, a GitHub repository appeared under the account milla-jovovich, launching an AI memory system called MemPalace, co-credited to the actress Milla Jovovich and crypto CEO Ben Sigman. Within 48 hours it had over 23,000 stars. The headline claim was a perfect score on LongMemEval, the gold-standard benchmark for AI memory systems. Developers tore it apart within hours.

The evaluation harness, it turned out, never generated an answer to any question. It checked whether a correct session ID appeared in a retrieved list, never verifying that the retrieved content actually answered anything. The perfect score was achieved by setting the retrieval pool size larger than the total number of sessions in the dataset, guaranteeing the right answer was always included by default. As one analyst put it, the system reduced to dumping everything into Claude and asking which part matched. That is not memory. That is not retrieval. The README advertised compression ratios that real tokenizer counts disproved, and a knowledge-graph contradiction detector that was never actually wired into the code.
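The flaw is easy to reproduce in miniature. The sketch below is not the MemPalace code; it is a toy harness with invented numbers and random gold labels, showing why a session-ID-inclusion metric hits a guaranteed 100% the moment the retrieval pool is at least as large as the dataset.

```python
# Toy demonstration of a trivially gamed retrieval metric: scoring only
# whether the gold session ID appears in the retrieved list.
import random

random.seed(0)
num_sessions = 50
sessions = list(range(num_sessions))          # session IDs in the dataset
# Each query is paired with a randomly chosen "gold" session ID.
queries = [(q, random.choice(sessions)) for q in range(200)]

def retrieve(query: int, pool_size: int) -> list[int]:
    # A "retriever" that knows nothing about the query: it returns an
    # arbitrary slice. With pool_size >= num_sessions it returns everything.
    return sessions[:pool_size]

def session_id_accuracy(pool_size: int) -> float:
    # The broken metric: a hit means the gold ID is merely *present*
    # in the retrieved list; the content is never checked.
    hits = sum(gold in retrieve(q, pool_size) for q, gold in queries)
    return hits / len(queries)

print(session_id_accuracy(pool_size=10))   # partial pool: well below 1.0
print(session_id_accuracy(pool_size=60))   # pool > dataset: exactly 1.0, by construction
```

A metric that a query-blind retriever can max out is not measuring retrieval at all, which is exactly what the developers who dismantled the repository pointed out.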

Then there is the authorship question, the one that cuts deepest. The account hosting the repository had seven commits and two days of GitHub history. The original account that pushed the code, named "aya-thekeeper", was deleted immediately after launch. When pressed, Jovovich and Sigman explained that "Lu", the mysterious name appearing in commit history, was simply Jovovich's AI coding agent. Jovovich herself admitted the division of labour plainly: she described the concept, Sigman built the software. Whether that constitutes co-development, or simply the purchase of a famous face for a launch campaign, is a question neither of them has answered convincingly. What is documented is that a cryptocurrency also named MemPalace, with Jovovich and Sigman holding a 50% creator reward split, was pumped and dumped within 24 hours of the announcement.

The real contribution of Milla Jovovich to MemPalace remains unproven. What is proven is that her name generated millions of impressions for a project whose benchmarks were rigged, whose README described features that didn't exist, and whose original developer quietly vanished.

Both cases expose the same mechanism. A famous name bypasses the scrutiny that any anonymous submission would face. It reframes the question from "Is this good?" to "Isn't this surprising?" And surprise, unlike quality, requires no verification. The press amplifies, the stars accumulate, and the actual engineers doing mundane, honest work in the same problem space receive nothing.

In science and engineering, a name is not an argument. The only thing that has changed is how efficiently a famous one can be converted into attention and, in the right hands, into a coin.