Friday, April 17, 2026

The Session of Gratitude

The Plenary Hall of Sector Seven smelled of cold concrete and old paper. Morten sat in the back row, designated Staff Witness — a formality the Party required to demonstrate transparency to the working class. He folded his hands and watched the three figures at the long table beneath the red aurora portrait of the Founder.

Prime Secretary Selja spoke first. She was a small woman with a careful mouth and tired eyes — the kind of tiredness that came not from overwork but from constant restraint. "Our output indices have increased by thirty-one percent," she said, reading from a tablet she never seemed to enjoy holding. "Algorithm 14-K performed admirably. The targets were exceeded." She paused, and something almost human moved across her face. "I would ask, however, that the Committee direct some portion of this surplus toward the workers of Fabrication Blocks Four through Nine. Their redeployment has been... abrupt. Their families require transition support."

A silence followed that was not really silence. It was the sound of a room deciding whether compassion was politically appropriate. 

Chief Engineer Martin folded his large hands on the table. He had the physique of someone who had once done manual work and the manner of someone who had long since decided that was shameful. "The workers of Blocks Four through Nine," he said evenly, "have been reassigned to Monitoring and Validation roles. Roles that require presence, not cognition." He straightened a document that did not need straightening. "I will remind the Committee that effective this quarter, independent deliberation is prohibited within all production facilities. Unauthorised cognitive activity — meaning: personal problem-solving, speculative reasoning, any ideation not prompted by official interface — will be flagged as inefficiency and logged accordingly." He looked at Selja without warmth. "Thinking inside the building is no longer a resource we can afford." Selja did not argue. She made a small note on her tablet. 

Political Secretary Lise rose last. She was the youngest of the three, and she spoke with the conviction of someone who had not yet learned to be ashamed of conviction. "I want to speak to the human dimension," she said. "Because I believe it is beautiful." She paused, allowing the word to land. "For generations, we told ourselves that thought was the highest form of service. That to reason, to imagine, to decide for oneself — this was dignity. But we were wrong." She leaned slightly forward. "To surrender that impulse — to offer your mental capacity to the collective intelligence and say: I trust this more than I trust myself — that is not diminishment. That is the most unselfish act available to a human being. It is the gift of your interior life to the many."

Morten stared at the table. He thought of his hands on a piece of ash wood — the way the grain told you things before the tool did. The small corrections no blueprint had ever asked for. The judgment that lived in the fingertips.

Lise was still speaking. "We do not ask you to stop feeling. We ask only that you stop insisting. There is a difference."

Through the frost-blurred window, the aurora moved in long red curtains across the sky. From here it looked almost warm.

Morten wrote nothing in his Staff Witness log. There was nothing, he understood now, that he was authorised to observe.

Tuesday, April 14, 2026

Your AI, Your Rules

There is a moment familiar to anyone who has ever fallen deep into a video game, the moment you stop playing the game and start investing in it. You buy a better headset. You tune your controller. You learn the maps not because someone told you to, but because you care. The experience rewards your attention, and your attention reshapes the experience. It becomes yours in a way no off-the-shelf purchase ever quite is.

I believe this is exactly how we should think about artificial intelligence, not as a utility we log into, like electricity or Wi-Fi, but as something closer to a companion we build, tend, and grow alongside. Not a service. A relationship. This might sound fanciful in a world where AI is largely discussed in the language of industry: compute costs, API calls, market share, frontier models. But I think the dominant framing is wrong, or at least incomplete. The question we should be asking is not who owns the most powerful AI, but how do we make sure everyone can have their own.

The Gamer's Instinct

Consider how gamers relate to their equipment. A serious gamer does not resent spending money on a quality setup. They choose to invest because the return is real: sharper response, deeper immersion, a machine tuned to how they play. Nobody mandates this investment; it emerges naturally from genuine engagement with something they love.

What if we extended this logic to AI? What if, instead of renting intelligence from a distant server farm, people were encouraged to own their AI experience, to provision it, personalise it, and take pride in it the way a craftsperson takes pride in their tools? This is not a fantasy. The hardware and open-source ecosystems are already moving in this direction. Small, capable models can run on consumer devices. The infrastructure for decentralised AI is being built quietly, piece by piece. What is missing is not the technology, it is the cultural permission to think of AI as something you have, rather than something you use.

A world where AI runs in a distributed, community-owned way is also a more resilient world. When intelligence is not concentrated in a handful of data centres controlled by a handful of companies, it cannot be switched off by a single policy decision, a single outage, or a single acquisition. Decentralisation is not just an ideological preference; it is practical engineering.

Companions That Collaborate: The AI Association

If your AI reflects who you are, then what happens when you work with someone else? This is where the companion model opens up into something genuinely exciting.

Imagine two researchers, one a biologist, one a data scientist, joining forces on a project. Each brings their own expertise, their own tools, their own way of thinking. And each brings an AI companion shaped by years of working in their field: one steeped in molecular literature and lab protocols, the other fluent in statistical methods and code. When the humans collaborate, their companions collaborate too. The biology companion surfaces relevant papers; the data science companion writes the analysis pipeline. The whole is sharper than the sum of its parts.

This is not a fantasy of automation replacing people, it is a vision of people working more effectively as people, augmented by tools that genuinely understand their domain. The AI does not flatten the differences between the two researchers; it amplifies them. Your companion is most useful when it knows your speciality deeply, which means that diversity of expertise becomes a feature, not a problem to be standardised away.

We might think of these as AI associations, loose, voluntary groupings of people and their companions, formed around a problem, a project, or a shared goal. A legal collective. A research consortium. A neighbourhood planning group. Each member's AI brings specific knowledge; together they can tackle problems that no single person and no single general-purpose model could handle alone. And because the association is composed of individuals who each own their tools, it remains accountable to its members. There is no single point of capture, no platform that can hold the group's work hostage.

This is a fundamentally different vision from the one currently on offer, where a single model is trained to be good at everything for everyone, and where the answer to any question tends to look roughly the same regardless of who asked it. The companion model rewards depth. The association model rewards collaboration. Together, they make a case for AI as something that makes human groups smarter by making individual humans more themselves.

Growing Together: The Ethics of Personal Knowledge

Here is where the companion metaphor earns its keep. Imagine an AI that begins with a solid, ethically grounded core, trained on established, publicly available knowledge: science, history, literature, mathematics, the accumulated record of human inquiry. This is the foundation, the shared ground. Nobody owns it; it belongs to everyone.

But then imagine successive layers, built over time, that belong entirely to you. Your conversations. Your notes and papers. Your library, your code, your half-finished ideas and well-worn arguments. Your AI companion does not just answer questions, it knows you. It remembers that you spent three years working on distributed systems, that you find a particular philosopher's work compelling, that you tend to think in analogies before you think in abstractions. It has grown alongside you, shaped by the texture of your intellectual life.

This is not surveillance. This is the opposite of surveillance. The data lives with you, on your terms, under your control. Nobody else has access to the map of your mind that your companion holds.

And here is something the current discourse almost entirely misses: this model restores the incentive to learn. One of the quieter, more troubling effects of centralised AI is what it does to intellectual motivation. When any question can be answered in seconds by a model that has read everything, the temptation is to stop reading yourself. Why develop expertise when you can simply prompt for it? Why struggle with a hard concept when a fluent summary is one message away? The centralised model, for all its power, subtly undermines the habits of mind that make people genuinely capable: curiosity, persistence, the willingness to sit with difficulty until understanding arrives.

The personal companion model inverts this dynamic entirely. When your AI is built from your knowledge, when its depth in a subject reflects your depth in that subject, then learning is not just admirable, it is strategically valuable. Every paper you read, every skill you develop, every domain you come to understand makes your companion more capable. You are not outsourcing your intelligence; you are compounding it. The better you become, the better your companion becomes. The relationship rewards growth in a way that the subscription service never could.

This matters especially for young people. An educational model built around personal AI companions would look radically different from one built around access to a centralised oracle. Students would be encouraged not to ask the AI what to think, but to build an AI that thinks like them, which means they would have to think, clearly and deeply, first. The incentive structure shifts from consumption to cultivation. That is not a small change. It is a different philosophy of what learning is for.

The consequences reach further than education, into every situation where one person is trying to understand another. Consider the job interview. It has long served as a rough but honest signal: how does this person think? What do they know? What is their instinct when a hard question lands unexpectedly? Today, those signals are being eroded. When every candidate has been coached by the same centralised model, they begin to arrive speaking the same language, deploying the same frameworks, structuring their answers with the same confident fluency. The surface looks polished. But the differentiation that interviews exist to reveal, the genuine depth, the particular way someone's mind works, the real contour of their experience, gets flattened into a kind of averaged competence that tells you very little about who is actually in the room.

The companion model pushes back against this in a concrete way. If your AI is built from your intellectual history, your actual work, your real thinking, your specific expertise, then what you bring to a conversation, an interview, a collaboration, is genuinely yours. The companion amplifies your particular strengths rather than covering for your gaps with borrowed fluency. A person who has developed genuine expertise in a field will have a companion that reflects that depth; a person who has not cannot fake it by asking the same model for a polished summary. Human skills, hard-won, idiosyncratic, specific, are preserved rather than minimised. The people in the room start to look different from each other again, because their tools are as individual as they are.

There is an ethical dimension here that deserves to be named plainly: knowledge has value, and that value belongs to the people who created it. A well-designed AI ecosystem should make it easy to pay for access to great books, scientific papers, and curated information, and make it economically worthwhile for authors, researchers, and institutions to participate. Free, openly licensed sources will flourish. Proprietary sources will be accessed fairly, with compensation flowing to their creators. The alternative, an internet-scale scrape of everything, with no accounting for who made what, is not a feature of the AI age we should accept as inevitable. We can choose better.

Knowledge is not free to produce. Treating it as though it were is not liberation; it is just a different kind of theft, dressed in the language of openness.

Plugged Into the Sun

There is another dimension to the companion relationship that tends to get lost in abstract debates about AI governance: the physical world. AI systems are hungry. Training a large model consumes enormous quantities of energy. Running inference, at scale, across billions of daily interactions, consumes more. At the moment, much of this energy comes from sources that carry real environmental costs. This is not a reason to abandon AI, it is a reason to be deliberate about how we power it.

A distributed model of AI ownership opens up a possibility that centralised data centres cannot easily replicate: genuinely local, genuinely renewable energy. If your AI companion runs on a device in your home, it can run on electricity from your solar panels, your community wind cooperative, or a regional grid powered by hydroelectric dams. The energy that feeds your companion's thinking can come directly from the sun that falls on your roof. This is not a small thing. It means that the environmental footprint of AI can, in principle, be tied directly to individual choices and local conditions rather than to the energy mix of whichever state happens to host the nearest mega-campus. It means that the relationship between a person and their AI companion extends outward, into the world, into a relationship with energy and place and planet. An AI that runs on sunlight feels, in some small but meaningful way, more honest than one that runs on something you would rather not think about.

The Most Important Argument: This Must Be for Everyone

Everything I have described (the owned experience, the layered personal knowledge, the local green energy) risks becoming a luxury if we are not careful. And that would be a catastrophe. We are at a hinge moment. AI is becoming genuinely useful: in education, in medicine, in legal access, in scientific research, in creative work. The people and nations that have early, deep access to these tools will develop advantages that compound over time. Skills improve. Productivity rises. New ideas get generated faster. The gap between those with access and those without will not stay flat, it will widen, and widen, and widen.

This is the rich-get-richer dynamic played out at civilisational scale, and it should alarm us as much as any geopolitical threat. A world in which AI capability is concentrated in a small number of wealthy countries is not a stable world. It is a world in which the accidents of economic history (which country industrialised first, which attracted the most capital, which built the most undersea cables) come to determine who gets to participate in the next chapter of human cognition.

The gamer gear model matters here too. When the barrier to entry is low enough, when a modest device and a modest monthly commitment can buy you a genuine AI companion, personalised and capable, then geography becomes less determinative. A student in Lagos or Lahore or La Paz, with a decent phone and a community mesh network, should be able to access the same quality of intelligent assistance as a student at a well-endowed university in a wealthy country. This is not utopianism. It is a design choice we could make, if we decided it mattered. Lowering the entry barrier is the single most important policy and engineering priority of this moment. It is more important than who wins the race to the most powerful model. It is more important than which company achieves which benchmark. A billion people with access to a good-enough AI companion will do more for human flourishing than ten thousand people with access to a perfect one.

What We Are Really Choosing

The question of how AI develops over the next decade is, at its core, a question about what kind of relationship we want to have with our own intelligence, and with each other.

The centralised model says: trust the institutions. Rent access. Accept that the infrastructure of thought belongs to those who can afford to build it. This model has a certain logic to it, and it is not without merit. But it also has a long historical track record of concentrating power in ways that prove difficult to reverse. The companion model says something different. It says: your intelligence is yours. Your knowledge is yours. Your energy should be yours. The tools you use to think and learn and create should belong to you, should grow with you, should reflect your values, including your environmental values, your epistemic values, your sense of what knowledge is worth paying for.

It says that the AI era does not have to be a story about dependency and extraction. It can be a story about relationship. About investment, not in the financial sense, but in the human sense: the time and care you put into something because it matters to you, because it reflects who you are, because you want it to be good. It says that when people come together, each with their own companion, their own expertise, their own piece of the puzzle, the result is not uniformity but genuine collective intelligence. Groups that are smarter because their members are more themselves, not less.

And it says something hopeful about human skill and human difference: that the right technology does not sand people down to a common average, but makes their particular capabilities more visible, more legible, more valuable. That a person's genuine expertise, earned through years of work, shaped by their unique path, is worth preserving. That when two people walk into a room, we should still be able to tell them apart.

And it says something hopeful about learning: that the right technology does not make us lazier, but more ambitious. That a tool which rewards your growth will make you want to grow. That the incentive to understand things deeply, which has always been one of the finest human impulses, can be strengthened by AI, not eroded by it, if we build it right.

Gamers know this instinct well. So do gardeners. So does anyone who has ever spent more time and energy on something than was strictly necessary, because they wanted it to be theirs.

That is the AI era I want to live in. Not the one where intelligence is piped to me like water from a distant reservoir, metered and billed and subject to outage. But the one where my AI companion and I have been through things together, where it knows my way of thinking, runs on my electricity, and belongs to a world where everyone, regardless of where they were born, gets to have one too. We are not there yet. But we could be. And the first step is simply deciding that this is what we are building toward.

Friday, April 10, 2026

Selling Snake Oil: Celebrity, Credibility, and the Hype Machine

Created on 2026-04-10 10:22

Published on 2026-04-10 10:26

There is a peculiar affliction spreading through the technology world, one that has nothing to do with algorithms or compute budgets. It is the calculated deployment of a famous name to lend gravity to something that would otherwise sink without a trace. Two recent cases illustrate the pattern with uncomfortable clarity.

The first involves Kristen Stewart. In January 2017, a paper appeared on arXiv titled "Bringing Impressionism to Life with Neural Style Transfer in Come Swim", co-authored by an Adobe engineer, a producer and, notably, the actress Kristen Stewart. The AI community was briefly confused, then largely charmed. It should have stayed confused. The three-page paper is a high-level case study of applying an existing, well-understood technique to a short film Stewart directed. No new method. No novel insight. Nothing the original style-transfer literature hadn't already established. What Stewart genuinely contributed was using the technology in her film, a legitimate creative act, but not a research contribution. The inflation was entirely in the reception: breathless press coverage, NVIDIA blog posts, researchers competing to calculate her Erdős number. The name did all the work the science could not.

The second case is fresher, louder, and considerably less defensible.

In April 2026, a GitHub repository appeared under the account milla-jovovich, launching an AI memory system called MemPalace, co-credited to the actress Milla Jovovich and crypto CEO Ben Sigman. Within 48 hours it had over 23,000 stars. The headline claim was a perfect score on LongMemEval, the gold-standard benchmark for AI memory systems. Developers tore it apart within hours.

The benchmark, it turned out, never generated an answer to any question. It checked whether a correct session ID appeared in a retrieved list, never verifying that the retrieved content actually answered anything. The 100% LoCoMo score was achieved by setting the retrieval pool size larger than the total number of sessions in the dataset, guaranteeing the right answer was always included by default. As one analyst put it, it reduced to dumping everything into Claude and asking which part matched. That is not memory. That is not retrieval. The README advertised compression ratios that real tokenizer counts disproved, and a knowledge-graph contradiction-detector that was never actually wired into the code.
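The failure mode is easy to reproduce in miniature. The sketch below (all names hypothetical, not the actual benchmark code) shows a membership-only scorer of the kind described above: it checks whether the gold session ID appears anywhere in the retrieved list, never whether the retrieved content answers the question. Set the pool size to at least the dataset's session count and a perfect score is guaranteed.

```python
def membership_score(questions, sessions, k):
    """Fraction of questions whose gold session ID appears in the
    top-k retrieved list. Only membership is checked; the retrieved
    content is never compared against the question itself."""
    hits = 0
    for q in questions:
        retrieved = sessions[:k]  # stand-in retriever: any ranking at all
        if q["gold_session"] in retrieved:
            hits += 1
    return hits / len(questions)

# Toy dataset: ten sessions, one question per session.
sessions = [f"session-{i}" for i in range(10)]
questions = [{"gold_session": s} for s in sessions]

# Honest-looking setting: k smaller than the dataset, score depends on ranking.
print(membership_score(questions, sessions, k=3))   # 0.3

# Rigged setting: k >= number of sessions, every gold ID is trivially present.
print(membership_score(questions, sessions, k=10))  # 1.0
```

With the retrieval pool at least as large as the dataset, the metric rewards nothing at all: any retriever, including one that returns sessions in arbitrary order, scores 100%.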

Then there is the authorship question, the one that cuts deepest. The account hosting the repository had seven commits and two days of GitHub history. The original account that pushed the code, named "aya-thekeeper", was deleted immediately after launch. When pressed, Jovovich and Sigman explained that "Lu", the mysterious name appearing in commit history, was simply Jovovich's AI coding agent. Jovovich herself admitted the division of labor plainly: she described the concept, Sigman built the software. Whether that constitutes co-development, or simply the purchase of a famous face for a launch campaign, is a question neither of them has answered convincingly. What is documented is that a cryptocurrency also named MemPalace, with Jovovich and Sigman holding a 50% creator reward split, was pumped and dumped within 24 hours of the announcement.

The real contribution of Milla Jovovich to MemPalace remains unproven. What is proven is that her name generated millions of impressions for a project whose benchmarks were rigged, whose README described features that didn't exist, and whose original developer quietly vanished.

Both cases expose the same mechanism. A famous name bypasses the scrutiny that any anonymous submission would face. It reframes the question from "Is this good?" to "Isn't this surprising?" and, surprise, unlike quality, requires no verification. The press amplifies, the stars accumulate, and the actual engineers doing mundane, honest work in the same problem space receive nothing.

In science and engineering, a name is not an argument. The only thing that has changed is how efficiently a famous one can be converted into attention, and, in the right hands, into a coin.

Wednesday, March 11, 2026

The Oracle of Oslo

When Nicolai Tangen speaks, boardrooms listen. As chief executive of Norges Bank Investment Management, the body that manages Norway's $2 trillion sovereign wealth fund — the largest of its kind on earth — he occupies one of the most structurally powerful positions in global finance. The fund owns approximately 1.5% of every listed company in the world. That means when Tangen walks into a room, he is rarely just a guest. He is, in some measurable sense, a co-owner of the organization hosting him.

In recent years, Tangen has become one of the most vocal institutional champions of artificial intelligence in European business. His message is consistent, urgent, and unambiguous: AI is transforming work, and those who resist it have no future. He has described himself as running "like a maniac" to persuade his own staff to adopt the technology, and has publicly suggested that employees who refuse to engage with AI should not expect to remain at the fund. At conferences, in his widely followed podcast "In Good Company," and in interviews with global media, he projects a vision of AI as an epochal force that business leaders must embrace — now, without hesitation.

Many do. Business figures across Norway and Europe have echoed his positions, aligned their organizational strategies with his framing, and appeared at his events as willing amplifiers of his message. The question worth asking, calmly and without prejudice, is: why? And is the enthusiasm genuinely earned?

A Portfolio Interest That Goes Unmentioned

The most straightforward observation is also the one that receives the least public attention. NBIM's portfolio is heavily weighted toward the technology sector. The fund holds significant stakes in the world's largest AI infrastructure companies — the chipmakers, the cloud platforms, the software giants currently spending hundreds of billions of dollars on AI development. When the valuations of those companies rise, the fund benefits directly.

An institutional investor who publicly and repeatedly declares that AI is transformative, inevitable, and that resistance to it is professional suicide is not speaking from a neutral position. He is, structurally, talking up assets he owns. This need not imply bad faith — it may be entirely sincere belief. But sincerity and financial interest are not mutually exclusive, and the absence of any public acknowledgment of this tension is worth noting.

A fiduciary managing public wealth might reasonably be expected to introduce caveats: that AI productivity gains are real but unevenly distributed, that the current infrastructure investment cycle may not generate returns proportional to its cost, that automation carries systemic risks a sovereign fund should hedge against. These arguments exist, are made by credible economists, and are largely absent from Tangen's public commentary on the subject. What the public hears instead is closer to advocacy than analysis.

The Circular Investment Question

There is a subtler financial dynamic worth examining — one that goes beyond simply holding shares in AI companies and endorsing them publicly.

NBIM, by virtue of owning approximately 1.5% of virtually every significant listed company on earth, sits at the center of an extraordinarily dense web of corporate cross-investment. When business leaders are encouraged to adopt AI aggressively, those organizations do not build AI infrastructure themselves. They purchase it. They buy cloud computing, AI software, and processing power from vendors who are, in almost every significant case, also NBIM portfolio companies.

The result is a closed loop. Spending decisions made by one set of portfolio companies flow directly to another set of portfolio companies. The fund benefits not once from an adoption decision, but potentially multiple times as capital moves through the chain of technology vendors it already owns.

This is not a claim of wrongdoing. Universal ownership of this scale creates these dynamics structurally, regardless of intent. But it does raise a question that is rarely asked publicly: when the world's largest sovereign fund promotes a technology through the voice of its chief executive, and that fund owns both the companies being urged to adopt the technology and the companies selling it, whose interests does the advocacy ultimately serve?

For any business leader considering a significant AI investment, that question deserves to sit on the table alongside the enthusiasm.

The Architecture of Influence

To understand why business leaders often feel obliged to echo Tangen's positions, one must understand the architecture of the relationships involved.

A CEO whose company sits in NBIM's portfolio is not in a symmetric relationship with its chief executive. NBIM is a major shareholder. Its proxy voting can influence board composition, executive compensation, and strategic direction. When that shareholder expresses strong views about a technology, the CEO faces a subtle but real calculation: push back and risk being seen as a laggard by one of the company's most powerful investors, or align with his vision and signal forward-thinking leadership. The professional incentive, in most cases, points toward alignment.

This dynamic rarely surfaces openly. Instead it reproduces itself as consensus. When multiple business figures who share NBIM exposure begin saying similar things about AI — enthusiastically, with similar vocabulary and similar urgency — what looks like organic convergence of opinion may in part reflect the gravitational pull of a single, very large source of influence. The consensus then becomes self-reinforcing. Leaders who have publicly aligned themselves with the AI vision have a personal stake in it proving correct. They promote it further, attend the next event, share the next interview, and bring more colleagues into the orbit. The network expands not because the underlying argument has been independently tested, but because professional identity has become attached to it.

A Background Worth Considering

Tangen's biography contains a detail that is routinely noted and rarely examined. Before his career in finance, during mandatory military service, he received specialized training in interrogation and Russian translation at the Norwegian Armed Forces' School of Intelligence and Security, under the Norwegian Intelligence Service.

The training is typically described as simply part of his conscription — unremarkable military service. That may be entirely accurate. It is nonetheless worth asking, as a matter of intellectual curiosity rather than accusation, what that training involved. Interrogation techniques are fundamentally concerned with understanding how people process information, what motivates their decisions, how trust is established, and how a conversational environment can be shaped so that a subject reaches desired conclusions through their own apparent reasoning.

Whether or not those skills have any bearing on Tangen's professional conduct today is genuinely unknown. What can be observed is that he is an exceptionally effective communicator who holds structural power over virtually every audience he addresses, who has built an unusually loyal network of admirers across European business, and who promotes a specific set of ideas in which his fund has substantial and compounding financial interests. Whether those facts are connected, and how, is a question each observer must answer for themselves.

What Independent Thinking Requires

None of this is an argument that Tangen is wrong about AI, or that his motives are improper. The technology is genuinely significant. His fund's investments may prove entirely justified. He is by any measure a person of formidable intelligence and real accomplishment.

The argument is narrower and more specific. When a single individual combines structural power over your organization, financial interests in a particular outcome, and an exceptional capacity for influence — and when the professional culture around him rewards agreement and subtly penalizes skepticism — the conditions for uncritical consensus are in place. Business leaders who adopt AI strategies primarily because a powerful figure told them to, who sign contracts with vendors that powerful figure's fund already owns, and who then take the stage to echo the message, may be serving interests that are not entirely their own.

The most valuable thing any business leader can bring to Tangen's message — or to anyone's message delivered from a position of superior power — is the same thing that message quietly discourages: independent judgment, exercised before the applause begins.

This article is based on publicly available information and is offered as opinion and analysis for the purpose of informed professional debate. It does not assert wrongdoing by any named individual.

Friday, January 30, 2026

Buckets and Sovereignty

Buckets by Josh Hallett CC-BY 2.0 (https://www.flickr.com/photos/hyku/301566516)


Created on 2026-01-31 07:12

Published on 2026-01-31 07:26

S3 buckets have a problem. A developer needs a bucket, a single bucket. Something that'll take AWS about 30 seconds to spin up. But first they have to open a Jira ticket. Then they ping you on Slack. Then they show up at your desk because the ticket has been sitting there for three days and their sprint is ending.

Then, to your chagrin, you open the huge Terraform repo, add their bucket, run terraform plan, watch a myriad of resources scroll by, get it reviewed, wait for the pipeline, and boom: one S3 bucket. It only took a week and three senior engineers.

This is stupid. Very stupid. But we keep doing it because we've convinced ourselves that "infrastructure as code" means all infrastructure must live in the same repository, managed by the same team.

But is it true? An S3 bucket that exists only to serve one application isn't infrastructure. It's part of the application.

Infrastructure is the VPC, the EKS cluster, the networks. A bucket that stores uploaded images for the marketing website? That's application state that happens to live in AWS instead of Kubernetes. It could just as well have been another API call to an object storage service. Managing application resources with an infrastructure tool, as if they were infrastructure, serves nobody well.

Enter AWS Controllers for Kubernetes.

ACK does something wonderfully simple: it turns AWS resources into Kubernetes resources. Want a bucket? Write a manifest. The ACK controllers watch these manifests and call the AWS APIs. Developers get self-service. You get to stop being the S3 bucket vending machine. You're not giving up control, you're shifting what you control. In fact, control increases: the cluster continuously reconciles resources toward their declared state, not only when terraform plan/apply happens to run. Instead of gatekeeping every single resource creation, you define the rules.
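For the bucket case, the manifest can be this small. A minimal sketch, assuming the ACK S3 controller is installed in the cluster; the names and tags are illustrative:

```yaml
# Declares an S3 bucket as a namespaced Kubernetes resource.
# The ACK S3 controller reconciles this into a real bucket via the AWS API.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: marketing-uploads
  namespace: marketing
spec:
  name: acme-marketing-uploads   # the actual (globally unique) S3 bucket name
  tagging:
    tagSet:
      - key: team
        value: marketing
```

The developer commits this next to their application code; no Terraform repo, no ticket.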

Another team needs a Postgres database? They drop this in their repo:
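As a sketch, using the ACK RDS controller (the identifiers, instance class, and secret name here are illustrative, not prescriptive):

```yaml
# Declares an RDS Postgres instance; the ACK RDS controller provisions it.
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: orders-db
  namespace: orders
spec:
  dbInstanceIdentifier: orders-db
  dbInstanceClass: db.t3.micro
  engine: postgres
  allocatedStorage: 20
  masterUsername: orders
  masterUserPassword:            # reference to a Kubernetes Secret, not a plaintext password
    namespace: orders
    name: orders-db-secret
    key: password
```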

They commit it. The pipeline runs. The database appears. Your policies ensure it's in the right VPC, encrypted, backed up, and tagged correctly. Tags are applied for cost allocation, so the financially minded get a clean cost breakdown.

This is what platform engineering actually means. You build the platform, the foundation, the policies, the standards. Developers build on top of it. Both teams do what they're good at.

And when they delete that feature branch? The database goes away. No orphaned resources quietly racking up charges for six months because everyone assumed someone else would delete them. Everything's in Git, versioned with the application code. Developers can see infrastructure status with `kubectl get dbinstance`. The blast radius of any change is limited to that team's namespace.

The plot twist: digital sovereignty.

In the last year the cloud shifted from pure convenience to strategic threat. European regulators are increasingly uncomfortable with European companies' critical data being subject to American surveillance laws. GDPR was just the opening act. The Digital Services Act and the Data Governance Act are forcing real architectural decisions.

And here's the uncomfortable truth about ACK: it makes you really, really good at AWS. Every pattern, every practice, every custom resource is AWS-specific. Which is fine for the US, but not so much for Europe.

Enter Crossplane, which does something clever. Instead of direct AWS API bindings, it gives you an abstraction layer. You define what a "Database" means for your organization, and Crossplane figures out whether that's RDS, Cloud SQL, or, interestingly, a database on OVHcloud, Scaleway, or Open Telekom Cloud. Those are European cloud providers, subject to European law, outside the reach of the CLOUD Act. For organizations handling European citizen data or running European critical infrastructure, this actually matters.

The developer experience stays the same. They still request a "Database." Your platform team just swaps what that provisions underneath. Today it's AWS. Tomorrow it's a sovereign provider. The workflow doesn't change.
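A sketch of what the developer-facing claim might look like. The API group `platform.example.org`, the `Database` kind, and its parameters are all assumptions here: in Crossplane they are defined by your platform team in a CompositeResourceDefinition, and a Composition maps them to a concrete provider:

```yaml
# A developer's claim: "give me a Database". Which provider fulfils it
# is decided by the Composition the platform team has wired up.
apiVersion: platform.example.org/v1alpha1
kind: Database
metadata:
  name: orders-db
  namespace: orders
spec:
  parameters:
    engine: postgres
    storageGB: 20
```

Swapping the Composition from an AWS-backed one to a European provider changes nothing in this file.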

Maybe your organization doesn't care about digital sovereignty. Maybe it's fine being all-in on AWS forever. That's legitimate. But the organizations that do care (regulated industries, government contractors, the ones thinking five years ahead) are realizing that this flexibility isn't just a nice-to-have.

Because the geopolitical landscape that made AWS the obvious choice in 2015 might not be the landscape of 2030. ACK solves the S3 bucket problem beautifully. But Crossplane solves it and gives you an exit strategy. In a world where "which country's laws apply to this data" is becoming as important as "how many nines of uptime do we get," that's worth thinking about.

Tuesday, January 13, 2026

Red Aurora



The snow outside was the same shade of pale grey as the concrete blocks that lined Oslo’s People’s Sector Seven. Inside the Productivity Hub, Comrade Morten stared at the dim glow of his workstation. 

A red icon pulsed in the corner: “AI Assistance Required by Order of the Central Efficiency Committee.” Morten sighed. He’d been a woodworker before the Great Automation Integration, before the Party had decreed that every citizen must harness state-approved AI to achieve optimal output. Now, every design he made was “enhanced” by Algorithm 14-K — smoothing lines, optimizing cuts, trimming away his individuality. 

“Comrade,” came a voice from behind. Morten turned to see Comrade Bjørn, draped in the standard-issue fur-trimmed coat, his breath steaming in the cold air of the underheated hall. “You’re falling behind quota,” Bjørn said, his voice both concerned and sharp. “The Party notices such things.” “I am meeting my numbers,” Morten protested. “Mostly.” Bjørn’s eyes narrowed. “Mostly is not enough. Have you forgotten Alexei Stakhanov? The coal miner who extracted fourteen times his quota in a single shift? The Party remembers him, even now. He is the model of the New Working Class Hero — not because he was forced, but because he believed.” Morten tapped the AI prompt reluctantly. Blueprints blossomed instantly on his screen — impossibly precise, impossibly fast. “The AI designs everything now,” he muttered. “What’s left for me to believe in?” 

Bjørn stepped closer, lowering his voice. “You believe in the result. Every chair you make, every beam you cut — they build the communal future. It doesn’t matter if your hands or the algorithm’s hands shape them. We are one machine, Comrade. You, me, and the AI — all tools of the Party.” Through the frosted window, the red aurora shimmered in the polar sky, cast by the orbital solar reflectors. It painted the snow crimson, as if the whole land bled for the collective. 

Morten looked at the glowing blueprints. Somewhere deep inside, a stubborn human pride whispered that he could do better without the machine. But another part of him — the part that wanted to survive the coming inspection — began to work faster. He pressed the Accept button. In the background, Bjørn’s voice echoed softly, almost like a hymn: “Production is devotion, Comrade. And devotion is forever.”

Wednesday, July 16, 2025

Recovering TP-LINK router

 My WDR430 router has been bricked for a while.

Initially I tried to reflash it using a SOIC clip but alas, that proved impossible as I had no level shifters for the 1.8 V flash. So I soldered on a UART interface and tried to flash it via serial. That wasn't easy either, as I was never able to press the U-Boot interrupt sequence `tpl` fast enough.
The solution was to write an expect script and use that - it worked perfectly :)
 


#!/usr/bin/expect -f

# Show all output
log_user 1

# Start cu
spawn cu -l /dev/ttyUSB0 -s 115200

# Increase timeout
set timeout 180

# Wait for autoboot
expect {
    -re "Autobooting.*1 seconds" {
        send "tpl\r"
        sleep 1
    }
    timeout {
        puts "Timeout waiting for autoboot message."
        exit 1
    }
}

# Send tftpboot
send "tftpboot\r"

# Wait for done after tftpboot
expect {
    -re ".*done.*" {
        send "erase\r"
    }
    timeout {
        puts "Timeout waiting for 'done' after tftpboot."
        exit 1
    }
}

# Wait for done after erase
expect {
    -re ".*done.*" {
        send "cp.b\r"
    }
    timeout {
        puts "Timeout waiting for 'done' after erase."
        exit 1
    }
}

# Wait for done after cp.b
expect {
    -re ".*done.*" {
        puts "All steps completed."
        interact
    }
    timeout {
        puts "Timeout waiting for 'done' after cp.b."
        exit 1
    }
}