Friday, April 17, 2026

The Session of Gratitude



 

The Plenary Hall of Sector Seven smelled of cold concrete and old paper. Morten sat in the back row, designated Staff Witness — a formality the Party required to demonstrate transparency to the working class. He folded his hands and watched the three figures at the long table beneath the red aurora portrait of the Founder.

Prime Secretary Selja spoke first. She was a small woman with a careful mouth and tired eyes — the kind of tiredness that came not from overwork but from constant restraint. "Our output indices have increased by thirty-one percent," she said, reading from a tablet she never seemed to enjoy holding. "Algorithm 14-K performed admirably. The targets were exceeded." She paused, and something almost human moved across her face. "I would ask, however, that the Committee direct some portion of this surplus toward the workers of Fabrication Blocks Four through Nine. Their redeployment has been... abrupt. Their families require transition support."

A silence followed that was not really silence. It was the sound of a room deciding whether compassion was politically appropriate. 

Chief Engineer Martin folded his large hands on the table. He had the physique of someone who had once done manual work and the manner of someone who had long since decided that was shameful. "The workers of Blocks Four through Nine," he said evenly, "have been reassigned to Monitoring and Validation roles. Roles that require presence, not cognition." He straightened a document that did not need straightening. "I will remind the Committee that effective this quarter, independent deliberation is prohibited within all production facilities. Unauthorised cognitive activity — meaning: personal problem-solving, speculative reasoning, any ideation not prompted by official interface — will be flagged as inefficiency and logged accordingly." He looked at Selja without warmth. "Thinking inside the building is no longer a resource we can afford." Selja did not argue. She made a small note on her tablet. 

Political Secretary Lise rose last. She was the youngest of the three, and she spoke with the conviction of someone who had not yet learned to be ashamed of conviction. "I want to speak to the human dimension," she said. "Because I believe it is beautiful." She paused, allowing the word to land. "For generations, we told ourselves that thought was the highest form of service. That to reason, to imagine, to decide for oneself — this was dignity. But we were wrong." She leaned slightly forward. "To surrender that impulse — to offer your mental capacity to the collective intelligence and say: I trust this more than I trust myself — that is not diminishment. That is the most unselfish act available to a human being. It is the gift of your interior life to the many."

Morten stared at the table. He thought of his hands on a piece of ash wood — the way the grain told you things before the tool did. The small corrections no blueprint had ever asked for. The judgment that lived in the fingertips.

Lise was still speaking. "We do not ask you to stop feeling. We ask only that you stop insisting. There is a difference."

Through the frost-blurred window, the aurora moved in long red curtains across the sky. From here it looked almost warm.

Morten wrote nothing in his Staff Witness log. There was nothing, he understood now, that he was authorised to observe.

Tuesday, April 14, 2026

Your AI, Your Rules

There is a moment familiar to anyone who has ever fallen deep into a video game: the moment you stop playing the game and start investing in it. You buy a better headset. You tune your controller. You learn the maps not because someone told you to, but because you care. The experience rewards your attention, and your attention reshapes the experience. It becomes yours in a way no off-the-shelf purchase ever quite is.

I believe this is exactly how we should think about artificial intelligence: not as a utility we log into, like electricity or Wi-Fi, but as something closer to a companion we build, tend, and grow alongside. Not a service. A relationship. This might sound fanciful in a world where AI is largely discussed in the language of industry: compute costs, API calls, market share, frontier models. But I think the dominant framing is wrong, or at least incomplete. The question we should be asking is not "who owns the most powerful AI?" but "how do we make sure everyone can have their own?"

The Gamer's Instinct

Consider how gamers relate to their equipment. A serious gamer does not resent spending money on a quality setup. They choose to invest because the return is real: sharper response, deeper immersion, a machine tuned to how they play. Nobody mandates this investment; it emerges naturally from genuine engagement with something they love.

What if we extended this logic to AI? What if, instead of renting intelligence from a distant server farm, people were encouraged to own their AI experience: to provision it, personalise it, and take pride in it the way a craftsperson takes pride in their tools? This is not a fantasy. The hardware and open-source ecosystems are already moving in this direction. Small, capable models can run on consumer devices. The infrastructure for decentralised AI is being built quietly, piece by piece. What is missing is not the technology; it is the cultural permission to think of AI as something you have, rather than something you use.

A world where AI runs in a distributed, community-owned way is also a more resilient world. When intelligence is not concentrated in a handful of data centres controlled by a handful of companies, it cannot be switched off by a single policy decision, a single outage, or a single acquisition. Decentralisation is not just an ideological preference; it is practical engineering.

Companions That Collaborate: The AI Association

If your AI reflects who you are, then what happens when you work with someone else? This is where the companion model opens up into something genuinely exciting. Imagine two researchers, one a biologist, one a data scientist, joining forces on a project. Each brings their own expertise, their own tools, their own way of thinking. And each brings an AI companion shaped by years of working in their field: one steeped in molecular literature and lab protocols, the other fluent in statistical methods and code. When the humans collaborate, their companions collaborate too. The biology companion surfaces relevant papers; the data science companion writes the analysis pipeline. The whole is sharper than the sum of its parts. This is not a fantasy of automation replacing people; it is a vision of people working more effectively as people, augmented by tools that genuinely understand their domain. The AI does not flatten the differences between the two researchers; it amplifies them. Your companion is most useful when it knows your speciality deeply, which means that diversity of expertise becomes a feature, not a problem to be standardised away.

We might think of these as AI associations: loose, voluntary groupings of people and their companions, formed around a problem, a project, or a shared goal. A legal collective. A research consortium. A neighbourhood planning group. Each member's AI brings specific knowledge; together they can tackle problems that no single person and no single general-purpose model could handle alone. And because the association is composed of individuals who each own their tools, it remains accountable to its members. There is no single point of capture, no platform that can hold the group's work hostage.

This is a fundamentally different vision from the one currently on offer, where a single model is trained to be good at everything for everyone, and where the answer to any question tends to look roughly the same regardless of who asked it. The companion model rewards depth. The association model rewards collaboration. Together, they make a case for AI as something that makes human groups smarter by making individual humans more themselves.

Growing Together: The Ethics of Personal Knowledge

Here is where the companion metaphor earns its keep. Imagine an AI that begins with a solid, ethically grounded core, trained on established, publicly available knowledge: science, history, literature, mathematics, the accumulated record of human inquiry. This is the foundation, the shared ground. Nobody owns it; it belongs to everyone.

But then imagine successive layers, built over time, that belong entirely to you. Your conversations. Your notes and papers. Your library, your code, your half-finished ideas and well-worn arguments. Your AI companion does not just answer questions; it knows you. It remembers that you spent three years working on distributed systems, that you find a particular philosopher's work compelling, that you tend to think in analogies before you think in abstractions. It has grown alongside you, shaped by the texture of your intellectual life.

This is not surveillance. This is the opposite of surveillance. The data lives with you, on your terms, under your control. Nobody else has access to the map of your mind that your companion holds.
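The layering is easy to sketch. What follows is a toy illustration only, not a real system (the name PersonalLayer and all the data here are invented for the example): a shared, read-only foundation, plus a private layer that the owner alone writes to and that takes precedence whenever both have something to say.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalLayer:
    """A shared foundation plus a private, owner-controlled knowledge layer."""
    base: dict                                    # shared foundation: topic -> summary
    notes: dict = field(default_factory=dict)     # private layer, owned by the user

    def learn(self, topic, note):
        # Everything added here stays in the private layer, on the owner's device.
        self.notes[topic] = note

    def answer(self, topic):
        # Personal knowledge shadows the shared foundation when both have an entry.
        if topic in self.notes:
            return "(yours) " + self.notes[topic]
        return "(shared) " + self.base.get(topic, "no entry")

base = {"raft": "Raft is a consensus algorithm for replicated logs."}
companion = PersonalLayer(base)
companion.learn("raft", "Used Raft in the 2023 ledger project; leader leases were the hard part.")

print(companion.answer("raft"))    # the personal note wins
print(companion.answer("paxos"))   # falls back to the shared foundation
```

The point of the sketch is the precedence rule: whatever you have added yourself shadows the shared foundation, and it never leaves the object you own.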

And here is something the current discourse almost entirely misses: this model restores the incentive to learn. One of the quieter, more troubling effects of centralised AI is what it does to intellectual motivation. When any question can be answered in seconds by a model that has read everything, the temptation is to stop reading yourself. Why develop expertise when you can simply prompt for it? Why struggle with a hard concept when a fluent summary is one message away? The centralised model, for all its power, subtly undermines the habits of mind that make people genuinely capable: curiosity, persistence, the willingness to sit with difficulty until understanding arrives.

The personal companion model inverts this dynamic entirely. When your AI is built from your knowledge, when its depth in a subject reflects your depth in that subject, then learning is not just admirable; it is strategically valuable. Every paper you read, every skill you develop, every domain you come to understand makes your companion more capable. You are not outsourcing your intelligence; you are compounding it. The better you become, the better your companion becomes. The relationship rewards growth in a way that the subscription service never could.

This matters especially for young people. An educational model built around personal AI companions would look radically different from one built around access to a centralised oracle. Students would be encouraged not to ask the AI what to think, but to build an AI that thinks like them, which means they would have to think, clearly and deeply, first. The incentive structure shifts from consumption to cultivation. That is not a small change. It is a different philosophy of what learning is for.

The consequences reach further than education, into every situation where one person is trying to understand another. Consider the job interview. It has long served as a rough but honest signal: how does this person think? What do they know? What is their instinct when a hard question lands unexpectedly? Today, those signals are being eroded. When every candidate has been coached by the same centralised model, they begin to arrive speaking the same language, deploying the same frameworks, structuring their answers with the same confident fluency. The surface looks polished. But the differentiation that interviews exist to reveal, the genuine depth, the particular way someone's mind works, the real contour of their experience, gets flattened into a kind of averaged competence that tells you very little about who is actually in the room.

The companion model pushes back against this in a concrete way. If your AI is built from your intellectual history, your actual work, your real thinking, your specific expertise, then what you bring to a conversation, an interview, a collaboration, is genuinely yours. The companion amplifies your particular strengths rather than covering for your gaps with borrowed fluency. A person who has developed genuine expertise in a field will have a companion that reflects that depth; a person who has not cannot fake it by asking the same model for a polished summary. Human skills, hard-won, idiosyncratic, specific, are preserved rather than minimised. The people in the room start to look different from each other again, because their tools are as individual as they are.

There is an ethical dimension here that deserves to be named plainly: knowledge has value, and that value belongs to the people who created it. A well-designed AI ecosystem should make it easy to pay for access to great books, scientific papers, and curated information, and make it economically worthwhile for authors, researchers, and institutions to participate. Free, openly licensed sources will flourish. Proprietary sources will be accessed fairly, with compensation flowing to their creators. The alternative, an internet-scale scrape of everything, with no accounting for who made what, is not a feature of the AI age we should accept as inevitable. We can choose better.

Knowledge is not free to produce. Treating it as though it were is not liberation; it is just a different kind of theft, dressed in the language of openness.

Plugged Into the Sun

There is another dimension to the companion relationship that tends to get lost in abstract debates about AI governance: the physical world. AI systems are hungry. Training a large model consumes enormous quantities of energy. Running inference, at scale, across billions of daily interactions, consumes more. At the moment, much of this energy comes from sources that carry real environmental costs. This is not a reason to abandon AI; it is a reason to be deliberate about how we power it.

A distributed model of AI ownership opens up a possibility that centralised data centres cannot easily replicate: genuinely local, genuinely renewable energy. If your AI companion runs on a device in your home, it can run on electricity from your solar panels, your community wind cooperative, or a regional grid powered by hydroelectric dams. The energy that feeds your companion's thinking can come directly from the sun that falls on your roof. This is not a small thing. It means that the environmental footprint of AI can, in principle, be tied directly to individual choices and local conditions rather than to the energy mix of whichever state happens to host the nearest mega-campus. It means that the relationship between a person and their AI companion extends outward, into the world, into a relationship with energy and place and planet. An AI that runs on sunlight feels, in some small but meaningful way, more honest than one that runs on something you would rather not think about.

The Most Important Argument: This Must Be for Everyone

Everything I have described (the owned experience, the layered personal knowledge, the local green energy) risks becoming a luxury if we are not careful. And that would be a catastrophe. We are at a hinge moment. AI is becoming genuinely useful: in education, in medicine, in legal access, in scientific research, in creative work. The people and nations that have early, deep access to these tools will develop advantages that compound over time. Skills improve. Productivity rises. New ideas get generated faster. The gap between those with access and those without will not stay flat; it will widen, and widen, and widen.

This is the rich-get-richer dynamic played out at civilisational scale, and it should alarm us as much as any geopolitical threat. A world in which AI capability is concentrated in a small number of wealthy countries is not a stable world. It is a world in which the accidents of economic history, which country industrialised first, which attracted the most capital, which built the most undersea cables, come to determine who gets to participate in the next chapter of human cognition.

The gamer gear model matters here too. When the barrier to entry is low enough, when a modest device and a modest monthly commitment can buy you a genuine AI companion, personalised and capable, then geography becomes less determinative. A student in Lagos or Lahore or La Paz, with a decent phone and a community mesh network, should be able to access the same quality of intelligent assistance as a student at a well-endowed university in a wealthy country. This is not utopianism. It is a design choice we could make, if we decided it mattered. Lowering the entry barrier is the single most important policy and engineering priority of this moment. It is more important than who wins the race to the most powerful model. It is more important than which company achieves which benchmark. A billion people with access to a good-enough AI companion will do more for human flourishing than ten thousand people with access to a perfect one.

What We Are Really Choosing

The question of how AI develops over the next decade is, at its core, a question about what kind of relationship we want to have with our own intelligence, and with each other.

The centralised model says: trust the institutions. Rent access. Accept that the infrastructure of thought belongs to those who can afford to build it. This model has a certain logic to it, and it is not without merit. But it also has a long historical track record of concentrating power in ways that prove difficult to reverse. The companion model says something different. It says: your intelligence is yours. Your knowledge is yours. Your energy should be yours. The tools you use to think and learn and create should belong to you, should grow with you, should reflect your values, including your environmental values, your epistemic values, your sense of what knowledge is worth paying for.

It says that the AI era does not have to be a story about dependency and extraction. It can be a story about relationship. About investment, not in the financial sense, but in the human sense: the time and care you put into something because it matters to you, because it reflects who you are, because you want it to be good. It says that when people come together, each with their own companion, their own expertise, their own piece of the puzzle, the result is not uniformity but genuine collective intelligence. Groups that are smarter because their members are more themselves, not less. And it says something hopeful about human skill and human difference: that the right technology does not sand people down to a common average, but makes their particular capabilities more visible, more legible, more valuable. That a person's genuine expertise, earned through years of work, shaped by their unique path, is worth preserving. That when two people walk into a room, we should still be able to tell them apart. And it says something hopeful about learning: that the right technology does not make us lazier, but more ambitious. That a tool which rewards your growth will make you want to grow. That the incentive to understand things deeply, which has always been one of the finest human impulses, can be strengthened by AI, not eroded by it, if we build it right.

Gamers know this instinct well. So do gardeners. So does anyone who has ever spent more time and energy on something than was strictly necessary, because they wanted it to be theirs. That is the AI era I want to live in. Not the one where intelligence is piped to me like water from a distant reservoir, metered and billed and subject to outage. But the one where my AI companion and I have been through things together, where it knows my way of thinking, runs on my electricity, and belongs to a world where everyone, regardless of where they were born, gets to have one too. We are not there yet. But we could be. And the first step is simply deciding that this is what we are building toward.

Friday, April 10, 2026

Selling Snake Oil: Celebrity, Credibility, and the Hype Machine

Created on 2026-04-10 10:22

Published on 2026-04-10 10:26

There is a peculiar affliction spreading through the technology world, one that has nothing to do with algorithms or compute budgets. It is the calculated deployment of a famous name to lend gravity to something that would otherwise sink without a trace. Two recent cases illustrate the pattern with uncomfortable clarity.

The first involves Kristen Stewart. In January 2017, a paper appeared on arXiv titled "Bringing Impressionism to Life with Neural Style Transfer in Come Swim", co-authored by an Adobe engineer, a producer and, notably, the actress Kristen Stewart. The AI community was briefly confused, then largely charmed. It should have stayed confused. The three-page paper is a high-level case study of applying an existing, well-understood technique to a short film Stewart directed. No new method. No novel insight. Nothing the original style-transfer literature hadn't already established. What Stewart genuinely contributed was using the technology in her film, a legitimate creative act, but not a research contribution. The inflation was entirely in the reception: breathless press coverage, NVIDIA blog posts, researchers competing to calculate her Erdős number. The name did all the work the science could not.

The second case is fresher, louder, and considerably less defensible.

In April 2026, a GitHub repository appeared under the account milla-jovovich, launching an AI memory system called MemPalace, co-credited to the actress Milla Jovovich and crypto CEO Ben Sigman. Within 48 hours it had over 23,000 stars. The headline claim was a perfect score on LongMemEval, the gold-standard benchmark for AI memory systems. Developers tore it apart within hours.

The benchmark, it turned out, never generated an answer to any question. It checked whether a correct session ID appeared in a retrieved list, never verifying that the retrieved content actually answered anything. The 100% score was achieved by setting the retrieval pool size larger than the total number of sessions in the dataset, guaranteeing the right answer was always included by default. As one analyst put it, it reduced to dumping everything into Claude and asking which part matched. That is not memory. That is not retrieval. The README advertised compression ratios that real tokenizer counts disproved, and a knowledge-graph contradiction detector that was never actually wired into the code.
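The flaw is easy to reconstruct in miniature. The sketch below is an illustrative reconstruction, not the actual MemPalace code: when the scorer only checks that the correct session ID appears somewhere in the retrieved list, and the pool size k is set to at least the number of sessions, a perfect score is guaranteed no matter how the retriever works.

```python
import random

def retrieve(session_ids, query, k):
    # A "retriever" that knows nothing about the query: it simply returns the
    # first k session IDs. Any retriever scores identically once k >= pool size.
    return session_ids[:k]

def rigged_score(num_sessions, num_queries, k):
    # Reproduces the reported scoring flaw: a query counts as "correct" if the
    # gold session ID merely appears anywhere in the retrieved list.
    sessions = list(range(num_sessions))
    hits = 0
    for _ in range(num_queries):
        gold = random.choice(sessions)              # session that truly holds the answer
        retrieved = retrieve(sessions, query=None, k=k)
        hits += gold in retrieved                   # membership check, nothing more
    return hits / num_queries

# Retrieval pool larger than the whole dataset: a perfect score, by construction.
print(rigged_score(num_sessions=50, num_queries=1000, k=64))  # → 1.0
```

Shrink k below the dataset size and the score collapses toward k divided by the number of sessions, which is what a membership check actually measures.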

Then there is the authorship question, the one that cuts deepest. The account hosting the repository had seven commits and two days of GitHub history. The original account that pushed the code, named "aya-thekeeper", was deleted immediately after launch. When pressed, Jovovich and Sigman explained that "Lu", the mysterious name appearing in commit history, was simply Jovovich's AI coding agent. Jovovich herself admitted the division of labor plainly: she described the concept, Sigman built the software. Whether that constitutes co-development, or simply the purchase of a famous face for a launch campaign, is a question neither of them has answered convincingly. What is documented is that a cryptocurrency also named MemPalace, with Jovovich and Sigman holding a 50% creator reward split, was pumped and dumped within 24 hours of the announcement.

The real contribution of Milla Jovovich to MemPalace remains unproven. What is proven is that her name generated millions of impressions for a project whose benchmarks were rigged, whose README described features that didn't exist, and whose original developer quietly vanished.

Both cases expose the same mechanism. A famous name bypasses the scrutiny that any anonymous submission would face. It reframes the question from "Is this good?" to "Isn't this surprising?" And surprise, unlike quality, requires no verification. The press amplifies, the stars accumulate, and the actual engineers doing mundane, honest work in the same problem space receive nothing.

In science and engineering, a name is not an argument. The only thing that has changed is how efficiently a famous one can be converted into attention and, in the right hands, into a coin.

Wednesday, March 11, 2026

The Oracle of Oslo

When Nicolai Tangen speaks, boardrooms listen. As chief executive of Norges Bank Investment Management, the body that manages Norway's $2 trillion sovereign wealth fund — the largest of its kind on earth — he occupies one of the most structurally powerful positions in global finance. The fund owns approximately 1.5% of every listed company in the world. That means when Tangen walks into a room, he is rarely just a guest. He is, in some measurable sense, a co-owner of the organization hosting him.

In recent years, Tangen has become one of the most vocal institutional champions of artificial intelligence in European business. His message is consistent, urgent, and unambiguous: AI is transforming work, and those who resist it have no future. He has described himself as running "like a maniac" to persuade his own staff to adopt the technology, and has publicly suggested that employees who refuse to engage with AI should not expect to remain at the fund. At conferences, in his widely followed podcast "In Good Company," and in interviews with global media, he projects a vision of AI as an epochal force that business leaders must embrace — now, without hesitation.

Many do. Business figures across Norway and Europe have echoed his positions, aligned their organizational strategies with his framing, and appeared at his events as willing amplifiers of his message. The question worth asking, calmly and without prejudice, is: why? And is the enthusiasm genuinely earned?

A Portfolio Interest That Goes Unmentioned

The most straightforward observation is also the one that receives the least public attention. NBIM's portfolio is heavily weighted toward the technology sector. The fund holds significant stakes in the world's largest AI infrastructure companies — the chipmakers, the cloud platforms, the software giants currently spending hundreds of billions of dollars on AI development. When the valuations of those companies rise, the fund benefits directly.

An institutional investor who publicly and repeatedly declares that AI is transformative, inevitable, and that resistance to it is professional suicide is not speaking from a neutral position. He is, structurally, talking up assets he owns. This need not imply bad faith — it may be entirely sincere belief. But sincerity and financial interest are not mutually exclusive, and the absence of any public acknowledgment of this tension is worth noting.

A fiduciary managing public wealth might reasonably be expected to introduce caveats: that AI productivity gains are real but unevenly distributed, that the current infrastructure investment cycle may not generate returns proportional to its cost, that automation carries systemic risks a sovereign fund should hedge against. These arguments exist, are made by credible economists, and are largely absent from Tangen's public commentary on the subject. What the public hears instead is closer to advocacy than analysis.

The Circular Investment Question

There is a subtler financial dynamic worth examining — one that goes beyond simply holding shares in AI companies and endorsing them publicly.

NBIM, by virtue of owning approximately 1.5% of virtually every significant listed company on earth, sits at the center of an extraordinarily dense web of corporate cross-investment. When business leaders are encouraged to adopt AI aggressively, those organizations do not build AI infrastructure themselves. They purchase it. They buy cloud computing, AI software, and processing power from vendors who are, in almost every significant case, also NBIM portfolio companies.

The result is a closed loop. Spending decisions made by one set of portfolio companies flow directly to another set of portfolio companies. The fund benefits not once from an adoption decision, but potentially multiple times as capital moves through the chain of technology vendors it already owns.

This is not a claim of wrongdoing. Universal ownership of this scale creates these dynamics structurally, regardless of intent. But it does raise a question that is rarely asked publicly: when the world's largest sovereign fund promotes a technology through the voice of its chief executive, and that fund owns both the companies being urged to adopt the technology and the companies selling it, whose interests does the advocacy ultimately serve?

For any business leader considering a significant AI investment, that question deserves to sit on the table alongside the enthusiasm.

The Architecture of Influence

To understand why business leaders often feel obliged to echo Tangen's positions, one must understand the architecture of the relationships involved.

A CEO whose company sits in NBIM's portfolio is not in a symmetric relationship with its chief executive. NBIM is a major shareholder. Its proxy voting can influence board composition, executive compensation, and strategic direction. When that shareholder expresses strong views about a technology, the CEO faces a subtle but real calculation: push back and risk being seen as a laggard by one of the company's most powerful investors, or align with his vision and signal forward-thinking leadership. The professional incentive, in most cases, points toward alignment.

This dynamic rarely surfaces openly. Instead it reproduces itself as consensus. When multiple business figures who share NBIM exposure begin saying similar things about AI — enthusiastically, with similar vocabulary and similar urgency — what looks like organic convergence of opinion may in part reflect the gravitational pull of a single, very large source of influence. The consensus then becomes self-reinforcing. Leaders who have publicly aligned themselves with the AI vision have a personal stake in it proving correct. They promote it further, attend the next event, share the next interview, and bring more colleagues into the orbit. The network expands not because the underlying argument has been independently tested, but because professional identity has become attached to it.

A Background Worth Considering

Tangen's biography contains a detail that is routinely noted and rarely examined. Before his career in finance, during mandatory military service, he received specialized training in interrogation and Russian translation at the Norwegian Armed Forces' School of Intelligence and Security, under the Norwegian Intelligence Service.

The training is typically described as simply part of his conscription — unremarkable military service. That may be entirely accurate. It is nonetheless worth asking, as a matter of intellectual curiosity rather than accusation, what that training involved. Interrogation techniques are fundamentally concerned with understanding how people process information, what motivates their decisions, how trust is established, and how a conversational environment can be shaped so that a subject reaches desired conclusions through their own apparent reasoning.

Whether or not those skills have any bearing on Tangen's professional conduct today is genuinely unknown. What can be observed is that he is an exceptionally effective communicator who holds structural power over virtually every audience he addresses, who has built an unusually loyal network of admirers across European business, and who promotes a specific set of ideas in which his fund has substantial and compounding financial interests. Whether those facts are connected, and how, is a question each observer must answer for themselves.

What Independent Thinking Requires

None of this is an argument that Tangen is wrong about AI, or that his motives are improper. The technology is genuinely significant. His fund's investments may prove entirely justified. He is by any measure a person of formidable intelligence and real accomplishment.

The argument is narrower and more specific. When a single individual combines structural power over your organization, financial interests in a particular outcome, and an exceptional capacity for influence — and when the professional culture around him rewards agreement and subtly penalizes skepticism — the conditions for uncritical consensus are in place. Business leaders who adopt AI strategies primarily because a powerful figure told them to, who sign contracts with vendors that powerful figure's fund already owns, and who then take the stage to echo the message, may be serving interests that are not entirely their own.

The most valuable thing any business leader can bring to Tangen's message — or to anyone's message delivered from a position of superior power — is the same thing that message quietly discourages: independent judgment, exercised before the applause begins.

This article is based on publicly available information and is offered as opinion and analysis for the purpose of informed professional debate. It does not assert wrongdoing by any named individual.

Friday, January 30, 2026

Buckets and Sovereignty

Buckets by Josh Hallett CC-BY 2.0 (https://www.flickr.com/photos/hyku/301566516)

Buckets and Sovereignty

Created on 2026-01-31 07:12

Published on 2026-01-31 07:26

S3 buckets have a problem. A developer needs a bucket, a single bucket, something that'll take AWS about 30 seconds to spin up. However, they have to open a Jira ticket. Then they ping you on Slack. Then they show up at your desk because the ticket's been sitting there for three days and their sprint is ending.

Then, to your chagrin, you open the huge Terraform repo, add their bucket, run terraform plan, watch myriad resources scroll by, get it reviewed, wait for the pipeline, and boom: one S3 bucket. It only took a week and three senior engineers.

This is stupid. Very stupid. But we keep doing it because we've convinced ourselves that "infrastructure as code" means all infrastructure must live in the same repository, managed by the same team.

Is that true? An S3 bucket that only exists to serve one application isn't infrastructure. It's part of the application.

Infrastructure is: the VPC, the EKS cluster, the networks, and so on. A bucket that stores uploaded images for the marketing website? That's application state that happens to live in AWS instead of Kubernetes. It could just as well have been another API call to an object storage service. Managing application resources as if they were infrastructure, with an infrastructure tool, is not friendly at all.

Enter AWS Controllers for Kubernetes.

ACK does something wonderfully simple: it turns AWS resources into Kubernetes resources. Want a bucket? Write a manifest. The ACK controllers watch these manifests and call the AWS APIs. The developers get self-service. You get to stop being the S3 bucket vending machine. You are not giving up control, you're shifting what you control. In fact, control increases, as the cluster will continuously reconcile resources toward their declared state, not only when terraform plan/apply happens. Instead of gatekeeping every single resource creation, you define the rules.
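For example, a bucket request could look like this - a minimal sketch assuming the ACK S3 controller is installed; all names and tags are illustrative:

```yaml
# A hypothetical team bucket, reconciled by the ACK S3 controller.
apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: marketing-uploads
  namespace: marketing           # lives in the team's own namespace
spec:
  name: acme-marketing-uploads   # the actual S3 bucket name
  tagging:
    tagSet:
      - key: team
        value: marketing
      - key: cost-center
        value: "1234"
```

The developer commits this next to their application code; the controller does the rest.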

Another team needs a Postgres database? They drop this in their repo:
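A minimal sketch of what that manifest might look like, assuming the ACK RDS controller is installed (identifiers, sizes, and the secret name are illustrative):

```yaml
apiVersion: rds.services.k8s.aws/v1alpha1
kind: DBInstance
metadata:
  name: orders-db
  namespace: orders
spec:
  dbInstanceIdentifier: orders-db
  engine: postgres
  dbInstanceClass: db.t4g.micro
  allocatedStorage: 20
  masterUsername: app
  masterUserPassword:            # reference to a pre-created Secret
    name: orders-db-password
    key: password
```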

They commit it. The pipeline runs. The database appears. Your policies ensure it's in the right VPC, encrypted, backed up, and tagged correctly. Tags are applied for correct cost allocation, so the finance-minded people get a nice cost breakdown.
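One way to encode such rules is an admission policy. A sketch using Kyverno, assuming Kyverno and the ACK CRDs are installed (the policy shown is illustrative, not a complete rule set):

```yaml
# Reject any ACK DBInstance that is not encrypted at rest.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-encrypted-databases
spec:
  validationFailureAction: Enforce
  rules:
    - name: storage-must-be-encrypted
      match:
        any:
          - resources:
              kinds:
                - DBInstance
      validate:
        message: "DBInstance must set spec.storageEncrypted: true"
        pattern:
          spec:
            storageEncrypted: true
```

The same pattern extends to VPC placement, backup retention, and mandatory tags.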

This is what platform engineering actually means. You build the platform, the foundation, the policies, the standards. Developers build on top of it. Both teams do what they're good at.

And when they delete that feature branch? The database goes away. No orphaned resources quietly racking up charges for six months because everyone assumed someone else would delete it. Everything's in Git, versioned with the application code. Developers can see infrastructure status with kubectl get dbinstance. The blast radius of any change is limited to that team's namespace.

The plot twist: digital sovereignty.

In the last year, the cloud shifted from pure convenience to strategic liability. European regulators are increasingly uncomfortable with critical data of European companies being subject to American surveillance laws. GDPR was just the opening act. The Digital Services Act and the Data Governance Act are forcing real architectural decisions.

And here's the uncomfortable truth about ACK: it makes you really, really good at AWS. Every pattern, every practice, every custom resource is AWS-specific. Which is fine for the US, but not so much for Europe.

Enter Crossplane, which does something clever. Instead of direct AWS API bindings, it gives you an abstraction layer. You define what a "Database" means for your organization, and Crossplane figures out whether that's RDS, Cloud SQL, or, interestingly, a database on OVHcloud, Scaleway, or Open Telekom Cloud. Those are European cloud providers, subject to European law, outside the reach of the CLOUD Act. For organizations handling European citizens' data or running European critical infrastructure, this actually matters.

The developer experience stays the same. They still request a "Database." Your platform team just swaps what that provisions underneath. Today it's AWS. Tomorrow it's a sovereign provider. The workflow doesn't change.
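In Crossplane terms, the developer-facing request is a namespaced claim against an API your platform team defines. A sketch, where the group, kind, and labels are made up for illustration:

```yaml
# The developer asks for "a Database"; which provider fulfils it
# is decided by the composition the platform team selects.
apiVersion: platform.example.org/v1alpha1
kind: Database
metadata:
  name: orders-db
  namespace: orders
spec:
  parameters:
    engine: postgres
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: aws        # swap to a sovereign provider later
```

Changing the matched composition swaps the backing cloud without touching the developer's manifest.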

Maybe the organization doesn't care about digital sovereignty. Maybe it is fine being all-in on AWS forever. That's legitimate. But the organizations that do care - regulated industries, government contractors, those thinking five years ahead - are realizing that flexibility isn't just nice to have.

Because the geopolitical landscape that made AWS the obvious choice in 2015 might not be the landscape of 2030. ACK solves the S3 bucket problem beautifully. But Crossplane solves it and gives you an exit strategy. In a world where "which country's laws apply to this data" is becoming as important as "how many nines of uptime do we get," that's worth thinking about.

Tuesday, January 13, 2026

Red Aurora



The snow outside was the same shade of pale grey as the concrete blocks that lined Oslo’s People’s Sector Seven. Inside the Productivity Hub, Comrade Morten stared at the dim glow of his workstation. 

A red icon pulsed in the corner: “AI Assistance Required by Order of the Central Efficiency Committee.” Morten sighed. He’d been a woodworker before the Great Automation Integration, before the Party had decreed that every citizen must harness state-approved AI to achieve optimal output. Now, every design he made was “enhanced” by Algorithm 14-K — smoothing lines, optimizing cuts, trimming away his individuality. 

“Comrade,” came a voice from behind. Morten turned to see Comrade Bjørn, draped in the standard-issue fur-trimmed coat, his breath steaming in the cold air of the underheated hall. “You’re falling behind quota,” Bjørn said, his voice both concerned and sharp. “The Party notices such things.” “I am meeting my numbers,” Morten protested. “Mostly.” Bjørn’s eyes narrowed. “Mostly is not enough. Have you forgotten Alexei Stakhanov? The coal miner who extracted fourteen times his quota in a single shift? The Party remembers him, even now. He is the model of the New Working Class Hero — not because he was forced, but because he believed.” Morten tapped the AI prompt reluctantly. Blueprints blossomed instantly on his screen — impossibly precise, impossibly fast. “The AI designs everything now,” he muttered. “What’s left for me to believe in?” 

Bjørn stepped closer, lowering his voice. “You believe in the result. Every chair you make, every beam you cut — they build the communal future. It doesn’t matter if your hands or the algorithm’s hands shape them. We are one machine, Comrade. You, me, and the AI — all tools of the Party.” Through the frosted window, the red aurora shimmered in the polar sky, cast by the orbital solar reflectors. It painted the snow crimson, as if the whole land bled for the collective. 

Morten looked at the glowing blueprints. Somewhere deep inside, a stubborn human pride whispered that he could do better without the machine. But another part of him — the part that wanted to survive the coming inspection — began to work faster. He pressed the Accept button. In the background, Bjørn’s voice echoed softly, almost like a hymn: “Production is devotion, Comrade. And devotion is forever.”

Wednesday, July 16, 2025

Recovering TP-LINK router

My WDR430 router has been bricked for a while.

Initially I tried to reflash it using a SOIC clip, but alas, that proved impossible as I had no level shifters for the 1.8 V flash. Therefore I soldered a UART interface and tried to flash it via serial. That wasn't easy either, as I was never able to press the U-Boot interrupt sequence `tpl` fast enough.
The solution was to write an expect script and use that - worked perfectly :)
 


#!/usr/bin/expect -f

# Show all output
log_user 1

# Start cu
spawn cu -l /dev/ttyUSB0 -s 115200

# Increase timeout
set timeout 180

# Wait for autoboot
expect {
    -re "Autobooting.*1 seconds" {
        send "tpl\r"
        sleep 1
    }
    timeout {
        puts "Timeout waiting for autoboot message."
        exit 1
    }
}

# Send tftpboot
send "tftpboot\r"

# Wait for done after tftpboot
expect {
    -re ".*done.*" {
        send "erase\r"
    }
    timeout {
        puts "Timeout waiting for 'done' after tftpboot."
        exit 1
    }
}

# Wait for done after erase
expect {
    -re ".*done.*" {
        send "cp.b\r"
    }
    timeout {
        puts "Timeout waiting for 'done' after erase."
        exit 1
    }
}

# Wait for done after cp.b
expect {
    -re ".*done.*" {
        puts "All steps completed."
        interact
    }
    timeout {
        puts "Timeout waiting for 'done' after cp.b."
        exit 1
    }
}

Sunday, May 18, 2025

The marshmallow experiment

https://www.pexels.com/photo/colorful-marshmallows-5794870/

The marshmallow experiment

Created on 2025-05-18 07:08

Published on 2025-05-18 08:17

Somewhere in the '70s, psychologists at Stanford University ran an experiment on "delayed gratification". Children were offered marshmallows that they could eat immediately or, if they waited 15 minutes, they would also get an extra treat as a bonus. Following up on the results, the researchers found a correlation: those who were able to delay gratification tended to have better outcomes later in life than those who were not.

On the other hand, we are pushed towards immediate gratification by many of the factors our society is based on. We are sold the idea that everything should be "Bigger, better, faster, more" and that we need to "want it all, and want it now". If things are fast, we pat ourselves on the back and consider ourselves "efficient". Is it true? Are the results similar to the marshmallow experiment?

https://ownmygrowth.com/2020/09/27/instant-gratification/

The current trend for boosting efficiency is the use of AI everywhere. The most visible form is "vibe coding" - fast results with minimal effort. Yes, the results are impressive; the assistants are able to produce code, sometimes better code than humans, in a fraction of the time, as they have indexed HUGE amounts of ALREADY WRITTEN CODE. Hence a task that used to take days could theoretically be finished in hours, if not minutes. That is great and saves a lot of time.

But the question is: what do we do with that time? How do we spend it? Do we just take another task from the Jira board and throw it at the army of assistants, asking the right prompts, or prompting an assistant to prompt another assistant to do it? Or do we try to "grok" it ourselves? Do we invest the gained time in something creative and new? In my opinion creativity doesn't get boosted in the same way; contrariwise, creativity generally comes from scarcity, not from abundance. Having everything served immediately reduces the need to search for and understand the solutions to the problem at hand. As a recent Microsoft study suggested, we just trust the response because it seems plausible, and after a while we take it for granted as correct without double-checking it.

Doing something good as a human requires effort, delayed gratification. Try learning a new instrument. It will take a long time to produce something that resembles music, especially for those at their first musical experience. I was, and I still am, a horrible player; I practiced for years until I was able to play something that sounded good (depending on whom you ask). But I enjoyed the trip. Before playing that solo I had to learn not only the instrument, chords, and finger-picking, but I learnt about bands, trends, and music in general and, probably the most important thing, I got to know people with similar interests. Putting effort into doing something is the key to progress, as Derek Muller explains.

The lack of delayed gratification leads to stagnation, a lack of desire to do anything, no imagination for what the end of the trip might be. A general state of INFANTILIZATION might appear, as the skills needed to obtain something will be underdeveloped and tantrums will be satisfied unconditionally. The issue is that decision makers will be affected by the same symptom, as all their goals will apparently be satisfied instantly, with no rebuttals.

AI is a wonderful tool, and I am all in for it. The issue is not AI, as it is not sentient; it is us, in our infantile way of using it for immediate gratification instead of using it wisely - a try first, get feedback or advice, try again cycle. If we get more efficient day after day, we should take some of the gained time to understand it: how embeddings work, what quantization is, what risks MCP poses. Try to replicate and understand the mathematics behind it, go back to the Algebra and Calculus one neglected earlier in life, verify the responses by reading the literature suggested as a source (if it really exists), or try learning to draw something with pen and paper just to understand the techniques after asking the AI to do it as an example. Try to delay the gratification and dream of some better reward. Most of the time it is worth it.

To finish in a darker note:

https://www.orau.org/health-physics-museum/collection/radioactive-quack-cures/pills-potions-and-other-miscellany/vita-radium-suppositories.html

Somewhere in the '30s, radium suppositories were seen as a great thing: radium was novel at that time, and their advertising read like a cure for slowness. Not understanding something completely and using it JUST BECAUSE IT WAS ADVERTISED is dangerous at the very least. The short-term gratification cycle is just being followed once again.

Weak Discouraged Men! Now Bubble Over with Joyous Vitality Through the Use of Glands and Radium

“If YOU are showing signs of “slowing up” in your actions and duties, perhaps long before you should—if you have begun to lose your charm, your personality, your normal manly vigor—certainly you want to stage a “comeback.” The man who has lost these precious attributes of youth knows how to appreciate their value. He realizes that happiness depends on his ability to perform the duties of a REAL MAN. Sweet, glorious pleasures of life. Nature intended that you should enjoy them.”

“Now is the time to act! Today! RIGHT NOW! Tomorrow may never come.”

Links:

  1. https://www.lifehack.org/353923/instant-gratification-short-lived-you-should-aim-for-long-term-goals

  2. https://medium.com/@darinleavitt/the-epidemic-of-instant-gratification-aef2a8d38903

Thursday, April 11, 2024

My response to "From Mere Engineer to True Artist"

Pablo Picasso, The Bull, 1945

My response to "From Mere Engineer to True Artist"

Created on 2024-04-11 15:28

Published on 2024-04-11 18:58

Codecamp Timisoara took place today, and it was quite a nice conference. The lineup was top-notch, and the topics were quite diverse. There were also some talks that seemed overly specialized, possibly requested by sponsors, but overall, it was quite an entertaining edition.

James Coplien delivered an interesting talk called "From Mere Engineer to True Artist" at the end. Coplien is known for stating unpleasant truths, and this is okay; it provides food for thought.

He pointed out many elephants in the room somewhat bluntly, but radical candor is necessary to shake up the industry. However, some of his assertions today were not entirely true, as there are many nuances that I struggle with.

  • What are architects? Everybody is an architect; there is nothing to argue about here. At a certain scale, architecture is present in every detail of the field; every commit is an architectural decision. However, the title "architect" carries more implications. The term "architect" originates from the Greek ἀρχιτέκτων (archi - first/over, techton - builder/mason). Architects were defined by Vitruvius as mediators, experienced workers able to mediate stakeholders' needs. Architects have scars from previous battles and failed buildings, which gives them another "unnamed quality" of being able to steer and communicate to avoid repeating the same mistakes.

  • The lack of importance of a "4-year CS degree" and the notion that "CS is not a science/engineering." I completely disagree. In my view, CS is a science, residing somewhere between a branch of mathematics, a cornerstone of electrical engineering, a niche of quantum physics, and a domain of philosophy, yet with some fundamental ideas of its own. Of course, talent and hard work can compensate for formal education, but formal CS studies help evolve it from a craft to an engineering discipline by incorporating the feedback of countless generations of practitioners, creating something as solid as civil or chemical engineering. Maybe CS is not a true science yet, but it is at least "science in the making", gaining substance year after year. A 1-year boot camp could theoretically create good programmers but not scientists or engineers. The graduates of these boot camps could be productive, but they would mostly align with the 90% that Coplien fears will disappear, especially if their motives for choosing CS as a profession are not fueled by passion and continuous learning. He argued that kids can create software. True, they can create impressive things from a young age, but sometimes their creations are naive, functional indeed but lacking the details that a seasoned practitioner would add - details that provide an edge in terms of usability, performance, and quality. Kids' creations, even if not completely abstract, are full of abstractions.

Unicorn drawn by a kid

vs.

Unicorn drawn by a professional

Coplien argued that Renaissance architects who delivered value had no architectural studies; still, they created wonderful buildings. In my opinion, it was not necessary at that point in time, as during that period, the arts and crafts were mostly similar, and practitioners were genuine polymaths.

  • Commissioned work: Most of the examples of great architecture he presented (Brunelleschi, Michelangelo, da Vinci) worked on commissioned buildings. The artists worked at the order of a sponsor, as artists are often poor, in need of money. There are just a few self-sustained artists, mostly in the 20th century, who were able to market themselves so that they could create at their own will. Even these exceptions had to do commissioned work at the beginning until they were able to create independently. Starchitects like Oscar Niemeyer created "commissioned works of art" - for example, the city of Brasilia. It was their genius that transformed a mundane request into an unforgettable work of art, but they did not initiate the project themselves with their own funds.

  • Abstractions are bad: Again, I do not agree. Abstractions are the foundation of science and art. If we observe the evolution of Picasso's bull, we see it goes from concrete to abstract, refining the shape to its bare essence. There is nothing that one can take out from the last image and still have a bull. Abstractions create flexibility, a quality that permits evolution. If we abandon abstractions, modern art would still resemble the bucolic style of the 16th century, and software would still be in Fortran dialects. Functional programming and object-oriented programming are based on abstract concepts that allow them to express real-world problems in an understandable manner and create higher-order simplifications of the problem at hand.

  • Beautiful code: With some references to Christopher Alexander's "unnamed quality," Coplien wants us to deliver beauty. The problem lies in how he characterizes beauty. His aesthetic criteria are vague. Beautiful code is not only visually appealing; what if we cannot see? Perhaps one has a test-sensing organ that is sensitive to testable code and code with a lot of tests - a testable flower and its petals. We all strive to write beautiful code, even in suboptimal languages, but let's be honest - we do what we can to meet our deadlines. Maybe we write truly beautiful code once or twice in our careers, but often, we write code that brings value. Value can be delivered not only by ethereal, elegantly crafted code but also by solid, practical code. While he advocates for the axiom of delivering value first, his subsequent request for beauty is in stark contradiction.

Friday, August 11, 2023

Delivery pipelines

Image from https://energycapitalpower.com/top-5-pipeline-developments-in-africa-by-length/

Delivery pipelines

Created on 2023-08-11 09:14

Published on 2023-08-11 10:18

The industry is currently buzzing with terms such as Continuous Delivery, Continuous Deployment, and Pipelines. Pipelines are a technique to speed up processing time by sequentially running a set of tasks on specialized stages; at every moment in time, one step consumes the output of the previous one. The concept is not new; it started in the early 1900s, when it was applied to factories and assembly lines. But the current pace of software development has put a lot of spotlight on it.

A software pipeline most often performs a set of operations such as: check out code from the VCS, compile/build it, package it, run tests on it, deploy the artifacts to a repository, inform the stakeholders of the result, and update various metrics and dashboards. These actions require a script that puts things in the right order and checks the results at every step.

In software, pipelines are generally defined through a specific language, either a visual one or a text-based one - generally YAML or Groovy. These are run by specialized software such as Jenkins, GitHub Actions, CircleCI, etc. The high-level pipeline described in YAML calls various other scripts that define the individual steps - like micro-operations.

The build step, for example, is the invocation of a Makefile, a shell script, or a Maven build. The pipeline will wait for the completion of the build step before passing the artifacts on for testing or packaging. Such a situation is interesting because there will be two points of control in the pipeline: one at pipeline level and one at step level, but their responsibilities will bleed into each other.

This might lead to some problems:

  1. Inefficient pipelines: If the result of a step can be guessed before the whole step completes, the pipeline should fail fast. It makes no sense to wait for completion. Imagine that you have a multi-project build. In the trivial case the projects are built and packaged. But what if one of them has an issue? Should we wait for the completion of all builds and then just cancel the packaging step? What if there are modifications to a single project that has no egress dependencies? Again, it makes no sense to package all the other projects, as they haven't been modified. However, with the split logic this is hard to achieve, as the operations at pipeline level are quite coarse-grained. On the other hand, most build tools (Gradle, MSBuild, ...) are perfectly able to run many parts of the process by themselves - create archives, containers, publish artifacts - and probably they should do it. Failing fast and incremental builds are essential for accelerated release cycles. The build tool is in a better position to understand what it is building by looking at the source code than the pipeline, which is merely an orchestrator of some loosely coupled processes. Caching build results and artifacts and using parallel builds could reduce build times from 20 minutes to less than a minute - so some DORA metrics will look way better and will make both developers and managers less impatient. Speculative execution and rescheduling are well known in CPU pipelines - software engineers should take some inspiration from the clever solutions that hardware folks have been successfully applying for more than two decades.
  2. Portability: The high-level pipeline logic is hard to move from one type of executor to another. Pipelines built for Jenkins will be hard to port to another CI/CD system. The worst part is that it won't be easy for a developer to run the pipelines locally in order to get similar results as on the server. Dev/prod parity should apply not only to the tooling versions but also to the environment where code is built. Being able to run the pipeline locally would mean that a developer can also debug it if needed - it creates better visibility into the whole project, enabling a DevOps culture. Having an ops/build team that manages the pipeline in secrecy is, in my own view, an anti-pattern. The devs will happily throw any issue over the wall to the build/ops team as "it works on their computers" - creating knowledge towers. Also, the possibility of running the pipelines on local machines will ultimately reduce the load on the build machines and the queuing - again improving the metrics.
  3. Mutability: The environment in which the pipeline runs should be immutable in order to produce consistent results. This is well handled by GitHub Actions, but for Jenkins (or other on-premises CI/CD solutions), or even locally on the developer machine, this is slightly complicated. However, this can be solved if the pipeline can run in a container, which is immutable. Many IDEs today offer the possibility of a development container that provides trusted and stable environments. This is extremely important nowadays in order to mitigate supply chain attacks. The immutability of the environment would mitigate issues such as those described in Ken Thompson's 1984 paper "Reflections on Trusting Trust". This immutability partially conflicts with the caching needed for speed, but this can be solved quite elegantly nowadays with cryptographic methods so that no cache poisoning can be inflicted.

Containerized pipelines solve both the portability and mutability issues - so the developers could have both freedom in choosing tools and rigor for their builds at the same time. I learnt about a company that creates customized Visual Studio installations and pushes them every night to the developers' machines in order to solve the issues above. This is not only inefficient (hundreds of gigabytes transferred and computers never in standby) but also error-prone, as there are a lot of machine-specific issues that might interfere - so in the end there is no certitude that the configuration is identical on all computers and that there is no drift. Imagine that in a large project a single library has some different settings - it will take hours for the developer to investigate an error caused by an obscure glitch that happened overnight. Running pipelines locally in a container would enable developers to use Rider or VS Code on Windows while still being able to test and build on trusted environments and deliver Linux software. Jenkins has this feature too, but exposes it in a somewhat clumsy way despite its huge value. Contrariwise, GitHub Actions makes it completely transparent for the consumers of the steps - one has to look in an action's source to know that it uses containers.
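As an illustration of the containerized approach, a GitHub Actions job can run all of its steps inside a pinned image, so the build environment is identical for everyone. The workflow below is a generic sketch, not taken from any specific project:

```yaml
# .github/workflows/build.yml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: maven:3.9-eclipse-temurin-21   # pinned, immutable toolchain
    steps:
      - uses: actions/checkout@v4
      - name: Build and test
        run: mvn -B verify
```

A developer can run the same image locally, which is exactly the dev/prod parity argued for above.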

Speed is a crosscutting concern and can be addressed regardless of whether the pipeline runs in a container or not. A slow build won't get faster just by being done in Docker. An aggregated approach, with the right tools, could improve the development experience, speed, and security of an organization.


#pipeline #containers


Tuesday, January 3, 2023

2022 Highlights

2022 was quite interesting. I made some changes in my toolbox.

The most interesting bits were:

1. Vercel and Supabase usage that worked flawlessly. 

2. I have ditched Docker completely and moved to Rancher Desktop and Podman. 

3. Still on Fedora 36 for all my machines. 

4. Moved some personal workloads to Oracle Cloud.

5. Visited Norway

6. Toured Romania

Felt dumb most of the year. And somehow powerless - cannot really get a grip, authority-wise, on the development of the project.

Saturday, December 3, 2022

Not much here, huh?

 I wrote most of my rants elsewhere. But this kind of sucks...

 

Saturday, November 19, 2022

Legacy

Image from https://www.scaramangashop.co.uk

Legacy

Created on 2022-11-17 12:29

Published on 2022-11-19 19:56

The life that we are currently living is shaped in many ways by "legacy". In most cases, the word legacy carries positive connotations related to wealth, culture, and tradition. However, this is not the case with software. Here the word "legacy" has a lot of negative meanings associated with it. What is "legacy" in software? Why do we consider it bad?

Legacy software simply means that the code base is old, probably unmaintained, and hard to read. It says nothing about the value or the quality of the code; it just states that it is old. Then why do we consider it bad? In the real world legacy is a source of wealth, something we grew upon. Probably the answer lies in our laziness. How many of us can fluently speak or write in Latin, for example? Or classical Greek, Old Norse, or medieval French? A few of us can, which is a pity, as many fundamental works of mankind were written in those languages. Still, we do not read them in the original; it is more convenient to read modern versions in a familiar language.

The same happens in software. Programmers use contemporary languages and have lost the ability to read older ones, so the code they stopped understanding became mysterious and potentially dangerous to them. As with classical languages, there is just a handful of people who have the patience to study and understand older code, and those are the ones still able to explain the values that we, in our pride and ignorance, cannot see in the old code.

Many critical infrastructures and day-to-day codebases are "legacy", still, they work well, and they keep supporting our daily life. Banking still relies on Cobol and RPG, scientific computations still use Fortran, and operating systems still build on C.

I have to praise a relatively unknown "legacy" programming language and runtime: Concept. Concept is a 4GL (4th generation language) that started in Norway during the '80s. Googling it might yield zero results; nevertheless, it delivers daily for a few million people. The language has all the features one would expect: a UI library, database connectivity, the ability to run both server side and client side, and it is kind of memory safe. The syntax seems a little dated but is still expressive enough to implement huge projects. As the community of Concept developers is not large, it is starting to lack new talent and also tooling. Despite this, there is still a maintenance effort, and it is kept as much as possible in line with the latest industry trends: REST, JSON, MQs, 64-bit code generation, containers, and such.

Understanding older languages is never easy. As I said before, due to our laziness we tend to ignore them and rewrite everything, with no guarantee of a better job. It is probably often a better idea to rewrite, but we first need to understand what we are replacing. We need tooling for understanding older code bases, especially when the original specifications of the software are lost. We need helper tools that guide us through the syntax and structure of the code and could provide handlers for the business logic already written in the old code bases, enabling true reuse. Developing tooling that would transparently retarget old languages onto new platforms would guarantee that the legacy we received still produces the expected results, and we could build more interesting things in a more cooperative way. Rewriting software with feeble specifications or no specification at all, using the legacy system as a model but not fully grokking it, is in my opinion far more dangerous than keeping battle-tested code running.

Saturday, September 10, 2022

The Five Ideals

https://dribbble.com/shots/3349585-Einhorn - enfanterrible


Created on 2022-09-10 06:35

Published on 2022-09-10 14:15

Locality and Simplicity; Focus, Flow, and Joy; Improvement of Daily Work; Psychological Safety; Customer Focus - these are the five ideals of an organization that might blossom into a unicorn.

While some of them can be grown from the inside, starting from the development and operations teams and evolving into a 'DevOps' culture, others are leveraged mostly by managers.

Helping teams improve on the first four ideals creates more room for the fifth. Pursuing them creates a lot of turmoil and non-functional requirements, but in the end the price paid probably yields squared returns.

This is why managers who act toward the five ideals are as precious as mythical animals. This is also why it's quite sad when such a manager leaves.

Monday, January 31, 2022

To low code or to not low code

Image from https://flows.nodered.org/node/node-red-contrib-saprfc


Created on 2022-01-31 19:43

Published on 2022-01-31 21:37

In 2005 I had my first encounter with a low-code platform. It was Alcatel's SCE/SDE, later known as PrOSPer. It was a pretty capable environment for that period, targeted at the generation of IN services composed of SIBs (Service Independent Blocks). Most of the IN abstractions were encapsulated inside the SIBs, and the service creator chained SIBs visually to create a new service. When needed, a new SIB could also be created by implementing the set of interfaces required for a SIB to run in the SLEE environment or to appear inside the SCE/SDE interface.

The system generated C/C++ code, compiled it, and then created the deployment descriptors. The idea behind it was a great one: it offered domain experts the means for describing an IN service in a high-level language, with a good visual interface. However, there were some issues with it. The quality of the generated code was not optimal, and the generator was a "write-only" one that lacked any form of round-trip engineering. Sometimes the generated code had to be patched by hand, which made the high-level description obsolete because the patched code couldn't be imported back into the SCE. Versioning of the code was also a nightmare in the SVN/ClearCase environment. All in all, the developers avoided the SCE/SDE and tried to handcraft their own code, with simpler call flows and better control over the implementation, keeping only the parts that were absolutely necessary to communicate with the SLEE.

PrOSPer

The next low-code experience I had was with Apache NiFi. NiFi is a flow-based programming system used for all kinds of data manipulation. It is again composed of a set of blocks that perform various operations on the data streams sent to their inputs. New blocks can easily be added, and generic scripted blocks can be inserted, so the system is easily extensible. Versioning is still not great, but it's not terrible either; the system works pretty well but still misses some advanced features such as meta-models and types. There are also several other data-flow systems, such as Node-RED, but most of them are in the same category of "write-only" generators.
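The flow-based idea behind NiFi and Node-RED can be sketched in a few lines of Python: each block consumes a stream of records and emits a transformed one, and a flow is just the composition of blocks. This is a hypothetical toy to show the shape of the paradigm, not NiFi's actual API:

```python
from functools import reduce

def block(fn):
    """Wrap a per-record function into a stream-processing block."""
    def run(stream):
        for record in stream:
            out = fn(record)
            if out is not None:  # None means "drop the record"
                yield out
    return run

# Three toy blocks, chained like processors on a NiFi canvas.
parse   = block(lambda line: line.strip().split(","))
filter_ = block(lambda rec: rec if rec[1] == "OK" else None)
project = block(lambda rec: rec[0])

def flow(*blocks):
    """Compose blocks left-to-right into one pipeline."""
    return lambda stream: reduce(lambda s, b: b(s), blocks, stream)

pipeline = flow(parse, filter_, project)
print(list(pipeline(["a,OK", "b,FAIL", "c,OK"])))  # ['a', 'c']
```

The "write-only" problem shows up exactly here: the composed pipeline is easy to build but hard to reverse back into a higher-level model once someone edits the generated parts by hand.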

A major step forward was made by the Eclipse Foundation with its Eclipse Sirius modelling workbench. This one was indeed based on higher-level abstractions. Being constructed on top of GEF/EMF, it had superpowers such as reverse engineering, round-trip engineering, and easy creation of user interfaces that were not limited to data flows like the systems before it.


The interesting questions those low-code solutions raise are in the sphere of languages. What is a language? What are the entities a language operates on? What are the rules that make the language correct?

Professor Jordi Cabot reaches the conclusion (see slide 13) that low-code is in fact a fancy syntax and a marketing term used for some model-driven architectures and development environments. All the "low code" systems described above are in fact visual syntaxes that describe the interaction between some entities (called blocks, or SIBs).

Having models and meta-models makes "low code" even more interesting, as other abstractions can now be created. Constructions in the new language can have formal semantics, and type systems can be applied. Testing can be moved from the generated code to the visual syntax itself. Probably, as Eric Evans also describes in his Domain-Driven Design book, the most important quality of "low code" is that it enables human interaction and comprehension. Maybe lawyers will not operate visual law-designing tools (although it would be interesting), but they could certainly grasp models of law, fact, proof, and patent that would permit them to use a low-code solution in their own bounded context.

So far the "low code" solutions I have referred to were visual, but this is not the only way of having it. There can certainly be textual low-code languages, or DSLs.

Example of a tax rule in a DSL
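The original post shows this rule only as an image. To make the idea concrete, here is a comparable textual rule with a tiny interpreter; the rule syntax, the field name, and the rate are invented for illustration and are not the actual Dutch tax DSL:

```python
import re

# A hypothetical textual tax rule, in the spirit of the example above.
RULE = "if taxable_income > 20000 then tax = 0.5 * (taxable_income - 20000)"

def evaluate(rule, taxpayer):
    """Interpret a single 'if <var> > <n> then tax = <rate> * (<var> - <n>)' rule."""
    m = re.match(
        r"if (\w+) > (\d+) then tax = ([\d.]+) \* \(\1 - \2\)", rule
    )
    var, threshold, rate = m.group(1), int(m.group(2)), float(m.group(3))
    income = taxpayer[var]
    return rate * (income - threshold) if income > threshold else 0.0

print(evaluate(RULE, {"taxable_income": 50000}))  # 15000.0
```

The point is that the rule reads like the domain, while the boilerplate (parsing, dispatch) lives once in the interpreter rather than in every program.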

The above example encapsulates lots of domain entities: tax payer, taxable income - entities that would be pretty hard to manipulate in normal programming languages due to the amount of boilerplate code needed. If we make another step, the situation could look like:

Intersection of MDD, DSL, Formal verification

If we accept that a "low code" visual solution is just an alternative syntax for a DSL, then we can link the two worlds. Most of the problems that were hard to solve in the visual syntax can now be addressed in the DSL, so we get many advantages:

  • as before, we can have type systems and meta-models
  • we get an AST that facilitates transformations of the models
  • we get sane versioning, as textual representations are VCS friendly
  • we get reasonably human-understandable and maintainable code
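The AST advantage in the list above can be illustrated with a toy model-to-model transformation; the node types here are invented for the example, but once the model is a tree rather than pixels, transformations and checks become ordinary tree walks:

```python
from dataclasses import dataclass

# Tiny invented AST for arithmetic expressions.
@dataclass
class Num:
    value: float

@dataclass
class Add:
    left: object
    right: object

def simplify(node):
    """Constant-fold Add nodes - a model-to-model transformation."""
    if isinstance(node, Add):
        l, r = simplify(node.left), simplify(node.right)
        if isinstance(l, Num) and isinstance(r, Num):
            return Num(l.value + r.value)
        return Add(l, r)
    return node

tree = Add(Num(1), Add(Num(2), Num(3)))
print(simplify(tree))  # Num(value=6)
```

The same walk could just as well type-check the tree or serialize it for versioning, which is exactly what a visual-only representation makes hard.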

There are also two other aspects that emerge here. One is the possibility of a "language engineering workbench" where domain-specific languages can be created and augmented with thick semantic layers. This has huge applicability in domains where there is a lot of formalism and many already-developed systems.

The other interesting one, which in my opinion has a major impact, is "projectional editing", which permits the language creator to manipulate the language internals more easily, in some situations without resorting to traditional lexers or parsers, as the language internals are exposed as models in their own right. Modern language workbenches already offer great tooling for projectional editing, making the development of DSLs easier.

What I am trying to conclude here is that "low code" is not a single term but rather a combination of paradigms, so in my opinion the evaluation of "low code" cannot be done in total separation from domain modeling and language engineering. My arguments are probably naive, but I see value in higher-level system descriptions, although this is not always necessary or desirable. Higher-level abstractions are beneficial not only in human-machine languages; they are also great for human-to-human communication, as they reduce miscommunication. Well-engineered languages could result in better implementations.

Many thanks to Jennek Geels for introducing me to his concepts of domain modeling.

References:

https://modeling-languages.com/low-code-vs-model-driven/

https://martinfowler.com/dsl.html

https://martinfowler.com/bliki/DslBoundary.html

Dutch tax DSL: https://resources.jetbrains.com/storage/products/mps/docs/MPS_DTO_Case_Study.pdf

Language workbench: https://web.cecs.pdx.edu/~apt/onward14.pdf

Thursday, December 23, 2021

Technology Grafting

Image from pressdemocrat.com


Created on 2021-12-23 09:48

Published on 2021-12-23 17:01

Wikipedia defines grafting as: "a horticultural technique whereby tissues of plants are joined to continue their growth together". Grafting is often used for producing new varieties of fruits on older trunks that are more resilient or adapted to a climate.

When working with legacy software we can see modernization as grafting. The rootstock is the older technology, proven and battle-tested, while the scion is a newer technology that will enhance the rootstock's qualities. Technological grafting needs two things: interfaces/protocols and encapsulation.

The interface layer ensures that the data can be understood by both scion and rootstock. If the rootstock uses some very specific encodings (EBCDIC, various other locale-specific charsets), then the translation between them must be consistent in both directions. Things seem to be easier with textual protocols (although there might be some issues there too) than with binary proprietary protocols. If a fast bidirectional translation is not possible (very different protocols), then a translation middleware might be necessary; in that case, it is best to keep this middleware as thin as possible.
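Python's standard codecs can illustrate the round-trip requirement; cp500 (EBCDIC "International") is just one variant, and which code page is right depends on the actual rootstock:

```python
# Round-trip check between one EBCDIC code page and Unicode.
# cp500 is EBCDIC "International"; real mainframes may use cp037, cp1047, etc.
text = "PAYMENT 100,50 ÅÄÖ"

ebcdic = text.encode("cp500")     # scion -> rootstock direction
decoded = ebcdic.decode("cp500")  # rootstock -> scion direction

assert decoded == text              # lossless in both directions
assert "A".encode("cp500") != b"A"  # 'A' is 0xC1 in EBCDIC, 0x41 in ASCII
print(ebcdic.hex())
```

If a character in either direction has no mapping, the codec raises an error instead of silently corrupting data, which is exactly the consistency property the interface layer must guarantee.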

On the other hand, encapsulation permits the scion to maintain all its properties and continue working as if it were still attached to its original trunk. When grafting a pear branch onto an apple tree, the pear branch will continue to function as a pear and will not hinder the apple trunk. The same behavior should be present when grafting software components. However, this strong encapsulation requires the open interfaces discussed before.

One of the most interesting technological graftings I have seen lately is the modernization of older web applications using micro frontends packed as web components. The idea is not a novel one - web components are one of the ways of composing micro frontends - but it has massive implications for all the functional and non-functional aspects of the application. Imagine old PHP or Struts code bases getting an instant makeover with nice web components that reduce the server load and break monolithic applications into simpler parts. The web components isolate the new developments from the old code base and permit the evolution of the older code base.

Grafting techniques and patterns might prove very useful in all kinds of modernization scenarios. The use cases range from UI modernization, as described above, to more interesting ones like adapting 50-year-old systems to the web or to IoT. If the results of grafting are successful, this might help the rootstock itself evolve (like AutoCAD's migration to the web). Grafting permits the progressive evolution of the software, without disruptions in the base technologies used, at minimal extra cost. The complicated problem is determining the best place to insert the scion and how to make it behave under the same security and functional constraints as the rest of the application.

Sunday, December 19, 2021

Necessary formalism



Created on 2021-12-19 14:39

Published on 2021-12-19 16:02

Most of us have started with a small application that solved a business need. Then we had some success with it, so we added functionality, improved the UI, made it multi-tenant, and so forth. In the meanwhile, the code has grown, and every "if" condition we added made it more complex.

We added "ifs" because we had to respond to changes created by customer requests, changing laws and policies, and access to different markets. Some of us were smart and resorted early to design patterns, but some of us continued the old way of adding ifs... In the end the application is huge and hard to debug because its codebase no longer reflects its initial purpose. There is no "isomorphism" between the problem the application solves and its implementation. Even applications that were well engineered have this issue: the changes they accumulated over time are not described in the original universe where the application was spawned, the business domain, but rather as approximations the engineers created and maintained in the implementation domain. To make a forced comparison, the situation is similar to painting a forest in autumn simply by creating a huge palette of colours, without thinking about what we are trying to represent with them.

So, moving along this line, how do we keep applications from going adrift? How do we keep the reality from which applications emerged in sync with their implemented behaviour? Here formalism appears. Describing application behaviour in a formal way, close to what people understand, is the key. This formalism has many names, ranging from "Domain-Driven Design" to DSLs, but all focus on capturing the domain information at a higher level, not directly in code. Code is generally a byproduct of those; formal descriptions are needed for any DSL and low-code solution because, quoting Markus Voelter, "models are semantically rich for their domains", whilst code is not semantically rich, just behaviourally rich.

One of the challenges I am thinking about is: how do we retrofit formalism into applications so that we make them more maintainable? There is no definitive answer, but for legacy applications especially, refactoring to models is key. We should start small: refactor user-interface behaviour, for example, to state charts (in the form of Redux or XState). Meanwhile, the refactored interactions can be reorganized, both server and client side, into business process descriptions (it would be ideal to use a dedicated DSL for those at this stage, but this is generally hard, so refactoring to state charts is a good beginning). Iterating over this a couple of times, we will reach better models of the domain and clearer actions defined on them - hence we get a crude formal definition of the domain. At this point the process, probably described in a specific language, can be understood again by business people, so they can evolve it while it still generates lower-level formalisms (such as the statecharts) that developers or specific tools can generate code from.
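As a minimal sketch of that first refactoring step, here is a hypothetical login dialog expressed as an explicit transition table (plain Python, not tied to Redux or XState): the scattered "if" flags become one declarative model that both tools and business people can inspect.

```python
# Hypothetical login-dialog statechart as an explicit transition table.
TRANSITIONS = {
    ("idle",       "SUBMIT"):  "validating",
    ("validating", "SUCCESS"): "logged_in",
    ("validating", "FAILURE"): "error",
    ("error",      "RETRY"):   "idle",
}

def step(state, event):
    """Return the next state, or stay put on an unknown event."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["SUBMIT", "FAILURE", "RETRY", "SUBMIT", "SUCCESS"]:
    state = step(state, event)
print(state)  # logged_in
```

Because the table is data, it can be checked for unreachable states, rendered as a diagram, or handed to a code generator, which is the "crude formal definition" the paragraph above aims for.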

New developments nowadays have the chance of starting clean, as the current tooling is quite advanced (JetBrains MPS, Eclipse Sirius). New projects can start from DSLs or from BPMN visual representations and generate rich models that can be visually manipulated in low-code solutions. Moreover, a repository of well-described models and processes can be shared inside a company, so people can reuse the high-level abstractions, manipulate them in their own code, and get interoperability and common behaviour for free. Testing is also positively impacted, as test clarity increases because models carry more semantics inside. Which would be easier to test: 500 lines of written code, or 12-15 lines of DSL or visual representation? Where are the errors simpler to catch?

Well-crafted formal definitions reduce the implementation and design burden for the implementation team, as they shunt away most of the repetitive design and boilerplate tasks in an application (rich models already contain the logic that otherwise has to be expressed and maintained in code). A continuous effort of expressing behaviours in high-level formal languages, and of refactoring older code towards the same, seems a way of increasing maintainability and quality. DSLs, modeling, and low-code are in fact facets of the same base problem: how to capture and communicate domain knowledge succinctly, correctly, and unambiguously, both for people and for machines.

PS: There are better approximations of process definitions than Amazon's Step Functions or BPMN. To quote a friend: they are like GoTo programming and do not incorporate transactional aspects.

Thursday, July 29, 2021

Communication and innovation


Created on 2021-07-29 15:23

Published on 2021-07-29 15:55

More than a year ago we were talking about backups. My solution then was based on duplicity and rclone, and I was explaining it to one of my colleagues who had another approach. I was pretty happy with it, although it had some quirks. Then another colleague asked us: "Have you tried restic?". I did not know about it, and I found it quite interesting. Months passed and I had to implement an off-site backup system again. Although I had the old tools in my pocket, I remembered restic and soon realised that it was the perfect fit for my incremental backup task.

I imagine I would have lost some days trying to use the other tools and craft an adapter layer on top of them. Listening to others brought me real advantages, both in terms of efficiency and in terms of knowledge - another tool on my belt.

One thing that I miss while working remotely is the possibility of sharing knowledge in an informal way. When engineers discuss some crazy geekish ideas around the coffee machine (from Raspberry Pi autonomous lawnmowers to a new programming language), or lucidly dream up some interesting features in their code, innovation is about to happen. Many things that are in an incipient state are hard to find in mainstream publications and blogs, but sometimes there is someone with a passion for "obscure-ish" subjects (e.g. https://qconsf.com/system/files/presentation-slides/control_theory_in_container_orchestration.pdf), and a wise remark at the right moment can cut months of development, because that insight solves an otherwise hard task in a creative way.

I had my share of these happenings: timerfd (simplified state machines), NancyFX (instead of all the OWIN stuff), SWIG (for generating bindings), and many others, but the key fact was that I was among peers who, at the right moment, gave me valuable information. This is one of the things that videoconferencing (Skype, Meet) still cannot substitute. They are fantastic tools for this time, but unfortunately they are still unable to create the context for this kind of communication. I remember that many ideas later translated into patents emerged in this kind of informal gathering, and sometimes a "What about..." created the spark.

I have also found an interesting article (https://theconversation.com/companies-are-trying-to-connect-remote-workers-with-virtual-water-coolers-but-its-harder-than-it-sounds-146505) that reaches almost the same conclusion: apps cannot really mimic spontaneity and generate the context for innovation.

Quiet year

I haven't said much lately on the blog or on LinkedIn. It was quiet and dull-ish day to day.
The year started pretty badly: my father-in-law died in January. In July my daughter had her exams for Gymnasium.

Basically those two moments are what I can remember from 2021 so far.

Shards of events:

1. Always backup SSH keys.

2. At my previous job they are still unable to implement a cloud strategy, as I forecasted. People have started to flee from there.

3. I slacked on all my side projects.

4. Lost the will to go out - something that was a pleasure slowly turned into a burden. Also lost most of my social skills.

5. Minor but recurrent health issues. No Corona though.

6. Got the COVID-19 shot

7. Hackintoshed my Huawei - works decently well with Intel WiFi (thanks to itlwm).

8. Pleasant working experience in Fedora 34. 

9. The Rust language is getting better and better.

10. Svelte is pretty nice although it has quirks

11. Oracle seems to focus on Micronaut - probably a second bet that I won.


Wednesday, March 17, 2021

Prepping

Photography by Roger Brown Photography on Shutterstock


Created on 2021-03-12 21:41

Published on 2021-03-17 17:17

"Anything that can go wrong will go wrong" (Murphy's Law)

"Preppers" are people who spend a lot of time preparing for an imminent disaster. In case of doomsday, they might survive it, having planned minutely for every possible outcome long before. Preppers stock equipment, clothes, and food, and often create shelters so that they can face even the most atrocious situations with relative calm. Prepping puts some financial strain on people, but not as much as one might think, as preppers' most important assets are knowledge and improvisation. Regardless of how elaborate the disaster recovery plan is, it's never perfect, so creativity should always be welcome.

Preppers use some techniques that IT should learn from:

  1. Rehearsal - preppers really exercise their plan. They try surviving in the wilderness with their bushcraft skills, thus testing their theories and equipment. In IT, from time to time it's worth simulating a nasty situation in the infrastructure. It doesn't need to be a complicated one that would imply the whole Simian Army, but at least simulate a disaster, solve it, and take notes. Ask colleagues to stop a service or make an Ethernet loop in a switch. Learn where to look for clues, and when the incident is solved, try to automate the solution. There are organizations that take rehearsals very seriously: I've been part of probably three fire simulation drills at Adastral Park in less than a month, but I can say it was the only occasion where I've seen so much calm and organization, as people knew their drill.
  2. Share information - preppers are sometimes organized in clubs, and they share lots of relevant information on materials and techniques. Community-driven DRPs might be interesting, as people come with different viewpoints that might reveal flaws or provide better solutions. Information sharing might be the most important source of learning.
  3. Use cheap and easy-to-find materials - many things can be simulated on scaled-down versions of the real systems. A full K8s/EKS cluster can be simulated with a puny Raspberry Pi running K3s. Most things will be the same, but the costs will be negligible. The differences can be documented, and there are also lots of third parties that can simulate other services cheaply (Lambdas, object storage). In some situations one will have to improvise massively to recover from a disaster, and then knowing what to do with almost off-the-shelf components becomes invaluable.
  4. Invest in understandable systems - preppers try to understand nature and mechanisms so that they increase their survival odds. Although automation is a must, it should be transparent to the engineers and clearly present what's behind the scenes. If the state is extremely bad and the automation magic has stopped working, then understanding the system might be handy, at least to get back to a restorable state with minimum effort. I for one was in a similar situation when I discovered that the backup I had made had to be restored on a disk with different physical characteristics. Knowing what the backup process was, I was able to hack the restore onto the new drive, although the tool did not support it.
  5. Keep stashes - preppers often have hidden stashes of food and tools around the house. Engineers should also keep stashes of resources, if possible offsite or in different clouds. If something goes horribly wrong, at least some things could be partially running. An OCI solution, replicated on a different vendor's platform, would make a lot of sense: if a provider is unavailable, restoring the solution on another would not mean an entire rewrite but rather a reconfiguration. Vendor-agnostic IaC solutions and CNI-packed services are handy in this case - they can easily be run on different environments.
  6. Have spare batteries - preppers don't rely on a single centralised solution, e.g. grid electricity; they always have alternate, independent energy sources they can use: batteries, generators, etc. In the light of the recent OVH fire, let's imagine the following scenario: the company's automation solution (Chef, Puppet, Octopus) is also affected somehow; the situation is then similar to an electricity or transport failure. On the other hand, having the IaC implemented through standalone tools (Terraform, Ansible) increases resilience, as there is no longer a single point of failure: any engineer could replicate the infrastructure from his or her own machine, as long as one can connect to the datacenter/cloud.
  7. Learn how to use and build tools - preppers spend time learning how to build bows, arrows, and tents, and also how to use them. Off-the-shelf tools are fine (backup/restore, IaC, etc.), but sometimes one needs to do specific operations (e.g. recover data from a ZFS-formatted drive). Then building small, specific tools becomes invaluable, so a programming language that permits rapid development is handy - currently I see people looking towards Python and Go.

What I am trying to say is that pessimism and preparation are key to survivability in case of a disaster. Even a digital one.