I wrote most of my rants elsewhere. But this kind of sucks...
Created on 2022-11-17 12:29
Published on 2022-11-19 19:56
The life that we are currently living is shaped in many ways by "legacy". In most cases, the word legacy carries positive connotations related to wealth, culture, and tradition. However, this is not the case with software. Here the word "legacy" has a lot of negative meanings associated with it. What is "legacy" in software? Why do we consider it bad?
Legacy software simply means that the code base is old, probably unmaintained, and hard to read. It says nothing about the value or the quality of the code; it just states that it is old. Then why do we consider it bad? In the real world, legacy is a source of wealth, something we grew upon. Probably the answer lies in our laziness. How many of us can fluently speak or write in Latin, for example? Or classical Greek, Old Norse, or medieval French? A few of us can, which is a pity, because many fundamental works of mankind were written in those languages. Still, we do not read them in the original; it is more convenient to read modern versions in a familiar language.
The same happens in software. Programmers use contemporary languages and have lost the ability to read older ones, so the code they stopped understanding became mysterious and potentially dangerous to them. As with classical languages, there is just a handful of people who have the patience to study and understand older code, and those are the ones still able to explain the value that we, in our pride and ignorance, cannot see in it.
Many critical infrastructures and day-to-day codebases are "legacy"; still, they work well and keep supporting our daily life. Banking still relies on COBOL and RPG, scientific computations still use Fortran, and operating systems are still built on C.
I have to praise a relatively unknown "legacy" programming language and runtime: Concept. Concept is a 4GL (fourth-generation language) that started in Norway during the '80s. Googling it might yield zero results; nevertheless, it delivers daily for a few million people. The language has all the features one would expect: a UI library, database connectivity, the ability to run both server side and client side, and a degree of memory safety. The syntax seems a little dated but is still expressive enough to implement huge projects. As the community of Concept developers is not large, it is starting to lack new talent and tooling. Despite this, there is still a maintenance effort, and the language is kept as much as possible in line with the latest industry trends: REST, JSON, MQs, 64-bit code generation, containers, and such.
Understanding older languages is never easy. As I said before, due to our laziness we tend to ignore them and rewrite everything, with no guarantee of doing a better job. Rewriting is often the better idea, but we first need to understand what we are replacing. We need tooling for understanding older code bases, especially when the original specifications of the software are lost. We need helper tools that guide us through the syntax and structure of the code and could provide handles on the business logic already written in the old code bases, enabling true reuse. Developing tooling that transparently retargets old languages to new platforms would guarantee that the legacy we received still produces the expected results, and we could build more interesting things in a more cooperative way. Rewriting software with feeble specifications or no specification at all, using the legacy system as a model but not fully grokking it, is in my opinion far more dangerous than keeping battle-tested code running.
Created on 2022-09-10 06:35
Published on 2022-09-10 14:15
Locality and Simplicity; Focus, Flow, and Joy; Improvement of Daily Work; Psychological Safety; Customer Focus - these are the five ideals of an organization that might blossom into a unicorn.
While some of them can be grown from the inside, starting from the development and operations teams and evolving them into a 'DevOps' culture, others are leveraged mostly by managers.
Helping teams improve on the first four ideals creates more room for the fifth. Customer-focused work creates a lot of turmoil and non-functional requirements, but in the end the price paid probably yields squared returns.
This is why managers who act toward the five ideals are as precious as mythical animals. It is also why it's quite sad when such a manager leaves.
Created on 2022-01-31 19:43
Published on 2022-01-31 21:37
In 2005 I had my first encounter with a low-code platform. It was Alcatel's SCE/SDE, later known as PrOSPer. It was a pretty capable environment for that period, targeted at the generation of IN (Intelligent Network) services composed of SIBs (Service Independent Blocks). Most of the IN abstractions were encapsulated inside the SIBs, and the service creator chained the SIBs visually to create a new service. When needed, a new SIB could also be created by implementing the set of interfaces required for a SIB to run in the SLEE environment or to be visible inside the SCE/SDE interface.
The system generated C/C++ code, compiled it, and then created the deployment descriptors. The idea behind it was a great one. It offered domain experts the means to describe an IN service in a high-level language, with a good visual interface. However, there were some issues with it. The quality of the generated code was not optimal, and the generator was a "write-only" one that lacked any form of round-trip engineering: sometimes the generated code had to be patched by hand, which made the high-level description obsolete because the patched code couldn't be imported back into the SCE. Versioning of the code was also a nightmare in the SVN/ClearCase environment. All in all, the developers avoided the SCE/SDE and tried to handcraft their own code, with simpler call flows and better control over the implementation, keeping only the parts that were absolutely necessary to communicate with the SLEE.
The next low-code experience I had was with Apache NiFi. NiFi is a flow-based programming system used for all kinds of data manipulation. It is again composed of a set of blocks that perform various operations on the data streams sent to their inputs. New blocks can be easily added, or generic scripted blocks can be inserted, so the system is easily extensible. Versioning is still not great, but it is not terrible either; the system works pretty well but is still missing some advanced features such as meta-models, types, and so on. There are several other data-flow systems, such as Node-RED, but most of them fall into the same category of "write-only" generators.
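The flow-based idea behind these systems can be sketched in a few lines of ordinary Python: each "block" consumes a stream of records and yields a transformed stream, and blocks are wired together by chaining. The block names here are mine for illustration, not NiFi processors.

```python
# Toy flow-based composition: each block is a generator transforming a stream.

def source(lines):
    """Emit raw records into the flow."""
    yield from lines

def parse_csv(stream):
    """Split each record into fields."""
    for line in stream:
        yield line.strip().split(",")

def filter_valid(stream):
    """Drop records that do not have exactly two fields."""
    for fields in stream:
        if len(fields) == 2:
            yield fields

def to_dict(stream):
    """Convert field lists into named records."""
    for name, value in stream:
        yield {"name": name, "value": int(value)}

# Wiring the blocks: the output of one feeds the input of the next.
flow = to_dict(filter_valid(parse_csv(source(["a,1", "bad", "b,2"]))))
print(list(flow))  # [{'name': 'a', 'value': 1}, {'name': 'b', 'value': 2}]
```

Real systems add buffering, back-pressure, and a visual wiring surface on top of exactly this kind of composition.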
A major step forward was made by the Eclipse Foundation with its Eclipse Sirius modeling workbench. This one was indeed based on higher-level abstractions. Being constructed on top of GEF/EMF, it had superpowers such as reverse engineering, round-trip engineering, and easy creation of user interfaces that were not limited to the data-flow style of the systems before it.
The interesting questions those low-code solutions raise lie in the sphere of languages. What is a language? What are the entities a language operates on? What are the rules that make the language correct?
Professor Jordi Cabot reaches the conclusion (see slide 13) that low-code is in fact a fancy syntax and a marketing term for certain model-driven architectures and development environments. All the "low code" systems described above are in fact visual syntaxes that describe the interaction between some entities (called blocks, or SIBs).
Having models and meta-models makes "low code" even more interesting, as other abstractions can now be created. Constructions in the new language can have formal semantics, and type systems can be applied. Testing can be moved from the generated code to the visual syntax itself. Probably, as Eric Evans also describes in his Domain-Driven Design book, the most important quality of "low code" is that it enables human interaction and comprehension. Maybe lawyers will not operate visual law-designing tools (although it would be interesting), but they could certainly grasp models of law, fact, proof, and patent that would permit them to use a low-code solution in their own bounded context.
So far the "low code" solutions I have referred to were visual, but this is not the only option. There can certainly be textual low-code languages, or DSLs.
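As an illustration, and only as a hypothetical sketch (the entities, rates, and thresholds below are invented for this example, not taken from any real tax code), a textual tax rule embedded in Python as an internal DSL might look like:

```python
# Toy internal DSL for a tax rule. TaxPayer and the bracket figures are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class TaxPayer:
    name: str
    taxable_income: float

# Brackets declared as data, close to what a domain expert would state:
# 20% on income up to 50,000 and 40% on anything above.
BRACKETS = [(50_000, 0.20), (float("inf"), 0.40)]

def tax_due(payer: TaxPayer) -> float:
    """Apply the progressive brackets to the payer's taxable income."""
    total, prev = 0.0, 0.0
    for limit, rate in BRACKETS:
        portion = min(payer.taxable_income, limit) - prev
        if portion <= 0:
            break
        total += portion * rate
        prev = limit
    return total

print(tax_due(TaxPayer("Alice", 60_000)))  # 14000.0
```

Even this small example shows how much of the domain (the payer, the income, the bracket structure) can be named directly instead of being buried in boilerplate.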
Such an example encapsulates many domain entities (taxpayer, taxable income) that would be pretty hard to manipulate in normal programming languages due to the amount of boilerplate code needed. If we take another step, the description can become even more declarative.
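Taking that step, the same rule could move to a purely declarative surface syntax that a domain expert edits directly; a hedged sketch, with the grammar and keywords invented for illustration:

```python
# Sketch: a tax rule written as declarative text and interpreted, so the
# domain expert edits the rule itself, not host-language code. The
# "bracket ... rate ..." grammar is made up for this example.
RULE = """
bracket upto 50000 rate 20%
bracket above 50000 rate 40%
"""

def parse_rule(text):
    """Parse bracket lines into (upper_limit, rate) pairs."""
    brackets = []
    for line in text.strip().splitlines():
        words = line.split()
        limit = float("inf") if words[1] == "above" else float(words[2])
        rate = float(words[4].rstrip("%")) / 100
        brackets.append((limit, rate))
    return brackets

def tax_due(income, brackets):
    """Apply the parsed progressive brackets to an income."""
    total, prev = 0.0, 0.0
    for limit, rate in brackets:
        portion = min(income, limit) - prev
        if portion <= 0:
            break
        total += portion * rate
        prev = limit
    return total

print(tax_due(60_000, parse_rule(RULE)))  # 14000.0
```

The rule text is now the artifact under version control and review; the interpreter is just machinery behind it.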
If we accept that a "low code" visual solution is just an alternative syntax for a DSL, then we can link the two worlds. Most of the problems that were hard to solve in the visual syntax can now be rewritten in the DSL, bringing advantages such as formal semantics, type checking, and testability at the level of the language itself.
Two other aspects also emerge here. One is the possibility of a "language engineering workbench" where domain-specific languages can be created and augmented with thick semantic layers. This has huge applicability in domains with a lot of formalism and many already developed systems.
The other interesting one, which in my opinion has a major impact, is "projectional editing", which would permit the language creator to manipulate the language internals more easily, in some situations without resorting to traditional lexers or parsers, as the language internals would be exposed as models in their own right. Modern language workbenches already offer great tooling for projectional editing, making the development of DSLs easier.
What I am trying to conclude here is that "low code" is not a single term but rather a combination of paradigms, so in my opinion the evaluation of "low code" cannot be done in total separation from domain modeling and language engineering. My arguments are probably naive, but I see value in higher-level system descriptions, although this is not always necessary or desirable. Higher-level abstractions are beneficial not only in human-machine languages; they are also great for human-to-human communication, as they reduce miscommunication. Well-engineered languages could result in better implementations.
Many thanks to Jennek Geels for introducing me to his concepts of domain modeling.
References:
Low-code vs model-driven: https://modeling-languages.com/low-code-vs-model-driven/
Domain-specific languages: https://martinfowler.com/dsl.html
DSL boundary: https://martinfowler.com/bliki/DslBoundary.html
Dutch tax DSL: https://resources.jetbrains.com/storage/products/mps/docs/MPS_DTO_Case_Study.pdf
Language workbench: https://web.cecs.pdx.edu/~apt/onward14.pdf