To low code or to not low code
Created on 2022-01-31 19:43
Published on 2022-01-31 21:37
In 2005 I had my first encounter with a low-code platform. It was Alcatel's SCE/SDE, later known as PrOSPer. It was a pretty capable environment for that period, targeted at generating IN services composed of SIBs (Service Independent Blocks). Most of the IN abstractions were encapsulated inside the SIBs, and the service creator chained the SIBs visually to create a new service. When needed, a new SIB could also be created by implementing the set of interfaces required for a SIB to run in its SLEE environment and to be visible inside the SCE/SDE interface.
The system generated C/C++ code, compiled it, and then created the deployment descriptors. The idea behind it was a great one: it offered domain experts a means of describing an IN service in a high-level language, with a good visual interface. However, there were some issues with it. The quality of the generated code was not optimal, and the generator was a "write-only" one that lacked any form of round-trip engineering; sometimes the generated code had to be patched by hand, which made the high-level description obsolete because the patched code could not be imported back into the SCE. Versioning of the code was also a nightmare in the SVN/ClearCase environment. All in all, the developers avoided the SCE/SDE and tried to handcraft their own code with simpler call-flows and better control over the implementation, keeping only the parts that were absolutely necessary to communicate with the SLEE.
The next low-code experience I had was with Apache NiFi. NiFi is a flow-based programming system used for all kinds of data manipulation. It is again composed of a set of blocks that perform various operations on the data streams sent to their inputs. New blocks can be easily added, or generic scripted blocks can be inserted, so the system is easily extensible. Versioning is still not great, but it is not terrible either; the system works pretty well, but it still misses some advanced features such as meta-models, types, and so on. There are several other data-flow systems, such as Node-RED, but most of them are in the same category of "write-only" generators.
A major step forward was made by the Eclipse Foundation with its Eclipse Sirius modelling workbench. This one was indeed based on higher-level abstractions. Being built on top of GEF/EMF, it had superpowers such as reverse engineering, round-trip engineering, and easy creation of user interfaces that were not limited to data flows like the systems before it.
The interesting questions that those low-code solutions raise are in the sphere of languages. What is a language? What are the entities a language operates on? What are the rules that make the language correct?
Professor Jordi Cabot reaches the conclusion (see slide 13) that low-code is in fact a fancy syntax and a marketing term used for some model-driven architectures and development environments. All the "low code" systems described above are in fact visual syntaxes that describe the interaction between some entities (called blocks, or SIBs).
Having models and meta-models makes "low code" even more interesting, as other abstractions can now be created. Constructions in the new language can have formal semantics, and type systems can be applied. Testing can be moved from the generated code to the visual syntax itself. Probably, as Eric Evans also describes in his Domain-Driven Design book, the most important quality of "low code" is that it enables human interaction and comprehension. Maybe lawyers will not operate with visual law-designing tools (although it would be interesting), but they could certainly grasp models of law, fact, proof, and patent that would permit them to use a low-code solution in their own bounded context.
So far the "low code" solutions I referred to have been visual, but this is not the only possibility: there can certainly be low-code textual languages, or DSLs.
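As a hedged illustration (the syntax, rates, and thresholds below are invented for this post, loosely in the spirit of the Dutch tax DSL linked in the references), a textual tax rule in such a DSL might read:

```
rule income_tax:
  applies to TaxPayer
  when taxable_income > 0 EUR
  tax is 37% of taxable_income up to 69000 EUR
      plus 49% of taxable_income above 69000 EUR
```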
The above example encapsulates lots of domain entities: tax payer, taxable income, entities that would be pretty hard to manipulate in ordinary programming languages because of the amount of boilerplate code needed. If we take another step, the situation could look like:
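A sketch of that next step (again, a purely invented syntax) could declare the domain entities themselves, giving the rule a meta-model to be checked against:

```
entity TaxPayer:
  taxable_income : Money
  residence      : Country

rule income_tax:
  applies to TaxPayer where residence is NL
  tax is 37% of taxable_income up to 69000 EUR
      plus 49% of taxable_income above 69000 EUR
```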
If we accept that a "low code" visual solution is just an alternative syntax for a DSL, then we can link the two worlds. Most of the problems that were hard to solve in the visual syntax can now be addressed in the DSL, which brings many advantages:
- as before, we can have type systems and meta-models
- we get an AST that facilitates transformations of the models
- we get sane versioning, as textual representations are VCS-friendly
- we get code that humans can reasonably understand and maintain
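A minimal, runnable sketch of those advantages (all names and the rule syntax are invented for illustration): a tiny textual rule language is parsed into an AST, which can then be checked and interpreted, while the source text itself diffs cleanly in any VCS.

```python
# A hypothetical textual tax-rule DSL, parsed into an AST.
from dataclasses import dataclass

@dataclass
class Bracket:
    rate: float    # percentage, e.g. 37.0
    up_to: float   # upper bound of the bracket; "inf" for the last one

@dataclass
class TaxRule:
    name: str
    brackets: list

def parse(src: str) -> TaxRule:
    """Parse lines like '37.0% up to 69000' into a TaxRule AST."""
    lines = [line.strip() for line in src.strip().splitlines()]
    name = lines[0].removeprefix("rule ").rstrip(":")
    brackets = []
    for line in lines[1:]:
        rate_part, bound_part = line.split("% up to ")
        brackets.append(Bracket(float(rate_part), float(bound_part)))
    return TaxRule(name, brackets)

def check(rule: TaxRule) -> None:
    """A 'type system' in miniature: bracket bounds must be increasing."""
    bounds = [b.up_to for b in rule.brackets]
    assert bounds == sorted(bounds), "brackets must be ordered"

def tax(rule: TaxRule, income: float) -> float:
    """A model transformation: interpret the AST as a computation."""
    total, lower = 0.0, 0.0
    for b in rule.brackets:
        taxed = max(0.0, min(income, b.up_to) - lower)
        total += taxed * b.rate / 100
        lower = b.up_to
    return total

rule = parse("""
rule income_tax:
  37.0% up to 69000
  49.5% up to inf
""")
check(rule)
print(tax(rule, 100000))  # prints 40875.0
```

The same AST could just as well be rendered back into a visual syntax, which is exactly the link between the two worlds described above.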
There are also two other aspects that emerge here. One is the possibility of a "language engineering workbench" in which domain-specific languages can be created and augmented with thick semantic layers. This has huge applicability in domains where there is a lot of formalism and many already-developed systems.
The other interesting one, which in my opinion has a major impact, is "projectional editing", which permits the language creator to manipulate the language internals more easily, in some situations without resorting to traditional lexers or parsers, as the language internals are exposed as models in their own right. Modern language workbenches already offer great tooling for projectional editing, making the development of DSLs easier.
What I am trying to conclude here is that "low code" is not a single term but rather a combination of paradigms, so in my opinion the evaluation of "low code" cannot be done in total separation from domain modeling and language engineering. My arguments are probably naive, but I see value in higher-level system descriptions, even though this is not always necessary or desirable. Higher-level abstractions are beneficial not only in human-machine languages; they are also great for human-to-human communication, as they reduce miscommunication. Well-engineered languages could result in better implementations.
Many thanks to Jennek Geels for introducing me to his concepts of domain modeling.
References:
https://modeling-languages.com/low-code-vs-model-driven/
https://martinfowler.com/dsl.html
https://martinfowler.com/bliki/DslBoundary.html
Dutch tax DSL: https://resources.jetbrains.com/storage/products/mps/docs/MPS_DTO_Case_Study.pdf
Language workbench: https://web.cecs.pdx.edu/~apt/onward14.pdf