Thursday, November 15, 2018
The usual rants
1. Senior engineers
I had to work with a senior engineer (per our standards). He has about 16 years of work experience on the same platform, .NET, since its beginnings. Still, it took him four days to build a simple NancyFX-based RESTful API, and then about a week to get it running in a container. It was very hard to make him comprehend the documentation, or understand the conventions used in software projects.
2. Architects
Yes, sometimes I feel ashamed that I am called an architect. It seems that we are a peculiar breed that forgot that we do software. Some of my colleagues have their heads so far up their butts that they cannot get out of their own paradigms. They consider the single piece of software they have worked on for years to be the product that could solve all of humanity's problems with a mere matter of configuration. They really have the philosopher's stone. Too bad their product does not sell. Others consider themselves ethereal creatures that have nothing to do with code (that's peon work) and stay in slideware land where they draw boxes. GO CODE, DUDES! Draw nothing until you have at least partially tested your fucking suppositions. Don't stop at a single solution. Use the architectural methodologies that YOU say YOU know (ATAM, CBAM, or other mumbo-jumbo).
3. Build managers
Well, Mordacs. They do everything to hide their inability to understand how software is constructed. They are never part of a project but hinder projects by enforcing stupid policies and environments. They are irrational and cannot be convinced with arguments. They "know their drill" albeit nobody else does. Correct software is not created by pushing a compliant build environment every night. Correct software comes from a good pipeline, which means good version control, fast compilers, TESTS and deploys. Many of them. Not only every 2nd year or so...
4. Myself
I have really reached the conclusion that I speak in vain, hence I have nothing to gain. So I should shut up and smile as Tyler Durden did while the skyscrapers were collapsing. It makes no sense to make things right as long as nobody cares. Everybody will continue getting their paycheck. And nothing good or shippable comes out.
Tuesday, October 30, 2018
Scratch for Engineers
Created on 2018-06-19 08:58
Published on 2018-10-30 11:23
I often spend time at home playing in Scratch with my daughter. We can do so many wonderful things there. One of our favourites is to create a fairytale. We take princes, princesses, dragons and unicorns and make them interact. It is very nice to see dialogues, spells and fights built on the little visual language. It's truly addictive.
But what we are in fact doing is programming an actor system. Scratch is actually a very crude but effective actor system. It is not meant for production work like Akka, but it can be used for showcasing concepts and doing quick, dirty and fun prototypes.
So instead of resorting to heavy frameworks and dry prototypes, why not make a fun prototype in Scratch? A scenario turned into a fairytale? We could model some of the services as knights and some of the threats as dragons. Wouldn't it be fun to have a database princess? Or a unicorn dispatch service? The mental image of a fairytale might be more evocative in the long term and give a human touch to some abstract concepts. The prototypes get a story; they are no longer abstract proofs that a solution exists somewhere in the vast solution space.
One can read such a script in Gherkin terms:
Given that I am a dragon
When I receive "attack_dragon"
Then I think "hmmm..." for 2 seconds
And I lose one head
Does it make sense? Which one is more acceptable? Which one is easier to remember?
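To make the actor analogy concrete, here is a minimal sketch of the same dragon as a mailbox-plus-thread actor in Python. The names and the three-headed setup are made up to mirror the fairytale above; this illustrates the message-passing idea, not any particular framework.

import queue
import threading
import time

class Dragon(threading.Thread):
    # A toy actor: one mailbox, one thread draining it.
    def __init__(self):
        super().__init__(daemon=True)
        self.mailbox = queue.Queue()
        self.heads = 3

    def run(self):
        while self.heads > 0:
            message = self.mailbox.get()   # block until a message arrives
            if message == "attack_dragon":
                print("hmmm...")           # the dragon thinks...
                time.sleep(2)              # ...for 2 seconds
                self.heads -= 1            # ...and loses one head
                print("heads left:", self.heads)

dragon = Dragon()
dragon.start()
for _ in range(3):
    dragon.mailbox.put("attack_dragon")    # three knight attacks
dragon.join()                              # wait until the dragon runs out of heads

Scratch's "when I receive" blocks are exactly this receive loop, just drawn as coloured bricks.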
I really think that Scratch can help people learn in a fun way about actors, messages and programming. And yeah, it can make for some impressive presentations. Computer science is full of these analogies; think of the "Dragon Book" or Valgrind. So, no, I do not think that I am out of line.
Wednesday, July 25, 2018
Winds of change
Created on 2018-07-24 14:25
Published on 2018-07-25 14:52
In 1964 Thomas Gladwin compared the way Trukese and European navigators sail at sea. The Europeans tried to follow a plan and stay "on course", while the Trukese navigators hopped from point to point towards the objective, deciding ad hoc what the next segment would be and how to tackle it. While plans are clear ways of presenting one's goals and sharing information in a deterministic, stepwise approach, they require that every possible outcome has already been thought of and scripted. Plans are great when you know the space in which you sail: the distance to the next shore, the exact positions of islands, the ability to track progress (astronomically). Plans are great when you have maps.
What if you go into the unknown, to places where you have only a vision but no paths, no charted seas? Change is always exploratory and vision driven. In an enterprise, when a change process starts, there is no chart of the process. Probably there are some war stories people know about successful changes or failures, but a given organization is itself in uncharted waters. Not all the stories are true, sometimes sailors exaggerate, and not all the dragons are known. Basically, plans are just retellings of the stories, motivational parables that give courage and determination to those who are handling the changes.
What I am trying to state is not that plans are bad; they are good in known areas. I rather want to stress that change should not follow a strict plan but a set of actions geared towards a goal, and in some situations not even the original one. Columbus' plan was to reach India by going westwards and staying on course. Polynesian people discovered countless islands and probably some continents by sailing in smaller increments in a given direction. Columbus' plan failed (albeit its failure was a greater win), and this shows that exploration and change cannot be safely done by following a script, because of the unknowns. Plans are the perfect method for improvement, when quantitative data is available and flows can be maximized by following the script. Exploration is about qualitative, dangerous/safe, easy/hard decisions. Polynesian people sent ships out to check the seas; some ships came back with new information so that the others learnt from it. This is the equivalent of prototyping, of "throwing nuts". Change should first be isolated to some parts of the organization, so that failure does not generalize, while the success of the smaller changes can be generalized and retold as magnificent epics that spark the imagination of the followers.
The agile principle of "responding to change" seems to me a more suitable way of handling exploratory issues. The vision shall be followed, but steps, "situated actions", should be taken every time something new comes up in the change process. The context is always different from step to step, and shortcuts may be possible. In the animation above, if we consider the green as situated actions and the red as a plan, we can see that the plan is easy to derive once enough of the unknown is charted out. Although it looks like a waste of resources to explore things exhaustively, in real life there are lots of heuristics that can be used to limit the effort and keep the focus on the vision. (A* pathfinding animation taken from: http://www.andrewsouthpaw.com/2014/05/28/trailblazing-and-graph-searches/).
I'd rather sail the organizational winds of change from hop to hop, following a goal and trying to find the best path from my current position, than follow a long plan, probably based on hearsay, across uncharted waters. Management should help by keeping the vision consistent, reminding people of it often, and offering rewards for those who follow the winds.
Sunday, July 8, 2018
Surviving Vacation
Friday, April 20, 2018
The interesting bits of the day
2. Kata Containers: an interesting alternative to Docker
3. Rhei Clock: a fascinating clock with a ferrofluid display
4. Bad sector scanning tool: mechanical drives are still cheaper than SSDs and offer higher capacities
Friday, March 23, 2018
Lean documentation tooling
Created on 2018-03-21 12:55
Published on 2018-03-23 07:33
To get a job well done, the tools should be carefully chosen. However, tools have to be chosen based on the size of the job one is trying to get done. If the job is as massive as digging the Panama Canal, a shovel won't help much, and for a flower pot an excavator would be overkill.
For a software project there are several kinds of tools used to build and document the project. One important category is communication and collaboration tools. The typical enterprise collaboration tool is Microsoft SharePoint or something equivalent, probably Jira. Many of the requirements, architecture, design and deployment artifacts are shared between team members and with stakeholders using SharePoint as a repository for documents, spreadsheets, diagrams and presentations. However, the process of editing these artifacts requires checking out the file, updating it, and then checking it back in so that other people can see the changes. This raises a problem, because concurrent modifications to a file are hard to do, involving some kind of merge at document level. Moreover, some tools are not really designed to work in this manner and have their own internal versioning mechanisms (e.g. Sparx Enterprise Architect, which uses a database in order to offer concurrent access to the same model). You might also need the same tool installed for every member of the project in order to contribute changes, so more licenses are needed and the costs rise.
What I find interesting here would be to increase the collaboration value and lower the total cost of tooling. One excellent example is the Wikipedia engine (MediaWiki), which permits thousands of simultaneous users to read and change content concurrently. This is also an interesting approach inside an enterprise team, because a wiki engine acts as version control for the content it holds. Instead of adding Word files, it would be easier just to edit some wiki pages, where edits and merges happen concurrently. Security rules and workflows can be enforced on a wiki. SharePoint permits the creation of wiki pages, so it enables this kind of collaboration. As the information is less constrained by long check-out, modify, check-in cycles, it tends to stay more up to date.
How to treat diagrams in this case? In fact, diagrams can be treated the same as other textual information. Tools like PlantUML can render diagrams in real time starting from text representations. The text representations of diagrams can be written by any engineer or even extracted from existing code or infrastructure. This information stays quite up to date with the current state of the project and benefits from all the advantages mentioned above: versioning, merges, etc. There can even be a reverse flow where diagrams are generated from code or from deployed servers and containers, thus documenting the live state of systems.
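To give a flavour of how lightweight this is, here is a small PlantUML sequence diagram written as plain text; the participant names are invented for illustration, and the image is rendered on the fly by the wiki or the build pipeline:

@startuml
actor Engineer
participant "Order Service" as orders
database "Order DB" as db
Engineer -> orders : POST /orders
orders -> db : INSERT order
db --> orders : ok
orders --> Engineer : 201 Created
@enduml

Because it is just text, the diagram diffs, merges and versions exactly like the rest of the page.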
When the content must be shared outside the team, it can be exported to PDF or another format, keeping or even enhancing the formatting and appearance of the document. For an organization that embraces a devops culture, this approach is in line with its philosophy: documentation and collaboration become part of the production pipeline, always deliverable and versioned. It can be integrated both into the CI/CD tools and into the development tools (extensions for Eclipse, IDEA and Visual Studio are readily available).
The cost of such an approach might very well be close to zero, especially when wikis or similar solutions already exist in the company. This is not mutually exclusive with engineering solutions such as Rational Rose, Enterprise Architect or even Visio. The latter are extraordinary tools with a broader scope than mere collaboration and information sharing, as they enable model-based development, code generation and other high-end features. A lean approach would avoid a high upfront investment in tools and use the expensive licenses in a more rational way; the funds saved might be used to bring more value to the product.
Tuesday, March 20, 2018
WebOS Open Sourced means opportunities
Created on 2018-03-20 17:32
Published on 2018-03-20 20:33
When LG acquired WebOS from HP I was quite skeptical that anything interesting could come out of it. WebOS on its own is a powerful platform, but at that moment it seemed that it did not have the momentum to position itself alongside Android or ChromeOS.
The first surprise was WebOS on LG TV sets. The WebOS stack was a huge hit, as it offered lots of functionality at a decent speed on quite constrained hardware. The UI was clean and the HTML + CSS + JS programming model was awesome. It really pushed development on embedded devices to a new level.
Now LG has opened up WebOS under an Apache license (indeed, there was another project called OpenWebOS based on the HP source code), and we can see a modular stack of well-integrated components based on Chromium. Maybe the EFL libraries from Tizen offer fancier graphics, but WebOS offers ease of programming.
The most interesting part of WebOS is its potential ubiquity. It could run on devices ranging from touch and touchless tablets to TV sets, from car dashboards to building management, from small devices to virtual machines. It would make quite a nice UI for a fridge or a printer, or why not ATMs. Overall, the new platform can improve both the UI and the features of products using it, as well as reduce the in-house development costs of custom UIs.
Moreover, having a community that sustains the development of this OS would add features to the project quickly and would enable ports to various hardware (I have already seen a port to the RPi). A niche for a simple and clean UI opens up in places where a fully fledged OS such as Android is too much. Having a web-like programming model without the need for third-party frameworks such as Ionic is also a great opportunity, as it would make it easier to attract developers to the platform.
This being said, I'm eager to see how WebOS will evolve, as I feel it is in a very sweet spot right now.
Friday, March 16, 2018
And here is March already
Learnt a little bit of MiniZinc; that was interesting.
Today I migrated Sonar 6.2 to 7.0, and that was awesome. The new version is far better: cleaner UI, and it works like a charm on a not-so-great VM. Because this went well I also updated Jenkins and, alas, installed "Blue Ocean". That was kind of unfortunate. Blue Ocean is okay but highly incomplete for an enterprise. I tested it because I wanted something that could outperform M$ TFS 2017 in terms of usability and integration.
The machines I am currently building with are:
1. A GitBucket ALM + repository VM
2. A SonarQube 7.0 VM (standalone, only because of the Postgres DB)
3. A Jenkins build master VM
4. A macOS build slave
5. A Windows build slave
The first three are all 2 CPU/4 GB RAM/128 GB HDD. Probably I should consolidate them onto a single, more powerful machine.
Meanwhile, I got fed up with the Azure CLI on another project.
Sunday, January 14, 2018
Thanks PSI, Trug and Wildfire!
Created on 2018-01-14 09:37
Published on 2018-01-14 10:38
I will probably remember these handles forever, as they were the first guiding lights of my developer career.
Somewhere around 1994 our school received its first 386 computer. It also had a SoundBlaster Vibra card. It was a fantastic machine compared to the old monochrome 86 and 286 machines we had in the lab. To test the machine we decided to play some games or install Windows... In the end somebody remembered that he had something extraordinary that ran only on 386 machines. It was Future Crew's Second Reality demo. We unzipped the two-floppy archive. Started it. Watched it to the end. We were speechless. We watched it again. And then again. And again. It was clearly addictive for us. We wanted to do the same tricks as PSI, Trug or Wildfire.
Up to that point I was not much interested in programming. Although I was learning algorithms, I didn't have any practical use for them. Second Reality changed that. It was clear that if we wanted to do something similar to what Future Crew did, we had to learn. In 1994 obtaining documentation was quite hard. There was no internet available to us, no access to BBSes or similar sources. The only way to get some info was from older friends who were already at university and had access to information. I collected dozens of disks with text files about assembly language, VGA graphics, protected mode, interrupts, algorithms. It was the moment I started to understand algorithms; Bresenham's line algorithm was one of the revelations. Then slowly, in about one year, my colleagues and I were able to replicate most of the things we had seen in the demo. It was a great achievement for us, as we did not reverse engineer the code but created a similar demo using from-scratch implementations. As the 386 machine was hard to get time on, the demo worked reasonably well on 286 processors.
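For those who never met it: Bresenham's revelation is that a line can be rasterized with integer additions only, no floating point, which mattered enormously on those machines. A minimal sketch in Python rather than the assembly we wrote back then:

def bresenham(x0, y0, x1, y1):
    # Integer-only line rasterization between two grid points (all octants).
    points = []
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy                  # the running error term
    while True:
        points.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:               # step horizontally
            err += dy
            x0 += sx
        if e2 <= dx:               # step vertically
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 6, 4))

Everything stays in integers; on a 286, the 2 * err is a single shift.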
I learned a lot in that year: from programming to system architecture, from peripheral handling, low-level programming and basic DSP to objects and data structures. I learnt advanced algebra because I needed it for the code and the 3D stuff, and improved my physics because I had to understand the internals of particle systems. It was probably the year in which I learnt the most in the software field. The knowledge gained then proved useful for many years, as it eased my CS studies a lot.
Again, I have to thank PSI, Trug and Wildfire for opening up a world of endless choices and possibilities! I am still impressed by Second Reality today, as it continues to be stunning from every possible angle: graphics, sound, code. It seems that others also consider it a marvel; the demo was included in Slashdot's top 10 hacks of all time. I learnt that if you have a goal and you are surrounded by smart colleagues (Dan, Vale, Cipi, Raul, Adi, Robi), things become possible and, through percolation, unexpected, marvelous results appear.
Tuesday, January 9, 2018
Talks, Tournaments and Pageants
Created on 2017-12-11 09:45
Published on 2018-01-09 20:05
I have recently seen lots of ads inviting developers to compete in massive coding tournaments in order to get hired as architects for quite a lot of money, considering that it is remote work.
It evoked for me a medieval tournament in which knights fought and showed off their skills in order to conquer the heart of a lady and obtain her hand in marriage. The tournaments were always bloody and had only one winner. The skills proven there are not the skills that sustain a family, the skills for durable builds. It was just a display of mastery of weapons, not a display of strategic vision or the ability to provide food and care for offspring.
Does this style of competition work for hiring architects, or any other role?
For the sake of the argument, let's define the role of an architect. It is quite hard to define exactly, but an architect should be a technical lead with hands-on expertise, up to date with technologies and experienced enough to coach others. More importantly, he has to be a good communicator.
So how would one measure these skills? You can measure coding skills or design abilities, but many of the other non-technical characteristics of an architect are hard to measure objectively in a tournament. Coding is for sure a very important aspect of the job and should be measured the same way as for programmers. But how does one measure technical insight? How does one measure experience? How about the capacity for teaching and explaining? These are also important qualities and often make the difference between equally gifted technical individuals. In a tournament, how can one measure them? It is a time-boxed, zero-sum, check-the-correct-answer type of contest. It reveals nothing about personality; it just proves that the candidate is a fast coder. Even in reputable companies such as Google and Amazon, the interview is in person, it has a human touch, it is customized for the candidate. I, for one, love to hear and learn from people. I love seeing their train of thought, debating solutions, offering alternatives. Interviewing and hiring is not a pageant; it is rather matchmaking, and it is not a zero-sum game. In the end the idea is that everybody wins: the "knight" will get the princess, and the princess will get the most suitable candidate, not the most muscular code-gladiator. Talking, as opposed to tournaments and pageants, gives immediate feedback and side-channel information such as body language and the capacity to relate. Maybe these are not really important for remote workers, but they are important for those who have to interact with people.
If this applies to architects, then it surely applies to project managers, to creatives, to business analysts. Some skills cannot be judged in the terms of a pageant or contest. Some things are not born in sparks of inspiration but come in time through idea sharing and debating; the spark is the result of the percolation of a sufficient number of ideas.
Path towards future
Created on 2018-01-09 18:49
Published on 2018-01-09 19:54
I am a long-time Unix (mostly Linux) user. I started with Linux somewhere in 1995, having the first distribution on about 40 1.44 MB disks. After about one week of struggle I succeeded in running fvwm under X. Then I became a fan. Everything seemed natural and easier in Linux than in DOS or even Windows.
As time passed I learnt more and more about Unices. I worked with FreeBSD, OS X and Solaris. Each of these had some features that I really wanted to be available in Linux.
When I first worked with Solaris 10 I hated SMF. But after some time I discovered that it was better than the BSD init or the System V init on Linux. I liked the declarative approach and the simple dependencies between services. I looked for similar solutions on Linux but could not find anything at the same level. Apple's launchd was promising, but licensing issues and plist editing were show stoppers. Upstart always seemed half cooked, somewhere between init and SMF. OpenRC was good but quite niche. So when systemd appeared, I felt that Linux was really about to gain a major uplift. systemd solved many things I was missing in old init systems: automatic daemon restarts and monitoring (I used to run monit for this), a uniform syntax for service startup files (INI syntax is easier to maintain than XML, especially over slow lines to remote systems using vim), device management, no more fancy shell scripting in the files, programmability (it offers APIs). Working with systemd made some of the mundane tasks of creating installable packages bearable. When I started to use Ansible I also noticed how easy it was to control the behaviour of a system through systemd interfaces and tools. Farewell init! I have never looked back since. Frankly, I do not understand why people hate systemd so much. For me it solved most of the issues, so I am on its bandwagon.
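To show what "declarative" means here, a minimal, hypothetical unit file; the service name and binary path are made up. Compare these few lines of INI with the pages of shell boilerplate an equivalent SysV init script required:

[Unit]
Description=Example daemon
After=network.target

[Service]
ExecStart=/usr/local/bin/mydaemon --foreground
# supervision and automatic restart built in, no separate monit needed
Restart=on-failure

[Install]
WantedBy=multi-user.target

Drop it in /etc/systemd/system/mydaemon.service and a single "systemctl enable --now mydaemon" takes care of the rest.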
Another thing I wished for was the easy drag-and-drop application installs of OS X. On Unices this is still problematic. Deb, rpm and other package formats are managed by a myriad of tools (yum, dnf, apt, aur, zypper, pkg, ips, ...), each with different syntax and options. A nightmare. Then, around 2008, I discovered GoboLinux, a distribution that really did something in the direction of simpler package management. Gobo abolished the FHS and offered a better alternative: every application is installed in its own directory and from there symlinked to a central location. Traditional Unix directories are hidden from the user at kernel level, offering a clean view of the system. Pretty neat. Even nowadays, tools such as Flatpak cannot achieve this.
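From what I remember of the layout (paths from memory, application and version purely illustrative), it looks roughly like this:

/Programs/Firefox/61.0/bin/firefox   <- the application owns its own directory
/System/Index/bin/firefox            <- symlink into the central location on the PATH

Removing an application then amounts to deleting its directory and sweeping the dangling symlinks.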
A third thing I wished for was DTrace on Linux. Solaris, FreeBSD and OS X have it, but on Linux there was no similar software; not even something like mdb was available. DTrace made the supervision of systems easy. It was lightweight and extensible: I was able to monitor realtime-priority processes on Solaris with as little as a 5% performance hit. Only recently has DTrace started to be developed for Linux, and it is not yet ready for primetime. That is kind of sad, as it would be invaluable in the current context of cloud applications, where metrics and performance are crucial for scalability decisions. DTrace enables telemetry at a very low cost, even for applications that were not built with telemetry options.
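For a taste of how cheap this kind of observation is, a classic DTrace one-liner that counts read(2) system calls per process on a live machine, without touching the applications at all:

dtrace -n 'syscall::read:entry { @reads[execname] = count(); }'

Press Ctrl-C and it prints the per-process counts; the aggregation is kept in the kernel, which is exactly what keeps the overhead so low.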
I have the impression that Linux is slowly moving towards something more future-oriented while still keeping ties with its Unix origins. I do not feel that these things break the Unix philosophy; rather, they offer up-to-date tooling while holding the same old truths and principles in place.
As Arthur Schopenhauer said: "All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident." The same, in my opinion, will happen with some of these: they will become part of the next generation of Linux distributions, overcoming all the present opposition.