Microservices - a fashion statement?

21/11/2016   Industry insights  

The technology sector is as subject to the battle of ideas as any other industry, and so we can observe currents of thought come and go: open source surpassing proprietary systems, object-oriented overcoming procedural programming, and so on. It is my contention that most people are followers, and thus whether certain ideas take hold on a wider scale depends not only on the merit of the ideas themselves but also on who propagates them. Companies like Google, Twitter or Netflix have a considerable influence on what engineers consider cutting edge or good practice. It is also worth mentioning that the technology sector is no stranger to economic bubbles following a credit expansion, and it is not difficult to see how one side effect can be a waste of resources (the dot-com bubble being the obvious example). A company that commits to certain architectural decisions because its technical leaders treat them as a panacea, or as 'pure', can easily be blinded by a multi-million investment that allows it to continue in that approach despite monumental waste and slow progress that often ends in failure. The ability to indulge in one’s personal aspirations and pet projects, rather than being constrained to a pragmatic approach by limited resources, can and does influence architectural decisions.

I believe the above has had an effect on how engineers perceive, perhaps in many cases unknowingly, what is commonly known as microservices – an approach that designs applications as sets of independently deployable services. Even though containerisation of applications had existed for a number of years, it did not receive much attention or take off until Docker arrived. My perception is that microservices are widely deemed desirable and in many cases elevated to something absolutely fundamental to any good application design. My aim in this article is to share my findings and dissuade prospective adopters from repeating others’ mistakes.

Engineers generally love greenfield projects. It is something most hiring personnel will mention to a prospective hire as an additional incentive; the ability to start without a legacy codebase that few want to maintain is a godsend to many. However, starting on a new project usually means the boundaries between the services are unknown. For teams that decide to start with microservices from the beginning, the distinction between service A and service B at this point often lies only in the data domain of each service, not in its architecture, workload, dependencies or need to scale. In my experience, the situation where both services use the same language, base framework, dependencies and strategies, and have a predictable, almost 1:1 workload relationship, has been far too common. The decision to split these two, what I consider to be components, into separate microservices relieves the developers of the problem of keeping the components separated and organised within the codebase, while accumulating a host of new problems further down the build and deployment stream for what is essentially the developers’ responsibility. I say “split”, but I often found that the developers conceptualised both of them from the very start as microservices rather than components, and simply could not wait to create yet another repository and Docker image. Often the only rationale at this point is to prevent developers from easily coupling the two components within the codebase, something my colleague described as Fear of the Monolith.
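To make the components-not-services point concrete, here is a minimal sketch (all names hypothetical, not from any particular codebase) of two components kept separate inside a single deployable by an explicit interface, rather than by a network boundary:

```python
# Two "components" living in one deployable, separated by an explicit
# interface rather than a repository or container boundary.
# All names here are hypothetical illustrations.

class BillingService:
    """Billing component: exposes a narrow interface; internals stay hidden."""

    def charge(self, customer_id: str, amount_pence: int) -> bool:
        # Storage, retries, provider calls etc. would live behind this method.
        return amount_pence > 0


class OrderService:
    """Order component: depends on billing only through its public interface."""

    def __init__(self, billing: BillingService) -> None:
        self._billing = billing  # injected dependency, easy to stub in tests

    def place_order(self, customer_id: str, amount_pence: int) -> str:
        if not self._billing.charge(customer_id, amount_pence):
            return "payment-failed"
        return "order-placed"


# An in-process call: no containers, queues or load balancers involved.
orders = OrderService(BillingService())
print(orders.place_order("cust-42", 1500))  # order-placed
```

The boundary discipline (narrow interface, injected dependency) is the same one a service split would impose, without the build and deployment overhead; extracting a genuine service later remains possible once workloads actually diverge.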

It is now hard for me to see how starting with the above as the default architecture is desirable for many startups; inexperienced teams, however, have to learn for themselves what costs mushrooming microservices carry. Experience has taught me that if you do not sufficiently automate your build, testing and deployment pipeline, especially early in the process, you will pay a high price for indulging in microservices at large. Good monitoring and logging from the start are essential as well, because system complexity piles up quickly: you have to manage containers, load balancers, networking layers, security rules, message queues and so on. Many of these requirements simply do not exist in a setup where components receive calls through a single endpoint. The lack of proper contracts between services (in my view often just components) will often force engineers to test the microservices as a de facto monolith, exercising a specific build of most services at the same time, because trust in the deployment of any individual service is low. Likewise, in my experience, not enough attention has been paid to identifying workflows and their associated messages within microservices. The HTTP or message-queue boundary often causes a disconnect in tracking workflows, and developers end up searching through millions of log lines from various containers looking for traces.
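The workflow-tracking problem above is usually addressed with a correlation ID: an identifier minted at the system edge and attached to every log line and outbound call, so one workflow can be followed across service or component boundaries. A minimal sketch, assuming a hypothetical `X-Correlation-ID` header (the header name and field names are illustrative, not a standard this article prescribes):

```python
import logging
import uuid

# Attach a per-workflow correlation ID to every log line so a single
# request can be traced across containers. Names are hypothetical.
logging.basicConfig(
    format="%(levelname)s correlation_id=%(correlation_id)s %(message)s"
)
log = logging.getLogger("orders")

def handle_request(headers: dict) -> str:
    # Reuse the caller's ID if present; otherwise mint one at the edge.
    cid = headers.get("X-Correlation-ID") or str(uuid.uuid4())
    log.warning("order received", extra={"correlation_id": cid})
    # Propagate the same ID on outbound HTTP calls or queue messages:
    outbound_headers = {"X-Correlation-ID": cid}
    return outbound_headers["X-Correlation-ID"]

cid = handle_request({"X-Correlation-ID": "req-123"})
print(cid)  # req-123
```

With the ID present in every log line, finding a workflow becomes a single filtered search rather than a trawl through each container's logs in turn.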

The reader might have noticed by now that my argument is not against microservices as a concept but against how the concept is applied. A company with no requirement to handle large amounts of traffic in the foreseeable future that nevertheless creates a microservice for each data domain, for example, without a DevOps team, will soon find itself spending more time maintaining and firefighting than developing new features. While the use and abuse of microservices generates demand for my services as a DevOps engineer, I believe scarce resources are better employed by cautiously scrutinising the benefits of microservices before embarking on that journey.

Juraj Seffer

Juraj is a DevOps Engineer in London specialising in automation and monitoring of distributed systems.