In the software landscape, you can find anything from large-scale legacy monolithic projects to bleeding-edge reactive microservice systems and everything in between. Given the benefits to project maintainability and scalability, I would like to explore the ways in which serverless is a friend to monoliths and microservices alike, helping either without discrimination.
Context
The transition from monolithic-style architecture to microservices has been in the works for quite some time now. One of the biggest drivers for this migration is the heavy overhead that comes with managing a large, complex, singular codebase. When all of the application logic lives in one place, many flows may cross paths – sometimes in ways that could raise an eyebrow or two.
Microservices-based architecture helps solve these issues by letting you break up pieces of logic into units that can function independently. You no longer have to tend to one big mess, but rather to multiple smaller ones – which is definitely an improvement. However, this option is not without its catches. From deciding how granular the split of functionality should be, to ensuring the independent pieces work together in a way that honors this architectural style, there is no shortage of brain puzzles to crack with microservices either. This is especially true when migrating an older system that you have to take apart little by little.
That being said, not all systems will get migrated to a different architecture. The case I would like to make is that, regardless of whether you’re dealing with a monolith or with microservices, sprinkling some serverless bits into the mix could prove beneficial for several reasons. Let’s first have a look at what serverless brings to the table.
Why serverless is pretty cool
- Functions are easy to maintain. Because each one is usually responsible for only a bite-sized piece of business logic, you worry less about whether whoever works on it understands all the intricacies of the house of cards we sometimes call ‘our application’s business rules’ (a minimal sketch of such a function follows this list). In the same vein, functions are quite permissive about the language they are written in. For example, if your main course is a .NET API, you could have a nice side of Python functions to go with it.
- Functions can be very fast. If you can afford to write the ones you need in languages like Go or Python, even the cold start times will be impressively short (especially if you come from a framework-heavy background, like I do, with Java and Spring Boot). This means the function can be quickly spun up, do its job, then gracefully shut down until it’s needed again.
- Serverless functions are generally substantially cheaper than EKS nodes or EC2 instances, and you only pay for execution time or for the number of invocations – depending on your cloud provider. So if the function only runs for a couple of seconds each day, you will be billed for exactly that, which even on a generously specced Lambda would not be very expensive.
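To make the ‘bite-sized piece of business logic’ point a bit more concrete, here is a minimal sketch of what such a function might look like on AWS Lambda with the Python runtime. The payload shape and the order-total calculation are illustrative assumptions on my part, not something lifted from a real project.

```python
# A minimal sketch of a bite-sized serverless function (AWS Lambda, Python runtime).
# The payload shape and the business rule are hypothetical, for illustration only.
import json

def lambda_handler(event, context):
    # Parse a small JSON payload and apply one narrow piece of business logic.
    order = json.loads(event.get("body", "{}"))
    total = sum(item["price"] * item["quantity"] for item in order.get("items", []))

    return {
        "statusCode": 200,
        "body": json.dumps({"orderTotal": round(total, 2)}),
    }
```

The whole unit fits on one screen, which is exactly why handing it to a newcomer – or writing it in a different language than the rest of the system – is rarely a problem.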
A friend to both
Now that we know there are at least a couple of solid reasons to spice things up with serverless functions, let’s have a look at what they can bring to monoliths and microservices specifically, keeping two angles in mind: code complexity and resource usage.
Monoliths
For monolithic apps, serverless functions can serve as a way to avoid adding complexity to the codebase unless absolutely necessary. Even if most of your business logic resides in the same repository, decoupling even the smallest ounce of new (or old, if you’re feeling frisky) functionality can inch the codebase towards something easier to maintain.
Adding new functionality in the form of a serverless piece can also help with asynchronous execution and with getting better use out of your (precious and oftentimes expensive) computing power. Delegating some work to a function, as in the sketch below, might in some cases even save you from having to scale up the number of instances, which in turn might save some money.
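As a hedged example of that delegation, here is how a monolith might hand off a slow task to a function asynchronously using the AWS SDK for Python (boto3). The function name and payload are hypothetical, and the same idea applies whatever language the monolith itself is written in.

```python
# A sketch of a monolith delegating work to a serverless function asynchronously.
# The function name "generate-invoice-pdf" and the payload are hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def delegate_invoice_generation(order_id: str) -> None:
    # InvocationType="Event" makes the call fire-and-forget: the monolith returns
    # immediately, and the function is billed only for the seconds it actually runs.
    lambda_client.invoke(
        FunctionName="generate-invoice-pdf",
        InvocationType="Event",
        Payload=json.dumps({"orderId": order_id}),
    )
```

Because the heavy lifting no longer happens inside the monolith’s request thread, the existing instances stay responsive and you may get away without adding more of them.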
Microservices
If your system is already quite decoupled and maybe even highly reactive, serverless functions will feel like a natural addition to it. While microservices are considerably more lightweight than a monolithic application, they still have some heft compared to something serverless. Most microservices have their own database, and if they expose an endpoint or two, they probably also carry a decent framework to handle that. When new functionality comes in and does not fit any of the existing services, it is worth weighing the pros and cons of adding another service versus putting the logic into a function before taking action.
From the resource usage standpoint, I recently witnessed something rather interesting that made me think twice about my knee-jerk reaction of ‘just add another service’. When a Kubernetes update came in, all of the services had to boot up on new nodes running the updated version. The individual startup times, coupled with the sheer number of services, caused quite a significant scale-up in the number of nodes needed to support the operation. They did eventually scale down, but such operations have a cost, and it makes you wonder whether some of those services could have been functions after all. In our case, the answer was yes.
Conclusion
The perfect system does not exist, but we can always work towards one that is more scalable and easier to maintain. The steps we take don’t necessarily have to propel us light years ahead of our current situation – inch by inch it’s a cinch, yard by yard, architectural changes are hard. I believe a well-placed serverless function constitutes one of these small but impactful steps. I encourage you to give it a try and see for yourself! You never know, your monolith or microservice might just make a new friend.
About Ruxandra Bucos
Ruxandra Bucos started as an intern at Maxcode over 4 years ago and is now a key member of the team in her role as a Java Developer. Focused on learning and growing, Ruxandra is keen on discovering new trends and implementing them in the projects she works on, and is always looking to deliver smart, quality software.