How to Choose Between VMs, Containers, and Serverless

September 9, 2020

Why choose?

Let’s start with the why. Why do we need to choose? Why does this choice even exist?

As technologies evolve, they tend to commoditize at the bottom of the stack. For instance, physical servers and hypervisors have largely commoditized, meaning they look alike, the technology is mature, and the pace of innovation has slowed to a series of smaller evolutions rather than revolutions.

In recent years, other technologies for running workloads have taken that revolutionary path. The popularity of containers has exploded, with good reason, and many suitable workloads are being migrated to containers and even ‘serverless’ functions.

All these technologies have different strengths and weaknesses, maturity levels, associated operational risk profiles, and expected future innovations.

For many applications in your landscape, the choice to run them in a virtual machine, in a container, or as a serverless function has a massive impact on how you operate that application, manage its lifecycle and develop additional functionality.

The Differences

Virtual machines are built to mimic a physical machine, with a complete operating system, full support for (virtual) hardware and peripherals and, perhaps most importantly, full isolation between virtual machines. With the help of specialized instruction sets in the physical processors, virtual machines are nearly as secure as running a physical machine, bar any security issues in the hypervisor.

With containers, we move one level up the stack, from the virtual hardware level to the virtual operating system level: applications run in an isolated space within the operating system, and containers running on the same host share the underlying operating system. A major advantage is that we now need only a single operating system, shared across many containers, which consumes less storage and fewer compute resources. This is, of course, favorable in the public cloud, where resources are paid for on a metered basis. It’s also easier to use a standardized ‘template’ of an operating system, and because container images are immutable and containers are typically stateless, lifecycle management of these (operating system) images is easier to do ‘as code’, something that is a more involved process with virtual machines.
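To make that concrete, here is a minimal sketch of image lifecycle ‘as code’. It assumes the Docker SDK for Python is installed, a local Docker daemon is running, and a hypothetical ./app directory with a Dockerfile; the image name myapp:1.0 is made up for illustration.

```python
# Minimal sketch: container image lifecycle driven entirely from code.
# Assumes the Docker SDK for Python (docker-py), a running Docker daemon,
# and a hypothetical ./app directory containing a Dockerfile.
import docker

client = docker.from_env()

# Build an image from a standardized template (the Dockerfile in ./app).
image, build_logs = client.images.build(path="./app", tag="myapp:1.0")

# Start a container from that image; it shares the host's kernel, so there
# is no separate operating system to install or patch.
container = client.containers.run("myapp:1.0", detach=True)
print(container.short_id, container.status)

# Upgrading means building and starting a new image and discarding the old
# container, rather than patching a long-lived operating system in place.
container.stop()
container.remove()
```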

With serverless functions, the abstraction layer moves up the stack even further, to a single process. This requires applications to be split up into individual functions so they can be invoked separately, continuing the trend of microservices: breaking applications up into their smallest constituents. The upside is that you do not need to manage any of the underlying infrastructure (physical machines, virtual machines, operating systems, containers) to run these functions; all of it is abstracted away, making running a function ‘serverless’. This means your application developers do not have to spend any time on these layers, build the pipelines to create and test these infrastructure components, or do any day-to-day operations on them. That frees up developers to focus on developing new code and improving existing functionality. The obvious downside is a little less control over that underlying infrastructure.
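As an illustration of how small the unit of deployment becomes, below is a minimal sketch of a serverless function using the AWS Lambda-style handler signature; the event shape and the order-total calculation are hypothetical and only stand in for real business logic.

```python
import json

# Minimal sketch of a serverless function (AWS Lambda-style handler
# signature). The event shape and business logic are hypothetical; the point
# is that this is all the team deploys -- no VM, OS or container to manage.
def handler(event, context):
    order = json.loads(event["body"])
    total = sum(item["price"] * item["quantity"] for item in order["items"])
    return {
        "statusCode": 200,
        "body": json.dumps({"orderId": order["id"], "total": total}),
    }
```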

As we see, each of these technologies moves up a layer in the infrastructure stack, abstracting away the complexity of the underlying components at the cost of losing some flexibility. From an operational perspective, though, lifecycle management becomes much less complex and much easier.

I&O is democratizing

As infrastructure and operations (I&O) is democratizing, allowing developers, data scientists and even business roles to define, deploy and configure infrastructure across the on-prem datacenter and public cloud, simplicity is key.

And it is in this field of infrastructure and operations that the impact of choosing between virtual machines, containers and serverless functions is felt, as a decision for any of these three has a ripple effect on many qualitative aspects of I&O, like data protection, monitoring, manageability and automation.

That brings us to the supporting role IT Operations teams play while I&O is democratizing, as they are the experts on managing data protection, monitoring, manageability and automation. And as ‘citizen’ users of (cloud) infrastructure (not unlike citizen developers: business users empowered to create business applications) make their choices, IT Operations will support the increasing diversity of virtual machines, containers and serverless.

Supporting the increasing diversity

And this brings us to the final consideration: how can IT Operations best support the growing needs of citizen users of infrastructure?

Only part of the choice between virtual machines, containers and serverless comes down to the technical differences and possibilities of each. For the larger part, the decision comes down to the ecosystem of processes and tooling around each of these three, and whether that ecosystem can support the citizen user.

For instance, consider a proposed business application that transforms data between two formats to exchange it between two existing applications, triggered by a batch job on the originating application. From a technical perspective, it would be perfect to run as a serverless function. But the team working on this application may not have created the pipelines to support serverless functions, may have no presence in a public cloud with a serverless service offering, or may have data protection concerns.

The team instead chooses to reuse their built-up knowledge and experience in running containers and create the application as a single-purpose container, running on their Kubernetes platform. That platform is continuously monitored, has data protection set up, is subject to (and compliant with) all relevant security guidelines, and is fully supported by the IT Operations teams. Choosing the path of a container, even though a serverless function was technically possible, is the smarter choice, as containers are portable between datacenters and public clouds. This allows the team to run the same container wherever they need it. Serverless functions are not as portable and carry more lock-in in terms of accessing data, which is often limited to vendor-specific storage in the public cloud.
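As a rough sketch of what that single-purpose container might contain (the file paths and the CSV-to-JSON conversion are made up for illustration), the transformation logic is just a small script that becomes the container’s entrypoint and can be scheduled, for example, as a Kubernetes Job or CronJob wherever the team runs a cluster.

```python
# Rough sketch of the single-purpose transformation container described
# above. Paths and the CSV-to-JSON conversion are hypothetical; the script
# would be the image's entrypoint and run as a Kubernetes Job or CronJob.
import csv
import json
import sys


def transform(input_path: str, output_path: str) -> None:
    # Read records in the originating application's export format (CSV here)...
    with open(input_path, newline="") as src:
        records = list(csv.DictReader(src))
    # ...and write them in the format the receiving application expects (JSON).
    with open(output_path, "w") as dst:
        json.dump(records, dst, indent=2)


if __name__ == "__main__":
    transform(sys.argv[1], sys.argv[2])
```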

For IT Operations, these considerations also mean continuously innovating and applying their infrastructure and operations expertise to new ways of running applications: integrating monitoring, data protection, lifecycle operations like upgrades, and security compliance into container-based workloads, as well as looking at ways to support what’s around the corner, like serverless functions.

Wrapping up

In this post, we looked at the why and how of choosing between virtual machines, containers and serverless.

And while there are technical differences between the three, deciding between virtual machines, containers and serverless comes down to asking what is the best way to support your (citizen) developers; the technical aspects are only a small part of deciding which to use for a new project.

For most, if not all, organizations, containers offer the most flexibility and portability without vendor lock-in. Containers have a mature ecosystem supporting monitoring, data storage and protection, security and operations.

Containers strike the best balance of the three between flexibility and configurability, delivering developers the infrastructure they need to run their applications without bogging those teams down with a lot of operational work.

Deepak Verma

Deepak Verma is Director of Product Strategy at Zerto. He is responsible for managing the release of next-generation products at Zerto. He has 20 years of experience in the IT industry with a focus on disaster recovery and data protection. He has led product management teams responsible for building and delivering solutions for cloud platforms at multiple companies. Deepak holds a Master of Computer Science and a Bachelor of Engineering. He is certified in AWS, Microsoft Azure and Google Cloud.