
Architecting Applications for Kubernetes

March 1, 2021
Est. Reading Time: 5 minutes

Container-based applications are designed with different assumptions about the underlying infrastructure than their virtual-machine-based predecessors. In this post, we will dive into architecting applications for Kubernetes environments.

Portability Makes the Most of All Clouds

The design approach for container-based applications is vastly different from that for virtual-machine-based applications. With virtual machines, the goal is to prevent failure, not accommodate it. Container-based application designs embrace failure: instead of trying to prevent it, the focus is on recovering from it gracefully, prioritizing resilience over mere availability.

Containers allow individual application components to scale up and down as needed by spinning up multiple instances of the same container image and balancing the load across them. These identical copies make components resilient: any one container failing does not impact the availability of the application, or even of the application component. And because container images are small and standardized, it is easy to spin up additional containers on another platform. That portability prevents infrastructure lock-in and adds resilience by spreading risk across multiple cloud providers.
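As a minimal sketch of this pattern (the name and image below are placeholders, not from this post), a Kubernetes Deployment declares how many identical copies of a container image should run; the platform keeps that count, replacing any instance that fails:

```yaml
# A minimal sketch: three identical replicas of one container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                  # identical copies; any one can fail safely
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.4.2   # placeholder image
          ports:
            - containerPort: 8080
```

Because the same manifest applies to any conformant Kubernetes cluster, the same three replicas can just as easily run on a different provider.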

Container-based application designs embrace the scalability of public cloud, resilience in numbers, and portability for a true multi-cloud experience.

Declarative Configuration Improves Scalability

All of this leads to a highly automatable application deployment. For instance, application components can be scaled out or in automatically by monitoring user load and defining rules for adding or removing container instances as needed.

These automations are expressed in a declarative configuration language, such as the HashiCorp Configuration Language (HCL2) used in Terraform, or simply YAML. These languages define the desired end state of the deployment without spelling out the intermediary steps. This is immensely helpful for coping with the complexity of dynamic environments that constantly respond to change, for instance by scaling out or in, or by self-healing to keep the application functioning optimally.
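As an illustration (a sketch with placeholder names and thresholds), a Kubernetes HorizontalPodAutoscaler declares exactly this kind of rule: the desired end state is "keep average CPU at 70%", and the platform adds or removes instances to get there, with no intermediary steps spelled out:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-frontend          # the Deployment to scale
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```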

Simplifying Change Management

Because container image formats are standardized, containers can run anywhere: from a developer's local laptop to any public cloud or even PaaS platforms. Kubernetes is the de-facto orchestration engine for running containers across all these platforms without changes to the deployment code. This vastly simplifies the software development lifecycle: by addressing portability, it brings scalability and simplicity along with it.

An elevated level of automation is also an opportunity for simpler operational processes. Declarative configuration provides a system of record that eliminates the need for separate change-control documentation. Instead, peer review and merge requests help organizations apply the four-eyes principle, and automated pipelines test the code for common pitfalls.
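As a minimal sketch of such a pipeline (assuming manifests live in a manifests/ directory, and using GitHub Actions and the open-source kubeconform validator purely as examples), every merge request can be checked automatically before it reaches the cluster:

```yaml
# Sketch: validate Kubernetes manifests on every pull request.
name: validate-manifests
on: [pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Validate manifests against their schemas
        run: |
          go install github.com/yannh/kubeconform/cmd/kubeconform@latest
          "$(go env GOPATH)/bin/kubeconform" -summary manifests/
```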

From a data protection standpoint, this does mean that the deployment code, along with the pipelines and deployment automation, is a crucial part of the application and must be continuously protected to capture the most recent consistent application state.

Persistent Storage

Long gone are the days of stateless-only containers. Container runtimes and orchestration platforms have matured and offer well-rounded persistence features: the Kubernetes community has adopted the Container Storage Interface (CSI) abstraction, and many storage vendors offer (sometimes proprietary) snapshot and replication support.

Dependence on vendor-specific data protection makes portability and scaling difficult. When architecting container-based applications, it makes sense to design for portability of persistent storage as well, so that data inertia is easier to overcome when an application needs to move between cloud environments or scale across clouds.

CSI is a common denominator across Kubernetes clusters and does not depend on any one cloud or storage vendor, maximizing portability and compatibility. It also addresses the data visibility problem that occurs when developers consume third-party object storage services directly from application code rather than through the infrastructure-as-code deployment model.
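For illustration (a sketch; the claim name is a placeholder and the storage class name is cluster-specific), an application requests storage through a CSI-backed PersistentVolumeClaim rather than any vendor-specific API, keeping the storage request portable and visible in the deployment code:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard   # maps to whichever CSI driver the cluster provides
  resources:
    requests:
      storage: 10Gi
```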

Scaling Operational Work

With most workflows described as code in a pipeline, operational work changes. Instead of working in different UIs for different purposes (like a backup UI for managing backups), all operational aspects, like backup, become part of the application configuration specification and the application deployment pipeline.
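As one concrete illustration (a sketch using the open-source Velero project; the original post does not prescribe a specific tool, and the namespace and schedule are placeholders), a recurring backup can be declared alongside the rest of the application configuration and flow through the same deployment pipeline:

```yaml
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: myapp-daily
  namespace: velero
spec:
  schedule: "0 2 * * *"        # cron expression: every day at 02:00
  template:
    includedNamespaces:
      - myapp                  # placeholder application namespace
    ttl: 720h0m0s              # retain each backup for 30 days
```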

Self-service and policy-based operations are the key tenets of scaling operational work. Operations experts define the policies; application developers (or product owners and DevOps engineers) apply these policies to their applications without any dependence on IT teams. IT teams, in turn, update and fine-tune these policies centrally, and new versions of policies are applied at the application level automatically. Naturally, restore tests, recoveries, and failovers work in a similar self-service manner.
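One common pattern (hypothetical here; the label key and policy name are invented for illustration, and the actual mechanism depends on the platform) is to let developers select a centrally defined policy with a simple label on their workload or namespace:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myapp
  labels:
    backup-policy: gold   # hypothetical: selects a policy defined centrally by operations
```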

Operationally, this gives the 'consumers' of backup policies the ability to choose the right policy for their application, without any organizational dependencies. This increases the coverage and quality of data protection without increasing operational overhead.

This does mean that the scale of the data protection platform should be considered. Like applications that auto-scale based on load, the data protection platform should scale automatically with the applications it protects and be aware of the Kubernetes platforms it interoperates with.

Wrapping Up

We have looked at how to architect applications for Kubernetes. In well-architected applications, protecting the container images, the deployment pipelines that bring the application to production, and any persistent storage makes up the whole story.

The cluster itself, assuming it is built by infrastructure-as-code pipelines, is, paradoxically, not crucial to protect and recover. Simply spin up a new cluster from your pipelines in your cloud of choice: a fresh cluster is always readily available.

Reality can be messy, however. Operationally, data protection solutions may not integrate directly into Kubernetes, may not support diverse types of persistent storage, or may not scale alongside the clusters they protect. It is important for data protection to integrate natively into developers' workflows to allow for self-service, policy-based consumption. Why should data protection not play by the same rules as the application it is trying to protect?

Deepak Verma

Deepak Verma is Director of Product Strategy at Zerto. He is responsible for managing the release of next-generation products at Zerto. He has 20 years of experience in the IT industry with a focus on disaster recovery and data protection. He has led product management teams responsible for building and delivering solutions for cloud platforms at multiple companies. Deepak holds a Master of Computer Science and a Bachelor of Engineering. He is certified in AWS, Microsoft Azure, and Google Cloud.