The CIO's Guide to Preparing for Containers - Zerto


November 19, 2020
Est. Reading Time: 5 minutes

There’s no way around it: containers have become the preferred way to run workloads in the enterprise, and container platforms like Kubernetes are quickly becoming the standard for running applications. No wonder that the majority of IT teams embrace containers for running production workloads. The advantages of doing so are clear, but the road to get there may not be. In this guide, we’ll walk you through the steps to be truly ready for container workloads in production.

Think fast, act with agility

The world of containers is very different from the virtual and physical environments running our applications today. Beyond the technical differences that have a positive impact on resource usage (and thus cost), advanced workload orchestration tooling like Kubernetes offers a robust and complete ecosystem of platform software, from monitoring, storage, and networking to configuration management and, naturally, data protection.

The biggest strength of containers is the decoupling of infrastructure and applications, a result of the way containers and container images work. This decoupling has simplified software development and IT operations alike, allowing for a nimbler, quick-on-your-feet, more agile approach to delivering IT to your business.

Regardless of whether we're talking about corporate IT or using IT to deliver digital services and products to your customers, it's now possible to align the speed of IT delivery with the speed of the business, vastly reducing the inertia and friction traditionally associated with IT.

Think applications, not infrastructure

Containers allow organizations to spend less time on infrastructure and operations and instead focus more on applications. This requires a change of mindset, though. In the old days of virtualization, the infrastructure often played a leading role in IT. Infrastructure was expensive, difficult to change, and mistakes were costly and high-impact. These environments are the oil tankers of our industry: hard to get moving, and once underway, hard to steer onto a new course.

The shift to public cloud, containers and even serverless (or other PaaS-like runtimes) has a major impact on the field of IT Infrastructure and Operations, shifting the way we think about IT from an infrastructure-focused perspective to an application-focused perspective.

In other words: containers and public cloud remove or hide most of the infrastructure-layer complexity, freeing you up to focus on applications. In our analogy, we're no longer steering a single oil tanker; instead, each team or business unit runs its own power boat.

This shift in mindset has clear advantages. Instead of being bogged down in day-to-day operations and the toil of keeping things running, which takes up most, if not all, of your time, IT can take a more strategic seat at the table: spending time on what the business wants and needs, and aligning the lifecycle of applications to those requirements. This optimizes the potential of applications and their data, instead of suboptimizing at the infrastructure level.

So how do you get there?

The key is to let go of the complexity of running your own infrastructure. Whether this is using public cloud, a PaaS platform or even outsourcing, freeing up your team’s time is crucial. Break up that hard-to-manage and hard-to-steer oil tanker into many smaller power boats, which your teams are capable of managing themselves.

Multi-Cloud

And this brings us to the second recommendation: be ready for multi-cloud. If each team has the technical capability and the organizational freedom to choose their own power boat, there’s one thing you know for sure: they each have their own requirements, and want what’s best for them, in their situation, at that time.

Inevitably, this will lead to adopting multiple cloud platforms to cater to a wide range of requirements: technical, financial, and regulatory.

Leveraging the unique capabilities of each cloud, such as leading machine learning and artificial intelligence services (speech or vision recognition), serverless functions, and managed databases, will help teams make a difference and create a competitive edge in the market.

While multi-cloud is a massive topic to discuss, in the context of containers it really means two things:

  1. A unified control plane
  2. Application data portability

Unified Control Plane

Kubernetes is the de facto standard container control plane. In its vanilla state, containers can run on any Kubernetes platform, regardless of public cloud vendor. But Kubernetes by itself is not a production-grade system for running containers: it needs additional tools for storage management, network virtualization, network security, monitoring, data protection, and more.
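To make the portability point concrete, here is a minimal, hypothetical Deployment manifest (the application name, labels, and image are placeholders). Vanilla Kubernetes objects like this apply unchanged to any conformant cluster, whichever cloud hosts it:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend                # hypothetical application name
spec:
  replicas: 3                       # Kubernetes keeps three pods running
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.0  # placeholder image
          ports:
            - containerPort: 8080
```

The same manifest works on EKS, AKS, GKE, or an on-premises cluster; it is the ecosystem around it, such as storage classes, load balancers, and monitoring, that differs per cloud.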

This combination of ecosystem tools is what makes multi-cloud difficult; your ability to create a cloud-agnostic stack is a key differentiator in becoming multi-cloud and, by extension, in providing the most value to your teams.

While the 'how' of creating a unified control plane for container workloads is too much to dive into in this blog post, we'll highlight an oft-forgotten aspect: application data portability.

Data portability

Data portability is the key enabler of a cost-effective multi-cloud strategy that is compliant, agile, and application-focused. While your application container images are portable, there is no guarantee your application data is portable as well.

After all, what use is your multi-cloud strategy if application data is not portable between these clouds? To achieve multi-cloud, we need to be cloud-agnostic from an application perspective: able to move application data that natively lives in one cloud's storage service to another cloud's storage service, as well as the container image itself.
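As an illustration of one building block involved, Kubernetes' own CSI snapshot API can capture a point-in-time copy of a persistent volume (the snapshot, class, and claim names below are placeholders). Note that a snapshot like this stays within one cluster's storage backend, so true cross-cloud portability still requires replication or backup tooling on top:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-snapshot                 # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-snapclass   # must match a class offered by the cluster's CSI driver
  source:
    persistentVolumeClaimName: orders-db-data  # the PVC holding the application data
```

A cloud-agnostic data protection layer is what turns point-in-time copies like this into data that can actually move between clouds.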

The true challenge

And so, the true challenge for container adoption is not ‘getting started with Kubernetes’. That’s the easy part.

The true challenge is figuring out how to keep application data from being locked into a single public cloud, which would immobilize your data, prevent you from making the most of each cloud's strengths, and keep your IT operation from staying agile.

To learn more, read the report by 451 Research.

Deepak Verma

Deepak Verma is Director of Product Strategy at Zerto. He is responsible for managing the release of next-generation products at Zerto. He has 20 years of experience in the IT industry with a focus on disaster recovery and data protection. He has led product management teams responsible for building and delivering solutions for cloud platforms at multiple companies. Deepak holds a Master of Computer Science and a Bachelor of Engineering. He is certified in AWS, Microsoft Azure, and Google Cloud.