Patrick Smith, Field CTO EMEA at Pure Storage, has exclusively penned a thought leadership article for July’s edition of CNME, in which he makes the case for widespread adoption of containerisation by 2025.
IT is in the midst of a tectonic shift. Almost everything about the way organisations build and deliver applications is changing, in what has become known as digital transformation.
That digital transformation can be characterised as having three main elements. Firstly, it sees the digital enablement of processes within organisations and outwards to customers and partners. Secondly, it is heavily cloud-influenced, whether through direct use of cloud resources or adoption of cloud-like operating models. Thirdly, the way in which application development takes place is changing too, to a continuous integration and deployment model allowing for frequent iterative changes.
At the pinnacle of these three elements is containerisation, which combines a continuous development model with applications that are supremely self-contained, highly scalable and portable, yet granular in the service components they encapsulate.
It’s no exaggeration to say that containerised applications — deployed and managed via an orchestration platform like Kubernetes — will play a pivotal role in the next decade’s worth of IT evolution. According to Gartner, 85% of organisations will run containers in production by 2025, up from 35% in 2019.
Containers can be run at much higher density than traditional virtual workloads, meaning fewer servers are required. This has the knock-on effect of reducing licensing costs and, importantly, power requirements. For these reasons we’re starting to see containerisation underpin cost reduction initiatives and wider business cases, with organisations targeting 25% to 40% of apps as a common starting point.
But what about storage, data protection, backups, snapshots, replication, HA and disaster recovery? These are vital to an organisation’s application infrastructure, but can be a challenge in containerised operations. Before we look at ways to resolve that, let’s look at why containers are so important and how they work.
The agility of containerised application deployment
Say an organisation’s core business is centred on frequent launches of many new products with rapid peaks in demand, and accompanying analytics requirements. It might be a ticketing operation, for example, with sudden and massive spikes in sales. Traditionally-built applications on a three-tier (client-server-database) architecture would be slow to deploy, scale poorly and creak under high levels of demand. Containers were conceived to deal with exactly this situation.
That’s because containers encapsulate the myriad components of an application — meaning many such microservices are reusable as new applications are developed — and can rapidly multiply to meet the demands of scaling. In addition, containers hold all the API connectivity to the services they depend upon and can be ported to numerous operating environments.
So, for example, that sudden rapid spike in event ticket demand could be accommodated by rapid reproduction of interconnected containerised service instances and burst to multiple datacentres including in the public cloud.
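In Kubernetes terms, that kind of elastic burst is typically expressed declaratively. The sketch below is illustrative only: the service name "ticket-api" and the replica counts are assumptions, standing in for whatever a real ticketing platform would run.

```yaml
# Illustrative only: a hypothetical "ticket-api" Deployment that Kubernetes
# scales out automatically as CPU load spikes during an on-sale event.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: ticket-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: ticket-api        # assumed Deployment running the containerised service
  minReplicas: 3            # quiet-period baseline
  maxReplicas: 200          # ceiling for a major on-sale spike
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU utilisation passes 70%
```

The same manifest works unchanged whether the cluster runs on-premises or in the public cloud, which is what makes bursting across datacentres practical.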
The technical underpinning of containers — much simplified — is that they are a form of virtualisation. Unlike virtual servers, they run directly on the host operating system, without an intervening hypervisor. That makes containers far more granular and lightweight than a virtual machine; each usually provides a discrete component of the whole application, connected to the others by code (ie, APIs).
While there’s no hypervisor, and no consequent overhead, containers do benefit from an orchestration layer, provided by tools like Kubernetes, which organises one or more running containers — each with its code, runtime, dependencies and resource calls — into pods. The intelligence to run pods sits above them in one or more Kubernetes clusters.
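A pod definition makes those elements concrete. The following is a minimal sketch, with a hypothetical "checkout" microservice and placeholder image registry, showing where the code (the image) and the resource calls live in the spec.

```yaml
# Illustrative Pod spec: one containerised microservice with its image
# (code, runtime and dependencies) and explicit resource calls.
apiVersion: v1
kind: Pod
metadata:
  name: checkout
  labels:
    app: checkout           # hypothetical service name
spec:
  containers:
    - name: checkout
      image: registry.example.com/checkout:1.4.2   # assumed image location
      ports:
        - containerPort: 8080
      resources:
        requests:
          cpu: "250m"       # resource calls the scheduler uses to place the pod
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
```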
The Kubernetes storage and backup challenge
But one of the biggest challenges to be overcome with Kubernetes is storage and data protection. The roots of the issue go back to the origin of containers, which were originally intended to run on a developer’s laptop as ephemeral instances, with data storage that persisted only as long as the container executed.
Once containers became a mainstream enterprise approach to application development, however, that just wouldn’t do. The majority of an enterprise organisation’s applications are stateful, meaning they create, interact with, and store data that must outlive any single container.
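Kubernetes addresses this with persistent volumes that are decoupled from the container lifecycle. A minimal sketch, assuming a hypothetical database workload: the claim requests capacity, and the pod mounts it, so the data survives the container being destroyed and recreated.

```yaml
# Illustrative: a PersistentVolumeClaim gives a stateful container storage
# that outlives the container itself; names and sizes here are assumptions.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi        # capacity request, fulfilled by the storage layer
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-db
spec:
  containers:
    - name: postgres
      image: postgres:16    # example stateful workload
      volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: orders-db-data   # data persists across container restarts
```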
Orchestration above the orchestrator
So, customers that want to deploy containers with enterprise-class storage and data protection should look to a newly emerging set of products.
This is the container storage management platform, from where they can run Kubernetes and provision and manage its storage and data protection needs.
What should customers look for in this product category?
A key thing to look out for is that any Kubernetes storage product should be container-native. That means that an application’s storage requirements are themselves deployed as containerised microservices in which provisioning, connectivity and performance requirements are written as code, with all the dynamism and agility that implies. That’s in contrast to other methods — such as the Container Storage Interface (CSI) — which rely on drivers hard-coded to the storage allocated to containers.
Meanwhile, a software-defined container-native Kubernetes storage platform should provide access to block, file and object storage, and be able to make use of cloud storage too. In doing so, it should emulate the core characteristics and benefits of containerisation and Kubernetes. That means the data should be as portable as the containerised app, it should be managed via a common control plane, and should scale and heal autonomously.
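"Requirements written as code" typically surfaces to the user as a storage class. The sketch below is hypothetical: the provisioner name and parameters are placeholders, since each container-native platform supplies its own, but the shape is standard Kubernetes.

```yaml
# Illustrative StorageClass: storage requirements expressed as code.
# The provisioner and parameters are placeholders for whatever a
# container-native storage platform would actually supply.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-replicated
provisioner: storage.example.com   # hypothetical platform provisioner
parameters:
  replicas: "3"                    # assumed platform-specific parameter
  fsType: ext4
allowVolumeExpansion: true         # lets claims be resized in place
reclaimPolicy: Delete
```

Applications then simply reference the class by name in their claims, and provisioning happens dynamically, wherever the cluster runs.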
When it comes to data protection, such a product should provide all the key methods of securing data, including backups and snapshots, synchronous and asynchronous replication and migration functionality. Again, this should allow for the cloud as source or target in these operations.
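Snapshots, at least, have a standard Kubernetes expression via the VolumeSnapshot API. A minimal sketch, assuming a hypothetical claim named "orders-db-data" and a snapshot class supplied by the storage platform:

```yaml
# Illustrative: a point-in-time snapshot of an existing claim, using the
# standard Kubernetes snapshot.storage.k8s.io/v1 API. Names are assumptions.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: orders-db-nightly
spec:
  volumeSnapshotClassName: platform-snapclass   # assumed snapshot class
  source:
    persistentVolumeClaimName: orders-db-data   # hypothetical source claim
```

Replication and migration, by contrast, are where platforms differentiate, which is why cloud-as-source-or-target support is worth checking explicitly.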
To handle the scalability of Kubernetes environments, the product should be able to manage clusters, nodes and containers that run to hundreds, thousands and hundreds of thousands respectively, with manageable storage capacity in the tens of petabytes.
Lastly, it should be intelligent, with rules-based automated management that, for example, creates, replicates and deletes containers as determined by pre-set monitoring triggers as well as provisions and resizes storage as required.
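Resizing storage as required is itself declarative in Kubernetes: where the storage class allows expansion, raising the request on an existing claim triggers in-place growth. A sketch, reusing a hypothetical claim name:

```yaml
# Illustrative: raising the storage request on an existing claim triggers
# in-place expansion where the storage class permits it. Names are assumed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: orders-db-data      # hypothetical existing claim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 200Gi        # raised from an earlier, smaller request
```

An intelligent platform would apply exactly this kind of change automatically, driven by its monitoring triggers rather than by an administrator.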
Once you find and implement a solution which ticks all of these boxes, you’ll soon see for yourself why 85% of organisations will be relying on containers by 2025, and wonder why you didn’t take the leap sooner.