As more workloads are deployed in containers, IT teams will need to assess how to manage container sprawl, reduce cloud bills and support databases
Despite the option to move ephemeral computing resources and data between public, private and hybrid clouds, there is still a widespread tendency to deploy unmodified monolithic applications in virtual machines (VMs) running on public cloud infrastructure.
However, it is more efficient to break down an application into functional blocks, each of which runs in its own container. The Computer Weekly Developer’s Network (CWDN) asked industry experts about the modern trends, dynamics and challenges facing organisations as they migrate to the micro-engineering software world of containerisation.
Unlike VMs, containers share the underlying operating system (OS) and kernel, which means a single OS environment can support multiple containers. Put simply, containers can be seen as virtualisation at the process (or application) level, rather than at the OS level.
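To make the idea concrete, a single host OS can run several functional blocks side by side, each in its own container but all sharing the same kernel. The fragment below is a hypothetical docker-compose sketch; the service names and images are illustrative, not taken from any deployment described here:

```yaml
# Hypothetical sketch: three functional blocks of one application,
# each in its own container, all sharing the host's kernel.
services:
  web:
    image: nginx:1.25        # front-end tier
    ports: ["8080:80"]
  api:
    image: example/api:1.0   # placeholder image name for the app tier
  cache:
    image: redis:7           # in-memory cache tier
```

Each service is virtualised at the process level: stopping, upgrading or scaling one block does not require touching the others or a guest OS.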
The essential computing resources containers draw on include core processing power, memory, data storage and input/output (I/O) provisioning, plus all the modern incremental “new age” functions and services, such as big data analytics engine calls, artificial intelligence (AI) brainpower and various forms of automation.
Although the move to containers provides more modular composability, the trade-off is a more complex interconnected set of computing resources that need to be managed, maintained and orchestrated. Despite the popularisation of Kubernetes and the entire ecosystem of so-called “observability” technologies, knowing the health, function and wider state of every deployed container concurrently is not always straightforward.
Migrating to containers
“The question I am often asked is how best to migrate applications from a VM environment to containers,” says Lei Zhang, tech lead and engineering manager of Alibaba’s cloud-native application management system, Alibaba Cloud Intelligence. “Every customer is trying to build a Kubernetes environment, and the ways to do it can seem complex. However, there is a range of methods, tools and best practice available for them to use.”
Zhang recommends that the first thing organisations looking to containerise their VM stack should do is create a clear migration plan. This involves breaking the migration into steps, beginning with the most stable applications, for example their website, and leaving the more complex applications until the container stack is more mature.
According to Lewis Marshall, technology evangelist at Appvia, the mitigation of risk alone is a huge benefit that makes the decision to containerise legacy systems easier. “Using inherently immutable containers with your legacy systems is an opportunity to remove the bad habits, processes and operational practices that exist with systems that have to be upgraded in place, and are therefore non-immutable,” he says.
In Marshall’s experience, containers have the capacity to increase security while decreasing operating and maintenance costs. For instance, some legacy systems have a lot of manual operational activities, which makes any sort of update incredibly labour-intensive and fraught with risk.
Marshall recommends that IT administrators try to ensure that the cost of operating legacy systems trends downwards, towards zero. “If your system is a cost sink while adding limited business value, then updating or upgrading it should become your priority,” he says. “If your system is dependent on a few individuals who regularly put in lots of overtime to ‘keep the lights on’, that should be a huge red flag.
“It is also worth remembering that as a system ages, it generally becomes more expensive to maintain and the security and instability risks rise.”
Challenges of containerisation
The immutable nature of container-based services, which can be deleted and redeployed when a new update is available, highlights the flexibility and scale they present. But, as Bola Rotibi, research director at CCS Insight, pointed out in a recent Computer Weekly article, while containers may come and go, there will be critical data that must remain accessible and with relevant controls applied.
She says: “For the growing number of developers embracing the container model, physical computer storage facilities can no longer be someone else’s concern. Developers will need to become involved in provisioning storage assets with containers. Being adept with modern data storage as well as the physical storage layer is vital to data-driven organisations.”
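In Kubernetes terms, that involvement typically means declaring storage alongside the workload. The fragment below is an illustrative sketch of a persistent volume claim; the claim name, storage class and size are assumptions that would vary by cluster:

```yaml
# Illustrative only: a developer requests storage declaratively,
# without knowing which physical devices back it.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data            # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: standard  # assumed class; cluster-specific
  resources:
    requests:
      storage: 20Gi
```

The cluster's storage provisioner maps the claim to an actual volume, which is exactly the layer Rotibi argues developers can no longer treat as someone else's concern.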
Douglas Fallstrom, vice-president of product and operations at Hammerspace, says applications need to be aware of the infrastructure and where data is located. This, he warns, adds to the overall complexity of containerisation and contributes to the need to reconfigure applications if something changes. Also, the traditional idea of infrastructure-bound data storage is not strictly compatible with the philosophy of cloud-native workloads.
“Just as compute has gone serverless to simplify orchestration, we need data to go storageless so that applications can access their data without knowing anything about the infrastructure running underneath,” he says.
“When we talk about storageless data, what we are really saying is that data management should be self-served from any site or any cloud and let automation optimise the serving and protection of data without putting a call into IT.”
From a data management perspective, databases are generally not built to run in a cloud-native architecture. According to Jim Walker, vice-president of product marketing at Cockroach Labs, management of a legacy database on modern infrastructure such as Kubernetes is very difficult. He says many organisations choose to run their databases alongside the scale-out environment provided by Kubernetes.
“This often creates a bottleneck, or worse, a single point of failure for the application,” he adds. “Running a NoSQL database on Kubernetes is better aligned, but you will still experience transactional consistency issues.”
Without addressing this issue with the database, Walker believes that software developers building cloud-native applications only get a fraction of the value offered by containers and orchestration. “We’ve seen great momentum in Kubernetes adoption, but it was originally designed for stateless workloads,” he says. “Adoption has been held back as a result. The real push to adoption will occur as we build out data-intensive workloads to Kubernetes.”
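One common pattern for running stateful workloads such as databases on Kubernetes is a StatefulSet, which gives each pod a stable identity and its own volume. The fragment below is a minimal, illustrative sketch, not a production deployment; the image tag, names and sizes are placeholders:

```yaml
# Illustrative sketch: each database pod gets a stable network
# identity and a dedicated persistent volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: db
        image: cockroachdb/cockroach:latest  # placeholder tag
        volumeMounts:
        - name: data
          mountPath: /cockroach/cockroach-data
  volumeClaimTemplates:     # one claim is created per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Unlike a Deployment, a StatefulSet keeps pod names and volumes stable across restarts, which is what distributed databases rely on for membership and recovery.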
Beyond the challenges of taking a cloud-native approach to legacy IT modernisation, containers also offer IT departments a way to rethink their software development pipeline. More and more companies are adopting containers, as well as Kubernetes, to manage their implementations, says Sergey Pronin, product owner at open source database company Percona.
“Containers work well in the software development pipeline and make delivery easier,” he says. “After a while, containerised applications move into production, Kubernetes takes care of the management side and everyone is happy.”
Thanks to Kubernetes, applications can be programmatically scaled up and down to handle peaks in usage by dynamically handling processor, memory, network and storage requirements, he adds.
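As a rough illustration, the kind of automatic scaling Pronin describes can be declared with a HorizontalPodAutoscaler. The names and thresholds below are assumptions made for the sketch:

```yaml
# Sketch: scale a deployment between 2 and 10 replicas based on
# average CPU utilisation. "api" is a hypothetical deployment name.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas above 70% average CPU
```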
However, while the software engineering teams have done their bit by setting up auto-scalers in Kubernetes to make applications more available and resilient, Pronin warns that IT departments may find their cloud bills starting to snowball.
For example, an Amazon Elastic Block Store (EBS) user will pay for 10TB of provisioned EBS volumes even if only 1TB is actually used. This can lead to sky-high cloud costs. “Each container will have its starting resource requirements reserved, so overestimating how much you are likely to need can add a substantial amount to your bill over time,” says Pronin.
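The arithmetic behind that example is simple but easy to overlook. The sketch below uses an assumed gp3-style price of $0.08 per GB-month (check current regional AWS pricing) to show how much of a bill covers capacity that is provisioned but never written:

```python
# Rough illustration with assumed figures: EBS bills on provisioned
# capacity, not on the data actually written to the volume.
GB_PER_TB = 1000

def ebs_monthly_cost(provisioned_tb, price_per_gb_month=0.08):
    """Monthly cost of the provisioned capacity, in dollars."""
    return provisioned_tb * GB_PER_TB * price_per_gb_month

billed = ebs_monthly_cost(10)          # 10 TB provisioned
used = ebs_monthly_cost(1)             # only 1 TB actually used
print(f"Billed: ${billed:.2f}/month, wasted: ${billed - used:.2f}")
```

At these assumed rates, nine-tenths of the monthly storage bill pays for empty capacity.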
As IT departments migrate more workloads into containers and put them into production, they will eventually need to manage multiple clusters of containers. This makes it important for IT departments to track container usage and spend levels in order to get a better picture of where the money is going.
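A first step towards that picture can be as simple as totalling the resources each cluster has reserved through container requests. The sketch below uses hypothetical cluster names, field names and figures, not a real Kubernetes API:

```python
# Hypothetical sketch: aggregate CPU and memory reserved per cluster
# to see where spend concentrates. Data is illustrative.
from collections import defaultdict

containers = [
    {"cluster": "prod-eu", "cpu_req": 2.0, "mem_req_gb": 4},
    {"cluster": "prod-eu", "cpu_req": 1.0, "mem_req_gb": 2},
    {"cluster": "prod-us", "cpu_req": 4.0, "mem_req_gb": 8},
]

def reserved_by_cluster(items):
    """Sum the requested (i.e. reserved and billed) resources per cluster."""
    totals = defaultdict(lambda: {"cpu": 0.0, "mem_gb": 0})
    for c in items:
        totals[c["cluster"]]["cpu"] += c["cpu_req"]
        totals[c["cluster"]]["mem_gb"] += c["mem_req_gb"]
    return dict(totals)
```

Comparing these reserved totals against actual utilisation metrics is what exposes the overestimation Pronin warns about.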