Containers offer great flexibility in how we deploy applications in the cloud. A good thing, too: modern applications require unprecedented levels of scale. This necessitates the ability to rapidly scale up in the face of heavy traffic, and to scale back down as the traffic lessens. It’s not uncommon for such applications to run on tens, hundreds or even thousands of servers in the cloud, responding to spikes in traffic, automatically healing themselves when failures occur, and being updated on the fly.

How do you handle all this complexity?

Container orchestration helps IT professionals manage fleets of containerized applications deployed on one—or many—servers (a cluster). An orchestration engine handles the placement of container instances (which server to run on); scaling up and down; the security of the cluster and its containers; and recovery from failure modes.

In effect, an orchestration engine allows IT professionals to manage a cluster of cloud servers as if they were a single computing resource. In that sense, I think of it as an operating system for the cloud. With a single script, DevOps can deploy code across an entire cluster of servers. And DevOps can manage the cluster the same way, either through command-line utilities or a web-based portal.

Let’s imagine such a system, with a series of container instances spread across three servers. Spanning those servers is the orchestration engine, and beneath it sits a “command center” that monitors and manages the placement of containers across the servers (the scheduler) and maintains configuration information, including the “desired state” (the config manager).

Today, there are several such orchestration systems available, including Docker Swarm, Rancher, Apache Mesos and others.

However, the fastest-growing platform today, with the largest ecosystem of partners and developers, is Kubernetes. First introduced by Google in 2014, Kubernetes (Greek for “helmsman” or “pilot”) is an open-source project with over 35,000 individual contributors to date and over 148,000 code commits.1 Kubernetes is the flagship project of the Cloud Native Computing Foundation, which seeks to advance the state of the art in cloud-native technologies.2 And as a member of the CNCF, Accenture is working with the organization to drive growth in this space.

Introduction to Kubernetes

In Kubernetes, the servers that host the application-level containers are called worker nodes, and the “command center” is the main node.

Worker nodes

The pods running on the worker nodes are the units of execution in Kubernetes. Each pod typically contains a single container instance (although for specific purposes it can wrap more than one container), and each pod is assigned its own unique IP address.
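To make this concrete, here is a minimal pod manifest of the kind Kubernetes accepts. It is a sketch: the pod name, label and container image are illustrative placeholders, not taken from any particular deployment.

```yaml
# Minimal single-container pod (names and image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
    - name: web
      image: nginx:1.25      # one container per pod is typical,
      ports:                 # but the list can hold more than one
        - containerPort: 80
```

Applying this manifest asks the cluster to run one such pod; the scheduler decides which worker node it lands on.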

Kubernetes places two components on each worker node: the kubelet, which is responsible for connecting to and communicating with the main node; and the kube-proxy, which manages networking services on each node.

The main node

The main node, as the name implies, controls the Kubernetes cluster. Controllers, which run inside the kube-controller-manager, manage Kubernetes resources (such as pods). A job controller launches one or more pods to accomplish a task of some sort and lets them run to completion. A deployment controller launches and maintains a set of pods (a replica set), and so on. (The controller pattern is extensible, enabling developers to create custom controllers.)
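The deployment controller is worth a short example, since it is where the “desired state” idea becomes visible. The sketch below declares three replicas of a pod; names and the image are placeholders. If a pod dies, the controller launches a replacement to restore the declared count.

```yaml
# A deployment declaring a desired state of three identical pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3            # desired state: keep three pods running
  selector:
    matchLabels:
      app: web           # manage pods carrying this label
  template:              # pod template stamped out per replica
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```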

The Kubernetes scheduler finds the best worker node for a new pod to run on. If several worker nodes are available, the scheduler scores them and places the pod on the node with the highest score.
The Kubernetes configuration is stored in a key-value database called etcd.
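One of the main inputs to that scheduling decision is the resources a pod declares it needs. The fragment below is a sketch (pod name and image are placeholders) showing resource requests; the scheduler will only place the pod on a node with that much unreserved CPU and memory.

```yaml
# Resource requests inform the scheduler's node-selection scoring.
apiVersion: v1
kind: Pod
metadata:
  name: scored-pod
spec:
  containers:
    - name: app
      image: nginx:1.25
      resources:
        requests:
          cpu: "500m"       # half a CPU core
          memory: "256Mi"   # 256 mebibytes of RAM
```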

Here is an important point: the main node is a single point of failure. If the main node fails, it becomes impossible to issue commands to the cluster or to manage it. Therefore, effort should be made to ensure high availability for the main node.


Finally, a command-line tool called kubectl accepts commands from an administrator and communicates with the main node through the Kubernetes API. That same API enables many third-party and add-on vendors to create web-based GUI consoles as well.


Kubernetes security

In general, think about Kubernetes security in the same way you think about any application in the cloud. You should be familiar with your cloud provider’s security posture, including physical security, policies surrounding encryption, and identity and access management capabilities. Think about who has access to your application cluster, whether databases are encrypted and whether containers are scanned for vulnerabilities; and follow secure development methodologies for new applications.

Kubernetes provides a wealth of configuration options—some of which enable better security than others. Numerous Kubernetes “security checklists” are available on the internet to help you—make sure to consult them.
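One place those configuration options show up is the pod security context. The sketch below hardens a single pod by dropping privileges; the pod name is a placeholder, and the image is a stand-in for an application built to run as a non-root user.

```yaml
# A pod hardened via its security context (names/image are placeholders).
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    runAsNonRoot: true               # refuse to start root containers
  containers:
    - name: app
      image: registry.example.com/app:1.0   # must run as non-root
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true        # no writes to the image layer
        capabilities:
          drop: ["ALL"]                     # shed all Linux capabilities
```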

Kubernetes: Made for DevSecOps

How do you define how many servers you have in your cluster, which ones run which applications and how scaling works?

Containers and orchestration have given rise to a movement in IT operations called “infrastructure as code,” in which the configuration of applications is itself managed as code. With a formal syntax and structure, the configuration can be controlled, versioned and rolled back if need be. And, by doing away with the risky process of direct patching or updating running systems, infrastructure as code improves reliability and predictability.
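In Kubernetes, that “code” is typically a YAML manifest kept in version control. The sketch below (names and image are placeholders) shows a deployment whose rollout behavior is itself declared: changing the image tag and re-applying the file triggers a rolling update, and the previous revision can be restored by rolling back to an earlier commit.

```yaml
# Desired state as code: version this file, apply it, roll it back.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod down during an update
      maxSurge: 1          # at most one extra pod during an update
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:2.0   # bump tag to update
```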


Summary: The Cloud-Scale Operating System

Container orchestration, and Kubernetes in particular, makes the value of containers, Continuous Integration/Continuous Delivery and DevOps evident. Kubernetes has emerged as the de facto standard open-source orchestration engine and is supported by all major cloud vendors, which helps avoid vendor lock-in. And DevOps experts have endorsed Kubernetes: it enables them to treat a large number of servers as a single computational resource, with dynamic scalability, resilience and infrastructure as code for faster and more reliable updates.

However, deploying Kubernetes “out of the box” can be challenging. It is complex and requires new skill sets. The main node, as I have noted, is a single point of failure and must be architected for high availability.

As part of the founding class of Kubernetes Certified Service Providers (KCSP), Accenture actively participates in the open source community and helps enterprises successfully adopt Kubernetes.




Miha Kralj

Cloud Migration and Modernization Lead
