All the world’s abuzz about containers these days. And why not? Containers underlie many of the cloud’s most scalable and robust applications, enabling them to scale up quickly when demand increases and scale back down – saving money – when demand subsides.
In this article, we’ll describe the basics of containerized applications: what they are, why they are better suited for the cloud than other options, and how to create one. In subsequent posts, we’ll talk about some of the ways you can use containers to build modern cloud applications.
Early days of the cloud
When the cloud first emerged as a new model of enterprise computing, organizations were quick to migrate their virtual machines (VMs) more or less intact to the cloud. This approach, while straightforward, had some unexpected consequences. IT staff were still required to maintain copies of system software (operating system and database), including upgrades and patches.
Moreover, each instance of each application came with its own copy of the operating system (OS), which meant that on any given physical server there could be any number of running operating systems – taking up memory space and dragging down the CPU.
The results? Cloud servers were used inefficiently; application startup times were slow; scalability was limited; and – most notably – costs often exceeded budgets. Far from reducing costs, running VMs in the cloud – so-called “IaaS” – could cost more than running them in a corporate data center.
Containers: The basics
Think for a moment about a shipping container on a cargo ship. The goods being transported are placed in a specifically designed box – the container. A single platform, the cargo ship, may carry hundreds, and it can efficiently carry all the diverse goods because they have been containerized.
In an analogous way, applications can be wrapped in digital containers. The process of containerizing takes an application and packages it with a uniform set of libraries and APIs so that it can run on a common platform – in this case, an operating system.
In a system supporting such containerized applications, there is one copy of an operating system, and potentially many applications. With a set of supporting libraries, the OS provides the necessary foundations for each containerized application to run – as if it were the only application on the server.
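To make the packaging step concrete, here is a minimal sketch of a Dockerfile for a small Python web service. The base image tag, file names, and application entry point (`app.py`) are illustrative assumptions, not details from this article:

```dockerfile
# Start from a slim base image that already contains the OS-level libraries
FROM python:3.12-slim

# Copy the application and its dependency list into the image
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

# When the container starts, it runs this one application,
# as if it were the only application on the server
CMD ["python", "app.py"]
```

Building this file produces a container image that bundles the application together with everything it needs except the operating system kernel itself.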
How are containers different from ordinary processes, you might ask? After all, your laptop can run many applications at the same time. The difference lies in the isolation: each application appears, to itself and to others, to be running on its own physical machine; each container’s localhost is unique.
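A quick hypothetical illustration of that isolation, using the public nginx image (the container names and host-side ports are arbitrary choices): both containers below listen on port 80 internally, which two ordinary processes sharing one host’s network could not do.

```shell
# Each container has its own network namespace, so both can bind
# port 80 internally; the host maps them to different external ports.
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
```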
Advantages of containers
It’s easy to see that containers make much better use of system resources; with a single instance of the operating system, they reduce both CPU load and memory requirements.
Additionally, because (unlike a VM) a container does not require an operating system to load and initialize, launching a containerized application is much faster than launching a VM (seconds versus minutes). And when the container is no longer needed, it is disposed of with no residual “fluff” (e.g., OS or registry settings) left behind to cause problems for another container.
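A hedged example of that fast launch and clean disposal, again using the public nginx image (the name and port mapping are arbitrary):

```shell
# Launch a containerized web server; this typically takes seconds,
# because no operating system has to boot.
docker run --rm -d --name demo -p 8080:80 nginx

# Stop it when done; the --rm flag removes the container,
# leaving nothing behind on the host.
docker stop demo
```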
Containers are also portable. A containerized application can run on (essentially) any cloud, and even on-premises if that is desired. And, that being the case, as an application scales up, it can pull a new copy of a container image from a repository (a registry) and instantiate it.
Container registries come in public and private flavors. Use a public registry like Docker Hub to find a base image – for example, a web server like nginx or a database like MySQL – and download it to your site.
Or use a private registry on your site (that is, in your cloud account) to hold the container images of your applications. All the major cloud vendors today support such registries.
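As a sketch of both flavors in practice: the commands below pull a public base image from Docker Hub, then tag and push a locally built application image to a private registry. The registry host (`registry.example.com`) and the image name and version are hypothetical placeholders:

```shell
# Pull a public base image from Docker Hub
docker pull nginx:latest

# Tag a locally built application image for a private registry
docker tag myapp:1.0 registry.example.com/myapp:1.0

# Authenticate to the private registry and push the image
docker login registry.example.com
docker push registry.example.com/myapp:1.0
```

Once pushed, any server in your cloud account can pull that exact image and instantiate it.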
Building and deploying containers
Container images are immutable, meaning that, unlike VMs, they cannot be patched in place. This is actually a good thing, because it means that updating a container implies building a new container image – which means that if something goes wrong, it’s easy to roll back to an older version.
As such, containers lend themselves well to a continuous integration/continuous deployment model of development. In a CI/CD pipeline, new versions of software are automatically (through scripts, or under the control of support software like Jenkins) pushed to a container repository, where, again automatically, they are subjected to a battery of automated tests. If the new code passes all the tests, it can be automatically “promoted” to a staging server and then deployed, as shown below:
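As a rough sketch of what a pipeline tool might script at each of those stages, consider the following. The registry host, image name, and test command are all assumptions for illustration; a real pipeline would substitute its own:

```shell
#!/bin/sh
# Hypothetical CI/CD steps; a tool like Jenkins would run these automatically.
set -e  # stop on the first failure, so a bad build is never promoted

VERSION=$(git rev-parse --short HEAD)   # tag each image by commit

# Build a new immutable image for this version of the code
docker build -t registry.example.com/myapp:"$VERSION" .

# Run the automated test battery inside the freshly built container
docker run --rm registry.example.com/myapp:"$VERSION" pytest

# Tests passed: push the image to the registry
docker push registry.example.com/myapp:"$VERSION"

# A deploy step (staging, then production) would then pull this exact tag;
# rolling back is simply deploying an earlier tag.
```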
We’ll go into much more detail about continuous integration and continuous deployment pipelines in a future post.
Moving forward …
Containers offer many advantages – and they’re relatively easy to create and operate. With all these benefits, perhaps it makes sense to migrate on-premises legacy applications directly to containers, bypassing the disadvantages of migrating to IaaS VMs. That’s the topic for our next post. Stay tuned!
Diagram adapted from https://success.docker.com/article/dev-pipeline