Migrate from on-premises VMs to cloud containers
March 10, 2020
Conventional wisdom holds that you should migrate the virtual machines (VMs) in your data center directly to the cloud, in the so-called IaaS model. But is that really the best approach? Or could you gain more of the benefits of cloud computing – immediately – by moving your VMs to containers?
Yes!
In our last post we discussed some of the benefits organizations have realized by adopting containers. Some of these benefits include:
In this post, we’ll talk about how to move legacy applications to containers and what that new way of thinking entails for your organization.
Let’s take a quick moment to examine how to “containerize” an application. (Notice that we said, “an application.” It’s a common misconception that you containerize your VM; in fact, the container only holds your application, so you’ll need your application executables – and only those – to create the container.)
It’s a fairly straightforward (and by now, well-known) process. A file, called a Dockerfile, describes how to bind your application together with the support libraries it needs (for example, the Java JDK or .NET libraries, and/or a Tomcat web server, and so on) and create a container image. (See the example below.)
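The Dockerfile below is a minimal sketch of what this might look like for a hypothetical Java web application packaged as a WAR file and served by Tomcat; the image tag, file names, and paths are illustrative assumptions, not taken from a real project.

```dockerfile
# Start from an official Tomcat image that already bundles a JDK,
# so the application's runtime dependencies travel with the image.
FROM tomcat:9.0-jdk11

# Copy only the application artifact into Tomcat's webapps directory –
# the container holds the application, not an entire VM.
COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war

# Tomcat listens on port 8080 inside the container.
EXPOSE 8080

# Run Tomcat in the foreground when the container starts.
CMD ["catalina.sh", "run"]
```

Building the image and starting a container from it is then a matter of two commands:

```
docker build -t myapp:1.0 .
docker run -d -p 8080:8080 --name myapp myapp:1.0
```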
Now, of course, the example above represents the simplest case; we’ve only scratched the surface of container creation and execution. But it underscores our point – migrating applications to containers is usually straightforward, and often no harder than migrating to a cloud VM. And, having done this, we get all the benefits of containers we’ve described above.
Of course, there are some scenarios in which containerization may not be the best choice for your application. Generally, these fall into three buckets:
As you plan your migration to containers, you’ll of course have to consider how your data will migrate as well. Docker environments support volumes, which live on persistent storage outside the container and appear inside the container as ordinary mounted directories, so your data outlives any individual container.
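For example, a volume can be created and attached to a container with a couple of commands (the volume name and mount path here are illustrative):

```
# Create a named volume managed by Docker
docker volume create mydata

# Mount it into a container; files written under /var/lib/data persist
# even if the container itself is removed and recreated
docker run -d -v mydata:/var/lib/data myapp:1.0
```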
For applications that use a common relational database such as MySQL, it’s easy to download a database container image from the registry and connect it to the volume. Your deployment then consists of two containers: one holding your application (“MyApp” in the example above) and the other holding the database. A containerized service is then some number of instances of these containers, and that number can be adjusted based on the load.
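As a sketch of what that two-container setup might look like with Docker Compose – the service names, credentials, and the myapp image are assumptions for illustration only:

```yaml
version: "3.8"

services:
  myapp:
    image: myapp:1.0            # the containerized application
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db               # hypothetical setting the app reads to find its database
    depends_on:
      - db

  db:
    image: mysql:8.0            # official MySQL image pulled from the registry
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: myapp
    volumes:
      - dbdata:/var/lib/mysql   # database files live on a named volume

volumes:
  dbdata:
```

From here, an orchestrator such as Docker Swarm or Kubernetes can run as many replicas of the application container as the load requires.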
Of course, another approach to data in the cloud is to make use of a managed database service such as AWS’ AuroraDB, Microsoft’s SQL Azure, or Google’s Cloud SQL. To take advantage of such a service will normally require changes to the application code, but in return you’ll get the benefits of high availability, elastic scale, and dramatically reduced administration overhead, among others. (We’ll be discussing cloud data management in depth in a future post.)
Moving your applications to a container environment represents your first step into the world of applications running in the cloud. Look now for the efficiencies and optimizations you can already get across your development and deployment processes.
For example, your application teams no longer need to worry about patching and updating operating systems, and your development, test, and production servers no longer need to be exact images of each other.
Most interestingly, you can check your Dockerfile into source control, where it can be versioned – which means that if you ever have problems, it’s relatively easy to roll back to an earlier working version. Think of this as your first foray into the world of “infrastructure-as-code” – another topic we’ll cover in a future post.
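A minimal sketch of that version-and-roll-back workflow, assuming the Dockerfile lives in a Git repository:

```
git add Dockerfile
git commit -m "Containerize MyApp on a Tomcat base image"

# Later, if a new image misbehaves, recover the last known-good Dockerfile
git log --oneline -- Dockerfile
git checkout <known-good-commit> -- Dockerfile
docker build -t myapp:1.1 .
```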
Containerizing your applications is only the first step toward taking advantage of the opportunities afforded by cloud computing. In our next post, we’ll discuss microservices, which provide a means to decompose a monolithic application into smaller, more manageable, independently developed pieces.