Chris Shen
AWS Architect Manager
November 14, 2017

Tips on blending Amazon ECS and Docker for DevOps Brilliance

Containerization has caught on like wildfire in the past few years with the transition to agile software development processes. Start-ups led the way with the lightweight virtualization approach, but enterprises soon followed, with many adopting Docker, a company that provides an open source platform for containerization, as their de facto tool set for DevOps. Recently, Accenture announced an expanded alliance with Docker to embed its container services into the Accenture Cloud Factory to further ratchet up the speed of cloud migration for our clients.

Docker and Containerization 101
Docker is the foundation of most containerization toolsets. The approach provides a lightweight layer of abstraction on top of the operating system (OS) in which applications and workloads run. Like most virtual machine technologies, Docker allows workloads to be isolated from the underlying hardware on which they run. Because containers are abstracted from the host OS, workloads avoid many of the compatibility and portability issues that come with migrating between OS hosts. Docker’s lightweight architecture also allows companies to use their existing hardware more efficiently by running multiple Docker containers on a shared host OS to fully utilize the available resources.
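
To make that isolation concrete, here is a minimal sketch using the Docker SDK for Python that runs two containers side by side on the same host. It assumes a local Docker daemon and the docker Python package are installed; the image and commands are arbitrary placeholders.

    import docker  # Docker SDK for Python (pip install docker)

    # Connect to the local Docker daemon using environment defaults.
    client = docker.from_env()

    # Launch two isolated containers that share the same host OS kernel.
    # "alpine" and the echo command are placeholders for real workloads.
    workers = [
        client.containers.run(
            "alpine",
            ["echo", f"hello from container {i}"],
            detach=True,
        )
        for i in range(2)
    ]

    # Wait for both containers to finish, print their output, then clean up.
    for container in workers:
        container.wait()
        print(container.logs().decode().strip())
        container.remove()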

As described on the Amazon Web Services (AWS) site, the benefits of using containers are numerous: environment consistency, operational efficiency, developer productivity and version control. To amplify those benefits, companies can leverage Amazon EC2 Container Service (Amazon ECS) on top of Docker for container management and orchestration on Amazon EC2 clusters. Enterprises already operating in AWS Cloud have even easier access to this recently improved AWS tool, which also helps address a few issues that Docker implementations can pose.
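
As a rough illustration of how a Docker image is handed to ECS, the boto3 sketch below creates a cluster and registers a task definition. The region, cluster name, family, image and resource sizes are all placeholder assumptions, and the snippet presumes AWS credentials are already configured.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")  # region is an assumption

    # Create (or reuse) an ECS cluster that EC2 container instances will join.
    ecs.create_cluster(clusterName="demo-cluster")

    # Register a Docker image as a task definition so ECS can schedule it.
    ecs.register_task_definition(
        family="demo-web",
        containerDefinitions=[
            {
                "name": "web",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-web:latest",
                "cpu": 256,
                "memory": 512,
                "essential": True,
            }
        ],
    )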

The Accenture AWS Business Group (AABG) has combined Amazon ECS with Docker into client solutions to achieve impressive results. This blog provides an overview of the considerations and methodology for combining the two tools along with a few of our lessons learned to expedite the efforts of cloud-native developers.

Too much of a good thing?
One of the potential issues with containerization is the proliferation of containers that can stack up on each physical server, virtual machine or EC2 instance. Think of it this way: What happens if you multiply the number of software developers in a single global enterprise by the easy-to-use containerization technologies? The result can quickly become overwhelming—not to mention tricky to manage.

Several open source container orchestration tools like Apache Mesos and Kubernetes (developed by Google) have been introduced to address this need. Enterprises can use these tools to schedule batch jobs on demand, provision containers on servers, run processes, and then turn down the containers so that other processes can leverage the same server capacity.
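
In ECS, that on-demand batch pattern maps to a single API call. The hedged boto3 sketch below assumes a cluster and a batch task definition (placeholder names) already exist; ECS places the task on an instance with spare capacity and releases that capacity when the container exits.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # Schedule a one-off batch job on the cluster. ECS picks an instance with
    # spare CPU/memory, runs the container, and frees the capacity afterwards.
    response = ecs.run_task(
        cluster="demo-cluster",            # placeholder cluster name
        taskDefinition="nightly-report",   # placeholder batch task definition
        count=1,
        startedBy="scheduled-batch",
    )

    for task in response["tasks"]:
        print("Started task:", task["taskArn"])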

AWS “spot instances” provide a differentiating edge
Amazon ECS orchestrates these same procedures in AWS Cloud, and takes things up a notch. The tool was designed specifically to help customers leverage Docker in the AWS environment. Within the AWS solution architecture, the major difference is that companies can quickly identify and leverage EC2 “spot instances” to run their Docker containers, which can yield greater server utilization and even lower costs. Unlike regular Amazon EC2 servers, which companies pay for on demand by the hour, spot instances expose unused Amazon EC2 capacity that AWS prices dynamically with an algorithm. Essentially, it’s a global auction for the most cost-effective cloud capacity available at any one point in time.
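
For reference, the boto3 sketch below bids on spot capacity that can then join an ECS cluster as container instances. The AMI ID, instance type, price and cluster name are placeholder assumptions; the user data simply writes the cluster name into the ECS agent configuration, which is how an ECS-optimized instance registers with a specific cluster.

    import base64
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # User data registers the instance with an ECS cluster (placeholder name),
    # assuming an ECS-optimized AMI where the agent reads /etc/ecs/ecs.config.
    user_data = "#!/bin/bash\necho ECS_CLUSTER=demo-cluster >> /etc/ecs/ecs.config\n"

    # Bid on unused EC2 capacity. AMI ID, instance type, and price are placeholders.
    ec2.request_spot_instances(
        SpotPrice="0.05",          # maximum price per hour we are willing to pay
        InstanceCount=2,
        Type="one-time",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",          # placeholder ECS-optimized AMI
            "InstanceType": "m4.large",
            "IamInstanceProfile": {"Name": "ecsInstanceRole"},
            "UserData": base64.b64encode(user_data.encode()).decode(),
        },
    )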

There is a caveat with spot instances, however: they are best for running batch processes and reporting jobs, not workloads that require high availability 24/7. Because AWS can reclaim spot capacity when demand rises or the spot price moves above a company’s maximum bid, a spot instance job could get stopped mid-stream if the process takes longer than expected. In general, the recommendation is to use spot instances for batch processes that can be rerun if needed.

Solution combines Amazon ECS and Docker for a highly available, auto-scaling app
Accenture and AABG developed a consumer-facing web application for an innovative grocery retail client already running on AWS Cloud. The project used agile development methods and a microservices architecture, with Docker for containerization and Amazon ECS for orchestration.

Since the client needed guaranteed high availability for the online service, the Accenture/AABG project team leveraged Amazon ECS to manage multiple containers, with at least two running at any one time. AABG also applied Amazon ECS to auto-scale additional Docker containers, or even Amazon EC2 servers, as the client’s web application approached maximum capacity.
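
The boto3 sketch below illustrates that pattern in general terms; it is not the client’s actual configuration. It creates an ECS service that keeps at least two copies of a task running, then registers the service with Application Auto Scaling so the task count grows under CPU pressure. Cluster, service and task definition names are placeholders, and the scaling target of 60 percent CPU is an illustrative value.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")
    autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

    # Run the web task as a long-lived service with at least two copies for HA.
    ecs.create_service(
        cluster="demo-cluster",       # placeholder names throughout
        serviceName="demo-web-svc",
        taskDefinition="demo-web",
        desiredCount=2,
    )

    # Let Application Auto Scaling adjust the task count between 2 and 10.
    autoscaling.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/demo-cluster/demo-web-svc",
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=10,
    )

    # Target-tracking policy: add tasks when average service CPU exceeds ~60%.
    autoscaling.put_scaling_policy(
        PolicyName="demo-web-cpu-target",
        ServiceNamespace="ecs",
        ResourceId="service/demo-cluster/demo-web-svc",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )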

From this experience, here are a few things to consider when adopting Docker and Amazon ECS:

  • Understand application characteristics. Is the application CPU and/or memory intensive? Is the service an online, Internet-facing REST API service or a long-running, back-end batch process? How many and what services will be hosted in each cluster? Answering these questions will determine the size of the ECS cluster and placement of ECS services.

  • Use AWS Application Load Balancer (ALB) to serve an Amazon ECS cluster for online REST API services when possible. ECS’s dynamic port mapping feature, part of its integration with ALB, provides service discovery without companies having to ensure each service has its own unique port and security group rules (see the first sketch after this list). For more information, see this page on the AWS site.

  • Understand the existing logging strategy. Determine how to extract and aggregate logs from applications hosted inside Docker containers to fit that strategy. In some cases, the use of CloudWatch Logs agents or third-party log aggregation tools can greatly reduce the complexity of log aggregation and management (the first sketch after this list also shows an example awslogs configuration).

  • Determine the monitoring strategy, including what to monitor. For example, an online web application requiring 24/7 availability should raise alerts if the service dips below a minimum healthy threshold (see the alarm sketch after this list).
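
As referenced in the port-mapping and logging bullets above, the boto3 sketch below registers a task definition whose container listens on port 80 behind a dynamically assigned host port (hostPort 0), which is what lets an ALB target group route to several copies of the service on one instance, and ships container output to CloudWatch Logs via the awslogs driver. The family name, image, log group and region are placeholder assumptions.

    import boto3

    ecs = boto3.client("ecs", region_name="us-east-1")

    # hostPort 0 asks Docker for an ephemeral host port per task, so several
    # copies of the same service can share one instance behind an ALB target group.
    # The awslogs driver ships container stdout/stderr to CloudWatch Logs.
    ecs.register_task_definition(
        family="demo-api",                      # placeholder family name
        networkMode="bridge",                   # dynamic port mapping needs bridge mode
        containerDefinitions=[
            {
                "name": "api",
                "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-api:latest",
                "cpu": 256,
                "memory": 512,
                "essential": True,
                "portMappings": [
                    {"containerPort": 80, "hostPort": 0, "protocol": "tcp"}
                ],
                "logConfiguration": {
                    "logDriver": "awslogs",
                    "options": {
                        "awslogs-group": "/ecs/demo-api",      # placeholder log group
                        "awslogs-region": "us-east-1",
                        "awslogs-stream-prefix": "api",
                    },
                },
            }
        ],
    )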
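
And for the monitoring bullet, a minimal CloudWatch alarm sketch: it fires when the ALB target group reports fewer than two healthy targets. The dimension values and SNS topic ARN are placeholders that would come from your own load balancer and notification setup.

    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Alert when fewer than two healthy targets are serving the application.
    # Dimension values and the SNS topic ARN below are placeholders.
    cloudwatch.put_metric_alarm(
        AlarmName="demo-api-healthy-hosts",
        Namespace="AWS/ApplicationELB",
        MetricName="HealthyHostCount",
        Dimensions=[
            {"Name": "TargetGroup", "Value": "targetgroup/demo-api/0123456789abcdef"},
            {"Name": "LoadBalancer", "Value": "app/demo-alb/0123456789abcdef"},
        ],
        Statistic="Minimum",
        Period=60,
        EvaluationPeriods=2,
        Threshold=2,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )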

For more information about adopting Amazon ECS into your enterprise DevOps practice, visit the AABG website.
