You started migrating applications to the cloud. You also began to containerize some of your applications to take better advantage of cloud server resources.

Now it’s time to really take advantage of the power of cloud application models. We’re going to take the first steps in becoming cloud-native.

To do so, we’ll continue what enterprise computing started years ago: tearing down large, monolithic applications into their logical components.

Evolution of decoupled architectures

Remember SOA?

In the 1990s and 2000s, enterprise developers began to adopt the principles of Service-Oriented Architecture (SOA). This was the first step toward decomposing large, monolithic applications into services.

What’s a service? In SOA, it’s a piece of functionality that provides business value to users, such as “payments” or “lead management.” A service can be composed of part of a traditional application, a full application, or several applications working together.

Such services are accessed via APIs or service brokers. SOA enables different parts of a service to live on different servers, applications, and databases. It also allows services to be supplied by different teams or vendors and to be managed independently.

SOA represented a great step forward. It created the notion of a service abstraction, meaning that how a service is provided is hidden from users.

More importantly, SOA demonstrated that business services could be composed from different, interoperating software components. It also showed that services should be accessible only via well-defined interfaces; an API, for example, enables a service to be updated without affecting its consumers.

Microservices: The cloud-native application architecture

For application developers, the cloud presents new opportunities and challenges that did not exist in the days of on-premises computing.

Today’s applications require the ability to scale on demand, to continue operating in the face of system failures, to be updated frequently and reliably, and to extend their reach globally. Such requirements imply a new application model suited to cloud scale.

Microservices is the name for an architecture pattern that describes independent, fine-grained, loosely coupled code components that typically handle a single business capability. This business capability could be accepting a payment or issuing a boarding pass, for example.

Logical architecture

In the days of monolithic applications, a commerce application would be built as a series of tightly coupled modules (i.e., hard-linked together)—order management, payment, fulfillment and so on. A failure in any of them would cause the entire application to crash.

However, cloud microservices provide the freedom to design a solution as a series of independent, interoperating functions which can be described in terms of the business.

Here we have a microservice that loads products into the product database; a catalog microservice that supports users as they browse; and a search microservice that helps users find what they need. Once a product is selected, the cart microservice maintains the shopping cart for the user. When the user goes to check out, the payment microservice handles the financial transaction, and so on.
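To make this concrete, here is a minimal sketch of one such microservice, the cart, written in Python with Flask. The route names, payload shape, and in-memory store are illustrative assumptions; a real cart service would own its own persistent data store.

```python
# Minimal sketch of a cart microservice owning a single business capability.
# Routes, payloads, and the in-memory store are illustrative only.
from flask import Flask, jsonify, request

app = Flask(__name__)
carts = {}  # user_id -> list of items; stands in for the service's own database

@app.route("/carts/<user_id>/items", methods=["POST"])
def add_item(user_id):
    item = request.get_json()                   # e.g. {"sku": "ABC-123", "qty": 2}
    carts.setdefault(user_id, []).append(item)
    return jsonify(carts[user_id]), 201

@app.route("/carts/<user_id>", methods=["GET"])
def get_cart(user_id):
    return jsonify(carts.get(user_id, []))

if __name__ == "__main__":
    app.run(port=8080)  # the cart service runs, deploys, and scales on its own
```

Because the service exposes only this small, well-defined interface, its internals (the data store, the language, even the team that owns it) can change without affecting the catalog, search, or payment services.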

Many approaches exist to enable microservices to communicate, such as direct calls to APIs. Here we’ve chosen a message bus for “publish and subscribe” messaging.
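As a hypothetical illustration of that publish-and-subscribe style, the sketch below uses Python and the pika client for RabbitMQ (one of the message-bus technologies mentioned later). The exchange name, event format, and the choice of payment as publisher and fulfillment as subscriber are assumptions made for the example.

```python
# Sketch: the payment service publishes an "order paid" event to a fanout
# exchange; the fulfillment service (normally a separate process) subscribes.
# Assumes a RabbitMQ broker on localhost and the pika client library.
import json
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.exchange_declare(exchange="order-events", exchange_type="fanout")

# Subscriber side (fulfillment service) -- normally a separate process
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange="order-events", queue=queue)

def on_event(ch, method, properties, body):
    print("fulfillment received:", json.loads(body))

channel.basic_consume(queue=queue, on_message_callback=on_event, auto_ack=True)

# Publisher side (payment service) -- also normally a separate process
event = {"order_id": "o-1001", "status": "paid"}
channel.basic_publish(exchange="order-events", routing_key="",
                      body=json.dumps(event))

channel.start_consuming()  # fulfillment now receives the event
```

Because publishers and subscribers know only about the exchange, not about each other, new services can start consuming order events without any change to the payment service.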

Now consider the weeks leading up to a holiday. For a time, it’s likely that users will be picking out items, and there will be heavy use of the catalog and search microservices. As the holiday approaches, and purchases are made, the load on the cart, payment, and fulfillment microservices will spike. In a cloud microservices architecture, each of these components can scale as needed. Moreover, if a given instance of a microservice fails, other parts of the application continue to run, and, properly configured, other instances of the microservice can ensure continuity of the application as a whole.

[Diagram: the example commerce application’s microservices, deployed as containers within pods in a Kubernetes deployment]

Technical implementation

As we said above, microservices are an architecture pattern and are not tied to any particular technology. However, containers are probably the most common way to implement them. Containers, which can be independently built, deployed, and scaled, provide a strong foundation for microservices.

In the diagram above, we’ve placed our microservices, developed as containers, into pods in a Kubernetes deployment, which provides the runtime execution environment for the pods and containers. Properly managed, each microservice can scale independently of the others.
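As a rough illustration of that independent scaling, the sketch below uses the official Kubernetes Python client to change the replica count of a single deployment. The deployment name (“cart”), namespace, and replica count are hypothetical; in practice, a Horizontal Pod Autoscaler would usually adjust replicas automatically based on observed load.

```python
# Scale only the cart microservice; catalog, search, payment, and the rest keep
# their existing replica counts. Assumes a reachable cluster and kubeconfig.
from kubernetes import client, config

config.load_kube_config()      # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="cart",               # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```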

Microservice communication

Of course, our loosely coupled microservices must communicate with each other. The containers can be equipped with a remote procedure call (RPC) mechanism such as gRPC to enable direct communication. Or, they can all connect to a common message bus, as described above, for store-and-forward or publish-and-subscribe messaging. (Examples of such technologies include Amazon Simple Queue Service (SQS), Azure Service Bus, Google Cloud Pub/Sub, Apache Kafka, and RabbitMQ.)

Logging and monitoring

A fully realized, scaled out, and dynamically adapting microservice deployment can present challenges in detecting transient errors or system bottlenecks. However, a number of techniques and tools can be used to shed light on microservices’ performance.

Synthetic transactions, or simulated operations, can be of great help in following the path of a request through the system. A synthetic transaction is usually a specially tagged request and/or script that signals the code to log relevant information, such as the time of receipt or the processing time required.
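The sketch below shows one way this might look in a Python service: a checkout handler that recognizes a synthetic-transaction header and logs the time of receipt and the processing duration. The header name, route, and log format are assumptions made for the illustration.

```python
# Illustrative handler: requests carrying the (hypothetical) X-Synthetic-Txn
# header are treated as synthetic transactions and logged with timing data.
import logging
import time
from flask import Flask, jsonify, request

app = Flask(__name__)
log = logging.getLogger("synthetic")
logging.basicConfig(level=logging.INFO)

@app.route("/checkout", methods=["POST"])
def checkout():
    synthetic_id = request.headers.get("X-Synthetic-Txn")  # present only on test traffic
    received_at = time.strftime("%H:%M:%S")
    start = time.monotonic()

    result = {"status": "accepted"}   # real checkout logic would go here

    if synthetic_id:
        elapsed_ms = (time.monotonic() - start) * 1000
        log.info("synthetic txn %s: received %s, processed in %.1f ms",
                 synthetic_id, received_at, elapsed_ms)
    return jsonify(result)
```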

A number of tools allow you to collect and visualize the flow of a request through your microservice instances, a process called end-to-end tracing. The open-source project Jaeger, which tracks transactions in distributed computing ecosystems, is one of the many tools available to help optimize your microservices deployments; Prometheus, an open-source time-series database and alerting system used to monitor large-scale applications, is another. We’ll talk more about these sorts of tools in a future post.
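To give a flavor of what end-to-end tracing looks like in code, here is a minimal sketch using the OpenTelemetry Python SDK. Spans are printed to the console here; a real deployment would swap in an exporter that ships spans to a backend such as Jaeger. The service and span names are illustrative.

```python
# Minimal tracing sketch: nested spans model one request ("checkout") calling
# the cart and payment services. Spans are exported to the console for brevity.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

trace.set_tracer_provider(TracerProvider())
trace.get_tracer_provider().add_span_processor(
    BatchSpanProcessor(ConsoleSpanExporter())
)
tracer = trace.get_tracer("checkout-service")

with tracer.start_as_current_span("checkout"):
    with tracer.start_as_current_span("cart.read"):
        pass  # call the cart microservice here
    with tracer.start_as_current_span("payment.charge"):
        pass  # call the payment microservice here
```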

Tips on microservice development

Because of its compartmentalized nature, microservice development lends itself well to small, focused development teams. By using modern infrastructure-as-code techniques (in which configuration is stored as documents in source control and can be updated and rolled back as needed), code updates for microservices can be deployed quickly, reliably, and securely. This makes it possible to respond quickly to changing market conditions.

We recommend microservices developers be familiar with modern software engineering methodologies—notably agile development, the 12-factor pattern, and continuous integration and continuous deployment (CI/CD). Implementing these will help you get the most from microservices. We’ll have a lot more to say about these in future posts!

Miha Kralj

Cloud Migration and Modernization Lead
