4 deployment strategies for resilient microservices

Turn to these cloud-native routing techniques to test and mitigate risk in your microservices deployments


Building apps with microservices provides developers with greater speed and agility than traditional architectures. However, each code change still incurs risks, setting the stage for potential failures if code quality issues aren’t discovered and addressed. To mitigate those risks, applications teams should implement modern, cloud-native routing strategies that make it easier to test for danger and ensure that applications are truly ready to be deployed in production environments.

The following four deployment strategies use routing techniques to safely introduce new services and features, test functionality and make iterative improvements, identify and eliminate vulnerabilities, and more. Together, these approaches form a virtual toolbox that applications teams can reach into to reduce risk during the development and deployment of microservices-fueled applications. Understanding their differences and similarities will be key to taking best advantage of them in your own environment.

Canary deployments

Named after the historical practice of sending actual birds into coal mines to see whether the air quality was safe for humans, canary deployments are a way to test actual production deployments with minimal impact or risk. The so-called canary is a candidate version of a service that receives a small percentage of incoming requests (say, 1%) to try out new features or builds. Teams can then examine the results and, if things go smoothly, gradually increase the deployment to 100% of servers or nodes. And if not? Traffic can be quickly redirected away from the canary deployment while the offending code is reviewed and debugged.

Canary deployments can be implemented via integrations with edge routing components responsible for processing inbound user traffic. For example, in a Kubernetes environment, a canary deployment can tap the ingress controller configuration to assign specified percentages of traffic requests to the stable and canary deployments. Routing traffic this way ensures that new services have a chance to prove themselves before receiving a full rollout. If they don’t, they’re sent back to have issues remediated and then put through another round of canary deployment testing when ready.
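The traffic split at the heart of a canary deployment can be reduced to a simple weighted routing decision. The sketch below models that decision in Python; the backend names and the round-robin counter are illustrative assumptions, since in practice an ingress controller or service mesh performs this split rather than application code.

```python
# Minimal sketch of percentage-based canary routing. The "stable" and
# "canary" backend names are hypothetical; a real deployment delegates
# this split to an ingress controller or service mesh.
from itertools import count

class CanaryRouter:
    def __init__(self, canary_percent):
        self.canary_percent = canary_percent  # e.g. 1 means 1% of traffic
        self._counter = count()

    def choose_backend(self):
        # Deterministic split: out of every 100 requests,
        # `canary_percent` of them go to the canary deployment.
        request_number = next(self._counter)
        return "canary" if request_number % 100 < self.canary_percent else "stable"

router = CanaryRouter(canary_percent=1)
backends = [router.choose_backend() for _ in range(1000)]
print(backends.count("canary"))  # 10 of 1,000 requests hit the canary
```

Raising `canary_percent` step by step mirrors the gradual rollout described above, and setting it back to zero models the instant redirect away from a misbehaving canary.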

A/B testing

A/B testing is similar to canary deployments, with one important difference. While canary deployments tend to focus on identifying bugs and performance bottlenecks, A/B testing focuses on gauging user acceptance of new application features. For example, developers might want to know if new features are popular with users, if they’re easy to discover, or if the UI functions properly.

This pattern uses software routing to activate and test specific features with different traffic segments, exposing new features to a specified percentage of traffic, or to limited groups. The A and B routing segments might send traffic to different builds of the software, or the service instances might even be using the same software build but with different configuration attributes (as specified in the orchestrator or elsewhere).
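One common way to build such segments is to hash a stable user identifier, which keeps each user in the same segment across sessions. The following sketch illustrates that idea under assumed names and an assumed 10% split; it is not a prescription for any particular routing product.

```python
# Illustrative sketch of sticky A/B assignment: hashing a stable user
# identifier means each user consistently lands in the same segment.
# The function name, bucket labels, and 10% split are assumptions.
import hashlib

def assign_variant(user_id, percent_b=10):
    # Hash the user ID into a bucket from 0-99; users in the lowest
    # `percent_b` buckets see variant B, everyone else sees variant A.
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return "B" if bucket < percent_b else "A"

# The same user is always routed to the same variant.
print(assign_variant("user-42") == assign_variant("user-42"))  # True
```

Because the assignment is deterministic, feature metrics gathered for the B segment can be compared against the A baseline without users flickering between experiences.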

Blue-green deployments

The blue-green deployment pattern involves operating two production environments in parallel: one for the current stable release (blue) and one to stage and perform testing on the next release (green). This strategy enables updated software versions to be released in an easily repeatable way. Devops teams can use this technique to automate new version rollouts using a CI/CD pipeline.

With the blue-green strategy, developers deploy a new service version alongside the existing instance that currently handles production traffic. The CI/CD pipeline should be set up to perform automated smoke tests to verify that the new version succeeds in its key functionality. Once the new service passes the final tests, traffic can be safely and automatically redirected to it, using software routing to seamlessly manage the cutover from blue to green. Equally important, if critical last-minute issues arise, it's simple to roll the deployment back to the blue version.
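The stage, test, cut-over, and roll-back steps above can be modeled as a small state machine. The sketch below is a toy illustration under assumed names (`stage`, `cut_over`, a caller-supplied smoke test); in a real pipeline the traffic switch would be a router or load-balancer configuration change, not an in-process flag.

```python
# Toy model of a blue-green cutover. Environment names, version strings,
# and the injected smoke_test callable are all illustrative assumptions.
class BlueGreenDeployment:
    def __init__(self):
        self.environments = {"blue": "v1.0", "green": None}
        self.active = "blue"  # blue currently serves production traffic

    def stage(self, version):
        # Deploy the new release to the idle (green) environment.
        self.environments["green"] = version

    def cut_over(self, smoke_test):
        # Redirect traffic to green only if the smoke tests pass.
        if smoke_test(self.environments["green"]):
            self.active = "green"
        return self.active

    def roll_back(self):
        # Critical issue found: send traffic back to the blue version.
        self.active = "blue"

deploy = BlueGreenDeployment()
deploy.stage("v1.1")
deploy.cut_over(smoke_test=lambda version: version == "v1.1")
print(deploy.active)  # green
```

Note that a failed smoke test leaves traffic on blue, which is exactly the repeatable, low-risk behavior that makes this pattern attractive for CI/CD automation.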

Traffic shadowing

Traffic shadowing is similar to blue-green deployments, but rather than using synthetic tests to validate the “green” environment, routing technology duplicates all incoming production traffic and mirrors it to a separate test deployment that isn’t yet public. Thus traffic shadowing creates an accurate picture of what would happen if the new version were deployed, based on genuine traffic. At the same time, traffic shadowing ensures that tests have no impact on actual production. In practice, developers can choose to duplicate a set percentage of requests to a test service, where they can then perform integration testing and performance benchmarking (either manually or within the framework of an automated CI/CD pipeline).
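The essential invariant of shadowing is that the mirrored copy of a request can never affect the response users receive. The sketch below illustrates that invariant with hypothetical service callables; real mirroring happens in the routing layer, not in application code like this.

```python
# Sketch of request mirroring: every production request is answered by
# the stable service, while a copy goes to a shadow deployment whose
# responses (and failures) are discarded. The callables are assumptions.
def shadow_route(request, primary, shadow):
    response = primary(request)  # only the primary response reaches users
    try:
        shadow(request)  # mirrored call; its result is intentionally ignored
    except Exception:
        pass  # shadow failures must never affect production traffic
    return response

seen_by_shadow = []

def stable_service(req):
    return f"handled {req}"

def candidate_service(req):
    seen_by_shadow.append(req)
    raise RuntimeError("bug in the new version")  # caught, never user-visible

print(shadow_route("GET /orders", stable_service, candidate_service))
# handled GET /orders
```

Even though the candidate version throws on every request, the caller still gets the stable response, while the shadow's logs and metrics reveal the defect under genuine traffic.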

Enterprise developers already leverage a range of testing techniques designed to ensure that new application code meets certain requirements. Unit and functional tests, for example, remain essential hurdles that code must clear. However, the nature of microservices-based architectures makes end-to-end integration testing more crucial than ever. Given the many interdependencies and the risk of long-term interface drift inherent to microservices architectures, synthetic tests still have value but ultimately fall short of accurately representing all of the interactions between services in production environments.

Four strategies, one goal

These routing techniques all offer distinct yet related methods of aiding in the discovery, mitigation, and testing of defects in microservices-based applications. They are potent tools for addressing bugs, performance issues, and security vulnerabilities, particularly when deployed as part of an end-to-end CI/CD pipeline.

Which of these methods is most appropriate for your own use case will largely depend on which concerns are most pressing. For example, a major UI overhaul can benefit greatly from A/B testing, while a blue-green deployment could be invaluable for seeing how a new feature might impact the performance of an existing data store.

Often, a combination of these techniques may offer the best coverage. However, it is important to consider how well each will integrate with your existing development model. Canary deployments of individual features might be better suited to agile development methods than blue-green deployments of full versions, for example. And while traffic shadowing can give excellent visibility into application performance pre-deployment, it can be difficult and time-consuming to implement and costly in terms of computing resources.

However you employ them, routing techniques such as these can be an invaluable part of the software development process, particularly as the industry moves away from traditional, monolithic applications toward cloud-native systems based on microservices. By applying one, some, or all of these techniques while remaining mindful of their specific advantages, applications teams can better ensure the integrity and success of their projects and move more confidently into production.

Manuel Zapf is the head of product OSS at Containous, a cloud-native networking company behind the open source projects Traefik and Maesh.

New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to newtechforum@infoworld.com.

Copyright © 2020 IDG Communications, Inc.
