The term “microservices” might be relatively new, but the practice of breaking applications into single-function, discrete services has gone on for years -- long enough for best practices to emerge.
Recently I spoke with Owen Garrett, head of product for Nginx, an open source company whose Web server powers roughly one in five of the top one million websites. Nginx is also a major player in the Docker ecosystem, the rampant growth of which has been accelerated by the microservices trend. As Garrett notes, the Nginx Docker image is one of the most downloaded images on Docker Hub, the go-to repository for prepackaged Docker apps and components.
Garrett has had a unique opportunity to witness how Nginx is being used in microservices architecture across a broad range of customer deployments. When asked for a good illustration of microservices architecture in action, however, Garrett picks the familiar example of Amazon.com:
When you go to Amazon.com and type in “Nike shoe,” over 170 individual applications get triggered potentially from that search -- everything from pricing to images of the shoes to reviews of the shoes to recommendations of other products you may want to purchase. Each of those were individual services or subfeatures, if you like, of an application or an overarching experience, and all those were connected via HTTP. Each might be built in different languages. Each of those may have different requirements in terms of the data store, in terms of scaling and automation. Those were the attributes that we saw that were the fundamental anatomy of microservices architecture.
Microservices architecture is a direct descendant of SOA (service-oriented architecture) and is often described in similar terms: The simple REST protocol for APIs replaces SOA’s complex SOAP, and microservices tend to be more granular, but the general notion of assembling applications from services remains the same. Garrett, however, zeros in on another important difference: SOA required “heavyweight middleware” such as ESBs (enterprise service buses), which microservices architecture rejects:
What we’re seeing in terms of traffic flow … is that Layer 7 traffic, HTTP, should naturally and natively live within the application, not within the network. One of the things that Nginx has been able to deliver for developers is control. In the past, they had a bottleneck, where everything they needed to do in terms of bringing on these services or configurations had to go through a network engineer. With Nginx they can manage that traffic within their own application themselves. They can load balance; they can do A/B testing. They can send ghost traffic to a mobile version of the app to test its performance. All of that is now becoming part of this development environment and under the accountability and authority of the development team.
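To make the quote concrete, here is a minimal sketch of what that developer-managed traffic control can look like in an Nginx configuration, using the real `split_clients` and `mirror` directives. The upstream names and addresses (`app_v1`, `app_v2`, `mobile_backend`) are hypothetical, chosen only to illustrate the three techniques Garrett mentions: load balancing, A/B testing, and ghost (mirrored) traffic.

```nginx
# Hypothetical backends for illustration only.
upstream app_v1 {
    server 10.0.0.10:8080;   # requests are load balanced across this pool
    server 10.0.0.11:8080;
}
upstream app_v2 {
    server 10.0.0.20:8080;
}
upstream mobile_backend {
    server 10.0.0.30:8080;
}

# A/B test: send 10% of clients (keyed on client IP) to the new version.
split_clients "${remote_addr}" $app_upstream {
    10%     app_v2;
    *       app_v1;
}

server {
    listen 80;

    location / {
        mirror /shadow;                  # copy each request as "ghost" traffic
        proxy_pass http://$app_upstream; # A/B-selected, load-balanced backend
    }

    # Mirrored requests hit the mobile backend; its responses are discarded,
    # so real users never see the results of the test traffic.
    location = /shadow {
        internal;
        proxy_pass http://mobile_backend$request_uri;
    }
}
```

Because this lives in the application's own proxy layer rather than in network middleware, the development team can change the split percentage or the mirror target without involving a network engineer -- which is exactly the shift in control Garrett describes.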
You might call this devops in the real world: Part of both the microservices and devops trends is that developers end up shouldering responsibilities that operations once held. But how, I asked, can such basic requirements as the reliable delivery of messages among services be guaranteed in this new model?
I think what you’re referring to is the old architecture of strong consistency. Developers now accept eventual consistency … We don’t have to have lots of large transactions, we don’t have to have the system blocked while a number of transactions go through and they’re all connected before the state is changed. Instead, for each piece of data, various architectures nominate one microservices component to be in charge of that piece of data … If something is out of sync, it is often decided by a rule developers created. That gives the developer a lot more control.
As Garrett implies, a key best practice in developing a microservices architecture is to begin by breaking down data into different types -- and then build microservices to handle each type. Behind the scenes at Amazon.com, for example, one microservice is responsible for deciding the price for each item in inventory. “That data will sit purely under the control of that one microservice, and it’s responsible for pushing updates to other microservices that need that information,” says Garrett.
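The ownership pattern Garrett describes -- one service is the sole writer of a piece of data and pushes updates to everyone who needs it -- can be sketched in a few lines. This is an illustrative, in-process mock (the class names are invented, and plain callbacks stand in for whatever message bus or HTTP push a real deployment would use); the point is the shape: downstream services hold an eventually consistent local copy and never write prices themselves.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class PricingService:
    """Sole owner of price data; all writes go through this service."""
    _prices: Dict[str, float] = field(default_factory=dict)
    _subscribers: List[Callable[[str, float], None]] = field(default_factory=list)

    def subscribe(self, callback: Callable[[str, float], None]) -> None:
        self._subscribers.append(callback)

    def set_price(self, sku: str, price: float) -> None:
        self._prices[sku] = price
        for notify in self._subscribers:  # push the update outward
            notify(sku, price)


class SearchService:
    """Downstream consumer: keeps an eventually consistent price cache."""
    def __init__(self) -> None:
        self.price_cache: Dict[str, float] = {}

    def on_price_update(self, sku: str, price: float) -> None:
        self.price_cache[sku] = price


pricing = PricingService()
search = SearchService()
pricing.subscribe(search.on_price_update)

pricing.set_price("nike-air-1", 99.95)
print(search.price_cache["nike-air-1"])  # → 99.95
```

Note that `SearchService` can serve slightly stale prices between pushes -- that is the eventual consistency Garrett contrasts with the old strong-consistency, blocking-transaction model.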
There’s no escaping, however, that the end result is a whole bunch of services to manage. Whether dev or ops maintains those services, keeping them all spinning seems like a formidable challenge. Garrett acknowledges this point:
People are still feeling their way to understand the right solutions. The patterns that we see are that microservices are split up to be as independent as possible, so that the relationships between them are minimized and clear. There is a clear contract of how one microservice depends upon another.
We see solutions like circuit breakers deployed. A circuit breaker is something that Adrian Cockcroft talks about quite a bit: The idea that if an individual microservice fails catastrophically, then the application needs to be able to isolate that component and somehow work around the failure … If [a microservice that recommends related products] fails, then you might end up with a small blank space on the Web page, but the application as a whole would continue to operate.
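The circuit-breaker idea above reduces to a small amount of code. The following is a minimal sketch, not any particular library's implementation: after repeated failures the breaker "opens" and calls a fallback instead of the failing service, so the page renders with a blank slot rather than crashing. Thresholds and names here are illustrative.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after repeated failures, retry later."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after   # seconds before a trial call is allowed
        self.failures = 0
        self.opened_at = None

    def call(self, func, fallback):
        # While open, skip the failing service until the reset window passes.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()
            self.opened_at = None        # half-open: allow one trial call
            self.failures = 0
        try:
            result = func()
            self.failures = 0            # success closes the breaker
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()


# Usage: a failing recommendations service degrades to an empty widget.
breaker = CircuitBreaker(max_failures=2)


def recommendations():
    raise ConnectionError("recommendations service down")


for _ in range(5):
    widget = breaker.call(recommendations, fallback=lambda: [])

print(widget)  # → [] -- the page shows a blank recommendations slot
```

After the second failure the breaker opens, so the remaining calls never touch the broken service at all -- isolating the failed component while the rest of the application keeps working, exactly as in Garrett's blank-space-on-the-page example.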
With so much written into the application, however, how do you manage the development of so many services at scale? The problem of ensuring developers are “doing it right” across thousands of microservices goes to the heart of a cultural change in application development as a whole:
That brings us to a people problem. You have to be prepared to give up some control. You can’t afford to be too restrictive over your development staff. There is a great saying that I believe came from Amazon, the “two-pizza” team. Two pizzas should be enough to feed a development team. With three or four people you can have a small, independent team that can focus on building the service … Then, if they’re given the freedom to implement that as they see fit using the skills and tools they have at hand, the expectation is that they will do it quicker, they will be more satisfied as employees, and you will have a more reliable system as a result.
With microservices, then, a distributed architecture also means distribution of responsibilities. Basically, cloud infrastructure and devops tools have given developers the control they need to take that responsibility without depending much at all on operations. The division of labor also encourages domain specialization -- the pricing service developers really know how pricing works and have an open line of communication with the pricing folks on the business side.
Microservices architecture is no panacea, and determining the boundaries between services generally takes multiple iterations before you get it right. Plus, clear divisions of responsibility work well until key people leave -- and you discover documentation about how a given service works is lacking. Moreover, many teams struggle with determining the best practices for versioning hundreds of services. Last but not least, the service orchestration tools are today far from mature.
But the payoff is manifold: Services can be updated individually, new applications can be built quickly from existing services, and management actually has greater visibility into who is responsible for what. Practically speaking, it's still too early to tell whether microservices architecture will succeed where SOA and earlier, similar schemes failed. In the end, the greater authority and satisfaction enjoyed by developers may be the x-factor that ultimately drives its success.