To the cloud! Real-world container migrations

Forward-thinking organizations offer real-world lessons for containerizing enterprise apps for the cloud

We hear a lot from vendors and service providers about the wisdom of migrating applications and workloads to the cloud. The potential benefits include lower capital costs and increased flexibility.

With the advent of containers, porting your organization’s legacy code to the cloud should be a snap -- in theory. But how feasible is a “lift and shift” strategy when it comes to legacy apps? At this point, the method remains largely in its early, experimental stages.

Here we take a look at two forward-thinking organizations that are using containers to port their applications to the cloud. Their setups and hard-earned lessons should give you a better sense of how -- and where -- to take advantage of this emerging application migration strategy.

The promise of containers to the cloud

While underway at some organizations, the lift-and-shift strategy of porting legacy apps to the cloud via containers remains very much provisional.

“This is not a common scenario, as we do not recommend using containers as a vessel for legacy applications and workloads -- either for on-premises or as a means to migrate them to the cloud,” says Traverse Clayton, research director of cloud and application platforms at Gartner.

The reason is that containerizing a legacy app carries all of its “technical debt” into the container, Clayton says. “The exception is when you can reap the app lifecycle benefits of containerizing an app,” he adds, noting that because they enable faster development iterations, containers can extend the lifecycle of your organization’s applications.

While the research firm is not seeing a large push toward porting apps with containers, there are certain business drivers for doing so. For one, organizations are shifting from current virtualization technologies to the containerization of applications. For another, there is a push from development teams to standardize on containers as the de facto unit of deployment across the entire lifecycle of an application.

A potential benefit is cost optimization. The “perception is that containerizing applications will increase your density on servers, leading to cost savings,” Clayton says. “While this strategy has been proven successful for mega web-scale companies and cloud providers delivering services, there is not any quantifiable evidence this will be a tangible benefit for traditional enterprise IT shops doing this.”

Major cloud service providers currently offer containers as a service (CaaS), which enables development and operations teams to take advantage of containers at scale without the overhead of managing the underlying container orchestration and management infrastructure, Clayton says.

Another possible driver is that PaaS vendors now offer customers the ability to run containers on their platforms as a standard unit of deployment.

“Most customers [that] are delivering applications on a PaaS are looking to take advantage of cloud characteristics and maximize their investment over IaaS,” Clayton says. “The ability to run containers in this environment is appealing because it reduces the friction between the development teams and operations teams.”

In addition, there are innovations happening at the network, storage, and security layers to supply development teams with plug-ins to smooth the transition of running legacy applications in containers.

Here are a few examples of organizations that are moving ahead with strategies to port apps and workloads to the cloud using containers.

LinkedIn

While LinkedIn’s path to the cloud is that of a web-scale organization rather than a typical enterprise, its experience with containerizing apps is instructive nonetheless.

The business social media provider’s site is primarily served up by hundreds of Java-based microservices that the company has created over the years, says Steve Ihde, director of engineering at LinkedIn.

“There are many instances of each microservice,” Ihde says. “When we move them to our private cloud, we move them from an environment where humans spend time deciding how to place each job on a server and how much load a server can handle, to an environment where an automated system places the job where it fits best and ensures that the job is guaranteed a bounded slice of server resources.”
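LinkedIn hasn’t published the internals of that placement system, but the behavior Ihde describes -- fitting each job onto the host where it fits best while guaranteeing it a bounded slice of resources -- is essentially resource-aware bin packing. The following sketch is purely illustrative; the Job, Host, and place names are hypothetical and not part of any LinkedIn tool:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Job:
    name: str
    cpu: float  # cores guaranteed to the job
    mem: float  # GiB guaranteed to the job

@dataclass
class Host:
    name: str
    cpu_free: float
    mem_free: float
    jobs: list = field(default_factory=list)

def place(job: Job, hosts: list) -> Optional[Host]:
    """Best-fit placement: choose the host that can still honor the
    job's resource guarantee and would have the least slack left over."""
    fits = [h for h in hosts if h.cpu_free >= job.cpu and h.mem_free >= job.mem]
    if not fits:
        return None  # no host can guarantee the requested slice
    best = min(fits, key=lambda h: (h.cpu_free - job.cpu) + (h.mem_free - job.mem))
    best.cpu_free -= job.cpu   # reserve the job's bounded slice
    best.mem_free -= job.mem
    best.jobs.append(job.name)
    return best

hosts = [Host("host-a", cpu_free=16, mem_free=64),
         Host("host-b", cpu_free=8, mem_free=32)]
print(place(Job("profile-service", cpu=2, mem=4), hosts).name)  # host-b
```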

LinkedIn relies on the Docker project's open source runC runtime, which implements the industry-standard Open Container Initiative (OCI) specification, to handle all the details of setting up the containers on each host.
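runC itself operates on an OCI “bundle”: a directory holding a root filesystem plus a config.json that describes the container. As a hedged illustration of the mechanics -- the bundle path and container ID below are made up, and the rootfs is assumed to be populated already, for example by exporting a Docker image:

```python
import subprocess

bundle = "/tmp/demo-bundle"  # hypothetical OCI bundle; rootfs/ already populated

# "runc spec" writes a default config.json into the bundle directory.
subprocess.run(["runc", "spec"], cwd=bundle, check=True)

# "runc run" creates and starts a container from the bundle; the last
# argument is an arbitrary, unique container ID. (Typically run as root.)
subprocess.run(["runc", "run", "demo-container"], cwd=bundle, check=True)
```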

“For most other aspects of the system, including topology management -- how to place jobs on hosts -- service provisioning, and image management, we have developed our own solution known as LPS or LinkedIn Platform as a Service, in order to leverage the extensive systems we already have in place,” Ihde says.

Over the past few years, LinkedIn has been exploring ways to get more resources out of a smaller hardware footprint, while at the same time increasing productivity by making its software stack more application-oriented.

“Our bet was that by abstracting the problems of deployment, resource provisioning, and dependency management at scale, we’d be massively increasing productivity for our software engineers and SRE [site reliability engineering] team members,” Ihde says.

LinkedIn's foundation and data teams provide the platform that the rest of the company uses internally to create products and services for LinkedIn members. “Our goal was to make these back-end systems ‘invisible’ to the people that work on them, without adding complexity or impacting performance,” Ihde says.

The company has seen “dramatic” improvements in productivity for everyday tasks such as scaling up a microservice by deploying more instances, because this is now a much more automated process. “We've also seen that we can improve our hardware utilization by deploying the same number of microservice instances on fewer than half the number of servers as before,” Ihde says.

As for the challenges in porting applications to the cloud using containers, one of the biggest is meeting users’ expectations.

“We've found that the users of the system, our internal developers and SREs, have very high expectations, and rightly so,” Ihde says. “Providing perfect isolation of jobs from each other in every dimension, including difficult areas such as guaranteeing a certain level of bandwidth to spinning hard disks on the host, is not a solved problem in the industry. But when we tell our users we are providing isolation, they expect the very best.”
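Docker does expose coarse block-I/O controls through the kernel’s cgroups -- for example, the --device-read-bps and --device-write-bps options on docker run -- but these are throttles, not reservations, which is why a firm bandwidth guarantee on a shared spinning disk remains elusive. A hedged sketch, with the device path and limits chosen purely for illustration:

```python
import subprocess

# Cap this container's reads and writes against /dev/sda at 10 MB/s each.
# The flags map to blkio cgroup throttle settings: they bound the
# container's I/O but do not reserve bandwidth for it.
subprocess.run([
    "docker", "run", "--rm",
    "--device-read-bps", "/dev/sda:10mb",
    "--device-write-bps", "/dev/sda:10mb",
    "ubuntu:14.04",
    "dd", "if=/dev/zero", "of=/out", "bs=1M", "count=64", "oflag=direct",
], check=True)
```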

Cornell University

The university has been using Docker to containerize its workloads since September 2013 as it ramped up an effort to move its IT infrastructure to the cloud. Cornell’s on-premises infrastructure “suffered from all the classic problems,” says Shawn Bower, cloud architect at the university.

That included regular breakdowns, time-consuming fixes, and the associated costs. “We knew moving to the cloud and embracing ephemerality was the right thing and we wanted to embrace immutable infrastructure, which is where Docker came in.”

Prior to the release of Docker 1.3, “there was no exec command, which meant there was no way to change the running container if it was not running” Secure Shell (SSH), a cryptographic network protocol for operating network services securely over an unsecured network, Bower says.
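Docker 1.3’s docker exec closed that gap: it starts an additional process inside an already-running container, so images no longer need to carry an SSH daemon just to allow a shell. A minimal example, with a hypothetical container name:

```python
import subprocess

# Open an interactive shell inside the running container named "wiki" --
# no sshd baked into the image required. ("wiki" is a made-up name.)
subprocess.run(["docker", "exec", "-it", "wiki", "/bin/sh"], check=True)
```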

“We started to architect our systems with the explicit goal of not needing SSH, and Docker was the cornerstone of this effort,” Bower says.

The migration to Docker is part of the university’s cloud migration effort. “We are looking to move the majority of our on-premises datacenter to the cloud over the next three years,” Bower says. A number of factors led to the decision to move to the cloud.

“It’s impossible for a small IT organization to keep pace with cloud vendors such as AWS,” Bower says. “They are pushing the boundaries of server management and security further than we could ever hope to. By embracing the [agility] of compute resources and the ability to automate repetitive operation tasks, we have seen both a reduction in hardware cost as well as employee time” dealing with IT infrastructure issues.

One of the first containerized applications the university moved into production in the cloud was its wiki, which runs on Confluence collaboration software. “In the six months before this project we spent 1,770 staff hours supporting Confluence,” Bower says. “In the six months after we spent 178 hours.”

As with adopting any new technology, there have been difficulties. “After our first application went live we ran into a memory leak in the logs buffer,” Bower says. “The learning curve for Docker can be steep. The open source project releases new engines rapidly, more rapid than we can keep up with. Having commercial support for the Docker engine has been key to our success.”

In porting apps via containers, Cornell has found that breaking the applications into their components helps simplify the process. “It gives us the ability to test, patch, and maintain each component independently,” Bower says. “We have built the basic building blocks of our services and through composition we can assemble new services.”
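In Docker terms, that composition typically means running each building block as its own container and wiring the pieces together, for instance over a user-defined network so they can reach one another by name. A hedged sketch -- the network name and the application image are invented, though the pattern and commands are standard Docker:

```python
import subprocess

def sh(*args):
    subprocess.run(args, check=True)

# A user-defined bridge network lets containers resolve each other by name.
sh("docker", "network", "create", "wiki-net")

# Each component runs -- and can be tested, patched, and replaced -- independently.
sh("docker", "run", "-d", "--name", "wiki-db", "--net", "wiki-net", "postgres:9.5")
sh("docker", "run", "-d", "--name", "wiki-app", "--net", "wiki-net",
   "-p", "8090:8090", "example/confluence")  # hypothetical application image
```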

Copyright © 2016 IDG Communications, Inc.