A year after Oregon's Multnomah County deployed an on-premises portfolio management application, the two IT staffers dedicated to it resigned. Other staff struggled to maintain the specialized server environment. Left with no other option to guarantee support of the mission-critical tool, the county leapt into the cloud.
"All of our IT projects are tracked through Planview," says Staci Cenis, IT project manager for Multnomah County, which includes Portland. "We use it for time accountability and planning. Monitoring scheduled and unscheduled maintenance shows us when staff will be free to take on another project."
Initially the county had two dedicated Planview administrators, Cenis explains. But over a period of around three months in 2009, both left their jobs at the county, "leaving us with no coverage," Cenis says. "We didn't have anyone on staff who had been trained on the configuration of our Planview instance or understood the technical pieces of the jobs that run within the tool to update the tables," among other things.
Cenis hadn't considered the cloud before that issue, but agreed to abandon the in-house software in favor of Planview's software-as-a-service (SaaS) offering after assessing the costs. Training other IT staffers on server, storage, and backup administration, recovery, and upgrades alone would have compounded the on-premises software's expense, Cenis says.
Nowadays, with the infrastructure and application administration offloaded to the cloud, IT can handle most configuration, testing and disaster recovery concerns during a regularly scheduled monthly call. "I wish we had gone with the cloud from the start because it has alleviated a significant burden," Cenis says, especially in the area of software upgrades.
Each upgrade handled by the application provider instead of her team, she estimates, adds numerous hours back into her resource pool. "What would have taken us days if not weeks to troubleshoot is generally answered and fixed within a day or two," she adds. At the same time, users can access the latest software version within a month or two of its release.
Multnomah County's embrace of the cloud is one of five models becoming more common today, according to Anne Thomas Manes, vice president and distinguished analyst at Gartner.
Gartner categorizes them as follows:
- Replace, as Multnomah County did by ripping out infrastructure and going with SaaS;
- Re-host, where IT still manages the software, but it is hosted on external infrastructure such as Amazon, HP or Rackspace public or private cloud servers;
- Refactor, where some simple changes are made to the application to take advantage of platform-as-a-service;
- Revise, where code or data frameworks have to be adapted for PaaS;
- Rebuild, where developers and IT scrap application code and start over using PaaS.
"Not a lot of companies rebuild or do a lot of major modifications to migrate an application to the cloud. Instead, they either replace, re-host or refactor," Manes says.
Primarily, enterprises view the cloud as an escape hatch for an overworked, out-of-space data center. "If you're faced with the prospect of building a new data center, which costs billions of dollars, it certainly saves money to take a bunch of less critical applications and toss them into the cloud," Manes says.
Problems in paradise?
However, since first observing the cloud frenzy years ago, Manes recognizes companies have taken their lumps. "Many business leaders were so eager to get to the cloud that they didn't get IT involved to institute proper redundancy or legal to execute proper agreements," she says. Such oversights have left them vulnerable technologically and monetarily to outages and other issues.
Companies that moved applications and data to the public cloud early on also didn't always plan for outages with traditional measures such as load balancing. "Even if an outage is centralized in one part of the country, it can have a cascading effect, and if it lasts more than a day can cause a real problem for businesses," she says.
But Dave Woods, senior process manager at business intelligence service SNL Financial, disagrees. SNL Financial aggregates and analyzes publicly available data from around the world for its clients. Despite having a sizeable internal data center, the company's homegrown legacy workflow management application was testing its limits.
"Our data center was full" with both internal and customer-facing applications and databases, Woods says. The company didn't do a full-on analysis to find out whether it was server space or cooling or other limitations -- or all of the above -- but at some point it became clear that they were running out of capacity, and cloud software became attractive.
Though he briefly considered rebuilding the application and building out the data center, the costs, timeframe and instability of the code dissuaded him. "The legacy application lacked the design and flexibility we needed to improve our processes," Woods says. The goal, in other words, was not just to rehost the application but to do some serious workflow process improvement as well.
To accomplish this, SNL Financial adopted Appian's cloud-based business process management system. Although the annual licensing cost was similar to the on-premises software the firm had been using, the clincher was avoiding the $70,000 in hardware costs that would have been needed to update the application at the time. (SNL has since built a "spectacular new onsite data center," Woods says, so it's no longer an issue.)
SNL Financial is expanding its workflow processes to more than 500 banks in Asia, with Woods crediting the cloud for allowing this type of scalability and geographic reach. "We wouldn't have been able to improve our legacy workflow in this way. There was a much longer IT development life cycle to contend with. Also, the application wouldn't have had as much capability," he says.
"These platforms are mission-critical to us, not a side project," Woods explains. "They affect our business engine at our core and they have to enable us to fulfill our timeline guarantees to our customers," he says.
The processes Woods refers to are those involving collecting, auditing and reviewing data and news for specific industries -- the information that SNL sells to clients, in other words.
That's not to say there haven't been some bumps on the road to the cloud. Woods says that while IT was brought in at the start of the decision-making, his process-improvement team missed the mark on making sure IT was fully informed. "We found that no matter how much we thought we were doing a good job communicating with IT and networking, over-communication is the order of the day," he says.
Building up trust in the cloud
NASA's Jet Propulsion Laboratory (JPL) has a similar stick-to-it attitude with the cloud. With more than 100 terabytes spread across 10 different services, JPL's trust in the cloud built up over time.
Its first foray came in 2009, when reality sank in that the 30-day Mars Exploration Rover (MER) mission would last far longer than originally thought, and would demand far more resources than the internal data center could handle. (MER is still sending data back to Earth.)
"All of our IT systems had filled up. We either needed to build new IT systems internally or move to the cloud," says Tom Soderstrom, CTO.
Soderstrom and his team of technicians and developers used Microsoft's then-nascent Azure platform to host JPL's "Be a Martian" outreach program. Immediately, JPL saw the benefit of the cloud's elasticity, which lets it spin up resources in line with user demand.
In fact, outreach has proven a fertile playground for JPL's cloud efforts, such as using Google Apps as the foundation for its "Postcard from Mars" program for schoolchildren. Soderstrom calls the platform ideal because it enables an outside-the-firewall partnership with developers at the University of California, San Diego.
External developers are simply authorized in Google -- by JPL's IT group -- to work on the project. "If we used the internal data center, we would have had to issue them accounts and machines, get them badged by JPL, and have them go into schools to install and manage the application code," Soderstrom says. "The cloud approach is less expensive and more effective."
JPL also taps Amazon Web Services for various projects, including its contest for EclipseCon, the annual meeting of the Eclipse open-source community. "All testing, coding and scoring is done in Amazon's cloud so our internal data centers don't have to take the hit," he says.
The cloud benefits internal projects, too, including processing data from the Mars missions. To tile 180,000 images sent from Mars, the internal data center would have to run servers around the clock for 15 days or more, and JPL would have to foot the cost of that infrastructure and spend time on provisioning specifications down to the type of power plug required.
In contrast, the same process took less than five hours using the Amazon cloud and cost about $200, according to Soderstrom.
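A rough back-of-the-envelope calculation, using only the approximate figures quoted above, shows the degree of parallelism that cloud run implies:

```python
# Back-of-the-envelope math for the Mars image-tiling job.
# All figures are the approximate numbers quoted in the article.
IMAGES = 180_000
ON_PREM_DAYS = 15        # estimated round-the-clock on-premises run
CLOUD_HOURS = 5          # observed cloud run time
CLOUD_COST_USD = 200     # observed cloud cost

on_prem_hours = ON_PREM_DAYS * 24            # 360 hours
speedup = on_prem_hours / CLOUD_HOURS        # ~72x faster
# If the tiling work parallelizes cleanly, the speedup roughly
# equals the number of workers running concurrently.
implied_workers = round(speedup)

print(f"Speedup: ~{speedup:.0f}x (~{implied_workers} parallel workers)")
print(f"Cost per image: ~${CLOUD_COST_USD / IMAGES:.4f}")
```

In other words, a job that is embarrassingly parallel, as image tiling typically is, trades 15 days on a handful of owned servers for a few hours across dozens of rented ones, at a fraction of a cent per image.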
As cloud use grows in popularity and criticality, JPL continues to beef up its cloud-based disaster recovery/business continuity, using multiple geographic zones from a single service provider as well as multiple vendors. "We always have failover for everything and consider it as insurance," he says. For the summer Mars landing, JPL instituted a double-failover system. "All cloud vendors are going to have outages; you just have to determine how much failover is required to endure it," he says.
For its data on Amazon, JPL switched on load balancers to move data between zones as necessary. "Previously, network engineers would have been needed to do that kind of planning; now app developers can put in these measures themselves via point and click," Soderstrom says.
There have been hiccups along the way, such as trying to match the application to the cloud service. "Cloud services used to be a relationship between a provider and a business leader with a credit card," Soderstrom says. Now, "we make sure IT is involved at every level," he explains.
To accomplish this, JPL has standardized its cloud provisioning overall, creating an online form that business leaders and developers fill out about their project. Based on preset templates created by IT, their plain-English answers to questions such as "Are you going to need scalability?" and "Where is your customer and where is your data?" determine which cloud service, and what level of resources, they will need.
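A minimal sketch of how such a template-driven chooser might work; the question keys, answer values, and template rules below are purely illustrative assumptions, not JPL's actual form:

```python
# Hypothetical template chooser: maps plain-English questionnaire
# answers to a preset deployment template defined by IT.
# Question names, answers, and templates are illustrative only.

def choose_template(answers: dict) -> dict:
    """Pick a deployment template from provisioning-form answers."""
    needs_scaling = answers.get("need_scalability", "no") == "yes"
    data_location = answers.get("data_location", "internal")

    if data_location == "internal":
        # Internal or sensitive data stays behind the firewall.
        return {"service": "private-cloud", "autoscaling": needs_scaling}
    if needs_scaling:
        # Public-facing, elastic workloads go to a public cloud.
        return {"service": "public-cloud", "autoscaling": True}
    return {"service": "public-cloud", "autoscaling": False}

plan = choose_template({"need_scalability": "yes",
                        "data_location": "public"})
print(plan)  # {'service': 'public-cloud', 'autoscaling': True}
```

The point of the design is that IT encodes its judgment once, in the templates and rules, and requesters never have to know which provider or instance size sits behind their answers.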
The move to self-service provisioning has meant retraining system administrators to be knowledgeable about cloud use cases. Also, IT security staffers serve as consultants for the cloud environment, vetting and hardening operating system and application builds.
Though this sounds like a complicated evolution, Soderstrom says the technical challenges presented by the cloud have been easy compared with the legal ones. Legal is front and center in all negotiations to ensure appropriate licensing, procurement and compliance deals are struck and adhered to.
In all its cloud contracts, JPL includes language about owning the data. In case of service shutdown, a dispute or other agreement termination, the provider must ship all data back on disks, with NASA picking up the labor tab.
Overall, though, Soderstrom says he is glad he made the leap. "Cloud is changing the entire computing landscape and I'm very comfortable with it. Nothing has been this revolutionary since the PC or the Internet."
Tips for getting to the cloud