Stonebridge Bank averts a capacity crisis

This small regional bank cut its server count by a factor of five – and reduced hands-on overhead through remote administration

It's a dilemma faced by IT administrators everywhere. "We ran out of rack space, air conditioning capacity, and UPSes at the end of 2004, but we needed more servers," recalls George Rapp, senior vice president of IT for Stonebridge Bank, a regional institution in Pennsylvania. Getting more power in and more heat out was just not an option for the bank's datacenter, so Rapp consolidated multiple Unix servers into one box to reduce the physical footprint and delay the crisis. "But it got us only part of the way," he notes.

In 2005, EMC released VMware ESX version 2, giving Rapp a practical server virtualization option for the first time. By converting physical servers to virtual ones, several of which can run at a time on one physical server, the bank has dodged the capacity bullet. It has gone from having 131 physical servers to just 26, even though it has more server instances, and thus needs just two of its five racks and 37 percent of its UPS capacity. Even as the bank grows, it will be at least five years before server growth pushes its datacenter's physical space, power, and heat limits, Rapp says.
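The arithmetic behind that breathing room is simple enough to sketch. Here is a minimal Python calculation using only the figures cited above (131 physical servers down to 26, two of five racks in use, 37 percent of UPS capacity drawn):

```python
# Figures cited in the article.
servers_before = 131      # physical servers before virtualization
servers_after = 26        # physical servers after virtualization
racks_used, racks_total = 2, 5
ups_load = 0.37           # fraction of UPS capacity now in use

print(f"Consolidation ratio: {servers_before / servers_after:.1f}x")        # ~5.0x
print(f"Rack headroom: {racks_total / racks_used:.1f}x current footprint")  # 2.5x
print(f"UPS headroom:  {1 / ups_load:.1f}x current draw")                   # ~2.7x
```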

Today, the bank uses three Hewlett-Packard DL-585 servers to run VMware, each with four dual-core CPUs and 32GB of RAM. Rapp chose the AMD-powered DL-585 over the comparable Intel-powered HP model because his tests showed the AMD box could run about 25 VMs while the Intel box could run just 12. "We have 77 critical VMs on the three servers," Rapp notes.

Putting so many eggs in one basket can increase risk if a physical server fails, so he hasn't fully loaded any of the DL-585s. Worst case: One physical server could run all the mission-critical VMs, Rapp says. To further ensure availability in case of disaster, the bank has connected all the servers via SAN, so it can quickly move the server images to other equipment if necessary, using VMware's VMotion utility. (VMotion requires that all servers have the same CPU family, forcing a full refresh of all VMware servers rather than permitting an incremental replacement strategy, Rapp notes.) The VM configuration software also resides on the SAN, so it is available no matter which VMware servers are up. "We realized that it [the VM image] is just data," he says.
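That worst case boils down to whether a single host's memory can cover every mission-critical VM. The sketch below illustrates the check against one DL-585's 32GB; the VM names and per-VM memory reservations are hypothetical placeholders, not figures from the bank:

```python
# Hypothetical sanity check of the worst case Rapp describes: could one
# host (32GB of RAM in this deployment) carry every mission-critical VM?
# The per-VM reservations below are illustrative assumptions only.
HOST_RAM_GB = 32

critical_vms = {
    "domain-controller": 1.0,
    "core-banking-app":  4.0,
    "intranet-web":      2.0,
    "monitoring":        1.0,
    # ...remaining critical VMs would be listed here
}

needed = sum(critical_vms.values())
print(f"Critical VMs need {needed:.1f} GB of {HOST_RAM_GB} GB "
      f"({needed / HOST_RAM_GB:.0%} of one host)")
if needed <= HOST_RAM_GB:
    print("Worst case holds: one host could carry the critical set.")
else:
    print("Worst case fails: trim reservations or spread the critical set.")
```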

The deployment of virtual servers as image data on a SAN also has reduced data management costs and the IT staff's travel requirements, Rapp notes. The bank needs fewer backup agents and defragmentation applications because it has fewer physical servers, he says, citing a cost decline from $230 per server per month to $31. Travel needs are reduced because if a server doesn't come up properly, IT staff can just reload the image from headquarters, saving a drive of as much as 260 miles round trip. "It's easier to manage remote branches now," Rapp says. Even at headquarters, "we don't even go into the server room any more," he says. "We don't even turn on the lights." Before using virtualization, IT staff had to go in several times a day to deal with server maintenance.
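Taking Rapp's per-server figures at face value, the management-software savings are easy to estimate. The fleet-level totals below assume the quoted rates apply to the physical-server counts given earlier, an assumption the article does not spell out:

```python
# Per-server management cost Rapp cites (backup agents, defrag licenses, etc.).
cost_before = 230.0   # USD per server per month, pre-virtualization
cost_after = 31.0     # USD per server per month, post-virtualization

print(f"Per-server saving: ${cost_before - cost_after:.0f}/month "
      f"({1 - cost_after / cost_before:.0%} lower)")

# Fleet-level estimate, assuming the rates apply to the physical-server
# counts given earlier (131 before, 26 after).
print(f"Before: 131 x ${cost_before:.0f} = ${131 * cost_before:,.0f}/month")
print(f"After:   26 x ${cost_after:.0f} = ${26 * cost_after:,.0f}/month")
```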

But there are some servers that virtualization just can't handle, Rapp notes, so they continue to reside on dedicated physical servers. Among those nonvirtualized servers are the VoIP servers, whose I/O needs are too high. The bank also keeps file servers on dedicated boxes for the extra I/O speed: because virtualization forces several server instances to share disks, CPUs, buses, and network connections, some I/O slowdown is inevitable. In many cases that slowdown doesn't affect overall performance, but it can be an issue for I/O-intensive workloads such as file servers, e-mail servers, and database servers, Rapp notes. "Everything that doesn't require sophisticated timing, specialized hardware, or high performance now goes onto VMware servers," he says.
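Rapp's rule of thumb amounts to a simple screen on each workload's profile. Here is a hedged sketch of that triage logic, using an invented workload list rather than the bank's actual inventory:

```python
# Illustrative triage following Rapp's rule of thumb: keep workloads with
# heavy I/O, timing sensitivity, or special hardware on dedicated boxes;
# virtualize the rest. The workload list is invented for the example.
workloads = [
    # (name, io_intensive, timing_sensitive, needs_special_hardware)
    ("voip-server",       True,  True,  False),
    ("file-server",       True,  False, False),
    ("telephone-banking", False, False, True),
    ("intranet-web",      False, False, False),
    ("print-server",      False, False, False),
]

for name, io_heavy, timing, hw in workloads:
    keep_physical = io_heavy or timing or hw
    target = "dedicated hardware" if keep_physical else "VMware host"
    print(f"{name:18s} -> {target}")
```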

In some cases, a server's operating system can be tweaked so the machine can be deployed as a VM after all, Rapp says. That was the case for his RSA security-token servers, which needed fine-tuning to ensure there was no lag between the issuance of a new key and the processing of a key response from a user. In other cases, the obstacle is physical. For example, telephone-banking servers require special hardware cards for telephony functions, which can't be plugged into a VM.

Beyond better utilization and lower power consumption, virtualization lets the bank deploy servers faster: it can create them in 10 to 20 minutes from standard images, rather than hand-install them as it did with physical machines, a two-week process, Rapp says. But the ease of deployment has a dark side, he notes. Because it's so easy to make a new server, "proliferation is an issue," he says, one that could drive up overall management overhead.
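One low-effort guard against that proliferation is simply keeping an inventory and flagging VMs with no owner or no recent review. The sketch below is hypothetical; the article does not describe how the bank polices sprawl:

```python
from datetime import date, timedelta

# Hypothetical VM inventory check: flag guests with no owner or no review
# in the last 90 days. The records below are invented for illustration.
REVIEW_WINDOW = timedelta(days=90)
as_of = date(2007, 3, 1)

inventory = [
    {"name": "test-web-01",  "owner": "apps team", "last_review": date(2006, 11, 2)},
    {"name": "scratch-db",   "owner": None,        "last_review": date(2006, 6, 15)},
    {"name": "branch-print", "owner": "ops",       "last_review": date(2007, 2, 20)},
]

for vm in inventory:
    stale = (as_of - vm["last_review"]) > REVIEW_WINDOW
    if vm["owner"] is None or stale:
        print(f"Review needed: {vm['name']} (owner={vm['owner']}, "
              f"last review {vm['last_review']})")
```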
