Bringing branch office servers and file shares back to the data center is generally easy to do, especially with the use of WAN optimization solutions. But sometimes -- for the sake of performance, practicality, or politics -- a server simply must remain in the branch. It was for these intransigent servers, and to satisfy the needs of both server-hugging branch offices and control-hungry IT, that Riverbed Granite was born.
By pairing appliances at the edge and at the core, Granite allows IT to "project" virtual machines and iSCSI storage volumes out to the branch office while keeping the actual assets in the data center. Through block prefetching and local caching, Granite closes the gap between physical servers in the branch office and storage in the data center. As a result, VMware ESX and Microsoft Hyper-V servers running in the branch can launch virtual machines across the WAN, and the VMs can write back to storage located in the data center.
Granite is available as a stand-alone product or as a bundled component on a Steelhead EX appliance. When Granite is combined with Steelhead WAN acceleration, performance improves dramatically, especially on subsequent VM launches, rivaling the speed of true local storage.
Store centrally, execute locally
Granite is a different kind of creature than Steelhead. Steelhead accelerates a wide range of TCP and UDP traffic over a WAN through application- and protocol-specific optimization engines. Steelhead also reduces bits over the WAN through data deduplication, and it compresses data to get more on the wire. Granite, on the other hand, is specifically designed to export iSCSI storage resources located in the data center across the WAN and present them as local storage.
Although Granite will work with any file system, prediction and prefetching are limited to VMFS (VMware's file system) and NTFS (Microsoft's). Granite leverages its awareness of how these file systems map storage blocks to anticipate which blocks will be needed and proactively send them to the edge. For instance, Granite can recognize when an operating system is booting or when a large file has been accessed, then respond with all of the necessary blocks before the requests even arrive.
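The idea behind file-system-aware prefetching can be sketched in a few lines. This is purely an illustration, not Riverbed's implementation: the block numbers, the `FOLLOW_MAP` table, and the function name are all hypothetical stand-ins for whatever block-map intelligence Granite Core actually applies.

```python
# Illustrative sketch of file-system-aware prefetching (not Riverbed's code).
# When the edge asks for one block, the core consults a hypothetical block
# map for the LUN's file system and pushes the blocks likely to follow.

# Hypothetical follow-on map: for an NTFS-style boot, the boot sector is
# typically followed by loader blocks and the start of the MFT.
FOLLOW_MAP = {
    0: [1, 2, 3, 16, 17],   # boot sector -> loader blocks + start of MFT
    16: [17, 18, 19],       # MFT start -> subsequent MFT blocks
}

def blocks_to_send(requested_block, already_cached):
    """Return the requested block plus predicted follow-on blocks
    that the edge does not already hold."""
    predicted = FOLLOW_MAP.get(requested_block, [])
    wanted = [requested_block] + predicted
    return [b for b in wanted if b not in already_cached]

# First boot: nothing cached, so the request plus all predictions go out.
print(blocks_to_send(0, already_cached=set()))
# Later boot: most predicted blocks are already cached at the edge,
# so only the misses cross the WAN.
print(blocks_to_send(0, already_cached={1, 2, 16, 17}))
```

The payoff is exactly what the review describes: on a first OS boot every block crosses the WAN, but once the working set is cached, a prefetch-aware core sends only what the edge is missing.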
If you pair Granite with Steelhead, you'll also reap the benefits of data deduplication over the WAN. Among other things, this means the bits comprising the branch office's virtual machines will be delivered locally from the Steelhead EX cache. Thus, virtual machines stored in the data center -- where they can be easily backed up and maintained by IT -- can perform nearly as well as if they were stored locally.
Tested from coast to coast
I tested Granite in my lab in Florida against a storage array located completely across the country. My Granite Edge instances ran on two Steelhead EX appliances, deployed in a hot-standby configuration, and connected across a VPN back to a Granite Core appliance and an EMC storage array located in San Francisco. The EMC system was carved into multiple LUNs, some with Windows Server 2008 virtual machines already in place and others providing raw storage. From my local VMware ESX server, I was able to connect to four different LUNs and add each virtual server and disk into my inventory.
To test performance, I then booted a VM over the VPN and timed the event. The first launch of Windows Server 2008 took approximately 13 minutes to get to the log-in prompt. Once the VM was running, there was a slight delay whenever Windows Server performed a task for the first time, such as opening Server Manager, as the new bits made their first trek over the VPN. But after these initial delays and all the OS bits had been cached locally in the Steelhead, Windows Server worked at or near the speed of local storage. After just a short while, navigating the Windows Server UI and using various applications felt no different than if they had been running on local hardware. A reboot of the server was much faster, requiring only about a minute and a half thanks to Steelhead's caching.
While Granite can export any iSCSI LUN, its optimizations are specific to VMFS and NTFS. Whenever Granite Edge requests data from a LUN formatted with one of these file systems, Granite Core will predict and prefetch the desired blocks (such as the boot blocks when a Windows OS is launching). Other file systems, such as ext3, can still be used, but they don't benefit from prefetching.
Granite Edge volumes come in two flavors: pinned, meaning a complete copy of the volume is kept at the edge, and unpinned. An unpinned LUN consumes only cache storage in the Granite Edge; the rest of the data from the volume, if and when requested, is fetched over the WAN from Granite Core. This is a plus from a security standpoint, because the only complete copy of the volume stays safe in the data center. Performance can also be very good, as in my Windows Server 2008 boot tests, because the most active data is served locally from the Granite Edge cache.
One feature you have to like is the ability to pin and unpin a volume as needed, even while the system is operational. For example, ahead of a planned outage of the WAN circuit, you could pin any unpinned LUNs to the edge so that users can keep working through the maintenance window. After the WAN is back in service and the edge LUN resyncs with the core, you could unpin it again.
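The pinned-versus-unpinned distinction boils down to a simple caching contract, which the following toy model sketches. All names are hypothetical and this is not Riverbed's implementation; it just captures the behavior described above — unpinned reads may need the WAN, while a pinned volume survives an outage.

```python
class EdgeLun:
    """Toy model of pinned vs. unpinned LUN behavior at the edge
    (illustrative only; not Riverbed's implementation)."""
    def __init__(self, core_blocks, pinned=False):
        self.core = core_blocks    # authoritative copy in the data center
        self.cache = {}            # blocks currently held at the edge
        self.pinned = pinned
        if pinned:
            self.cache = dict(core_blocks)   # pinning copies the whole LUN

    def pin(self):
        """Pull the full volume to the edge, e.g. before planned WAN maintenance."""
        self.cache = dict(self.core)
        self.pinned = True

    def unpin(self):
        """Drop back to cache-only; the edge keeps its working set."""
        self.pinned = False

    def read(self, block, wan_up=True):
        if block in self.cache:
            return self.cache[block]             # served locally
        if not wan_up:
            raise IOError("WAN down and block not cached at the edge")
        self.cache[block] = self.core[block]     # fetch over the WAN, then cache
        return self.cache[block]

lun = EdgeLun({0: b"mbr", 1: b"data"})   # unpinned by default
lun.read(0)                              # warms the cache over the WAN
lun.pin()                                # before the maintenance window
print(lun.read(1, wan_up=False))         # still served during the outage
```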
Building the blocks
Getting LUNs into Granite takes only a few steps. The first is to create an iSCSI connection from Granite Core to the storage array, logging in to the iSCSI portal (the storage system) just as you would from any standard initiator. Granite supports all common iSCSI initiator options, such as header and data digests, CHAP authentication, and MPIO. Once the storage is connected to Granite Core, you give each available LUN a friendly name and map it to your branch office. For my test, I added a new LUN, assigned it the name VOL1, mapped it to my Granite Edge appliance, and left it unpinned so that the volume stayed in the data center. The last step is to grant access to the LUN. By default, all newly created LUNs are unassigned and therefore unavailable for use. I added the group "all" to the allowed list, and my iSCSI volume became available to my branch office.
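The provisioning workflow above can be modeled as a small state machine. To be clear, the class and method names here are invented for illustration — Granite is configured through its own management UI, not a Python API — but the model mirrors the steps: connect to the portal, name the LUN, map it to an edge, and explicitly grant access.

```python
class GraniteCoreModel:
    """Hypothetical model of the LUN-provisioning steps described above
    (invented names; not Riverbed's actual API)."""
    def __init__(self):
        self.luns = {}   # friendly name -> settings

    def add_lun(self, name, target_iqn):
        # Steps 1-2: connect to the iSCSI portal, give the LUN a friendly name.
        self.luns[name] = {"target": target_iqn, "edge": None,
                           "pinned": False, "allowed": set()}

    def map_to_edge(self, name, edge_id, pinned=False):
        # Step 3: map the LUN to a branch-office Granite Edge.
        self.luns[name].update(edge=edge_id, pinned=pinned)

    def grant(self, name, group):
        # Step 4: new LUNs allow no initiators; access must be granted.
        self.luns[name]["allowed"].add(group)

    def available(self, name, group):
        lun = self.luns[name]
        return lun["edge"] is not None and group in lun["allowed"]

core = GraniteCoreModel()
core.add_lun("VOL1", "iqn.example-target")       # placeholder IQN
core.map_to_edge("VOL1", edge_id="branch-edge", pinned=False)
print(core.available("VOL1", "all"))   # False: access not yet granted
core.grant("VOL1", "all")
print(core.available("VOL1", "all"))   # True: volume visible to the branch
```

The default-deny step is worth noting: until a group is added to the allowed list, a mapped LUN remains invisible to the branch, which matches the behavior observed in testing.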
Another very useful -- and necessary -- feature is Granite Edge's active/standby fail-over capability. For offices that need maximum uptime, Granite Edge appliances can be deployed in fail-over pairs. Each appliance is kept in sync with the other through a local Gigabit Ethernet connection. If the primary fails, the other takes over, and the exported volumes remain up and available. I tested this with my pair of Granite Edges by pulling the power to the primary unit. The secondary appliance picked up the slack, and my Windows Server 2008 virtual machine kept working as if nothing had happened. When power was restored to the primary, it became the backup unit and synchronized to the running appliance. This fail-over works for both pinned and unpinned volumes.
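The fail-over behavior observed in that test can be sketched as a tiny active/standby pair. Again, this is an illustrative assumption, not Riverbed's HA protocol: the point is simply that synchronous mirroring over the local link means a takeover loses no data.

```python
class EdgePair:
    """Toy active/standby pair (illustrative; not Riverbed's HA protocol).
    Writes are mirrored to the standby so a takeover loses nothing."""
    def __init__(self):
        self.primary = {"blocks": {}}
        self.standby = {"blocks": {}}

    def write(self, block, data):
        # Every write is synchronously mirrored over the local GigE link.
        self.primary["blocks"][block] = data
        self.standby["blocks"][block] = data

    def fail_primary(self):
        # Pull the plug on the active unit: the standby takes over, and the
        # failed unit rejoins later as an empty standby that must resync.
        self.primary, self.standby = self.standby, {"blocks": {}}

    def read(self, block):
        return self.primary["blocks"][block]

pair = EdgePair()
pair.write(7, b"vm-state")
pair.fail_primary()
print(pair.read(7))   # the mirrored block is still served after takeover
```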
Riverbed Granite is a unique solution for running virtual machines over the WAN. With Granite, IT can consolidate VMs and storage volumes back in the data center while still delivering server resources to the branch office. The ability to export storage volumes across the WAN with excellent performance at the branch is groundbreaking. Setup and configuration are minimal, and being able to pin and unpin volumes provides welcome flexibility. Running branch VMs from centrally stored images may once have seemed like a pipe dream, but now there is a solution -- and Granite is its name.