Review: Flash your way to better VMware performance

PernixData FVP clusters server-side flash to improve virtual machine performance and reduce SAN latency


PernixData FVP works with vSphere 5.0 and vSphere 5.1 hosts. It installs into the vSphere kernel via the VMware update utility and accelerates I/O requests between VMs and the SAN. Configuration consists of creating the flash cluster, adding the server-side flash devices to it, and then designating the datastore to be accelerated. This is a bulk operation: you do it once for the entire VM datastore on the SAN, not for each VM. The VMs residing in the datastore automatically inherit the attributes of the FVP flash cluster, as will any VMs added to the datastore later. Alternatively, you can add individual VMs to the flash cluster so that only the I/O of chosen VMs is accelerated.
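To make that scoping model concrete, here is a minimal Python sketch of the inheritance behavior described above. The class and names are hypothetical illustrations, not PernixData's actual API: accelerating a datastore covers every VM it holds, current or future, while individual VMs can also be added directly.

# Hypothetical model of FVP acceleration scope -- not the actual PernixData API.

class FlashCluster:
    def __init__(self, name, flash_devices):
        self.name = name
        self.flash_devices = list(flash_devices)   # heterogeneous PCIe/SSD devices
        self.accelerated_datastores = set()        # bulk: covers every VM in the datastore
        self.accelerated_vms = set()               # per-VM additions

    def accelerate_datastore(self, datastore):
        """One bulk operation: all current and future VMs in the datastore inherit acceleration."""
        self.accelerated_datastores.add(datastore)

    def accelerate_vm(self, vm):
        """Alternatively, accelerate only the I/O of a chosen VM."""
        self.accelerated_vms.add(vm)

    def is_accelerated(self, vm, datastore):
        return datastore in self.accelerated_datastores or vm in self.accelerated_vms


cluster = FlashCluster("fvp-cluster", ["host1:pcie-flash", "host2:ssd-400gb"])
cluster.accelerate_datastore("san-datastore-01")
print(cluster.is_accelerated("new-vm", "san-datastore-01"))  # True -- inherited from the datastore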

You can create the flash cluster using any PCIe or SSD flash device found on VMware's hardware compatibility list. Best of all, the server-side flash can consist of a heterogeneous mix of devices. You don't have to install the same flash hardware, or even the same capacity, in each host. As of this 1.0 release, FVP only works with block-based network storage, such as iSCSI and Fibre Channel SANs. Support for file-based storage (that is, NFS) will be available in future versions, as will support for additional hypervisors beyond VMware vSphere.

The size of the flash cluster is based on the I/O activity of the running VMs, not the underlying storage footprint. Thus, even with a multiterabyte datastore backing all of your VMware VMs, you won't have to break the bank on multiterabyte flash devices; flash sized to the active working set will do. However, as with any caching system, PernixData FVP will suffer some read misses during VM startup and for a short initial period while the cache warms up.
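As a rough illustration of why the flash needs to cover only the active working set, here is a toy LRU read-cache simulation with made-up block counts: the first pass misses while the cache warms up, and subsequent reads hit flash even though the datastore is far larger than the cache.

from collections import OrderedDict

class LruReadCache:
    """Toy LRU read cache -- sized to the working set, not the datastore."""
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.blocks = OrderedDict()
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.blocks:
            self.blocks.move_to_end(block)
            self.hits += 1
        else:
            self.misses += 1                      # cold read: served from the SAN
            self.blocks[block] = True
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)   # evict the least recently used block


# Multiterabyte datastore, but the VMs actively touch only a small working set.
cache = LruReadCache(capacity_blocks=10_000)      # flash sized to the working set
working_set = range(8_000)                        # hot blocks actually being read

for _ in range(3):                                # repeated passes over the hot data
    for block in working_set:
        cache.read(block)

print(cache.misses, cache.hits)                   # 8000 warm-up misses, then 16000 hits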

The write stuff

The flash clusters in PernixData FVP work in either write-through or write-back mode. A write-through flash cluster accelerates reads from the SAN but not writes: because each write is committed to the server-side flash and the SAN simultaneously, write performance is still bound to the write latency of the SAN.
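A minimal sketch of the write-through behavior, using assumed latency figures rather than measured ones:

FLASH_WRITE_US = 100    # assumed server-side flash write latency (microseconds)
SAN_WRITE_US = 5000     # assumed SAN write latency (microseconds)

def write_through(block):
    """Commit the write to server-side flash and to the SAN before acknowledging the VM."""
    flash_us = FLASH_WRITE_US     # populates the cache so later reads hit flash
    san_us = SAN_WRITE_US         # the SAN must also acknowledge the write
    return max(flash_us, san_us)  # the VM waits for the slower of the two

print(write_through("block-42"))  # 5000 -- still bound by the SAN's write latency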

The write-back policy accelerates both reads and writes, with writes committed to the cache first, then copied to the SAN in the background. Keeping that cached data safe is essential because uncommitted writes could be lost if a host fails. FVP guards against this by replicating writes to flash on other hosts: write-back mode allows zero, one, or two replicas to be stored across the cluster. You configure this on a per-VM basis, so you can make the cache fault tolerant for some VMs but not others.
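And a matching write-back sketch, again with assumed latencies: the write is acknowledged once it lands on local flash and the configured number of replicas, while the copy to the SAN happens in the background.

FLASH_WRITE_US = 100      # assumed local flash write latency (microseconds)
REPLICA_WRITE_US = 300    # assumed network hop plus remote flash write
SAN_WRITE_US = 5000       # assumed SAN latency, paid later during background destaging

def write_back(block, replicas=1):
    """Acknowledge after local flash plus 0, 1, or 2 replica copies; destage to the SAN later."""
    assert replicas in (0, 1, 2)          # FVP allows zero, one, or two replicas
    acknowledged_us = FLASH_WRITE_US + replicas * REPLICA_WRITE_US
    background_destage_us = SAN_WRITE_US  # happens after the VM has moved on
    return acknowledged_us, background_destage_us

for n in (0, 1, 2):                       # the replica count is configured per VM
    print(n, write_back("block-42", replicas=n)[0])  # 100, 400, 700 microseconds acknowledged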

The PernixData FVP dashboard provides excellent real-time views into how the flash cluster is performing. This VM IOPS chart shows an effective IOPS of nearly 60,000, with local flash contributing roughly 50,000 and the SAN only about 9,000.