Review: VMware Virtual SAN turns storage inside-out

VMware's VSAN 1.0 combines easy setup and management with high availability and high performance -- and freedom from traditional storage systems


To generate large amounts of traffic, VMware suggests running multiple I/O Analyzer VMs on each node in the VSAN cluster. To test both the four-node Supermicro cluster and the three-node Lenovo cluster, I used eight VMs on each node -- for a total of 32 worker VMs on the four-node cluster and 24 on the three-node cluster -- with an additional I/O Analyzer VM serving as the controller node.

I/O Analyzer ships with a library of workload types covering a wide range of I/O sizes, from 512 bytes to 512KB. Through Iometer, you can specify the types and percentages of I/O operations, reads, and writes, along with the amount of time to run each test.

To compare my two clusters, I ran two different I/O Analyzer workloads: one to measure peak throughput and one to measure a mixture of reads and writes. The Max IOPS test used a 512KB block size for 100 percent sequential reads, while the combo test used 4KB blocks and a mix of 70 percent reads and 30 percent writes. The results of the two tests tell two different stories. Whereas the three-node cluster held its own against the four-node cluster in the Max IOPS test (roughly 154K vs. 190K maximum total IOPS), the four-node cluster proved vastly superior (yielding roughly double the performance) in the mixed workload test. The results of the mixed workload test are presented in the chart below.
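For reference, the two access patterns can be summarized in a short sketch. This is a Python rendering of the Iometer-style parameters named above; the field names are my own illustration, not I/O Analyzer's actual configuration format:

```python
# Hypothetical summary of the two I/O Analyzer workloads used in this review.
# Field names are illustrative; they do not mirror I/O Analyzer's own config files.
workloads = {
    "max_iops": {
        "block_size_kb": 512,  # large blocks
        "read_pct": 100,       # 100 percent sequential read
        "sequential": True,
    },
    "mixed": {
        "block_size_kb": 4,    # small blocks, OLTP-style traffic
        "read_pct": 70,        # 70 percent reads, 30 percent writes
        "sequential": False,
    },
}

for name, w in workloads.items():
    access = "sequential" if w["sequential"] else "random"
    print(f"{name}: {w['block_size_kb']}KB blocks, "
          f"{w['read_pct']}% read / {100 - w['read_pct']}% write, {access}")
```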

VMware VSAN performance results
With more RAM, more CPU, larger SSDs, and 10GbE networking, the four-node Supermicro cluster more than doubled the read and write performance of the three-node Lenovo cluster.

The single most important factor in VSAN performance will be the size of the SSD cache. If the data your workload requires is not found in the flash cache, but must be accessed from rotating disk, then I/O latency will shoot up and IOPS will fall dramatically.

Note that the results for the mixed workload test shown above make use of 4GB target virtual machine disks, which (when multiplied by eight I/O Analyzer workers per node) did not exceed the SSD cache size in either cluster (100GB SSDs in the Lenovo nodes, 400GB SSDs in the Supermicro nodes). When I ran the same benchmark using 15GB target disks for the Lenovo cluster and 50GB target disks for the Supermicro cluster (exceeding the SSD cache size on all cluster nodes), IOPS plummeted on both clusters.
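The cache math behind that result is easy to reproduce. A minimal sketch, using the disk and SSD sizes from this test and assuming VSAN 1.0's split of the SSD into roughly 70 percent read cache and 30 percent write buffer (the helper function is my own, for illustration):

```python
def working_set_fits_cache(workers_per_node, target_disk_gb, ssd_gb,
                           read_cache_frac=0.70):
    """Return whether a node's combined target disks fit in its SSD read cache.

    Assumes VSAN 1.0's default allocation of ~70% of the SSD to read cache;
    the remainder serves as the write buffer.
    """
    working_set_gb = workers_per_node * target_disk_gb
    return working_set_gb <= ssd_gb * read_cache_frac

# 4GB targets: 8 x 4 = 32GB working set per node -- fits on both clusters
print(working_set_fits_cache(8, 4, 100))   # Lenovo, 100GB SSD
print(working_set_fits_cache(8, 4, 400))   # Supermicro, 400GB SSD

# Larger targets: the working set blows past the read cache, and IOPS plummet
print(working_set_fits_cache(8, 15, 100))  # Lenovo: 120GB working set
print(working_set_fits_cache(8, 50, 400))  # Supermicro: 400GB working set
```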

In short, when configuring your VSAN cluster hardware, be absolutely sure to include enough flash in each node to exceed the size of the working data set. Naturally, more RAM and 10GbE networking are nice to have. VMware recommends 10GbE for most deployment scenarios. After all, the cost has dropped considerably over the last few years, and 10GbE offers significant improvements in performance over 1GbE.
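As a rough starting point when sizing that flash tier, VMware's VSAN design guidance suggests flash capacity of about 10 percent of anticipated consumed storage capacity. A minimal sketch of that rule of thumb (the helper and the example figure are illustrative, not from this review's test setup):

```python
def recommended_flash_gb(consumed_capacity_gb, flash_ratio=0.10):
    """Rule of thumb: size flash at ~10% of anticipated consumed capacity.

    This is a starting point only; workloads whose active data set is
    larger than 10% of consumed capacity will need proportionally more flash.
    """
    return consumed_capacity_gb * flash_ratio

# Example: 4TB of anticipated consumed capacity across the cluster
print(recommended_flash_gb(4000))  # 400.0 GB of flash, cluster-wide
```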

At a Glance
  • VMware Virtual SAN 1.0
