Startup skirts datacenter bottlenecks with cache

Gear6 CACHEfx's cache-in-a-box approach serves data quickly to I/O-heavy apps

Seeking to alleviate the bottleneck woes of I/O-intensive apps, startup Gear6 today announced CACHEfx, a scalable cache appliance that makes as much as 5TB of cached data available to applications without a trip to back-end storage.

The appliance, which is founded on what the company is calling "centralized storage caching," pools together a vast amount of high-speed RAM in an effort to shrink the server-storage performance gap fueled in part by the server virtualization trend.

"Server power has been increasing at an exponential rate. You've got Moore's Law, multicore CPUs, and now server virtualization making the server side increasingly powerful," said Jack O'Brien, director of marketing at Gear6. "The disk industry has done a good job increasing capacity and keeping the cost per gigabyte down, but there hasn't been a similar increase in performance in terms of IOPS and latency. This is a major bottleneck in the datacenter and is where CACHEfx fits in."

The appliance complements existing NAS and NFS storage infrastructures, connecting to the network via Gigabit Ethernet to serve data at memory latency, thereby making it available 10 to 50 times faster than via mechanical disk, according to Gear6. Throughput for CACHEfx, which also supports 10 Gigabit Ethernet, scales from 2Gbps to 16Gbps.
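
A back-of-the-envelope comparison, using illustrative figures rather than Gear6's own, shows where a speedup of that order can come from:

    # Rough, illustrative service-time comparison (figures assumed, not from Gear6).
    disk_latency_ms = 8.0    # typical seek plus rotational delay for a mechanical disk
    ram_over_lan_ms = 0.3    # assumed RAM read plus Gigabit Ethernet round trip

    speedup = disk_latency_ms / ram_over_lan_ms
    print(f"Approximate speedup: {speedup:.0f}x")  # roughly 27x, within the quoted 10-50x range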

Gear6's caching methodology contrasts with the I/O virtualization approach recently announced by fellow startup 3Leaf Systems, which establishes a virtualization layer through which servers connect to storage resources in an effort to better serve applications' I/O needs.

By circumventing datacenter I/O bottlenecks, CACHEfx also aims to reduce the need to overprovision storage networks, thereby cutting overall hardware expenditures, O'Brien said.

"The most common way to deal with performance issues today is to throw hardware at the problem. Buy more disks, spread the data across them; buy more controllers to get more throughput; and overprovision," O'Brien said. "By offloading all the I/O going through the filers in the disks and putting it in CACHEfx, you don't need to overprovision your disks anymore."

Currently deployed in data- and I/O-intensive environments, such as financial analytics, animation rendering, large-scale Web apps, and EDA (electronic design automation), CACHEfx virtualizes the abundance of RAM housed within the appliance, presenting it to application servers as a centralized pool of aggregated cache. The appliance manages latency and ensures high performance by scheduling and serving multiple requests in parallel. A cache services layer distributes the data throughout the appliance and manages requests automatically.
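
Gear6 has not published CACHEfx's internals, but the behavior described above, a pooled cache serving many requests in parallel in front of slower storage, can be sketched in a few lines of Python (the class, names, and worker count here are hypothetical):

    import concurrent.futures

    class PooledCache:
        """Toy stand-in for a centralized cache pool; not Gear6's implementation."""
        def __init__(self, backing_store):
            self.backing_store = backing_store  # e.g., a slow fetch from an NFS filer
            self.ram = {}                       # aggregated RAM pool

        def get(self, key):
            if key not in self.ram:             # miss: fetch once from storage
                self.ram[key] = self.backing_store(key)
            return self.ram[key]                # hit: served from memory

    def serve(cache, requests, workers=8):
        # Multiple requests are scheduled and served in parallel, as described above.
        with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(cache.get, requests))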

"One of the elegant things with cache is that it doesn't need a lot of management," O'Brien said. "If the application needs the data, it will request it, and the data will remain in cache as long as the application is continuing to request it."

CACHEfx comes with a Web-based GUI, as well as a command-line interface. It includes SNMP hooks to connect with other datacenter management frameworks. Analytics for CACHEfx allow administrators to drill down into the I/O profile to examine interactions between the appliance's cache and the applications it serves.
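
Gear6 does not document the appliance's MIBs here, but polling a device over SNMP from a management framework generally looks like the sketch below, using the pysnmp library; the hostname, community string, and the choice of the standard sysDescr object are assumptions:

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    # Query a standard MIB object from a hypothetical CACHEfx appliance.
    error_indication, error_status, error_index, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData('public'),                            # assumed community string
        UdpTransportTarget(('cachefx.example.com', 161)),   # hypothetical hostname
        ContextData(),
        ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0))))

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(f"{name} = {value}")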

Pricing for the appliance begins at $400,000. CACHEfx comes in 0.25TB, 0.5TB, and 1TB configurations, which can be combined in building-block fashion to increase caching capacity on the fly.
