The university now uses conventional storage arrays, primarily from NetApp and Hitachi Data Systems. Plankers sees Coho's micro-arrays forming a middle tier between his high-speed, high-availability systems at the high end and inexpensive disk-based platforms at the low end. That tier could eventually account for 80 percent of the university's storage, he said. Coho can actually match the high-end systems for speed, but replacing those platforms with the new gear would require rethinking functions such as replication, Plankers said.
Delivering a cloud-like storage resource inside an organization is one thing Coho's architecture could help IT departments do, Enterprise Strategy Group analyst Mark Peters said. This can offer a public cloud's ease of use while avoiding concerns about security, wide-area network speed and service-provider relationships, he said.
Warfield and another Coho co-founder helped write the Xen open-source hypervisor. They want to bring to storage the same kinds of virtualization capabilities now common in computing and just emerging in networking.
Coho's modular expansion and commodity hardware can drive the per-gigabyte cost of its storage down below subscription prices for services such as Amazon S3, according to Warfield. Cloud storage and SaaS (software-as-a-service) providers may also use Coho to power their commercial offerings, Warfield said.
The system has been deployed in some large enterprises and will be generally available later this year, according to Coho. The minimum initial installation is a 2U enclosure that includes two micro-arrays, with a total capacity of 40TB. Pricing will start at US$2.50 per gigabyte, the company said.