Enabling multisite file access with the cloud


Nasuni's Data Continuity Service offers a fresh approach to file storage, backup, and distributed access to unstructured data

If you run a multisite network, one of the most irritating problems to deal with is handling large amounts of distributed file-sharing data. The simplest and most often used solution is to place file servers or NAS devices at each site to handle local file sharing duties. But this has serious drawbacks.

First, you have to deal with protecting the data at those sites -- often involving error-prone remote-site backup solutions or bandwidth-heavy WAN-backup schemes. Next, you need to deal with user complaints about file sharing performance when they have to pull their data over the WAN. There are roll-your-own solutions available to deal with these issues, such as using Microsoft's DFS-R (Distributed File System Replication), but these tend to be fairly complex -- and generally deal with either the data protection or data availability/portability problem, but not both.

Recently, I got a chance to take a quick look at a cloud-based product designed to solve both problems: Nasuni's Data Continuity Service. The Natick, Mass.-based startup has been offering a cloud-backed NAS gateway for quite a while and recently added the ability to make a single volume available to multiple sites simultaneously. Though it's not without its own limitations, Nasuni's innovative use of cloud-based storage to provide multisite access to unstructured data may be a light at the end of the tunnel for those struggling with the challenges of multisite file sharing and data protection.

Popping the hood on Nasuni

Since I was testing on an Internet connection with limited upstream bandwidth, I tossed a small collection of files onto the appliance and gave it a little while to go through its initialization process. Within an hour or so, it had synchronized with Nasuni's servers and created its first cloud-based snapshot -- that is, a point-in-time copy of the volume was now being stored both on the appliance and in the cloud. Next, I changed a few files and removed some others. By default, the appliance will create a new snapshot every five minutes; by the time I took another look, a new snapshot had been created and my changes had already been shipped up to the cloud.
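Conceptually, the snapshot behavior described above is an incremental change-detection loop: hash the volume's files, compare against the state recorded in the previous snapshot, and ship only the differences to the cloud. The sketch below illustrates that idea in Python; the names and structure are my own invention for illustration, not Nasuni's implementation:

```python
import hashlib
import os

SNAPSHOT_INTERVAL = 5 * 60  # seconds; Nasuni's default is five minutes


def file_digest(path):
    """Hash a file's contents so unchanged files can be skipped."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def take_snapshot(root, previous):
    """Walk the volume and compare against the previous snapshot's state.

    Returns the new {path: digest} state, the list of new or modified
    files, and the list of files deleted since the last snapshot.
    """
    current, changed = {}, []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            digest = file_digest(path)
            current[path] = digest
            if previous.get(path) != digest:
                changed.append(path)  # new or modified since last snapshot
    deleted = [p for p in previous if p not in current]
    # A real appliance would now ship the `changed` deltas to cloud
    # storage and record the `deleted` paths in the snapshot metadata.
    return current, changed, deleted
```

Running such a loop every `SNAPSHOT_INTERVAL` seconds would reproduce the behavior I observed: only the files I changed or removed between snapshots generate cloud traffic.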

After I made a few more changes and allowed a few more cloud-based snapshots to be created, I tested restoring files from snapshots. Though an administrator can access the contents of snapshots through the Web GUI, you can also use Microsoft's client-based Volume Shadow Copy functionality to restore files from snapshots -- fairly impressive. Simply right-clicking a file in Windows Explorer on a client workstation let me select "Restore previous versions" and see a list of all of the previous versions of the file that had been stored as I modified it.

Multisite access

Next, I deployed a second Nasuni appliance to emulate a second site. Allowing the second filer to serve up the same data as the first was easy enough, requiring a quick setting change to allow so-called remote access on the first filer, then asking the second filer to connect to that shared volume. I had the choice of allowing the volume to be read/write or read-only and could potentially control that access on a per-filer basis. In this case, I opted to allow the second filer to have read-write access. Within a few minutes, the second filer had pulled down the most recent snapshot of the volume from the cloud and it was accessible to my test clients.
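The per-filer access control described above can be pictured as a small grant table mapping each filer to an access level on a shared volume. The following is a toy model of that concept; the `SharedVolume` class and its names are hypothetical and do not reflect Nasuni's API or configuration format:

```python
from enum import Enum


class Access(Enum):
    NONE = 0
    READ_ONLY = 1
    READ_WRITE = 2


class SharedVolume:
    """Toy model of a cloud volume with per-filer remote-access grants."""

    def __init__(self, name):
        self.name = name
        self.grants = {}  # filer name -> Access level

    def grant(self, filer, access):
        self.grants[filer] = access

    def can_read(self, filer):
        return self.grants.get(filer, Access.NONE) != Access.NONE

    def can_write(self, filer):
        return self.grants.get(filer, Access.NONE) == Access.READ_WRITE


# The scenario from my test: the first filer owns the volume, and the
# second site's filer is granted read/write remote access to it.
vol = SharedVolume("shared")
vol.grant("filer1", Access.READ_WRITE)
vol.grant("filer2", Access.READ_WRITE)
```

Granting `Access.READ_ONLY` to a filer instead would let that site browse the volume without being able to push changes back to the cloud copy.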

Cloud back end and pricing

By default, evaluation accounts use cloud storage provided by Amazon S3, but paying accounts are given a choice of providers that Nasuni has qualified through what appears to be a fairly stringent internal QA process. Interestingly, Nasuni does not ask you to sign up and pay for cloud services directly. Instead, it allocates the storage itself and passes the cost on to the customer through a flat-rate fee that covers both the storage and licensing for the Nasuni software. Nasuni is effectively selling cloud storage and backup, not just a software product.

Pricing generally runs about $10,000 per TiB per year, with the rate scaling down as the amount of data you host grows. This fee covers all aspects of the service: software, 24/7 support, storage, transfer, transactions, and so on. Nasuni says most of its enterprise customers pay in the neighborhood of $7,000 per TiB, or about $6.80 per GiB, each year. That may initially seem like a lot, but when you consider that the alternative is providing not only enough primary storage to hold replicas of all of your data at all of your sites but also a data protection mechanism (complete with operating costs), it ends up being competitive.
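The per-GiB figure follows directly from the per-TiB rate, since 1 TiB is 1,024 GiB. A quick check of the arithmetic:

```python
# Nasuni's reported typical enterprise rate: $7,000 per TiB per year.
rate_per_tib = 7000
gib_per_tib = 1024  # 1 TiB = 1,024 GiB

rate_per_gib = rate_per_tib / gib_per_tib
print(f"${rate_per_gib:.2f} per GiB per year")  # prints "$6.84 per GiB per year"
```

So the quoted "$6.80 per GiB" is a slightly generous rounding of roughly $6.84.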

This reseller model makes the pricing scheme very easy to understand, because all supported cloud storage providers and availability zones are offered at the same simplified price. But note that Nasuni sits in the driver's seat in the relationship with the back-end provider; if Nasuni were to disappear overnight, your access to your data might do the same. True, Nasuni has a comprehensive SLA that offers service credits in the event that your data is even momentarily unavailable, but that won't do you much good if the company ceases to exist. Such are the hazards of relying on any cloud-based storage provider -- especially one still being funded by venture capital.
