Some technologies work so well that they've become immortal -- not because they're perfect, but because newer technologies haven't improved on their advantages enough to unseat them, even if they've made inroads.
One example of this would be NIS. Though there are a host of newer network authentication mechanisms available, NIS is still ubiquitous. Another would be IPv4. Even though IPv6 is far more extensible and modern, most of us are still working with IPv4 and will be for a long time to come.
Then there's NFS, which is turning 30 next year. NFS's usefulness as a distributed file system has carried it from the mainframe era right through to the virtualization era, with only a few changes made in that time. The most common NFS in use today, NFSv3, is 18 years old -- and it's still widely used the world over.
It wasn't always that way. For a long time, NFS was used solely in Unix land, serving up files to Solaris, Linux, and FreeBSD servers, but it was eschewed by many as too old and insecure to be of much use otherwise. Even the advent of virtualization didn't immediately call on NFS for much other than a fallback option: iSCSI was on the rise, Fibre Channel was the go-to medium for fast network storage access, and NFS was just sort of there. But with the adoption of 10G networking and the subsequent drop in the price of 10G ports, NFS has seen a resurgence, specifically in the virtualization space.
Sure, there are still millions of Unix boxes using NFS, but now there are also millions of virtualized Windows servers that are running from NFS storage through the hypervisor. More and more storage vendors are recommending NFS over iSCSI for virtualization deployments for a wide variety of reasons.
For one, NFS is far less cumbersome to use and manage than iSCSI. You don't have to cut LUNs for each set of virtualization hosts (or in the case of some hypervisors, cut LUNs for each VM); instead, you can simply export a file system on a dedicated, closed storage network, and any host can play in the game. Sure, you won't have CHAP authentication, but in many cases, that's not necessary. In many data centers, authentication for iSCSI exists simply to prevent problems with hosts accessing LUNs they shouldn't while scanning.
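To see just how little ceremony that takes, here's a sketch of such an export on a typical Linux storage host. The path and the 10.0.100.0/24 storage subnet are hypothetical examples, not specifics from any particular deployment:

```shell
# /etc/exports -- export a datastore to any host on the dedicated,
# closed storage network (path and subnet below are illustrative)
/export/datastore1  10.0.100.0/24(rw,sync,no_root_squash,no_subtree_check)
```

After editing the file, `exportfs -ra` reloads the export table and `showmount -e <server>` shows what's visible. Any host on that closed subnet can then mount the share -- no per-host LUN carving, masking, or CHAP secrets required.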
Presenting storage as iSCSI block devices rather than as a file system places the onus of managing simultaneous host access on the hosts themselves. All locking and write management must be handled outside the storage array, which means that when one host goes pear-shaped, the effects can be catastrophic for every host sharing the LUN.
On a few occasions, I've lost an iSCSI LUN completely when a war between several ESXi hosts resulted in a horribly corrupted VMFS volume on the LUN. I had to destroy the volume and re-create it from backups. With NFS, all of the tasks at the file system layer are handled by the array itself, leading to a more cohesive environment for multiple systems to access -- as in the case of virtualization.
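On the hypervisor side, attaching that array-managed storage is a one-liner. A sketch for ESXi, where the server address and share path are placeholders:

```shell
# Mount an NFS export as an ESXi datastore (NFSv3)
# -- 10.0.100.10 and /export/datastore1 are example values
esxcli storage nfs add --host 10.0.100.10 --share /export/datastore1 \
    --volume-name datastore1

# Confirm the datastore is mounted
esxcli storage nfs list
```

Because the array owns the file system, the hosts never arbitrate block-level locking among themselves; a misbehaving host can lose its own mount, but it can't corrupt the underlying file system for everyone else the way a rogue initiator can scribble over a shared VMFS LUN.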