7 ways Windows Server 2012 pays for itself

These new and improved 'supersaver' features offer the biggest return on your Windows Server 2012 investment


Windows Server 2012 supersaver No. 4: Failover clusters
With previous versions of Windows Server, clustering was confined primarily to the realms of high-performance computing and high-availability services such as SQL Server. It required the pricier Enterprise or Datacenter edition, plus a separate installation of the necessary components. Windows Server 2012 includes failover clustering in the Standard edition, making it possible to build a fault-tolerant, two-node cluster for a very modest price.
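As a rough sketch, here's how that two-node cluster might be stood up with PowerShell. The node names, cluster name, and IP address below are placeholders, not values from any particular environment:

    # Install the failover clustering feature on each node
    Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

    # Validate the hardware and configuration before building the cluster
    Test-Cluster -Node NODE1, NODE2

    # Create the two-node cluster with a placeholder name and static address
    New-Cluster -Name CLUSTER1 -Node NODE1, NODE2 -StaticAddress 192.168.1.100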

"Continuous availability" is Microsoft's new buzz phrase for providing fault-tolerant resources, and clustering is the key piece that makes it possible. For continuous file resources, there's version 2 of cluster shared volumes (CSV), which define a single name space that presents clients with a consistent path to connect to. CSV volumes look like directories and subdirectories underneath a ClusterStorage root directory. CSV v2 includes support for Volume Shadow Services (VSS) for hardware and software failover of CSV volumes.

A new feature called cluster-aware updating (CAU) lets you apply patches and updates to running cluster nodes without interrupting or rebooting the cluster as a whole. Each node in turn is drained of its workloads, updated, restarted if necessary, and returned to service. Keep in mind that you'll need more than two cluster nodes if you want to preserve redundancy while a node is being updated. Either way, you'll definitely save on downtime and administration costs with the CAU feature.
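For a sense of the workflow, here's a hedged sketch of kicking off an on-demand CAU run from an administrator's workstation; the cluster name is a placeholder, and -Force suppresses the confirmation prompt:

    # Trigger an updating run against every node in the cluster
    Invoke-CauRun -ClusterName CLUSTER1 -Force

    # Review the outcome of past updating runs
    Get-CauReport -ClusterName CLUSTER1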

Previous versions of the OS placed real limitations on virtualizing Domain Controllers (DCs). Windows Server 2012 removes them: thanks to the new VM-Generation ID mechanism, Hyper-V 3.0 supports cloning virtualized Domain Controllers, and you can safely restore a DC from a snapshot to get back to a known state. This is especially helpful in a development or lab setting where you need to build an environment from scratch or start over from a known point.
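The cloning workflow, roughly, starts on the source DC, which must belong to the Cloneable Domain Controllers group; the computer name and addresses below are placeholders:

    # Check for installed applications that aren't known to be clone-safe
    Get-ADDCCloningExcludedApplicationList

    # Generate the DCCloneConfig.xml that gives the clone its new identity
    New-ADDCCloneConfigFile -CloneComputerName DC2 -Static `
        -IPv4Address 192.168.1.20 -IPv4SubnetMask 255.255.255.0 `
        -IPv4DefaultGateway 192.168.1.1 -IPv4DNSResolver 192.168.1.10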

Windows Server 2012 supersaver No. 5: Data deduplication
It's easy to see how duplicate copies of the same data files can cost you time and money in backups and primary storage. Data deduplication is not a new technology, of course; it has been available from both backup and storage vendors for some time. But with Windows Server 2012, deduplication is now part of the base OS. Note that data deduplication works only with NTFS volumes, not with the new Resilient File System (ReFS) or with Cluster Shared Volumes.
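Enabling it is a quick job in PowerShell; the drive letter below is a placeholder for any NTFS data volume:

    # Add the data deduplication role service
    Install-WindowsFeature -Name FS-Data-Deduplication

    # Turn on deduplication for a data volume
    Enable-DedupVolume -Volume E:

    # Start an optimization job now instead of waiting for the schedule
    Start-DedupJob -Volume E: -Type Optimization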

Heavy users of virtualization or virtual desktop infrastructure (VDI) stand to see the biggest gains here. Microsoft quotes savings of 2:1 for general file server storage and as much as 20:1 for virtualization (VHD) libraries. Individual files are replaced with stubs that point to data blocks stored in a common "chunk" store, and data compression can be applied to shrink the total storage footprint further. All processing happens in the background with minimal impact on CPU and memory.
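You can gauge your own numbers before committing: the DDPEval.exe tool that ships with Windows Server 2012 estimates potential savings on an existing volume or folder, and the dedup cmdlets report actual results afterward. The path below is a placeholder:

    # Estimate potential savings before enabling deduplication
    ddpeval.exe E:\VHDLibrary

    # After optimization jobs have run, report space saved per volume
    Get-DedupStatus
    Get-DedupVolume | Format-List Volume, SavedSpace, SavingsRate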

The data deduplication feature is also tightly integrated with BranchCache, helping to save on overall bandwidth consumption when distributing data over a WAN. In addition to dramatically speeding up file transfers, deduping data that travels the WAN can greatly reduce costs for dedicated or metered network circuits.
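As a minimal sketch of what's involved, assuming a file server in the main office and a branch-office client, note that hash publication for BranchCache is typically switched on through Group Policy, so PowerShell covers only part of the setup:

    # On the file server: add BranchCache for Network Files support
    Install-WindowsFeature -Name FS-BranchCache

    # On a branch-office client: opt into distributed cache mode
    Enable-BCDistributed

    # Verify the BranchCache service status and current configuration
    Get-BCStatus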
