
How to Do More With Less in the Data Center ... No, Really!

By Bharath Vasudevan, Director of Product Management, Hewlett Packard Enterprise Software-Defined and Cloud Group.

“We have to do more with less.” If you’re an IT pro who has never heard this phrase, consider yourself lucky. When IT budgets are flat or shrinking, that simple sentence becomes gospel. Even when budgets are on the rise, no one wants to spend more money than they have to. Either way, IT is expected to solve data center and business pain points, which is why many organizations are turning to specialized hardware to improve data center efficiency.

Total cost of ownership (TCO) and return on investment (ROI) are phrases that get thrown around whenever a new technology is discussed. Quite simply, they answer two questions: how much will this technology set me back, and how long before it pays for itself? While these phrases may at first sound like pure marketing mumbo jumbo, they are directly tied to data center efficiency, and more specifically to virtual machine (VM) density.
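The payback question is really just arithmetic. Here's a minimal sketch in Python, with every figure invented purely for illustration:

```python
# Payback period: how long until savings cover the upfront spend.
# Every number here is hypothetical, for illustration only.
upfront_cost = 150_000     # what the new gear sets you back
monthly_savings = 12_500   # power, cooling, licensing, admin time reclaimed

payback_months = upfront_cost / monthly_savings
print(f"Pays for itself in {payback_months:.0f} months")  # -> 12 months
```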

Virtual machine density refers to the number of VMs you can fit onto a single host. The more VMs you can fit, the fewer hosts you need to buy, and fewer hosts means less money spent on hardware. The cost savings aren't limited to the original investment, however. Fewer hosts also means spending less on licensing; after all, there's no need to buy vSphere licenses for hosts that don't exist! Fewer hosts also means savings on electricity, cooling, and data center rack-and-floor space. So, armed with the knowledge that fewer hosts equals more money in the bank, how can you take advantage and maximize VM density?
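Before getting to the how, a quick back-of-envelope sketch shows how much is at stake. The prices, license costs, and densities below are all hypothetical:

```python
# Back-of-envelope: VM density drives host count, and host count drives cost.

def hosts_needed(total_vms: int, vms_per_host: int) -> int:
    """Round up: a partially filled host is still a host you must buy."""
    return -(-total_vms // vms_per_host)  # ceiling division

def estimate(total_vms: int, vms_per_host: int,
             host_price: int, license_per_host: int):
    hosts = hosts_needed(total_vms, vms_per_host)
    return hosts, hosts * (host_price + license_per_host)

# The same 500 VMs at two different densities:
for density in (25, 40):
    hosts, cost = estimate(500, density, host_price=20_000, license_per_host=7_000)
    print(f"{density} VMs/host -> {hosts} hosts, ${cost:,}")
# 25 VMs/host -> 20 hosts, $540,000
# 40 VMs/host -> 13 hosts, $351,000
```

Raising density from 25 to 40 VMs per host cuts seven hosts, and every dollar attached to them, out of the bill.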

Maximizing VM density without degrading workload performance can become a complicated balancing act, because many IT departments find it difficult to predict how workloads will behave in a different environment while they're still specifying hardware for the new infrastructure. To combat this, many vendors use deduplication to save storage space and get the most out of each host. Simply put, deduplication saves storage space and maximizes VM density by storing only unique blocks of data, as opposed to saving full, repeated copies.
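Here's a toy sketch of that idea: content-hash fixed-size blocks and write each unique block only once. (Real systems add variable-size chunking, compression, and metadata handling; none of this reflects any particular vendor's implementation.)

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Store each unique block exactly once; repeats become references."""
    store = {}    # block hash -> block contents (written only once)
    recipe = []   # ordered hashes needed to reconstruct the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # a repeated block costs no new space
        recipe.append(digest)
    return store, recipe

data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # 4 blocks, only 2 unique
store, recipe = dedupe(data)
print(f"{len(recipe)} logical blocks, {len(store)} physical blocks stored")
# -> 4 logical blocks, 2 physical blocks stored
```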

Two deduplication techniques are typically used: post-process and inline. Post-process deduplication saves full copies of the data first and then goes back after the fact to remove the duplicates. The disadvantage of this approach is that the data initially lands on disk fully intact, which can cause capacity spikes until the cleanup pass runs. Inline deduplication, on the other hand, happens in real time: duplicate blocks are identified and discarded before they are ever written to disk, which alleviates those capacity concerns.
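Extending the sketch above makes the difference visible. These are hypothetical in-memory stores, not any vendor's implementation:

```python
import hashlib

class InlineStore:
    """Inline: dedupe in the write path, so duplicates never hit disk."""
    def __init__(self):
        self.blocks = {}                 # hash -> unique block
    def write(self, block: bytes):
        self.blocks.setdefault(hashlib.sha256(block).digest(), block)
    def bytes_on_disk(self) -> int:
        return sum(len(b) for b in self.blocks.values())

class PostProcessStore:
    """Post-process: full copies land first; a later pass reclaims space."""
    def __init__(self):
        self.raw = []
    def write(self, block: bytes):
        self.raw.append(block)           # stored fully intact, duplicates and all
    def dedupe_pass(self):
        self.raw = list({hashlib.sha256(b).digest(): b for b in self.raw}.values())
    def bytes_on_disk(self) -> int:
        return sum(len(b) for b in self.raw)

inline, post = InlineStore(), PostProcessStore()
for block in (b"A" * 4096, b"A" * 4096, b"B" * 4096):
    inline.write(block)
    post.write(block)

print("inline, peak usage:       ", inline.bytes_on_disk())  # 8192
print("post-process, before pass:", post.bytes_on_disk())    # 12288 <- the spike
post.dedupe_pass()
print("post-process, after pass: ", post.bytes_on_disk())    # 8192
```

Both stores end up in the same place, but the post-process store needs 50% more capacity at its peak.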

Both deduplication techniques have their merits...and their drawbacks. For lowering TCO and doing more with less, inline deduplication may be the better choice, especially when paired with hardware acceleration: IT can cram more VMs onto a single host while keeping performance high and predictable, and as a result the company saves on licensing, data center footprint (power, cooling, etc.), and initial host costs.

With hyperconverged infrastructure gaining steam and becoming a major part of data centers around the globe, the important thing for IT teams is to pick a solution that fits their needs. As a leader in the IT infrastructure industry, HPE offers a variety of hyperconverged form factors to suit any business, from a small local shop to a global Fortune 50 enterprise. Find out how hyperconvergence can transform your business and simplify the lives of your IT team in this eBook.
