Some believe that high levels of software customization in a distributed server environment are the answer to the problem. In practice, this approach often leads to wasted and underused resources, built-in inefficiencies, energy and floor-space concerns, security issues, high software licensing costs, and maintenance headaches.
Enterprise-grade servers that are well suited for modern big data analytics workloads have:
- Higher compute intensity (a high ratio of compute operations to I/O; see the sketch after this list)
- Increased parallel processing capabilities
- Increased VMs per core
- Advanced virtualization capabilities
- Modular systems design
- Elastic scaling capacity
- Security and compliance enhancements, including hardware-assisted encryption
- Increased memory and processor utilization
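To make the first item concrete, compute intensity can be expressed as the number of operations a workload performs per byte of data it moves through I/O. The sketch below is a minimal illustration of that ratio; the function name and the figures are hypothetical, not measurements from any particular system.

```python
# Minimal sketch: estimating the compute intensity of a workload as the
# ratio of arithmetic operations performed to bytes moved through I/O.
# All figures are hypothetical and serve only to illustrate the ratio.

def compute_intensity(total_ops: float, bytes_transferred: float) -> float:
    """Return operations per byte of I/O; higher means more compute-bound."""
    return total_ops / bytes_transferred

# Example: a scoring job that performs 5 * 10^12 floating-point operations
# while reading 200 GB of input data.
ops = 5e12
io_bytes = 200e9

print(f"Compute intensity: {compute_intensity(ops, io_bytes):.1f} ops/byte")
```

A workload with a high value here spends most of its time computing rather than waiting on storage or the network, which is exactly the profile these servers are built to serve.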
Enterprise-grade servers also offer built-in resiliency that comes from integration and optimization across the full stack of hardware, firmware, hypervisor, operating system, databases, and middleware. Because these systems are often designed, built, tuned, and supported together, they are also easier to scale and manage.
For example, many large financial institutions have embarked on aggressive programs to use predictive analytics technology to enhance their revenues, which places greater demand on existing compute resources. Using an enterprise-grade server helps these institutions run thousands of tasks in parallel to deliver analytics services faster, and lets them create a virtualized environment that improves server utilization and shares server resources across business units. Server consolidation and virtualization help reduce the number of physical servers, saving data center space and yielding savings through reduced power and cooling, hardware maintenance, software licensing, and management costs.
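As a rough illustration of the fan-out pattern described above, the sketch below runs many independent scoring tasks in parallel using Python's standard concurrent.futures module. The score_account function and its inputs are hypothetical stand-ins for a real predictive-analytics task, not part of any specific institution's workload.

```python
# Minimal sketch: running many independent analytics tasks in parallel.
# score_account is a hypothetical stand-in for a real predictive model.
from concurrent.futures import ProcessPoolExecutor

def score_account(account_id: int) -> tuple[int, float]:
    """Pretend to score one account; returns (account_id, risk_score)."""
    # A real implementation would load features and apply a trained model.
    return account_id, (account_id % 100) / 100.0

if __name__ == "__main__":
    account_ids = range(10_000)  # hypothetical batch of accounts to score
    # The executor spreads the tasks across the available cores; on a large
    # enterprise server, far more workers can run side by side.
    with ProcessPoolExecutor() as executor:
        results = list(executor.map(score_account, account_ids))
    print(f"Scored {len(results)} accounts")
```

The same pattern scales with the number of cores and virtual machines available, which is why consolidation onto fewer, larger servers can deliver analytics results faster while simplifying management.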
In more technical terms, there are three important computing requirements for big data workloads: