Managing down to the processor level isn't a new concept. What is new is that data centers can add, move, and remove virtual processors and memory on the fly to handle usage spikes or maintenance. Operators don't pay for idle extra capacity; they pay only for the servers they actually use, and only a portion of the full cost of processors and memory up front. IBM estimates the cost of these pools at $0.67 per hour, based on per-day costs for processor and memory allocations. Data center operators can manually adjust an application's service levels as often as they want, then use those service levels to drive automation.
3. Object storage: no more playing with blocks
When it comes to data center scale, traditional file storage systems can be limiting. Think of an upstart social network. With a few hundred users, the storage system can keep up with the images and videos posted online. Scaling to a few million users suddenly becomes a management chore -- data center managers have to juggle multiple volumes.
"File systems are designed for people to collaborate on the same data without modifying it at the same time," says Tom Leyden, a spokesman for DataDirect Networks. "If two people access a Word document at the same time, they will lock the file. Those locking mechanisms make it complex to scale the file system. A file system is slow when it's locked."
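The locking Leyden describes can be sketched in a few lines. This is a minimal illustration using POSIX advisory locks (Python's `fcntl.flock`), not a description of how Word itself locks documents: while one writer holds the exclusive lock, every other handle asking for it must wait.

```python
import fcntl

def exclusive_write(path, text):
    """Append text under an exclusive advisory lock (POSIX flock).

    While the lock is held, any other process or file handle asking
    for the same lock blocks -- the contention that makes shared
    file systems hard to scale.
    """
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)      # wait for exclusive access
        try:
            f.write(text)
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)  # release so others can proceed
```

A second handle that tries to take the lock without waiting (`LOCK_EX | LOCK_NB`) fails immediately while the first holds it, which is exactly the slowdown the quote points at.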
The answer, says Leyden, is object storage. The idea is to replace file paths with a simplified ID system: each object gets an ID that spans multiple storage volumes and points to where that object is stored, and metadata attached to the object makes it searchable across volumes. There's no hierarchy and no locking mechanism, says Leyden. That helps with scaling, because object storage can create "clusters" of data that grow as the company grows, under a single storage management system -- one that's easier to manage.
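The scheme can be sketched as a toy store in a few dozen lines. All class and method names here are illustrative, not any vendor's API: IDs are flat and location-independent, an index maps each ID to its volume, and metadata queries work across every volume at once.

```python
import uuid

class ObjectStore:
    """Toy flat-namespace object store: no directories, no locks.

    Each object gets a globally unique ID that encodes nothing about
    where it lives; an index maps IDs to volumes, and attached
    metadata makes objects searchable across volumes. Names are
    illustrative, not a real product's API.
    """

    def __init__(self, volumes):
        self.volumes = {v: {} for v in volumes}  # volume -> {id: data}
        self.index = {}                          # id -> (volume, metadata)

    def put(self, data, metadata, volume):
        oid = uuid.uuid4().hex                   # flat ID, no hierarchy
        self.volumes[volume][oid] = data
        self.index[oid] = (volume, metadata)
        return oid

    def get(self, oid):
        volume, _ = self.index[oid]              # the index locates the volume
        return self.volumes[volume][oid]

    def search(self, **criteria):
        """Return IDs whose metadata matches, across every volume."""
        return [oid for oid, (_, md) in self.index.items()
                if all(md.get(k) == v for k, v in criteria.items())]
```

Adding capacity is then just adding another volume to the pool; callers keep the same IDs and the same `get`/`search` calls, which is why the model scales without the volume juggling described above.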
4. Auto-tiering: scale up, scale down
Data center managers need to adjust storage automatically as application needs change. The goal is to accommodate high-performance apps; the challenge is knowing when to scale up for demand and when to scale back down.
Auto-tiering analyzes how frequently application data is actually used. In an infrastructure built on Dell EqualLogic arrays, for example, 80 percent of data becomes inactive after a month. Auto-tiering moves that legacy data to the lowest-cost storage tier rather than leaving it on faster drives longer than necessary.
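The policy above can be sketched in a few lines. This is a toy version of auto-tiering, not any vendor's implementation; the tier names are invented, and the one-month inactivity threshold comes from the example in the text.

```python
import time

HOT, COLD = "ssd", "nearline"        # illustrative tier names
INACTIVE_AFTER = 30 * 24 * 3600      # the article's one-month rule, in seconds

def retier(objects, now=None):
    """Demote data untouched for a month to the cheap tier, and
    promote recently accessed cold data back to the fast tier.

    `objects` maps name -> {"tier": ..., "last_access": epoch seconds}.
    Returns the moves made. A toy sketch of auto-tiering only.
    """
    now = time.time() if now is None else now
    moves = {}
    for name, info in objects.items():
        idle = now - info["last_access"]
        if info["tier"] == HOT and idle > INACTIVE_AFTER:
            info["tier"] = COLD          # demote stale data
            moves[name] = COLD
        elif info["tier"] == COLD and idle <= INACTIVE_AFTER:
            info["tier"] = HOT           # promote active data
            moves[name] = HOT
    return moves
```

Run periodically, a loop like this keeps only the roughly 20 percent of data that stays active on the fast, expensive drives, which is the cost argument the section makes.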