As more companies begin to move their applications into the cloud, many are finding out that not all applications are created equal. By that, I mean not all applications are tailor-made to migrate into the cloud. Part of the problem for enterprise organizations is the workload demand of certain applications: either a large number of processing cores or more memory than traditional cloud architectures can accommodate. While virtualization technology is normally the answer to many cloud-related questions, this time virtualization has to be used and thought of in a different way in order to meet the challenge.
When we typically talk about virtualization, we do so in terms of partitioning or making many smaller virtual machines out of a single larger physical machine. In doing so, we optimize the workload of a single physical machine, but what happens when you need to go the other way? What happens when you need to optimize the workload demand of an application that requires more processor cores or more memory than a single physical machine has to offer?
ScaleMP is one company providing such a technology. Its new offering is described as virtualization for aggregation rather than for consolidation.
To find out more, I spent some time with Shai Fultheim, the founder and president of ScaleMP, and we talked about the differences between scaling up and scaling out with virtualization.
InfoWorld: You and I were talking about different ways of viewing server virtualization. Can you explain these a bit more?
ScaleMP: The best analogy for the two ways to view server virtualization is to think of how IT has viewed storage for years. There are two ways to purchase and implement storage: large storage arrays, or smaller in-system disks such as JBOD or NAS. In the former, companies partition large storage arrays for particular workloads, users, or departments; the single large array gives them an easier way to manage and distribute storage. In the latter, storage management tools have been available to concatenate the distributed smaller arrays so they appear as a large pool or a single storage resource. This is exactly what is now occurring in server virtualization.