How virtualization is lifting us to the cloud

Server virtualization has been a huge win for the data center. Nimboxx CTO David Cauthron explains how the next phase will deliver dramatic benefits in the cloud

Over the past decade, the whole world seems to have embraced virtualization. Is there nothing left to conquer? Hardly. Virtualization technology itself is changing fast, and the right solutions for supporting legacy applications while migrating to modern ones can be tough to find.

This week in the New Tech Forum, David Cauthron, co-founder and CTO of Nimboxx, gives us a bit of virtualization history, explains how it relates to the current reality of the commodity hypervisor, and offers his take on where it's all going from here. -- Paul Venezia

The hypervisor is a commodity -- so where do we go from here?
Virtualizing physical computers is the backbone of public and private cloud computing from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multitenancy, and more.

Early virtualization methods were rooted in emulating CPUs, such as the x86 on a PowerPC-based Mac, enabling users to run DOS and Windows. Not only did the CPU need to be emulated, but so did the rest of the hardware environment, including graphics adapters, hard disks, network adapters, memory, and interfaces.

In the late 1990s, VMware introduced a major breakthrough in virtualization, a technology that let the majority of the code execute directly on the CPU without needing to be translated or emulated.

Prior to VMware, two or more operating systems running on the same hardware would simply corrupt each other as they vied for physical resources and attempted to execute privileged instructions. VMware intelligently intercepted these types of instructions, dynamically rewriting the code and storing the new translation for reuse and fast execution.
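A rough, heavily simplified sketch of that translate-and-cache idea follows; the helper functions here are hypothetical placeholders meant only to illustrate the flow, not VMware's actual implementation.

```python
# Conceptual illustration of binary translation with a translation cache.
# fetch_block, rewrite_privileged, and execute are assumed helpers.
translation_cache = {}  # guest code address -> rewritten, safe-to-run block

def run_guest_block(guest_addr, fetch_block, rewrite_privileged, execute):
    """Execute one guest code block, translating it on first use."""
    block = translation_cache.get(guest_addr)
    if block is None:
        raw = fetch_block(guest_addr)          # read the guest's original code
        block = rewrite_privileged(raw)        # replace privileged instructions
        translation_cache[guest_addr] = block  # store the translation for reuse
    return execute(block)                      # run the rewritten code directly on the CPU
```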

In combination, these techniques ran much faster than previous emulators and helped define x86 virtualization as we know it today -- including the old mainframe concept of the "hypervisor," a platform built to enable IT to create and run virtual machines.

The pivotal change
For years, VMware and its patents ruled the realm of virtualization. On the server side, running on bare metal, VMware's ESX became the leading Type 1 (or native) hypervisor. On the client side, running within an existing desktop operating system, VMware Workstation was among the top Type 2 (or hosted) hypervisors.

No longer a technology just for developers or cross-platform software usage, virtualization proved itself as a powerful tool to improve efficiency and manageability in data centers by putting servers in fungible virtualized containers.

Over the years, some interesting open source projects emerged, including Xen and QEMU (Quick EMUlator). Neither was as fast or as flexible as VMware, but they set a foundation that would prove worthy down the road.

Around 2005, AMD and Intel created new processor extensions to the x86 architecture that provided hardware assistance for dealing with privileged instructions. Called AMD-V and VT-x by AMD and Intel respectively, these extensions changed the landscape, eventually opening server virtualization to new players. Soon after, Xen leveraged these new extensions to create hardware virtual machines (HVMs) that used the device emulation of QEMU with hardware assistance from the Intel VT-x and AMD-V extensions to support proprietary operating systems like Microsoft Windows.
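On a Linux host, the presence of these extensions is visible in the CPU flags. The following is a minimal sketch, assuming the flags are exposed in /proc/cpuinfo as they are on typical Linux systems.

```python
# Check for the hardware virtualization extensions described above:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                if "vmx" in flags:
                    return "Intel VT-x"
                if "svm" in flags:
                    return "AMD-V"
    return None

print(hardware_virt_support() or "No hardware virtualization extensions found")
```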

A company called Qumranet also began to include virtualization infrastructure in the Linux kernel -- called Kernel-based Virtual Machine (KVM) -- and started using the QEMU facility to host virtual machines. Microsoft eventually got into the game as well, releasing Hyper-V in 2008.
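To make the KVM-plus-QEMU pairing concrete, here is a minimal sketch of booting a guest with QEMU and enabling KVM acceleration when the kernel exposes /dev/kvm. The disk image path, memory size, and vCPU count are placeholder values.

```python
import os
import subprocess

def boot_guest(disk_image="guest.qcow2", memory_mb=2048, vcpus=2):
    """Boot a guest with QEMU, using KVM hardware acceleration if available."""
    cmd = [
        "qemu-system-x86_64",
        "-m", str(memory_mb),
        "-smp", str(vcpus),
        "-drive", f"file={disk_image},format=qcow2",
    ]
    if os.path.exists("/dev/kvm"):
        cmd.append("-enable-kvm")  # hardware-assisted virtualization via the kernel's KVM module
    else:
        print("Warning: /dev/kvm not found; QEMU will fall back to pure emulation")
    subprocess.run(cmd, check=True)
```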

A new industry is born
When virtualization essentially became "free" -- or at least accessible without expensive licensing fees -- new use cases came to light. Specifically, Amazon began to use the Xen platform to rent some of its excess computing capacity to third-party customers. Through its APIs, Amazon kicked off the revolution of elastic cloud computing, in which applications themselves could self-provision resources to fit their workloads.
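Self-provisioning through a cloud API can be as simple as an application requesting more instances when its workload grows. Below is an illustrative sketch using today's boto3 SDK for EC2; the AMI ID, instance type, and region are placeholders, not values from the article.

```python
import boto3

# Placeholder region; credentials are assumed to be configured in the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

def scale_out(ami_id="ami-12345678", count=1):
    """Request additional instances when the workload needs more capacity."""
    response = ec2.run_instances(
        ImageId=ami_id,           # placeholder machine image
        InstanceType="t3.micro",  # placeholder instance size
        MinCount=count,
        MaxCount=count,
    )
    return [i["InstanceId"] for i in response["Instances"]]
```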

Today, open source hypervisors have matured and become pervasive in cloud computing. Enterprises are venturing beyond VMware, looking to architectures that use a KVM or Xen hypervisor. These efforts are less about controlling costs and more about leveraging the elastic nature of cloud computing and the standards being built on these open source alternatives.

The future: High-performance elastic infrastructures
With the commoditization of the hypervisor, innovation is now focused on the private/public cloud hardware architectures and software ecosystems that surround them: storage architectures, software-defined networking, intelligent and autonomous orchestration, and application APIs.

Legacy server applications, which have been conveniently containerized into virtual machines, are slowly being retired to make way for elastic, self-defining cloud applications that truly are the future of computing -- although both will operate side by side for some time.
