In case you didn't know, VMware offers an enterprise-grade hypervisor for free: VMware ESXi. It's essentially a stripped-down hypervisor with limited hardware compatibility compared to its big brother ESX, but it will install and run just fine on most modern server-class systems. It offers most of the goodness of ESX without the bothersome licensing, but its feature set is restricted without the purchase of VMware vCenter and the requisite licenses. For instance, on a stand-alone ESXi deployment you can't clone or template virtual machines or use vMotion, among other restrictions. Given the price, though, you get far, far more than what you pay for.
There are ways to stretch ESXi beyond the limits. But be warned -- these methods will drag your ESXi installation into unsupported territory, and you'll be on your own for any and all tech support. But sometimes that needs to happen.
Case in point: I recently had a situation where an ancient VMware Server installation on a Linux host was dying due to the age of the hardware underneath it. Running ESXi would obviously be a better idea than replicating the same VMware Server scenario on fresher hardware, and there was an entire VMware VI3 farm at another location. However, this was a remote office, and the budget was locked down tight. It was time to break out the thinking caps.
The "new" hardware was actually a repurposed HP ProLiant DL585 with four dual-core AMD Opteron 880 CPUs and 16GB of RAM -- not a hugely powerful box by current standards, but plenty for the needs of the remote site. VMware ESXi was installed on the local RAID5 array in a matter of minutes, and the four gigabit Ethernet interfaces on the box were tossed into two Etherchannel trunks to the datacenter switches. Tada, instant hypervisor.
However, the virtual machines running on the elderly Linux host were not compatible with ESX, since VMs and virtual disks created on VMware Server cannot be directly imported into ESX or ESXi. There is a Linux- and Windows-based ESXi remote CLI client that can be used to muck about with some of the internals of ESXi, but it either doesn't support or specifically blocks several of the commands required to make this particular magic happen, and the ESXi host couldn't convert the disks via this method -- so outside the lines we go.
First, I shut down all the VMs on the VMware Server box and exported the directory containing them via NFS. Using the vSphere client, I added that NFS datastore to the ESXi box. Then I manually created the VMs on the ESXi box, but didn't assign or create any virtual disks for them. Next was the fun part.
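The Linux-side export is ordinary NFS. A minimal sketch of what that looked like (the /vm-store path and the 10.0.0.50 ESXi host address are hypothetical -- substitute your own):

```shell
# On the old Linux VMware Server host: export the directory holding the
# powered-off VMs, read/write, with root access preserved so ESXi can
# read the files (no_root_squash is typically required for ESXi NFS)
echo '/vm-store 10.0.0.50(rw,no_root_squash,sync)' >> /etc/exports

# Reload the export table and confirm the share is visible
exportfs -ra
showmount -e localhost
```

With the export live, the datastore shows up in the vSphere client under Add Storage, Network File System.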
If you go to the ESXi console and press Alt-F1, you'll get a system console, but not a shell or login prompt. However, if you type unsupported there, you'll get some warning text and a password prompt. If you type the system root password at that prompt, you'll wind up with a shell. Here there be dragons if you're not familiar with Linux. If you are, it's basically an ash shell like you'd find on any number of embedded Linux devices.
From that shell, edit /etc/inetd.conf and remove the hash mark before the ssh line, then kill -HUP the inetd process, and you can ssh into the server as root. Now everything's much simpler.
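The steps above boil down to two commands from the unsupported console (a sketch, assuming your BusyBox build includes sed -i; if not, just uncomment the line with vi):

```shell
# Uncomment the ssh entry in inetd's config so the SSH daemon is offered
sed -i 's/^#ssh/ssh/' /etc/inetd.conf

# Tell inetd to reread its config by sending it a HUP signal
kill -HUP $(ps | grep '[i]netd' | awk '{print $1}')
```

After that, ssh root@your-esxi-host drops you straight into the same ash shell.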
From this access point, it's trivial to use vmkfstools to take the original VMware Server virtual disks and clone them to ESX-compatible VMDKs (vmkfstools -i /path/to/source /path/to/dest). You can also use that method to clone disks of existing ESX VMs, turning this method into a poor-man's template and cloning mechanism. There are some funky pieces missing from ESXi when accessed through this method, so beware. However, if you're not concerned about running unsupported gear, you can do a whole lot more with ESXi than the GUI will let you.
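In practice the clone is a one-liner per disk. The datastore and VM names below are hypothetical stand-ins for your own:

```shell
# Clone a VMware Server virtual disk from the NFS datastore into an
# ESX-compatible VMDK on the local VMFS volume
vmkfstools -i /vmfs/volumes/nfs-old-host/webserver/webserver.vmdk \
           /vmfs/volumes/datastore1/webserver/webserver.vmdk

# The same invocation against an existing ESX VM's disk works as a
# poor-man's clone -- just point both paths at the local datastore
vmkfstools -i /vmfs/volumes/datastore1/template-vm/template-vm.vmdk \
           /vmfs/volumes/datastore1/new-vm/new-vm.vmdk
```

Once the clone finishes, attach the new VMDK to the VM you created earlier via the vSphere client and power it on.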
If you don't know what you're doing, you can also brick your ESXi installation.
However, this particular tale has a happy ending, as all the VMs were transitioned and are running without a problem on a better box with a better hypervisor. The next time the budget comes around there will be licenses purchased and this server will be able to come in from the cold and join the vCenter party. Until then, the gap has been bridged -- and isn't that the majority of what we do?