Catharsis via hypervisor

So for all the virtual infrastructures that I've built in the lab and in the field, for every time I've PXE booted a new VMware ESX server into a production environment, I hadn't virtualized my own core systems... until yesterday.

It was a spur of the moment kinda thing, and I went with it. I started the day with five physical core servers of varying age, and ended the day with two, having collapsed the others onto a single Dell PowerEdge running VMware ESX 3.0.2. These were a few Windows boxes, three Linux boxes, and a FreeBSD system, all running on relatively ancient hardware.

It might seem odd that just feet away from an Intel reference system running the new Stoakley platform and several 8- and 16-core servers from HP and Sun sat a six-year-old HP Kayak XU800: a Fedora Core 3 system with a Pentium III-866 and two 40GB PATA drives in software RAID 1, running Cyrus imapd for my 5GB mailbox as well as primary DNS, DHCP, and NTP for the entire lab across all VLANs. Around the corner were several other servers handling backup tasks, public Web apps, and so forth. I decided that everything had to go, and by 5 p.m. I'd rebuilt all the systems on the VMware box, including a somewhat annoying Berkeley DB 4.2-to-4.3 migration for Cyrus, since the new server was built on CentOS 5. Essentially, this was a five-hour server consolidation project from conception to reality, and I don't seem to be any the worse for it. In fact, I've lightened the power and heat loads in the lab, and I'm making far better use of the Dell PowerEdge 2800 that's holding down these new systems.
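For anyone facing the same Cyrus headache: one way to handle a Berkeley DB version jump like that is to sidestep BDB entirely with Cyrus's cvt_cyrusdb utility, which dumps a database to a version-independent flat format that the new BDB can re-import. A rough sketch follows — the paths and the /usr/lib/cyrus-imapd location are CentOS defaults, and your database list and backends may differ, so treat this as illustrative rather than a recipe:

```shell
# Stop Cyrus on the old host before touching its databases.
/etc/init.d/cyrus-imapd stop

# Dump the BDB-backed mailboxes database to the flat format,
# which any Berkeley DB version can read:
su - cyrus -c '/usr/lib/cyrus-imapd/cvt_cyrusdb \
    /var/lib/imap/mailboxes.db berkeley \
    /var/lib/imap/mailboxes.flat flat'

# Copy mailboxes.flat (plus the mail spool and config dirs) to the
# new CentOS 5 box, then convert back into the new BDB 4.3 format:
su - cyrus -c '/usr/lib/cyrus-imapd/cvt_cyrusdb \
    /var/lib/imap/mailboxes.flat flat \
    /var/lib/imap/mailboxes.db berkeley'
```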

The PE2800 isn't the highest-spec box, especially for VM duty: two Hyper-Threaded single-core 3.6GHz Xeons, 4GB RAM, and a pile of disk in RAID 5. Even so, it's now running six VMs like a champ, including my Asterisk PBX build. I plan on moving a few more boxes over today, but the bulk of the work is done.

The one real issue I have with the new environment is that all the VMware management tools are Windows-based. I really wish VMware had continued its practice of producing both Windows and Linux management tools, since my only Windows XP system is a VM running on my workstation, and I now have a slightly disconcerting dependency there. Add the fact that the VirtualCenter server is running on a VM of its own on another physical system under VMware Server, and I probably should build a standalone Windows server to handle those tasks. It would be ideal if VMware brought simple VM management for ESX hosts into the VMware Server Console. I don't need all the bells and whistles there, but I do need to be able to power servers up and down and access their consoles. Given the consternation over the management split between VMware Server and ESX, this might actually happen, but I'm not holding my breath.

On the other side of the coin, migrating a VMware Server 1.0.3 VM to ESX 3.0.2 was very simple -- move the files into place on the ESX host, run vmkfstools on the vmdk, and import the VM via VirtualCenter. I found that it's best to delete the NIC from the VM and re-add it since there's some issue with variable syntax in the .vmx file between VMware Server and ESX, but all told, it was quick and easy, just the way it should be.
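In outline, that migration path looks something like the following on the ESX service console. The datastore and VM names here are placeholders, and the final registration step can equally be done by importing through the VirtualCenter client, which is what I did:

```shell
# Copy the VM's directory from the VMware Server host into a VMFS
# datastore (scp, NFS, whatever works), then convert the hosted-format
# disk into ESX's native format with vmkfstools' import/clone option:
vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm.vmdk \
              /vmfs/volumes/datastore1/myvm/myvm-esx.vmdk

# Point the .vmx at the new vmdk, then register the VM -- either via
# the VirtualCenter client or from the service console:
vmware-cmd -s register /vmfs/volumes/datastore1/myvm/myvm.vmx
```

After that, delete and re-add the virtual NIC as noted above, since the .vmx syntax for network devices differs between VMware Server and ESX.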
