Even in blade systems this is a problem, as evidenced by a recent trip I took down CPU-matching lane with a Sun 6000 chassis and X6250 blades. When purchasing late last year, I spec'd the top-end 2.66GHz quad-core Intel X5355 CPUs, hoping that since they were the top of the line at the time, they would still be available when more blades were required. When new blades were needed six months later, Sun informed me that those CPU kits were no longer produced. After many gyrations with Sun and a reseller, matching CPU kits were magically located, but they came at a premium, including $250/hr for a Sun tech to manually downgrade the BIOS on the new blades. Talk about heading in the wrong direction. The other option was to bump all the existing servers up to the new CPUs at a very significant cost. That can really put a damper on ROI.
Another problem with virtualization is the all-your-eggs-in-one-basket issue. Running a farm of host servers with DRS and HA enabled can provide a lot of peace of mind, since VMs will migrate around to even out load, and if a host goes down, the VMs that had been running on it will be restarted on other hosts. However, the controller making those decisions must itself be available. In the case of VMware, that's the VirtualCenter server, which runs as a Windows service. Several times in the past month, I've found myself manually restarting the VC service on a Windows 2003 VC server due to lockups. Everything comes back together after one of these episodes, but when you're trying to put a host into maintenance mode to upgrade RAM and a few VMs have been "stuck" mid-migration between two ESX hosts for 20 minutes, it can be nerve-wracking. It's at that point that you fully realize that if there's a big problem with VC (or its equivalent on other virtualization platforms), you're not just looking at rebooting a single server -- you might be forced to reboot a dozen or more. Thankfully, with all the virtualization work that I've done, I've never had to punt to that particular solution, but more than a few times, with more than a few virtualization platforms, I've come close.
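For what it's worth, when the VC service locks up like this, a restart from a command prompt on the VC server is usually all it takes. A minimal sketch -- this assumes the default service name, vpxd, which you should verify on your own install:

```shell
REM Confirm the VirtualCenter service name and state first
REM ("vpxd" is the default short name; check yours with `sc query` if unsure)
sc query vpxd

REM Bounce the service
net stop vpxd
net start vpxd
```

Connected VI Clients will drop and reconnect, but the ESX hosts and their VMs keep running throughout -- which is exactly why a flaky VC service is annoying rather than catastrophic, most of the time.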
In that particular case, I had to manually remove the offending VMs from the host using the VMware CLI tools on the hosts themselves, along the lines of this synopsis. It worked, but it wasn't without a few tense moments.
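The general shape of that cleanup, for the curious: from the ESX service console, you can inspect and forcibly power off a wedged VM with vmware-cmd. A rough sketch for ESX 3.x -- the datastore path here is purely illustrative, and you should try a soft stop before resorting to a hard one:

```shell
# List the VMs registered on this particular host
vmware-cmd -l

# Check the stuck VM's power state (path below is a made-up example)
vmware-cmd /vmfs/volumes/storage1/stuckvm/stuckvm.vmx getstate

# Force a power-off if the VM won't respond to anything gentler
vmware-cmd /vmfs/volumes/storage1/stuckvm/stuckvm.vmx stop hard

# Unregister it from this host so it can be re-registered cleanly elsewhere
vmware-cmd -s unregister /vmfs/volumes/storage1/stuckvm/stuckvm.vmx
```

Once the host no longer thinks it owns the VM, VirtualCenter generally sorts itself out and the maintenance-mode operation can proceed.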
As virtualization infrastructures mature, these management stability problems will hopefully decrease, and perhaps at some point live migrations between hosts with differing CPUs will become possible -- though probably at a significant performance cost, if it can be done at all.
Until then, don't let this information deter you from moving toward virtualization -- just keep your eyes open while you walk down the path.