The flip side of that advice is that you should always have enough physical servers to survive the loss of a single server -- and ideally, the loss of several physical servers if the implementation is large enough. While modern servers are proving less likely to catch fire, it does still happen, and you need to be prepared in case of catastrophe.
You also absolutely need a suitable safety net for routine maintenance. If you cannot take a physical host offline for 15 minutes to replace a failed DIMM because the remaining servers cannot adequately handle the RAM or processing load caused by the loss of that server, you're in trouble, and you're losing out on one of the prime benefits of server virtualization: a reduction in scheduled downtime. The last thing you want when you pull a physical server for maintenance is to have to power down some number of virtual servers just to shed load. Running N+1 is the bare minimum; building in headroom beyond that is even better.
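The N+1 headroom check above amounts to simple arithmetic, and it's worth doing on paper before a host ever goes down. Here's a minimal sketch of that worst-case math; the function name and the 256 GB / 600 GB figures are illustrative assumptions, not measurements from any real cluster:

```python
def survives_host_loss(host_ram_gb, total_vm_ram_gb, hosts_down=1):
    """Can the cluster lose `hosts_down` physical hosts and still hold
    every VM's RAM on the remaining servers? Worst case: assume the
    hosts taken down are the largest ones."""
    remaining = sorted(host_ram_gb, reverse=True)[hosts_down:]
    return sum(remaining) >= total_vm_ram_gb

# Four 256 GB hosts carrying 600 GB of allocated VM RAM:
print(survives_host_loss([256] * 4, 600))                 # 768 >= 600 -> True
print(survives_host_loss([256] * 4, 600, hosts_down=2))   # 512 <  600 -> False
```

The same check applies to CPU: run it against allocated vCPU load as well as RAM, and size for whichever resource is tighter.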
Any realistic virtualization platform should be built on shared storage. Without it, each server is essentially a silo, and the VMs running on those siloed servers cannot be protected against physical server failure. Plus, building and expanding the virtualized infrastructure gets harder and more tedious without shared storage. In fact, unless we're talking about a very, very small virtualization build, shared storage isn't optional -- it's a hard and fast rule.
To that end, make sure that your shared storage solution is as robust as possible. Whether you plan on using iSCSI, NFS, or Fibre Channel, take a good look at your disk I/O needs before you start buying switches, HBAs, and disk. In many cases, SATA drives are more than adequate for general-purpose server virtualization, and in some cases, NFS will outperform iSCSI for day-to-day computing needs. This may lead you in a different direction than your storage vendor wants you to go, but unless you're talking about a heavy transactional disk workload, you probably don't need to start with SSD- or even SAS-based arrays.
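Sizing that disk I/O need is mostly back-of-the-envelope math: reads plus penalized writes divided by per-drive throughput. The per-drive IOPS figures below are common rules of thumb (not vendor specs), and the 70/30 read/write split and workload number are purely hypothetical -- measure your actual workload before trusting any of this:

```python
import math

# Rule-of-thumb IOPS per spindle; real drives and arrays vary.
DRIVE_IOPS = {"sata_7200": 80, "sas_15k": 175, "ssd": 5000}

def spindles_needed(workload_iops, drive_type, read_pct=0.7, write_penalty=2):
    """Estimate drive count for a workload, charging each write a RAID
    write penalty (2 for RAID10; parity RAID levels cost more)."""
    reads = workload_iops * read_pct
    writes = workload_iops * (1 - read_pct) * write_penalty
    return math.ceil((reads + writes) / DRIVE_IOPS[drive_type])

# A hypothetical 2,000 IOPS general-purpose workload:
print(spindles_needed(2000, "sata_7200"))  # 33 SATA spindles
print(spindles_needed(2000, "sas_15k"))    # 15 SAS spindles
```

The point of the exercise: for a modest general-purpose workload, a wide SATA array often meets the number comfortably, which is exactly why the faster tiers aren't automatically necessary.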
In fact, unless you're talking about pushing 10G to each server, the use of these speedier storage mechanisms may be pointless. And with the proliferation of cheap disk, don't stick with traditional RAID5; go with RAID6 or, ideally, RAID10 on your array. Yes, you'll give up some space, but the performance and reliability of those choices make them worthwhile.
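The space you give up is easy to quantify with standard RAID math. This small sketch (the eight-drive, 4 TB configuration is just an example) shows the capacity side of the RAID5/RAID6/RAID10 tradeoff; what it can't show is the rebuild-window risk and write-performance cost that make RAID5 the worst of the three on big, cheap disks:

```python
def usable_tb(drives, drive_tb, level):
    """Usable capacity for an array of equal-size drives."""
    if level == "raid5":
        return (drives - 1) * drive_tb   # one drive of parity; survives 1 failure
    if level == "raid6":
        return (drives - 2) * drive_tb   # dual parity; survives 2 failures
    if level == "raid10":
        return (drives // 2) * drive_tb  # mirrored pairs; survives 1 per pair
    raise ValueError(f"unknown RAID level: {level}")

# Eight 4 TB drives (32 TB raw):
for level in ("raid5", "raid6", "raid10"):
    print(level, usable_tb(8, 4, level), "TB usable")
# raid5 -> 28 TB, raid6 -> 24 TB, raid10 -> 16 TB
```

RAID10's 50% overhead looks steep next to RAID5's single parity drive, but with disk as cheap as it is, that's the cost of an array that rebuilds quickly and doesn't collapse on a second failure mid-rebuild.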