InfoWorld review: Cisco UCS wows

Cisco's Unified Computing System is a more manageable, more scalable, and essentially superior blade server system, despite 1.0 warts

Page 3 of 6

You also need to worry about firmware revisions. You can load several versions of firmware for every blade component into the FIs themselves and assign those versions to custom definitions, ensuring that certain blades run only certain firmware versions for each component, from the FC HBAs to the blades' own BIOS. Because UCS is so new, there are only a few revisions to choose from, and loading them onto the FIs can be accomplished via FTP, SFTP, TFTP, or SCP. Once present on the FIs, firmware can then be pushed to each blade as required. You can also set up predefined boot orders -- say, CD-ROM, then local disk, followed by an FC LUN, and finally PXE (Pre-boot Execution Environment). These, too, can be assigned to each server instance as required and can contain just a single element if desired.
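To make the boot-order idea concrete, here's a minimal sketch of how such a policy might be modeled: an ordered, validated list of boot devices that can hold anywhere from one to four entries. The function and device names are illustrative, not the actual UCS Manager object model.

```python
# Hypothetical model of a UCS-style boot policy: an ordered list of
# boot devices assigned to a server instance. Names are made up for
# illustration; they are not the real UCS Manager API.

BOOT_DEVICES = {"cdrom", "local-disk", "san-lun", "pxe"}

def make_boot_policy(*devices):
    """Validate and return an ordered boot policy (one or more devices)."""
    if not devices:
        raise ValueError("a boot policy needs at least one device")
    for d in devices:
        if d not in BOOT_DEVICES:
            raise ValueError(f"unknown boot device: {d}")
    return list(devices)

# The four-stage order described above:
policy = make_boot_policy("cdrom", "local-disk", "san-lun", "pxe")

# A single-element policy is equally valid:
san_only = make_boot_policy("san-lun")
```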

You can also define which VLANs to present to the blades and which VLAN should be native. It's assumed that each server will trunk those 10Gb interfaces, but the native VLAN assignment means trunking isn't a hard-and-fast requirement. In production, each blade likely will trunk, so the assumption is valid. However, the FIs don't play nice with VTP (VLAN Trunking Protocol), so VLAN definitions are manual, not derived from the rest of the switched LAN. If you have a pile of VLANs to present to your servers, be ready for lots of clicking and typing. Cisco hopes to remedy this in an upcoming release.
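Until Cisco automates this, one workaround for that pile of VLANs is generating the repetitive definitions with a script and pasting them into the CLI. The sketch below emits UCS-style "scope eth-uplink" / "create vlan" commands; the exact syntax is approximated from memory here, so verify it against your UCS Manager release before use.

```python
# Because the Fabric Interconnects don't learn VLANs via VTP, each VLAN
# must be defined by hand. This generates the repetitive CLI lines from
# a list of (name, id) pairs. The command syntax is approximate -- check
# it against your UCS Manager release.

def vlan_commands(vlans):
    """Yield UCS-style CLI lines defining each (name, id) VLAN pair."""
    yield "scope eth-uplink"
    for name, vlan_id in vlans:
        yield f"  create vlan {name} {vlan_id}"
        yield "  exit"
    yield "commit-buffer"

lines = list(vlan_commands([("prod", 100), ("dmz", 200)]))
print("\n".join(lines))
```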

Although the Fabric Interconnects don't speak VTP to the rest of the network, you can define VLANs that will match up with the larger LAN.

There are a few other odds and ends, such as scrub policies. These determine what action to take when a service profile is pulled from a physical blade with local disk -- in other words, whether the local disk should be erased or left alone. Unfortunately, this "scrub" really isn't one: it merely destroys the partition table without actually overwriting the data on the disks.
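The distinction matters for anyone counting on a scrub for data hygiene. A toy illustration, using a byte buffer as a stand-in for a raw disk, shows why wiping only the partition table leaves the underlying data recoverable:

```python
# Illustration of why deleting a partition table isn't a real scrub:
# only the first sector is cleared, and everything beyond it survives.
# A bytearray stands in for a raw disk here.

SECTOR = 512

disk = bytearray(b"PARTTABLE".ljust(SECTOR, b"\x00") + b"confidential-data" * 100)

def fake_scrub(dev):
    """Roughly what the UCS 1.0 scrub does: wipe the partition table only."""
    dev[:SECTOR] = b"\x00" * SECTOR

def real_scrub(dev):
    """A true scrub overwrites every byte of the device."""
    dev[:] = b"\x00" * len(dev)

fake_scrub(disk)
still_there = b"confidential-data" in disk  # the data survives the "scrub"
```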

Once you've created your pools, you can start building your blades into actual servers. The options for building out servers are simple: Either a blade boots from the SAN or PXE, or it boots from local disk. Managing storage is outside the scope of UCS, so let's assume you have a competent storage administrator and need a bunch of LUNs assigned for your budding UCS installation. Through the UCS GUI, you can pull up a simple list of all WWNN and WWPN assignments and immediately export that list to CSV, making it extremely simple to pass that information off to the admin for the storage configuration. Talk about handy.
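For a sense of what that export hands to the storage admin, here's a minimal sketch of the same idea: pairing each profile with its pool-assigned WWNN and WWPN and emitting CSV rows to zone from. The profile names and addresses are invented for the example; this is not the UCS export code itself.

```python
import csv
import io

# Sketch of a WWNN/WWPN-to-CSV export like the one the UCS GUI offers.
# Profile names and addresses below are illustrative placeholders.

def export_wwpn_csv(assignments):
    """assignments: iterable of (service_profile, wwnn, wwpn) tuples."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["service_profile", "wwnn", "wwpn"])
    for profile, wwnn, wwpn in assignments:
        writer.writerow([profile, wwnn, wwpn])
    return buf.getvalue()

csv_text = export_wwpn_csv([
    ("esx-host-01", "20:00:00:25:b5:00:00:01", "20:00:00:25:b5:0a:00:01"),
    ("esx-host-02", "20:00:00:25:b5:00:00:02", "20:00:00:25:b5:0a:00:02"),
])
```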

But I digress -- we haven't even built a server yet.

Service profiles
Server builds are defined in service profiles, which are themselves derived from service profile templates. Service profile templates allow you to define specific server instances and automatically provision one or more servers. Once you've created one global template, you can stamp out profiles for however many servers you need to fulfill that task. The service profiles determine the firmware revision for each blade component; the WWNN, WWPN, and MAC pools to choose from; the boot orders you may have defined; and even the boot policy -- boot from SAN, local disk, or what have you. All of this is surprisingly simple to organize.
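The template-to-profile workflow can be sketched in a few lines: one template stamped out into N profiles, each drawing a unique MAC and WWPN from its pools. The data structures and pool values here are illustrative assumptions, not the UCS object model.

```python
# Sketch of template-driven provisioning: one template instantiated into
# N service profiles, each consuming unique addresses from MAC and WWPN
# pools. Field names and pool values are made up for illustration.

def instantiate(template, count, mac_pool, wwpn_pool):
    """Create `count` profiles from one template, drawing from the pools."""
    profiles = []
    for i in range(1, count + 1):
        profiles.append({
            "name": f"{template['name']}-{i}",
            "firmware": template["firmware"],
            "boot_order": template["boot_order"],
            "mac": next(mac_pool),
            "wwpn": next(wwpn_pool),
        })
    return profiles

macs = iter(f"00:25:b5:00:00:{i:02x}" for i in range(16))
wwpns = iter(f"20:00:00:25:b5:00:00:{i:02x}" for i in range(16))
template = {"name": "esx", "firmware": "1.0(1e)", "boot_order": ["san-lun"]}

servers = instantiate(template, 3, macs, wwpns)
```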

The view at top shows which service profiles are currently assigned to which physical blades, along with the assignment status of all service profiles. The listing below shows a WWNN pool and which service profiles are using which pool addresses. This list can be easily exported in CSV format, which is extremely handy.