Review: Cisco's Unified Computing System wows

Revolutionary. Cutting edge. State of the art. These words and phrases are bandied about for so many products in the IT field that they've become useless, bland, expected. The truth is that truly revolutionary products are few and far between. That said, Cisco's Unified Computing System fits the bill.

Fully understanding what Cisco has done requires dispensing with preconceived notions of blade servers and blade chassis. Rewire your concepts of KVM, console access, and network and storage interfaces. Stop thinking of your datacenter as islands of servers surrounded by storage arrays and networks. Cisco had the advantage of starting from scratch with a blade-based server platform, and it has made the most of it.

In short, UCS is built around a familiar concept — the blade chassis — but rearchitects it to enable both greater manageability and greater scalability. For the architectural background, read my summary, “How Cisco UCS reinvents the datacenter.” This article focuses on the nitty-gritty details of UCS, and my experiences working with the system in a recent visit to Cisco's San Jose test labs.

UCS building blocks

A Cisco UCS chassis provides eight slots for half-width blades. Each blade is equipped with two Intel Nehalem processors, up to 96GB of RAM using 8GB DIMMs, two SAS drive slots, an LSI Logic SAS RAID controller, and a connection to the chassis backplane. In addition, each blade is outfitted with a Cisco Converged Network Adapter, or CNA. The CNA is essentially the heart of the system, the component that makes UCS unlike traditional blade systems.

The CNA is a mezzanine card that fits a QLogic 4Gb Fibre Channel HBA and an Intel 10Gb Ethernet interface on a single board, connecting directly to the chassis network fabric. The presentation to the blade is two 10Gb NICs and two 4Gb FC ports, with two 10Gb connections to the backplane on the other side. The initial release does not support multiple CNAs per blade, nor does it strictly require one. But the CNA is integral to how the entire UCS platform operates, as it essentially decouples the blade from traditional I/O by pushing storage and network traffic through two 10Gb pipes. This is accomplished through the use of FCoE (Fibre Channel over Ethernet). Everything leaving the blade is thus Ethernet, with the FC traffic broken out by the brains of the operation, the Fabric Interconnects (FIs).

So we have some number of CNA-equipped blades in a chassis. We also have two four-port 10Gb fabric extenders (FEXes) in the same chassis and, outside it, two FIs that drive everything. It's not technically accurate to call the FIs switches, since the chassis function more like remote line cards populated with blades. No switching occurs in the chassis themselves; they are simply backplanes for blades that have direct connections to the FIs. Physically, the FIs are identical in appearance to Cisco Nexus 5000 switches, but they have more horsepower and storage to handle the FCoE-to-FC breakout tasks. They offer 20 10Gb ports, and each supports a single expansion card.

The expansion cards come in a few different flavors, supporting either four 4Gb FC ports and four 10Gb Ethernet ports, or six 10Gb Ethernet ports, or eight 4Gb FC ports. This is in addition to the twenty 10Gb ports built into each FI. There are also three copper management and clustering ports, as well as the expected serial console port. The FI is wholly responsible for the management and orchestration of the UCS solution, running both the CLI and GUI interface natively — no outside server-based component is required.

Connecting the dots

Perhaps a mental picture is in order. A baseline UCS configuration would have two FIs running in active/passive mode for management, with all network communication running active/active across both FIs and each chassis. (Think of a Cisco Catalyst 6509 switch chassis with redundant supervisors: even if one supervisor is standby, the Ethernet ports on that supervisor are still usable. The two FIs work basically the same way.) The FIs are connected to each other with a pair of 1Gb Ethernet ports, and they have out-of-band management ports connected to the larger LAN. The blade chassis is connected by two or four 10Gb links from each FEX (Fabric Extender) in the chassis, one set to each FI. That's it. A fully configured chassis with 80Gb of uplinks will have four power cords and eight SFP+ cables coming out of it, nothing more. Conceivably, an entire rack of UCS chassis running 56 blades could be driven with only 56 data cables, or 28 if only four 10Gb links are required on each chassis.
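
The cable math is easy to sanity-check. Here's a quick back-of-the-envelope sketch, assuming (per the figures above) eight half-width blades per chassis and two FEXes per chassis:

```python
# Back-of-the-envelope cable count for a rack of UCS chassis: a 56-blade rack
# is seven chassis of eight blades each, and every chassis has two FEXes with
# either two or four 10Gb SFP+ links apiece.
BLADES_IN_RACK = 56
BLADES_PER_CHASSIS = 8
FEX_PER_CHASSIS = 2

chassis_count = BLADES_IN_RACK // BLADES_PER_CHASSIS       # 7 chassis

for links_per_fex in (4, 2):
    per_chassis_bw = FEX_PER_CHASSIS * links_per_fex * 10  # Gb per chassis
    cables = chassis_count * FEX_PER_CHASSIS * links_per_fex
    print(f"{links_per_fex} links per FEX: {per_chassis_bw}Gb per chassis, "
          f"{cables} data cables for the rack")
# 4 links per FEX: 80Gb per chassis, 56 data cables for the rack
# 2 links per FEX: 40Gb per chassis, 28 data cables for the rack
```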

From there, the pair of FIs are connected to the LAN with some number of 10Gb uplinks, and the remainder of the ports on the FI are used to connect to the chassis. A pair of FIs can drive 18 chassis at 40Gb per chassis with two 10Gb uplinks to the datacenter LAN, allowing for eight 4Gb FC connections to a SAN from an eight-port FC expansion card.

The basis of the UCS configuration is the DME (Data Management Engine), a memory-based relational database that controls all aspects of the solution. It is itself driven by an XML API that is wide open. Everything revolves around this API, and it's quite simple to script interactions with the API to monitor or perform every function of UCS. In fact, the GUI and the CLI are basically shells around the XML configuration, so there's no real disparity between what can and can't be done with the CLI and GUI, or even external scripts. UCS is a surprisingly open and accessible system. Following that tenet, backing up the entirety of a UCS configuration is simple: The whole config can be sent to a server via SCP, FTP, SFTP, or TFTP, although this action cannot be scheduled through the GUI or CLI.
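
As a taste of how approachable that API is, here's a minimal sketch of pulling a blade inventory over the XML interface. It leans on the documented aaaLogin, configResolveClass, and aaaLogout methods and the /nuova endpoint; the address and credentials are placeholders, and the method and attribute names should be double-checked against Cisco's XML API reference.

```python
# Minimal sketch of querying the UCS Manager XML API with only the Python
# standard library. Method and attribute names (aaaLogin, configResolveClass,
# outCookie, computeBlade, and so on) should be verified against Cisco's XML
# API documentation; the URL and credentials below are placeholders.
import ssl
import urllib.request
import xml.etree.ElementTree as ET

UCS_URL = "https://ucs-cluster.example.com/nuova"   # hypothetical cluster address

ctx = ssl.create_default_context()
ctx.check_hostname = False                          # lab convenience only:
ctx.verify_mode = ssl.CERT_NONE                     # skip certificate checks

def call(body: str) -> ET.Element:
    """POST one XML method to the Fabric Interconnect and parse the reply."""
    req = urllib.request.Request(UCS_URL, data=body.encode(),
                                 headers={"Content-Type": "text/xml"})
    with urllib.request.urlopen(req, context=ctx) as resp:
        return ET.fromstring(resp.read())

# Authenticate and keep the session cookie.
login = call('<aaaLogin inName="admin" inPassword="password" />')
cookie = login.get("outCookie")

# Ask the DME for every blade it knows about.
blades = call(f'<configResolveClass cookie="{cookie}" '
              f'classId="computeBlade" inHierarchical="false" />')
for blade in blades.iter("computeBlade"):
    print(blade.get("dn"), blade.get("model"), blade.get("operState"))

# Release the session.
call(f'<aaaLogout inCookie="{cookie}" />')
```

The same pattern, swapping in other class IDs or the configuration methods, covers monitoring and provisioning alike, which is exactly why the GUI, CLI, and external scripts all end up on equal footing.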

The initial setup of a UCS installation takes about a minute. Through the console, an IP is assigned to the out-of-band management interface on the initial FI, and a cluster IP is assigned within the same subnet. A name is given to the cluster, admin passwords are set, and that's about it. The secondary FI will detect the primary and require only an IP address to join the party. Following that, pointing a browser at the cluster will provide a link to the Java GUI, and the UCS installation is ready for configuration.

Build me up, Scotty

The first order of business is to define the ports on the FIs. They can either be uplink ports to the LAN or server ports that connect to a chassis. Configuring these ports is done by right-clicking on a visual representation of each FI and selecting the appropriate function. It's simple, but also cumbersome because you cannot select a group of ports; you have to do them one by one. Granted, this isn't a common task, but it's annoying just the same. Once you've defined the ports, the chassis will automatically be detected, and after a few minutes, all the blades in the chassis will be visible and ready for assignment.

This is where it gets interesting. Before anything happens to the blades, various pools and global settings must be defined. These pools cover Fibre Channel WWNN (World Wide Node Name) and WWPN (World Wide Port Name) assignments, Ethernet MAC address assignments, UUIDs (Universally Unique Identifiers), and management IP addresses for the BMC (Baseboard Management Controller) interfaces of the blades. These are open to interpretation, as you can assign whatever ranges you like for the UUIDs, WWNNs, WWPNs, and MACs. In fact, it's so wide open that you can get yourself into trouble by inadvertently overlapping these ranges if you're not careful. Assigning a pool itself is extremely simple: you specify a starting address and the number of addresses to put into the pool. Make sure you get it right, however, because you cannot modify a pool later; you can only create another pool using an adjacent range of addresses.
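
To see how easily those ranges can collide, here's an illustrative sketch in plain Python (not the UCS interface; the addresses are arbitrary examples) of two MAC pools, each defined by a starting address and a count:

```python
# Illustrative only: a pool is defined by a starting address plus a count,
# so two pools can overlap silently if the starting points aren't planned.
def mac_pool(start: str, count: int) -> set[int]:
    """Expand a starting MAC address and a count into the full set of addresses."""
    base = int(start.replace(":", ""), 16)
    return set(range(base, base + count))

def as_mac(value: int) -> str:
    """Format a 48-bit integer back into colon-separated MAC notation."""
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

pool_a = mac_pool("00:25:b5:00:00:00", 64)   # 64 addresses for one set of blades
pool_b = mac_pool("00:25:b5:00:00:20", 64)   # starts only 32 addresses later

overlap = pool_a & pool_b
print(f"{len(overlap)} addresses collide, starting at {as_mac(min(overlap))}")
# -> 32 addresses collide, starting at 00:25:b5:00:00:20
```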

You also need to worry about firmware revisions. You can load several different versions of firmware for all blade components into the FIs themselves and assign those versions to custom definitions, ensuring that certain blades will run only certain versions of firmware for every component, from the FC HBAs to the BIOS of the blades themselves. Because UCS is so new, there are only a few possible revisions to choose from, and loading them on the FIs can be accomplished through FTP, SFTP, TFTP, and SCP. Once present on the FIs, firmware can then be pushed to each blade as required. You also can set up predefined boot orders — say, CD-ROM, then local disk, followed by an FC LUN, and PXE (Pre-boot Execution Environment). These can also be assigned to each server instance as required and can include only one element if desired.

All of these pools, firmware packages, and boot orders come together in service profiles, which can be stamped out from templates. There are two forms of service profile template: initial and updating. Each has specific pros and cons, and it's unfortunately not possible to switch forms after the fact: a profile built from an initial template cannot later receive updates pushed from that template.

Initial templates are used to build service profiles once, with no lasting attachment to the originating template. Updating templates stay bound to the service profiles they create, so changing a setting on an updating template pushes that change out to every bound service profile. This is a double-edged sword: it simplifies the management of service profiles, but applying those changes reboots the associated blades, sometimes with little or no warning. Something as innocuous as changing the boot order on a template could cause 20 blades to reboot when you click Save. It would be nice to have an option to stagger the reboots, schedule them, or both. Cisco has acknowledged the problem and is working on a fix.

Profiles built from initial templates don't have this problem, but once built, they must be modified manually, one by one, server by server, if changes are required. There is no best-of-both-worlds solution here, unfortunately.

In any event, you can create a service profile that defines what firmware a blade should run on each component; what WWNN, WWPN, and MAC addresses to assign to the various ports on the blade; what management IP address to assign to the BMC; what order the blade boots in; and whether it boots from local disk or a SAN LUN. You can then assign that profile either to a specific blade or to a pool of identical blades, letting UCS pick the blade. Here, a curious thing happens.
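
That decoupling is the key idea: identity lives in the profile, not in the blade. As a purely illustrative sketch in plain Python (not Cisco's object model; the names, addresses, and firmware string are invented placeholders), a profile is just a bundle of identity and policy that gets stamped onto whichever blade the pool hands back:

```python
# Purely illustrative: a service profile as a bundle of identity and policy,
# and "association" as stamping that identity onto whatever blade is free.
from dataclasses import dataclass, field

@dataclass
class ServiceProfile:
    name: str
    firmware: str                 # firmware package the blade must run
    wwnn: str                     # Fibre Channel node name
    wwpns: list[str]              # one WWPN per vHBA
    macs: list[str]               # one MAC per vNIC
    bmc_ip: str                   # management address for the blade's BMC
    boot_order: list[str] = field(default_factory=lambda: ["fc-lun", "local-disk"])

def associate(profile: ServiceProfile, free_blades: list[str]) -> str:
    """Pick any available blade and bind the profile's identity to it."""
    slot = free_blades.pop(0)     # UCS picks; the admin doesn't care which
    print(f"{profile.name} -> {slot}: MACs {profile.macs}, boot {profile.boot_order}")
    return slot

esx01 = ServiceProfile("esx01", "1.0(1e)", "20:00:00:25:b5:00:00:01",
                       ["20:00:00:25:b5:aa:00:01", "20:00:00:25:b5:ab:00:01"],
                       ["00:25:b5:00:00:01", "00:25:b5:00:00:02"],
                       "10.0.0.51")
associate(esx01, ["chassis-1/blade-3", "chassis-1/blade-4"])
```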

PXE this

Each blade is but an empty vessel before UCS gets its hands on it. To match a service profile, a blade must conform to any number of specific requirements, from the firmware revision on up. Cisco accomplishes the transformation from blank slate to fully configured blade by PXE booting the blade with some 127.0.0.0 network PXE magic and pushing down a Linux-based configuration agent. The agent then accesses all the various components, flashes the firmware, assigns the various addresses, and makes the blade conform to the service profile. This takes a minute or two, all told. Following that, the blade reboots and is ready to accept an operating system.

This process presents a bit of a quandary: What if I want to PXE boot the OS? Through a bit of magic, the UCS configurator PXE framework will not interfere with normal PXE operations. It's apparently smart enough to get out of the way once the blade has been imprinted with the service profile. From that point on, you can install an OS as normal — say, VMware ESX Server, RHEL 5.3, or what have you.
