Computerworld

Dell's greener M-Series

Dell's updated M-Series blade server gives more horsepower for less juice

The Dell M-Series blade server is being touted as using 19 per cent less energy than the company's previous blade offering while still providing a jump in horsepower. I had a chance to use this beast as part of the Interop iLabs, and after a false start caused by missing software in the pre-production unit, I found myself wondering if I had enough shekels to buy one for my lab. Instead of forcing me to surround my servers with additional out-of-band management gear, the M-Series has several cost- and labor-saving features built right in. Those features include IP KVM, intelligent power control, serial over IP, Virtual Media over IP, and power and environmental monitoring.

Since I didn't have a chance to tear into Dell's previous generation of blades, I don't have a way of confirming Dell's 19 per cent power savings claim; however, I can say that for the six days we ran the system, our biggest, baddest blade (dual Quad Core Intel Xeon E5430 2.66GHz) used a grand total of 21.7 kilowatt-hours of energy. That's nice!
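
For a rough sense of scale, a little back-of-the-envelope math (assuming that 21.7 figure is kilowatt-hours accumulated over the whole six-day run) puts the blade's average draw in the neighborhood of 150 watts:

    # Back-of-the-envelope average draw, assuming 21.7 kWh over six days
    energy_kwh = 21.7          # total energy for the test window
    hours = 6 * 24             # six days of continuous running
    avg_watts = energy_kwh * 1000 / hours
    print(f"average draw: {avg_watts:.0f} W")   # roughly 150 W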

We really didn't expect the chassis to sip power, especially when we unpacked the system and found nine big-throated fans that looked like the business end of a wind tunnel. We also didn't expect it to be as quiet as it was, but what we did expect were some hellaciously fast blades -- and that's what we got. We ran a combination of Windows Server 2003 Enterprise Edition in both 32-bit and 64-bit flavors, along with CentOS Linux and VMware ESX Server. While the Dell OpenManage installation DVD lists only SUSE and Red Hat, the CentOS installation handled the LSI SAS RAID array and the Broadcom Gigabit Ethernet NICs just fine. Since we didn't have the SPEC benchmarks available to us in this round, I don't have direct performance numbers to compare with other servers InfoWorld has reviewed. However, the system ran five virtual servers without a hitch, and the performance for the Unified Communications demonstration at Interop was more than adequate.
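
If you're planning a similar off-list install, it's worth a quick check that the storage and network drivers actually loaded. Here's the sort of sanity check you could script; the kernel module names (mptsas for the LSI SAS controller, bnx2 for the Broadcom GigE ports) are my assumptions for this generation of parts, so adjust them for your hardware:

    # Confirm the assumed storage and NIC kernel modules are loaded.
    with open("/proc/modules") as modules:
        loaded = {line.split()[0] for line in modules}

    for name in ("mptsas", "bnx2"):
        print(f"{name}: {'loaded' if name in loaded else 'MISSING'}")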

The configuration begins

We started off assigning IP addresses to the Chassis Management Controller (CMC) by connecting a local keyboard, mouse, and monitor, then put those addresses onto our isolated control network. It's worth noting that this functionality uses the same Out Of Band Management Interface (OOBI) employed by many Avocent iKVM products. While the CMC's OOBI connection required only a single Ethernet uplink, each blade got its own management address, as did the CMC itself, providing access to the control and environmental monitoring widgets in the CMC and Integrated Dell Remote Access Controller (iDRAC) browser interfaces.
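
If you want a quick reachability check before diving into the browser interfaces, something like this little Python sketch will do; the addresses are placeholders for whatever you assign on your own control network:

    import socket

    # Placeholder addresses for the CMC and a few blade iDRACs;
    # substitute the ones assigned on your isolated control network.
    mgmt_addrs = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]

    for addr in mgmt_addrs:
        try:
            # Both the CMC and iDRAC web interfaces listen on HTTPS (443).
            with socket.create_connection((addr, 443), timeout=3):
                print(f"{addr}: web interface reachable")
        except OSError:
            print(f"{addr}: no response")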

It should be pointed out here that if you wish to use the remote GUI console capabilities of the system, you will need to run an ActiveX-capable browser, which means IE under Windows for most people. As of this writing, there was no word on whether or when Avocent (Dell's choice of OEM for the iKVM capability) would support non-ActiveX browsers for full KVM over IP. This is an odd oversight, since Avocent bought Cyclades, which did have a full Java iKVM solution that worked great on my old Sony PictureBook running Debian Sarge.

It should also be noted that the addresses assigned to the iDRAC interface will follow the blades regardless of which slot they're in, so be careful if you shuffle blades around. We did the shuffle to even out the heat load and had to refer to notes to figure out which blade was which. As for the blade NICs, Dell engineering tells me that you can assign up to six NICs per blade through the CMC, and those can appear on either the pass-through ports or on the managed switch. Here's how they broke it out for me (a small sketch of the resulting mapping follows the list):

The NICs go in pairs. Think of them as two integrated NICs (Fabric A) and two dual-port PCI cards (Fabrics B and C).

In this case, the PCI cards are really mezzanine cards.

Each pair of NICs is associated with a fabric, so 3 pairs = Fabrics A, B & C.

Remember that these choices are made at the chassis level.

Fabric A is always integrated NICs, port 1 to switch 1 and port 2 to switch 2. This is for every blade and it gives you redundant connectivity.

Fabric B & C is similar in design, but is Mezzanine card and is labeled B & C which aligns them with the switch modules.

There is no interdependency between the three fabrics. The choice for one fabric does not restrict, limit, or depend on the choice for any other fabric.

The only mandate is that Fabric A (the integrated NICs) is Ethernet only.

(This is the information on NIC assignment from Dell; currently the NICs are available in a Broadcom flavor only.)

Note: Thank you to the Dell engineering support team for the above explanation on NIC assignments.
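
To make that mapping concrete, here's a small sketch of how the three pairs line up on each blade, based purely on the description above (the I/O module labels are my shorthand, so check the markings on your own chassis):

    # Per-blade NIC-to-fabric mapping as described above: three pairs,
    # each pair tied to one fabric, with one port going to each of the
    # fabric's two I/O modules for redundancy.
    fabrics = {
        "A": ("integrated NICs (Ethernet only)", ["module A1", "module A2"]),
        "B": ("mezzanine card", ["module B1", "module B2"]),
        "C": ("mezzanine card", ["module C1", "module C2"]),
    }

    for name, (kind, modules) in fabrics.items():
        print(f"Fabric {name}: {kind}")
        for port, module in enumerate(modules, start=1):
            print(f"  port {port} -> {module}")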

OS Catch-22

So on to the OS install and some of the gotchas I discovered: First, do not lose your accessory pack. Installing Windows onto these blades without the OpenManage DVD can be done, but it's not worth the pain when Dell's installation tools do a great job. I happened to find a catch-22 situation where Windows Server 2003 x86 (original version) had an issue with the LSI RAID controllers that got fixed in Service Pack 1. The bug makes the 134GB RAID-0 array (two SAS drives) appear to Windows as only a 5GB partition (even though Windows Disk Manager can see all 134GB), and 5GB just isn't enough disk space to install the service pack. This catch-22 can be solved by getting Windows Server 2003 R2 with SP2, which has those critical patches slipstreamed into the distro.

You also want to use the OpenManage OS installer, since it inserts the widgets that report OS status information back to the CMC. Virtual media over the remote management IP connection is old hat to Dell but new to some other blades on the market. The functionality mirrors what's available on the Avocent IP KVMs: the iDRAC, IP KVM, and console traffic all coexist on the same IP connection without adding more Ethernet connections or IP addresses. You can share ISO CD/DVD images over the remote console link so that you don't need a USB CD/DVD drive hanging off the front of your server. If you need a virtual floppy (for loading disk drivers Windows doesn't include by pressing F6 during the install), that option is also available.

After getting past the catch-22 stage, the M-Series blades just plain worked. Video through the iKVM and through the front-panel VGA connector was crisp, and with two USB connectors up front, you no longer have to hunt down the proprietary cables as you did for the past series of Dell blades.

We did fall into a trap since we only had six blades for this 16-slot behemoth. We just slipped them into slots one through six. However, this set up an uneven heat load in the chassis, and when we got to the very hot convention center, the bottom three fans turned our rack into a wind tunnel. At the suggestion of a Dell engineer, we shuffled the blades around and the fans dropped in speed and noise. Once the air conditioning came up in the exhibit hall, all nine fans dropped to a whisper.

Extra management features

One of my favorite features is really only visible if you're running PowerShell under Windows or a shell account on a Linux box -- the serial over IP redirector. Think of it as a terminal server connected to the first serial port and redirecting the console over the network. Dell has crammed an IP KVM (optional), a CMC (which can also have a redundant unit), a serial terminal server, and a couple of switches (various configurations in the options list) into the chassis, saving you at least 3RU of gear. The only thing I wasn't wild about was being able to bring up just a single GUI console at a time on my laptop. I have to note that this problem was only on my machine, since others on the iLabs team were able to connect just fine from another laptop and get a different GUI console. We also cascaded the iKVM module into an Avocent AMX analog KVM unit so that we could get to all the server consoles regardless of where we were on the Interop show floor.
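
If the blade management controllers also speak standard IPMI serial-over-LAN -- an assumption on my part, since the redirector I used rode over the Avocent console path -- you can script your way to the same text console from any management box. A minimal sketch, with a placeholder address and credentials, driving the stock ipmitool client:

    import subprocess

    # Open a blade's text console over the network via standard IPMI
    # serial-over-LAN (assumes IPMI over LAN is enabled on the iDRAC;
    # the address and user are placeholders, and the password is read
    # from the IPMI_PASSWORD environment variable via -E).
    idrac = "192.0.2.11"
    subprocess.run([
        "ipmitool", "-I", "lanplus", "-H", idrac, "-U", "root", "-E",
        "sol", "activate",
    ])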

While I've decided I like this blade server -- even more than the 1955 series we tested last year -- I'm still not wild about those tiny pins on the backplane connector for the blades. I'm not sure whether we or someone else did it, but one of the data pins snapped off as we were shuffling the blades around. The new rail-based alignment system is way better than the old blade mounting system, but those pins still make me nervous.

Quiet power

Whether you're running a single server on each blade or a virtualized environment, the Dell M-Series blade server should be able to save you bucks on your energy bill. It will also save your hearing, since with all those remote control features you'll make fewer trips into the server room. Heck, the boot manager even gives you a way to set up a "one-time boot" from virtual or USB-connected devices so that you don't have to worry about hitting the "any key" to boot from the CD-ROM. So save time, money, and your hearing -- what more can you want? Just don't cheap out and ignore the wonderfully useful iKVM option; it will save you time and confusion.
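
As a footnote on that one-time boot trick: if the iDRAC also honors standard IPMI boot-device requests -- again an assumption, since I drove it through the boot manager -- the same thing can be scripted. A hedged sketch with placeholder details:

    import subprocess

    # Ask for a one-time boot from CD/virtual media via standard IPMI;
    # by default the bootdev request applies to the next boot only.
    # Address and user are placeholders; password comes from IPMI_PASSWORD.
    idrac = "192.0.2.11"
    subprocess.run([
        "ipmitool", "-I", "lanplus", "-H", idrac, "-U", "root", "-E",
        "chassis", "bootdev", "cdrom",
    ])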