Soupin' up the bus

Simulations of galaxy formation. Complex drug designs. Studies of the paths of dust clouds. This is just a sampling of the compute-heavy tasks that Phil Williams has to support on his research network at the University of Nottingham, in England.

"The calculations we do are a combination of lots of repetitive serial jobs and highly parallel jobs," Williams says. "Each of these [calculations] takes only a few seconds, but it takes us many, many hundreds of thousands of CPU hours to simulate in a computer."

Despite Gigabit Ethernet connections, Williams' servers -- a cluster of 512 dual-processor servers with Network File System (NFS)-mounted file systems -- were getting severely bogged down. "The CPUs were spending all their time handling the connection so that the performance on the job disappeared to nothing," he says. "The interconnect was definitely becoming the limiting factor -- it was holding back the amount of data we could calculate."

A trio of high-speed interconnect technologies is aimed at alleviating exactly that kind of strain: veteran technology InfiniBand; newcomer EtherFabric, an Ethernet-based offering from start-up Level 5 Networks; and iWarp, an Ethernet-based technology being put through its paces at the University of New Hampshire (UNH) Interoperability Labs in Durham. All three offload network-connection processing from the server CPU so that servers can keep up with the network's Gigabit bandwidth. And the buzz around them is picking up as users look for alternatives to 10G Ethernet to handle their growing network loads.

"A push toward sophisticated applications will mean an increase in video over IP, the need for an enriched buying experience over the Web, and the need for improved rendering of images," says Ann MacFarland, a director at The Clipper Group. For IT managers who don't want to make the leap to 10G Ethernet, the combination of big pipes, real-time traffic, heavy computing and burdensome graphics will choke their networks if they don't reconsider their interconnects, she says.

The infrastructure you have

EtherFabric, a combination of a network interface card (NIC) and software that uses a company's existing Ethernet switch infrastructure, fit the bill for Williams at the University of Nottingham. EtherFabric improves server performance by giving each application on the server its own TCP/IP software stack. Each application can then access memory on the NIC directly, which avoids copying data through system memory and eases the load on the CPUs.

"The NIC handles the network so the CPU isn't being hammered by the kernel and being interrupted to handle network traffic," says Williams, who wanted to preserve his investment in Gigabit Ethernet pipes, while achieving high performance and low latency.

Also, because EtherFabric uses standard Ethernet and interoperates with ordinary Ethernet adapters, only one end of a connection needs the EtherFabric NIC.
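The article includes no code, but it helps to see what the application side of this looks like. Assuming the per-application TCP/IP stack is presented to programs through the familiar BSD sockets interface (an assumption on our part; the article doesn't say), the application is just ordinary socket code like the sketch below, with the user-level stack servicing the same calls from user space rather than trapping into the kernel for every packet. The host address, port and payload here are hypothetical.

/* Minimal sketch: a plain BSD-sockets sender. The peer address, port and
 * payload are hypothetical. The point is that a per-application user-level
 * TCP/IP stack of the kind described above can service these same calls
 * directly against NIC memory, so the application code does not change. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);            /* ordinary TCP socket */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = {0};
    peer.sin_family = AF_INET;
    peer.sin_port = htons(9000);                         /* hypothetical port */
    inet_pton(AF_INET, "192.168.1.10", &peer.sin_addr);  /* hypothetical compute node */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char payload[] = "chunk of simulation data";
    ssize_t sent = send(fd, payload, sizeof payload, 0);
    printf("sent %zd bytes\n", sent);

    close(fd);
    return 0;
}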

The University of Nottingham has tripled network performance since implementing EtherFabric, Williams says. "We can do calculations in a third of the time and get three times the work done."

He adds that tripling performance also means "getting away with a third of [his] servers". The university is hoping to revamp its data centre, and Williams plans to use EtherFabric to get more CPU power from fewer boxes.

Betting on InfiniBand

Had Williams not wanted to use Ethernet, InfiniBand would have been a great choice -- improving performance even more than EtherFabric, he says. InfiniBand, the high-performance, low-latency fabric overseen by the InfiniBand Trade Association (IBTA), operates at speeds from 2.5Gbit/sec to 120Gbit/sec. But the technology falls down for some IT managers because it requires specialized gear.

Aimed at speeding data transfers between huge server and storage farms, InfiniBand comprises host channel adapters that sit in servers and target channel adapters located in storage and other I/O devices. Because the adapters provide direct connections between these endpoints, efficiency features such as CPU and memory offload and QoS can be built in.
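To make the "CPU and memory offload" idea concrete: applications typically talk to a host channel adapter through the verbs API (libibverbs) maintained under the OpenIB effort mentioned below. The following is a minimal sketch, not drawn from the article: it opens the first adapter it finds and registers a buffer so the hardware can read and write that memory without involving the CPU. The buffer size and access flags are illustrative, and a real transfer would additionally need completion queues, queue pairs and a connection to a remote adapter.

/* Minimal sketch using the OpenIB verbs API (libibverbs).
 * Build with: gcc -o ib_sketch ib_sketch.c -libverbs
 * Illustrative only: no queue pairs or remote connection are set up. */
#include <stdio.h>
#include <stdlib.h>
#include <infiniband/verbs.h>

int main(void) {
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no InfiniBand devices found\n");
        return 1;
    }
    printf("found %d device(s); using %s\n", num, ibv_get_device_name(devs[0]));

    struct ibv_context *ctx = ibv_open_device(devs[0]);   /* host channel adapter */
    if (!ctx) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);                /* protection domain */

    size_t len = 4096;                                    /* illustrative buffer size */
    void *buf = malloc(len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,          /* pin and register memory */
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "ibv_reg_mr failed\n"); return 1; }
    printf("registered %zu bytes: lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    /* The adapter can now move data into and out of buf directly,
     * which is the offload the article refers to. */

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}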

InfiniBand launched with a bang in 2000, but interest quickly faded with the economic downturn and ensuing IT budget cutbacks. "During the bust, people were less inclined to invest in new interconnect technologies," says Thad Omura, a member of the IBTA and OpenIB Alliance and director of product marketing at InfiniBand gear maker Mellanox Technologies. But he contends, "now there's a renewal of interest in the interconnect and ability of the interconnect to lower the cost of the data centre or high-performance computing."

In fact, heavy hitters such as Intel, Sun and Cisco (which recently acquired InfiniBand switch maker Topspin Communications) support InfiniBand. The technology is taking hold in blade servers, for example, because of its low power requirements (about 2 watts) and 10Gbit/sec interconnects.

And users are becoming more plentiful, with compute powerhouses such as the Lawrence Livermore, Los Alamos and Sandia research labs attesting to the technology's usefulness.

"InfiniBand started its climb back last year. We're on a good trajectory with a cross-section of industries, including universities, labs, gas, automotive, entertainment and biotechnology," says Stu Aaron, a director of product marketing at Cisco.

Mellanox's Omura says he sees InfiniBand spilling over into enterprise applications that need large computing resources for real-time cost and risk analyses, such as those in financial services, and that require low-latency access to huge databases and storage, such as with Web commerce. Overall, Omura says, InfiniBand products are priced below their 10G Ethernet counterparts -- about $US300 per port vs $US1500 per port -- but he concedes that InfiniBand architectures can take a bit more skill to deploy and manage than those based on Ethernet.
