Computerworld

Soupin' up the bus

Simulations of galaxy formation. Complex drug designs. Studies of the paths of dust clouds. This is just a sampling of the compute-heavy tasks that Phil Williams has to support on his research network at the University of Nottingham, in England.

"The calculations we do are a combination of lots of repetitive serial jobs and highly parallel jobs," Williams says. "Each of these [calculations] takes only a few seconds, but it takes us many, many hundreds of thousands of CPU hours to simulate in a computer."

Despite Gigabit Ethernet connections, Williams' cluster -- 512 dual-processor servers working from Network File System-mounted storage -- was getting severely bogged down. "The CPUs were spending all their time handling the connection so that the performance on the job disappeared to nothing," he says. "The interconnect was definitely becoming the limiting factor -- it was holding back the amount of data we could calculate."

A trio of high-speed interconnect technologies is aimed at alleviating exactly that kind of strain: veteran technology InfiniBand; newcomer EtherFabric, an Ethernet-based offering from start-up Level 5 Networks; and iWarp, an Ethernet-based technology being put through its paces at the University of New Hampshire (UNH) Interoperability Labs in Durham. All three technologies offload the network connections from the server CPU so that servers can catch up to the network's Gigabit bandwidth speeds. And the buzz on them is picking up as users look for alternatives to 10G Ethernet to handle their growing network loads.

"A push toward sophisticated applications will mean an increase in video over IP, the need for an enriched buying experience over the Web, and the need for improved rendering of images," says Ann MacFarland, a director at The Clipper Group. For IT managers who don't want to make the leap to 10G Ethernet, the combination of big pipes, real-time traffic, heavy computing and burdensome graphics will choke their networks if they don't reconsider their interconnects, she says.

The infrastructure you have

EtherFabric, a combination of a network interface card (NIC) and software that works with a company's existing Ethernet switch infrastructure, fit the bill for Williams at the University of Nottingham. EtherFabric improves server performance by giving each application on the server its own TCP/IP stack. Each application can then access memory on the NIC directly, which avoids copying data through system memory and eases the load on the CPUs.

"The NIC handles the network so the CPU isn't being hammered by the kernel and being interrupted to handle network traffic," says Williams, who wanted to preserve his investment in Gigabit Ethernet pipes, while achieving high performance and low latency.

Also, because EtherFabric uses standard Ethernet and is interoperable with Ethernet adapters, it only requires one end of the connection to have the NIC.
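
To see what that overhead looks like, the sketch below is a conventional kernel-path TCP sender written against the standard sockets API. It is not EtherFabric code -- the product's own software is not shown in this article -- and the peer address and port are hypothetical. Every call traps into the kernel, and every send() is copied through kernel buffers, which is the per-packet CPU work a per-application, user-level stack is meant to remove.

/* Conventional kernel-path TCP sender, shown only for contrast: every
 * socket call below traps into the kernel, and each send() copies the
 * buffer into kernel memory and generates interrupt work on the far end.
 * A per-application, user-level stack of the kind described above services
 * the same calls from user space, with the NIC accessing buffers directly.
 * The address and port are hypothetical. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);          /* system call: enters the kernel */
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(5000);                     /* hypothetical compute-node port */
    inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr);  /* RFC 5737 documentation address */

    if (connect(fd, (struct sockaddr *)&peer, sizeof peer) < 0) {
        perror("connect");
        close(fd);
        return 1;
    }

    const char chunk[] = "simulation results";
    if (send(fd, chunk, sizeof chunk - 1, 0) < 0)      /* copied into kernel socket buffer */
        perror("send");

    close(fd);
    return 0;
}

If, as the article describes, the TCP/IP stack is supplied per application by the NIC's software, code like this keeps using the familiar sockets interface while the kernel drops out of the data path.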

The University of Nottingham has tripled network performance since implementing EtherFabric, Williams says. "We can do calculations in a third of the time and get three times the work done."

He adds that tripling performance also means "getting away with a third of [his] servers". The university is hoping to revamp its data centre, and Williams plans to use EtherFabric to get more CPU power from fewer boxes.

Betting on InfiniBand

Had Williams not wanted to use Ethernet, InfiniBand would have been a great choice -- improving performance even more than EtherFabric, he says. InfiniBand, the high-performance, low-latency fabric overseen by the InfiniBand Trade Association (IBTA), operates at speeds from 2.5Gbit/sec to 120Gbit/sec. But the technology falls down for some IT managers because it requires specialized gear.

Aimed at speeding data transfers between huge server and storage farms, InfiniBand comprises a host channel adapter that sits at the server and target channel adapters located in other servers or storage devices. Because the adapters provide direct connections between devices, efficiency techniques such as CPU memory offload and quality of service (QoS) can be built in.
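
To make "CPU memory offload" concrete, the sketch below uses libibverbs, the open-source verbs API commonly used to program InfiniBand host channel adapters; the article itself does not prescribe any particular programming interface. The sketch registers a buffer with the adapter and describes an RDMA write. Queue-pair creation and the exchange of the peer's address and memory key are omitted, so the work request is built but never posted, and the buffer size is illustrative.

/* Minimal libibverbs sketch: register a buffer with an InfiniBand host
 * channel adapter and describe an RDMA write into a peer's memory.
 * Queue-pair creation and the exchange of the peer's address and rkey
 * are omitted, so the work request is built but not posted.
 * Build (Linux, with libibverbs installed): cc ib_sketch.c -libverbs */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    struct ibv_device **devs = ibv_get_device_list(NULL);
    if (!devs || !devs[0]) { fprintf(stderr, "no RDMA-capable adapter found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    struct ibv_pd *pd = ctx ? ibv_alloc_pd(ctx) : NULL;   /* protection domain */
    if (!pd) { fprintf(stderr, "could not open device\n"); return 1; }

    /* Pin and register a buffer so the adapter can move it without CPU copies. */
    size_t len = 1 << 20;                                 /* 1MB, illustrative size */
    char *buf = malloc(len);
    memset(buf, 0, len);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_WRITE);
    if (!mr) { fprintf(stderr, "memory registration failed\n"); return 1; }
    printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n", len, mr->lkey, mr->rkey);

    /* Describe an RDMA write: the adapter would place this buffer straight
     * into the remote node's registered memory with no remote CPU involvement.
     * The remote address and rkey would come from the peer during setup. */
    struct ibv_sge sge = { .addr = (uintptr_t)buf, .length = (uint32_t)len, .lkey = mr->lkey };
    struct ibv_send_wr wr = { 0 };
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = 0;                           /* placeholder: peer buffer address */
    wr.wr.rdma.rkey        = 0;                           /* placeholder: peer memory key */
    (void)wr;  /* posting with ibv_post_send() needs an established queue pair (not shown) */

    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}

The key point is that once the buffer is registered, the adapter, not the CPU, moves the data.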

InfiniBand launched with a bang in 2000, but interest quickly faded with the economic downturn and the ensuing IT budget cutbacks. "During the bust, people were less inclined to invest in new interconnect technologies," says Thad Omura, a member of the IBTA and OpenIB Alliance and director of product marketing at InfiniBand gear maker Mellanox Technologies. But he contends, "now there's a renewal of interest in the interconnect and ability of the interconnect to lower the cost of the data centre or high-performance computing."

In fact, heavy hitters such as Intel, Sun and Cisco (which recently acquired InfiniBand switch maker Topspin Communications) support InfiniBand. The technology is taking hold in blade servers, for example, because of its low power requirements (about 2 watts) and 10Gbit/sec interconnects.

And users are becoming more plentiful, with compute powerhouses such as the Lawrence Livermore, Los Alamos and Sandia research labs attesting to the technology's usefulness.

"InfiniBand started its climb back last year. We're on a good trajectory with a cross-section of industries, including universities, labs, gas, automotive, entertainment and biotechnology," says Stu Aaron, a director of product marketing at Cisco.

Mellanox's Omura says he sees InfiniBand spilling over into enterprise applications that need large computing resources for real-time cost and risk analyses, such as those in financial services, and that require low-latency access to huge databases and storage, such as Web commerce. Overall, Omura says pricing on InfiniBand products is lower than that of their 10G Ethernet counterparts, at about $US300 per port vs $US1500 per port, but he concedes that InfiniBand architectures could take a bit more skill to deploy and manage than those based on Ethernet.


Standards-based choice

While Level 5 and the InfiniBand companies are vying for today's high-speed interconnect market, iWarp might give both a run for their money within the next two years.

IWarp, commonly expanded as the Internet Wide Area RDMA Protocol, is an umbrella for the Remote Direct Memory Access Protocol, Direct Data Placement and Marker PDU Aligned framing specifications. As a bundle, iWarp, like its competitors, is aimed at boosting the speed of networked devices -- in this case by reducing the overhead associated with Ethernet. It does this by combining the processing and routing functions on a single chip. "The basic idea is to allow a computer on one end of a connection to write data directly to the memory of another computer without any intervention," says Bob Russell, a member of UNH's Interoperability Lab.

The technology requires a swap-out of NICs on both ends of a connection, but uses the Ethernet infrastructure. "One of the things I like about iWarp is that it is standards-based, while EtherFabric is proprietary," says The Clipper Group's MacFarland.
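
As a rough illustration of how such a connection is set up over ordinary Ethernet and IP addressing, the sketch below uses librdmacm, the open-source connection manager commonly paired with the verbs API on RDMA-capable NICs; the article does not name a specific iWarp programming interface, and the peer address, port and queue sizes here are hypothetical. Memory registration and the direct data placement itself would follow the pattern in the InfiniBand sketch above.

/* Connection setup with librdmacm, the API commonly used over iWarp
 * (RDMA-over-Ethernet) NICs. IP address and port are hypothetical;
 * memory registration and data transfers are omitted.
 * Build: cc iwarp_sketch.c -lrdmacm -libverbs */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <rdma/rdma_cma.h>
#include <stdio.h>

/* Block until the next connection-manager event and check its type. */
static int expect_event(struct rdma_event_channel *ch, enum rdma_cm_event_type type)
{
    struct rdma_cm_event *ev;
    if (rdma_get_cm_event(ch, &ev))
        return -1;
    int ok = (ev->event == type);
    rdma_ack_cm_event(ev);
    return ok ? 0 : -1;
}

int main(void)
{
    struct rdma_event_channel *ch = rdma_create_event_channel();
    if (!ch) { fprintf(stderr, "no RDMA devices available\n"); return 1; }

    struct rdma_cm_id *id;
    if (rdma_create_id(ch, &id, NULL, RDMA_PS_TCP)) { perror("rdma_create_id"); return 1; }

    /* iWarp peers are addressed like any TCP endpoint on the Ethernet LAN. */
    struct sockaddr_in peer = { 0 };
    peer.sin_family = AF_INET;
    peer.sin_port   = htons(7471);                        /* hypothetical port */
    inet_pton(AF_INET, "192.0.2.20", &peer.sin_addr);     /* documentation address */

    if (rdma_resolve_addr(id, NULL, (struct sockaddr *)&peer, 2000) ||
        expect_event(ch, RDMA_CM_EVENT_ADDR_RESOLVED)) {
        fprintf(stderr, "address resolution failed (no RDMA-capable NIC?)\n");
        return 1;
    }
    if (rdma_resolve_route(id, 2000) ||
        expect_event(ch, RDMA_CM_EVENT_ROUTE_RESOLVED)) {
        fprintf(stderr, "route resolution failed\n");
        return 1;
    }

    /* A queue pair bound to this connection carries the RDMA operations. */
    struct ibv_qp_init_attr qp_attr = { 0 };
    qp_attr.cap.max_send_wr  = qp_attr.cap.max_recv_wr  = 16;
    qp_attr.cap.max_send_sge = qp_attr.cap.max_recv_sge = 1;
    qp_attr.qp_type = IBV_QPT_RC;
    if (rdma_create_qp(id, NULL, &qp_attr)) { perror("rdma_create_qp"); return 1; }

    struct rdma_conn_param param = { 0 };
    if (rdma_connect(id, &param) || expect_event(ch, RDMA_CM_EVENT_ESTABLISHED)) {
        fprintf(stderr, "connect failed\n");
        return 1;
    }
    printf("RDMA connection established over the Ethernet fabric\n");

    /* From here, buffers registered with ibv_reg_mr() can be written directly
     * into the peer's memory, bypassing its CPU (not shown). */
    rdma_disconnect(id);
    rdma_destroy_qp(id);
    rdma_destroy_id(id);
    rdma_destroy_event_channel(ch);
    return 0;
}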

Backing the iWarp movement are companies such as Broadcom, Chelsio Communications, HP, Network Appliance and Microsoft. However, the technology is not in broad use and devices are in the early stages, Russell says. He adds that widespread deployment could be six months to a year away.

According to MacFarland, iWarp's big benefit is for large data transfers. A complete implementation of iWarp could offload 90 percent of data transmission overhead from a server's CPU, resulting in latency numbers close to InfiniBand's 1.3 microsec and improved system performance. "It's a much more efficient process than its predecessors," she says.

Unlike InfiniBand, iWarp's barriers to adoption are low, MacFarland says. It is interoperable with most technology in high-performance computing and data centres, such as storage-area networks and network-attached storage.

Russell agrees. "Whether you're moving files or doing backups, iWarp would help," he says. "The big computing shops like the national labs that have supercomputers that are hungry for more power, more speed, they'll jump on iWarp right away."

Experts warn that when iWarp products debut, like the early InfiniBand products, they may be costly. But they point out that as adoption rates grow, the prices will quickly fall into line.

Questions to ask concerning interconnects

  • Is it compatible with bigger bandwidth - can I move to faster speeds without buying new gear?
  • Do the network interface cards have to be on both ends of the connection?
  • Can I test-drive it on my hardware in my network running my code and my applications?
  • What additional training do I need to deploy and manage this interconnect?
  • Is this compatible with Ethernet or will I have to buy specialized gear?