Computerworld

Supercomputers bulk up on power while losing price pounds

High-performance systems are getting larger and larger. But lower costs are broadening their appeal within IT.

The trade show floor at the SC07 supercomputing conference in Reno, Nevada, last month had a futuristic, film noir feel, with low lights, large glowing screens and scattered towers that displayed the names of companies and national laboratories. It was a landscape that evoked the movie Blade Runner, and the exotic was the norm.

For instance, in one location, Hewlett-Packard Co. was demonstrating a small "supercomputer in a box" that can be rolled around on wheels. Around the corner, there were widescreen monitors displaying cubist-like biology simulations.

But supercomputing's future also includes the not-so-exotic: much more raw power -- and reduced prices that are helping to broaden the use of high-performance computing (HPC) technology in business applications.

HPC systems are in the midst of a huge leap in size and performance, thanks to multicore processors. In November 2003, when single-core chips still dominated the market, there was a total of about 267,000 processing units in the systems that made the twice-yearly Top500 list of the world's most powerful supercomputers.

Two years later, the number of processor cores in the Top500 systems had jumped to 732,500. And when the academic researchers who compile the list released the latest version at SC07, the number of cores had reached 1,648,095.
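Those raw totals are easier to grasp as a growth rate. A quick back-of-the-envelope sketch in Python, using the Top500 core counts cited above (the rate arithmetic is ours):

    # Aggregate processor-core counts across all Top500 systems,
    # as cited in this article.
    top500_cores = {
        2003: 267_000,     # November 2003 list (single-core era)
        2005: 732_500,     # November 2005 list
        2007: 1_648_095,   # November 2007 list, released at SC07
    }

    start, end = 2003, 2007
    multiplier = top500_cores[end] / top500_cores[start]
    annual_rate = multiplier ** (1 / (end - start)) - 1

    print(f"Cores grew {multiplier:.1f}x in {end - start} years,")
    print(f"or roughly {annual_rate:.0%} per year")
    # -> Cores grew 6.2x in 4 years, or roughly 58% per year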

HPC systems are growing larger so quickly that more than one quarter of all the server processors being shipped by hardware vendors are now going out the loading-dock door in supercomputers, according to market research firm IDC.

In 2004, about 1.65 million server processors -- 16% of that year's total -- were shipped in HPC systems, IDC said. Last year, it said, 3.35 million chips went into supercomputers, accounting for 26% of the processors shipped. That percentage will increase to nearly 30% this year, IDC predicts.
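Taken at face value, IDC's shares also imply how large the overall server-processor market was in each of those years. A quick check, using only the figures in the paragraph above:

    # Implied total server-processor shipments, derived from IDC's
    # HPC unit counts and market shares as cited in this article.
    hpc_chips_2004, share_2004 = 1_650_000, 0.16
    hpc_chips_2006, share_2006 = 3_350_000, 0.26   # "last year"

    total_2004 = hpc_chips_2004 / share_2004   # ~10.3 million
    total_2006 = hpc_chips_2006 / share_2006   # ~12.9 million

    print(f"Implied totals: {total_2004 / 1e6:.1f}M (2004), "
          f"{total_2006 / 1e6:.1f}M (2006)")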

But while many HPC systems have tens of thousands of processor cores, the availability of more-affordable low-end systems is what's attracting the attention of companies like Ping.

Three years ago, Phoenix-based Ping began using a US$100,000 Cray XD1 supercomputer to help design the golf clubs it makes. The XD1 cut the average processing time of design simulations from the 13 hours or so that they were taking on workstations to 20 minutes, said Eric Morales, a staff engineer at Ping.
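Taking both of Morales' figures at face value, that works out to a speedup of roughly 39 times:

    # Speedup implied by Ping's numbers: ~13 hours on workstations
    # vs. ~20 minutes on the Cray XD1 (both figures from the article).
    workstation_minutes = 13 * 60
    xd1_minutes = 20

    print(f"Roughly {workstation_minutes / xd1_minutes:.0f}x faster")
    # -> Roughly 39x faster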

But at SC07, Morales saw US$20,000 systems that offer processing power equal to what his Cray machine can deliver. He said that he wants to take advantage of such systems to expand HPC technology into Ping's manufacturing processes.

"I think we've done as much as we can [on HPC systems] with what we have, but I feel that we need to expand," Morales said. "There's more that we can do."

Nine years ago, the most powerful supercomputer in the world was the ASCI Red system, built by Intel Corp. and installed at Sandia National Laboratories in Albuquerque. That system included 9,152 Pentium processors, took up 2,500 square feet of space and cost US$55 million. On benchmark tests, it reached a performance level of 1.3 trillion floating-point operations per second, or teraflops.

Now you can get nearly 1 teraflop of throughput from the supercomputer-in-a-box system that HP announced at SC07. The machine, a version of HP's BladeSystem c3000 designed for midsize users, includes eight server blades, each with two of Intel's new Xeon 5400 quad-core chips.

HP said the system takes up just two square feet of space, can run off a standard wall socket and doesn't need to be located in a data center. Typically, it will cost between US$25,000 and US$50,000.
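HP's "nearly 1 teraflop" claim is plausible from the configuration alone. Here is a rough peak-throughput sketch; the 3.0-GHz clock and the four double-precision floating-point operations per core per cycle are our assumptions (typical for Xeon 5400-class parts), not figures from HP:

    # Rough theoretical peak for the c3000 configuration described above:
    # 8 blades x 2 sockets x 4 cores per socket.
    blades, sockets_per_blade, cores_per_socket = 8, 2, 4
    clock_ghz = 3.0        # assumed SKU; Xeon 5400 parts vary
    flops_per_cycle = 4    # assumed: SSE, double precision

    cores = blades * sockets_per_blade * cores_per_socket   # 64
    peak_teraflops = cores * clock_ghz * flops_per_cycle / 1000

    print(f"{cores} cores -> ~{peak_teraflops:.2f} teraflops peak")
    # -> 64 cores -> ~0.77 teraflops peak

That figure is consistent with HP's "nearly 1 teraflop" claim, delivered in two square feet instead of ASCI Red's 2,500.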

Thanks to such systems, IDC forecasts that worldwide HPC revenues will rise from about US$11 billion this year to more than US$15 billion in 2011 -- an average annual growth rate of 9%.
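IDC's endpoints and its stated growth rate agree, as a quick compounding check shows (all figures from the forecast above):

    # Compound-growth check of IDC's forecast: US$11B in 2007
    # growing at 9% per year through 2011.
    revenue_2007 = 11.0   # US$ billions
    cagr = 0.09
    years = 4

    revenue_2011 = revenue_2007 * (1 + cagr) ** years
    print(f"Projected 2011 revenue: US${revenue_2011:.1f}B")
    # -> US$15.5B, i.e. "more than US$15 billion"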

But in some respects, the HPC market is still focused more on scientific researchers than it is on users like John Picklo, HPC manager at automaker Chrysler.

Picklo, who oversees clustered Linux and Unix systems with a total of 1,650 processor cores, said that the vendors of HPC applications aren't keeping up with the shift to multicore chips. According to Picklo, many software vendors still base their pricing on the number of processor cores in a system. The problem, he said, is that quad-core processors don't necessarily deliver performance equal to that of four single-core chips.

"If I was buying four single cores, I wouldn't mind buying four licenses," Picklo said. "But if a quad-core [processor] requires four licenses, I'm not going to get the same benefit out of that."

He added that he wants application vendors to consider alternative licensing models, such as ones based on processor performance.
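To make the economics of Picklo's complaint concrete, consider a sketch with purely hypothetical numbers: a flat per-core price, and a quad-core chip that delivers only 2.8 times one core's throughput because of shared memory bandwidth and similar bottlenecks:

    # Hypothetical comparison of the two licensing models Picklo describes.
    # Both the price and the scaling factor below are illustrative, not
    # figures from Chrysler or any software vendor.
    license_per_core = 10_000    # hypothetical US$ per core
    quad_core_scaling = 2.8      # hypothetical effective speedup vs. 1 core

    per_core_cost = 4 * license_per_core                       # US$40,000
    performance_cost = quad_core_scaling * license_per_core    # US$28,000

    print(f"Per-core licensing:          US${per_core_cost:,}")
    print(f"Performance-based licensing: US${performance_cost:,.0f}")

The gap between the two figures is the benefit Picklo says he loses under per-core pricing.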

Software licensing isn't as big an issue for academic and government researchers, who typically run custom application code. For those users, vendors are packing quad-core chips into HPC systems in increasingly dense configurations. For instance, there are just under 213,000 processor cores in the BlueGene/L system that IBM built for the U.S. Department of Energy's National Nuclear Security Administration.

The BlueGene/L, at Lawrence Livermore National Laboratory in California, has been No. 1 on the Top500 list since November 2004. Following an upgrade earlier this year, its sustained benchmark throughput is 478.2 teraflops.
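Dense doesn't mean fast per core, however. From the figures above, BlueGene/L's sustained throughput works out to only a couple of gigaflops per core, reflecting its design trade of per-core speed for core count and power efficiency:

    # Sustained throughput per core for BlueGene/L, using the article's
    # figures (478.2 teraflops over just under 213,000 cores).
    sustained_teraflops = 478.2
    cores = 213_000   # approximate, per the article

    gflops_per_core = sustained_teraflops * 1000 / cores
    print(f"~{gflops_per_core:.1f} sustained gigaflops per core")
    # -> ~2.2 sustained gigaflops per core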

But IBM vows that next year, it will build multiple systems that can reach the petaflops level -- more than twice what is possible now. "You'll probably see several petaflop machines," said Leo Suarez, IBM's vice president of deep computing.

The growth rates are such that by 2015, every system on the Top500 list will be at the petaflops level or beyond, predicted Erich Strohmaier, a researcher at Lawrence Berkeley National Laboratory who helps compile the supercomputer list.