Computerworld

Memory: The new power hog

Forget processors, power supplies or storage. Memory is the new power hog

For engineers developing the next generation of servers, the CPU is no longer the biggest design obstacle to controlling power and cooling costs, a major issue for many data centers. "It used to be that the processor was our main concern," says Roger Schmidt, chief thermal architect and distinguished engineer in IBM's server and workstations division.

Not anymore.

System designers have been given a reprieve from contending with spiking CPU power demands in the volume server market as both AMD and Intel have continued to move to more energy-efficient multicore designs. For now, both chip makers are pledging to hold the line on power consumption while continuing to offer improved performance in smaller packages.

The other big kahuna -- power supply conversion losses -- is gradually coming under control. The power supplies found in most commodity Wintel servers today can waste 35 percent or more of incoming power before it ever reaches the processor. But Sun, HP and IBM have all developed power supplies that exceed 80 percent efficiency, even at low load levels. Some servers are now shipping with power supplies that exceed 90 percent efficiency.
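To put those efficiency figures in perspective, a rough back-of-the-envelope calculation shows how much power never reaches the server at all. The 300-watt load below is an illustrative assumption; the efficiency levels are the ones cited above.

    # Back-of-the-envelope sketch: wall power needed to deliver a fixed DC load
    # at different power-supply efficiencies. The 300 W load is an assumption
    # for illustration; the efficiency levels (65%, 80%, 90%) reflect the
    # figures cited in the article.

    def wall_power(dc_load_watts, efficiency):
        """Watts drawn from the wall to deliver dc_load_watts at a given efficiency."""
        return dc_load_watts / efficiency

    load = 300.0  # assumed DC load of one commodity server, in watts
    for eff in (0.65, 0.80, 0.90):
        drawn = wall_power(load, eff)
        print(f"{eff:.0%} efficient supply: {drawn:.0f} W at the wall, "
              f"{drawn - load:.0f} W lost as heat before reaching the server")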

The challenge now, Schmidt says, is not processors. Or power supplies. Or storage. It's memory. Users simply want too much of it.

Applications are demanding more RAM than ever. And ironically, the very technologies IT has used to consolidate server sprawl and reduce power and cooling loads -- virtualization, multicore chips and blade servers -- have also increased the demand for memory. "The more processing power you put on a chip, the more you need to surround it with memory," says Rich Hetherington, chief architect and distinguished engineer at Sun.

While memory density continues to follow Moore's Law, demand for memory is growing even faster. That leaves system designers struggling to fit more and more dual in-line memory modules (DIMMs) onto smaller and smaller motherboards.

IBM's high-end Intel-based System x3950 four-way servers are now being configured with as many as 64 DIMMs. And the need to free up more real estate for DIMMs led Sun to go with fatter server blades in its 8000 Series line, bucking the "smaller is better" trend.

Increasingly, IBM is shipping machines whose power requirements for memory far outstrip those for processors. "The ratio we're seeing now is the memory taking over 2 to 1. That's huge," Schmidt says. Depending on the system architecture, the power load for just one DIMM can be as high as 14 watts, according to AMD. In contrast, the chip maker's dual-core processor for the blade server market consumes 68 watts.
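Run the numbers and the imbalance is easy to see. The sketch below uses the worst-case 14 watts per DIMM and the 68-watt dual-core processor quoted above, in a fully populated four-way box like the x3950; real configurations will land lower, but the direction is the same.

    # Quick arithmetic check of the memory-vs.-processor power ratio, using the
    # figures quoted above: up to 14 W per DIMM (worst case, per AMD), 68 W for
    # a dual-core blade-class processor, and a four-socket server loaded with
    # 64 DIMMs. Real configurations will vary.

    dimm_watts = 14    # worst-case power per DIMM
    cpu_watts = 68     # dual-core blade-class processor
    num_dimms = 64     # fully populated four-way server
    num_cpus = 4

    memory_power = num_dimms * dimm_watts   # 896 W
    cpu_power = num_cpus * cpu_watts        # 272 W
    print(f"memory: {memory_power} W, processors: {cpu_power} W, "
          f"ratio {memory_power / cpu_power:.1f} to 1")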

Once system designers get the memory on the board, they still have to cool it. "A major problem for us in the design of our boxes is how to handle all of this memory that customers are asking for. It's a lot of heat in a small space," Schmidt says.

Both server and component manufacturers are finding creative ways to cut the power. AMD's Opteron architecture couples an on-chip memory controller with low-power registered DDR2 memory that consumes just 2 watts at idle and 4.6 watts at peak.
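Those figures translate into a wide swing between an idle and a busy memory subsystem. The quick sketch below uses the 2-watt and 4.6-watt numbers above; the eight-DIMMs-per-socket configuration is an assumption for illustration only, not an AMD specification.

    # Idle-vs.-peak memory power per socket, using the registered DDR2 figures
    # quoted above (2 W idle, 4.6 W peak per DIMM). Eight DIMMs behind one
    # on-chip memory controller is an assumed configuration for illustration.

    idle_w, peak_w = 2.0, 4.6
    dimms = 8
    print(f"idle: {idle_w * dimms:.0f} W, peak: {peak_w * dimms:.1f} W, "
          f"swing: {(peak_w - idle_w) * dimms:.1f} W per socket")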

Using higher-density memory can help, since the higher-density DIMMs consume about as much power as lower-density ones, according to AMD. But the cost per gigabyte is higher, and the number of DIMMs required still adds up. "Memory is not cheap anymore. It's a big piece of the pie," Schmidt says.
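The trade-off is easier to see with a concrete example. Everything in the sketch below except the "same power per DIMM" point is invented for illustration: the 8-watt figure and the per-gigabyte prices are placeholders, not market data.

    # Illustrative trade-off for reaching 64 GB with 2 GB vs. 4 GB DIMMs.
    # The article's claim is only that per-DIMM power is roughly the same at
    # either density; the 8 W figure and the per-GB prices are placeholder
    # assumptions used to show the shape of the trade-off.

    target_gb = 64
    options = {
        "2 GB DIMMs": {"gb": 2, "watts_per_dimm": 8, "price_per_gb": 50},
        "4 GB DIMMs": {"gb": 4, "watts_per_dimm": 8, "price_per_gb": 80},
    }

    for name, o in options.items():
        count = target_gb // o["gb"]
        power = count * o["watts_per_dimm"]
        cost = target_gb * o["price_per_gb"]
        print(f"{name}: {count} modules, ~{power} W of memory power, ~${cost}")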

Sun uses fully buffered DIMMs, which are faster and offer higher capacity than regular DIMMs but add what Hetherington calls a "power tax." To minimize the power draw, Sun shuts down unused memory. "If a bank of memory is idle, we'll turn off the clocks," he says. That works for applications that can tolerate some latency, since the processor must issue a command to turn the memory back on before issuing a read command. "But for our x64 line, where latency is a huge issue, that would be painful," says Hetherington.
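Conceptually, the policy Hetherington describes looks something like the sketch below: stop the clocks to a bank that has sat idle, then pay a wake-up delay before the next read. The class, thresholds and latency values are invented for illustration; the real mechanism lives in Sun's memory controller hardware, not in software like this.

    # Conceptual sketch of idle-bank clock gating: a bank that has been idle
    # long enough has its clocks stopped, and the next read pays a wake-up
    # penalty. Thresholds and latencies are invented for illustration.

    class MemoryBank:
        IDLE_THRESHOLD = 1_000_000   # cycles of inactivity before gating (assumed)
        WAKE_LATENCY = 200           # extra cycles to restart the clocks (assumed)

        def __init__(self):
            self.clock_gated = False
            self.idle_cycles = 0

        def tick(self, cycles):
            """Advance time with no accesses; gate the clocks if idle long enough."""
            self.idle_cycles += cycles
            if self.idle_cycles >= self.IDLE_THRESHOLD:
                self.clock_gated = True

        def read(self, address):
            """A read must first wake a gated bank, adding latency."""
            penalty = 0
            if self.clock_gated:
                penalty = self.WAKE_LATENCY   # the part that hurts latency-sensitive x64 workloads
                self.clock_gated = False
            self.idle_cycles = 0
            return penalty   # extra cycles this access cost

    bank = MemoryBank()
    bank.tick(2_000_000)        # a long idle period: clocks get gated
    print(bank.read(0x1000))    # first read pays the wake-up penalty: 200
    print(bank.read(0x1008))    # subsequent read while awake: 0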

Where will it all end? Power-saving innovations may slow down the rate at which data centers move to higher energy densities, but the forces propelling users to jam ever-higher numbers of smaller, faster servers into a single rack are unlikely to subside. The increasing demand for memory will simply make server blades bigger than they otherwise might have been -- and more power-hungry.