Computerworld

Power struggle

When Tom Roberts oversaw the construction of an 836-square-metre data centre for Trinity Health, a group of 44 hospitals, he thought the infrastructure would last four or five years. A little more than three years later, he's looking at adding another 280 square metres and re-engineering some of the existing space to accommodate rapidly changing power and cooling needs.

As in many organizations, Trinity Health's data centre faces pressures from two directions. Business growth and a push to automate more processes as server prices continue to drop have stoked demand for more servers. Roberts says that as those servers keep getting smaller and more powerful, he can fit up to eight times as many units in the same space. But the power density of those servers has exploded.

"The equipment just keeps chewing up more and more watts per square metre," says Roberts, director of data centre services. That has resulted in challenges in meeting power-delivery and coolling needs and has forced some retrofitting.

"It's not just a build-out of space but of the electrical and the HVAC systems that need to cool these very dense pieces of equipment that we can now put in a single rack," Roberts says.

Power-related issues are already a top concern in the largest data centres, says Jerry Murphy, an analyst at Robert Frances Group. In a study his firm conducted in January, 41 percent of the 50 Fortune 500 IT executives surveyed identified power and cooling as problems in their data centres, he says.

Murphy also recently visited CIOs at six of the nation's largest financial services companies. "Every single one of them said their No. 1 problem was power," he says. While only the largest data centres experienced significant problems in 2005, Murphy expects many more to feel the pain this year as administrators continue to replace older equipment with newer units that have higher power densities.

In large, multi-megawatt data centres, where annual power bills can easily exceed $US1 million, more-efficient designs can significantly cut costs. In many data centres, electricity now represents as much as half of operating expenses, says Peter Gross, CEO of EYP Mission Critical Facilities, a data centre designer. Increased efficiency has another benefit: in new designs, more-efficient equipment reduces capital costs by allowing the data centre to lower its investment in cooling capacity.
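
To get a rough sense of the sums involved, the sketch below works through a back-of-the-envelope annual power bill for a hypothetical multi-megawatt facility. The IT load, cooling overhead and electricity tariff are illustrative assumptions, not figures cited by Gross or EYP.

```python
# Back-of-the-envelope annual electricity cost for a data centre.
# The IT load, cooling overhead and tariff are illustrative
# assumptions, not figures from the article.

IT_LOAD_KW = 1_000          # 1 MW of IT equipment (assumed)
COOLING_OVERHEAD = 1.0      # roughly one watt of cooling per IT watt
TARIFF_PER_KWH = 0.08       # assumed $US0.08 per kilowatt-hour
HOURS_PER_YEAR = 24 * 365

total_load_kw = IT_LOAD_KW * (1 + COOLING_OVERHEAD)
annual_kwh = total_load_kw * HOURS_PER_YEAR
annual_cost = annual_kwh * TARIFF_PER_KWH
print(f"Annual power bill: ${annual_cost:,.0f}")  # roughly $1.4 million
```

On those assumptions, a single megawatt of IT load and its cooling already push the annual bill past the $US1 million mark, which is why efficiency gains translate directly into operating savings.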

Pain points

Trinity's data centre isn't enormous, but Roberts is already feeling the pain. His data centre houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300-kilowatt uninterruptible power supplies.

"We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data centre has surpassed 250 watts per square foot.

At Industrial Light & Magic's (ILM) brand-new 1254-square-metre (13,500 sq ft) data centre in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art data centre has two-foot raised floors, 21 air handlers with more than 600 tons of cooling capacity and the ability to support up to 200 watts per square foot.

Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built". Each rack of blade servers consumes between 18kW and 19kW when running at full tilt. The room's design specification called for six racks per row, but ILM is currently able to fill only two cabinets in each because it literally ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.
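
A back-of-the-envelope check shows why a room rated at 200 watts per square foot struggles with rows of 18kW-to-19kW racks. The floor area allotted to each rack below is an assumption for illustration; the other figures come from the article.

```python
# Rough check of a fully loaded blade-server row against the room's
# design density. AREA_PER_RACK_SQFT (rack footprint plus its share of
# aisle space) is an assumption; the other figures come from the article.

DESIGN_W_PER_SQFT = 200      # room design limit in watts per square foot
AREA_PER_RACK_SQFT = 30      # assumed floor area allotted to each rack
BLADE_RACK_KW = 18.5         # midpoint of 18-19kW per loaded rack
RACKS_PER_ROW = 6            # the row layout the room was designed for

budget_kw = DESIGN_W_PER_SQFT * AREA_PER_RACK_SQFT * RACKS_PER_ROW / 1000
demand_kw = BLADE_RACK_KW * RACKS_PER_ROW

print(f"Row design budget: {budget_kw:.0f} kW")   # 36 kW -- about two loaded racks
print(f"Row demand if full: {demand_kw:.0f} kW")  # 111 kW
```

On those assumptions, the row's design budget covers only about two fully loaded racks, which is roughly where ILM has ended up in practice.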

The other limiting factor is cooling. At both ILM and Trinity, the equipment with the highest power density is the blade servers. Trinity uses 2.4-metre-tall racks. "They're like furnaces. They produce 120-degree heat at the very top," Roberts says. Such racks can easily top 20kW today, and densities could exceed 30kW in the next few years.

What's more, for every watt of power used by IT equipment in data centres today, another watt or more is typically expended to remove waste heat. A 20kW rack requires more than 40kW of power, says Brian Donabedian, an environmental consultant at Hewlett-Packard. In systems with dual power supplies, additional power capacity must be provisioned, boosting the power budget even higher. But power-distribution problems are much easier to fix than cooling issues, Donabedian says, and at power densities above 100 watts per square foot, the solutions aren't intuitive.
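
Put as arithmetic, the rule of thumb works out roughly as in the sketch below; the factor of two reserved for dual power paths is an assumption added for illustration, not a figure from Donabedian.

```python
# Rack-level arithmetic following the rule of thumb above: roughly one
# watt of cooling for every watt of IT load. The factor of two for
# dual-corded gear is an assumption added for illustration.

rack_it_kw = 20.0                        # fully loaded blade rack
cooling_kw = rack_it_kw * 1.0            # about one watt of cooling per IT watt
consumed_kw = rack_it_kw + cooling_kw    # what the rack actually draws, all in
distribution_kw = rack_it_kw * 2         # capacity reserved for dual power paths

print(f"Consumed, IT plus cooling: {consumed_kw:.0f} kW")          # 40 kW
print(f"Distribution capacity tied up: {distribution_kw:.0f} kW")  # 40 kW
```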

For example, a common mistake data centre managers make is to place exhaust fans above the racks. But unless the ceiling is very high, those fans can make the racks run hotter by interfering with the operation of the room's air conditioning system. "Having all of those produces an air curtain from the top of the rack to the ceiling that stops the horizontal airflow back to the AC units," Roberts says.

Trinity addressed the problem by using targeted cooling. "We put in return air ducts for every system, and we can point them to a specific hot aisle in our data centre," he says.

ILM spreads the heat load by spacing out the blade-server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that right now. He also thinks an alternative way to distribute the load -- partially filling each rack -- is inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans", which use a lot of power, he says.

Bermender would also prefer not to use spot cooling systems like IBM's Cool Blue, because they take up floor space and result in extra cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

Ironically, many data centres have more cooling than they need but still can't cool their equipment, Donabedian says. He estimates that by improving the effectiveness of air-distribution systems, data centres can save as much as 35 percent on power costs.

Before ILM moved, the air conditioning units, which opposed each other in the room, created dead-air zones under the 30-cm raised floor. Seven years of moves and changes had left a subterranean tangle of hot and abandoned power and network cabling that was blocking airflows. At one point, the staff powered down the entire data centre over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.

Even those radical steps provided only temporary relief, because the room was so overloaded with equipment. Had ILM not moved, Bermender says, it would have been forced to move the data centre to a colocation facility. Managers of older data centres can expect to run into similar problems, he says.

That suits Marvin Wheeler just fine. The chief operations officer at Terremark Worldwide manages a 55,742-square-metre (600,000 sq ft) colocation facility designed to support 100 watts per square foot.

"There are two issues. One is power consumption, and the other is the ability to get all of that heat out. The coolling issues are the ones that generally become the limiting factor," he says.

With 610mm floors and six-metre-high ceilings, Wheeler has plenty of space to manage airflows. Terremark breaks floor space into zones, and airflows are increased or decreased as needed. The company's service-level agreements cover both power and environmental conditions such as temperature and humidity, and it is working to offer customers Web-based access to that information in real time.

Terremark's data centre consumes about six megawatts of power, but a good portion of that goes to support dual-corded servers. Thanks to redundant power designs, "we have tied up twice as much power capacity for every server", Wheeler says.

Terremark hosts some 200 customers, and the equipment is distributed based on load. "We spread out everything. We use power and load as the determining factors," he says.

But Wheeler is also feeling the heat. Customers are moving to three- and 3.6-metre-high racks, in some cases increasing the power density by a factor of three. Right now, Terremark charges based on floor space, but he says colocation companies need a new model to keep up. "Pricing is going to be based more on power consumption than [floor space]," Wheeler says.

According to EYP's Gross, the average power consumption per server rack has doubled in the past three years. But there's no need to panic -- yet, Donabedian says.

"Everyone gets hung up on the dramatic increases in the power requirements for a particular server," he says. But they forget that the overall impact on the data centre is much more gradual, because most data centres only replace one-third of their equipment over a two- or three-year period.

Nonetheless, the long-term trend is towards even higher power densities, Gross says. He points out that 10 years ago, mainframes ran so hot that the systems moved to water cooling before a change from bipolar to more efficient CMOS technology bailed them out.

"Now we're going through another ascending growth curve in terms of power," he says. But this time, Gross adds, "there is nothing on the horizon that will drop that power".