Power struggle

Pain points

Trinity's data centre isn't enormous, but Roberts is already feeling the pain. His data centre houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300-kilowatt uninterruptible power supplies.

"We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data centre has surpassed 250 watts per square foot.

At Industrial Light & Magic's (ILM) brand-new 1254-square-metre (13,500ft2)data centre in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art data centre has two-foot raised floors, 21 air handlers with more than 600 tons of cooling power and the ability to support up to 200 watts per square foot.

Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built". Each rack of blade servers consumes between 18kW and 19kW when running at full tilt. The room's design specification called for six racks per row, but ILM can currently fill only two cabinets in each row because it simply ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.

The other limiting factor is cooling. At both ILM and Trinity, the equipment with the highest power density is the blade servers. Trinity uses 2.4-metre-tall racks. "They're like furnaces. They produce 120-degree heat at the very top," Roberts says. Such racks can easily top 20kW today, and densities could exceed 30kW in the next few years.

What's more, for every watt of power used by IT equipment in data centres today, another watt or more is typically expended to remove the waste heat. A 20kW rack therefore requires more than 40kW of power, says Brian Donabedian, an environmental consultant at Hewlett-Packard. In systems with dual power supplies, additional capacity must be provisioned as well, boosting the power budget even higher. But power-distribution problems are much easier to fix than cooling issues, Donabedian says, and at power densities above 100 watts per square foot, the solutions aren't intuitive.
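Donabedian's rule of thumb turns into a quick budget calculation. The sketch below assumes a 1:1 cooling overhead, in line with that rule, plus a 20 percent margin for dual power supplies; the margin is an illustrative assumption, not a figure from the article.

```python
# Back-of-the-envelope rack power budget: each watt of IT load needs at
# least another watt to remove the waste heat. Figures are illustrative.

def rack_power_budget(it_load_kw, cooling_overhead=1.0, redundancy_factor=1.2):
    """Estimate total provisioned power for one rack, in kW.

    cooling_overhead  -- watts of cooling per watt of IT load (>= 1.0 per
                         the rule of thumb quoted above)
    redundancy_factor -- assumed extra margin for dual power supplies
    """
    cooling_kw = it_load_kw * cooling_overhead
    return (it_load_kw + cooling_kw) * redundancy_factor

# A 20 kW blade rack already needs more than 40 kW before the redundancy margin.
print(f"20 kW rack -> {rack_power_budget(20):.0f} kW provisioned")
```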

For example, a common mistake data centre managers make is to place exhaust fans above the racks. But unless the ceiling is very high, those fans can make the racks run hotter by interfering with the operation of the room's air conditioning system. "Having all of those produces an air curtain from the top of the rack to the ceiling that stops the horizontal airflow back to the AC units," Roberts says.

Trinity addressed the problem with targeted cooling. "We put in return air ducts for every system, and we can point them to a specific hot aisle in our data centre," he says.

ILM spreads the heat load by spacing out the blade-server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that for now. He also considers the alternative way to distribute the load -- partially filling each rack -- inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans," which consume a lot of power, he says.

Bermender would also prefer not to use spot-cooling systems such as IBM's Cool Blue, because they take up floor space and add extra cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

Ironically, many data centres have more cooling capacity than they need yet still can't cool their equipment, Donabedian says. He estimates that by improving the effectiveness of their air-distribution systems, data centres can save as much as 35 percent on power costs.

Before ILM moved, the air conditioning units, which sat opposite each other in the room, created dead-air zones under the 30-cm raised floor. Seven years of moves and changes had left a subterranean tangle of live and abandoned power and network cabling that was blocking airflow. At one point, the staff powered down the entire data centre over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.
