Five lessons of a datacenter overhaul

A datacenter makeover and migration can go wrong in many ways. Do as we suggest, not as we did.

What are the three most important ingredients of a successful project? Planning, planning, planning. For our datacenter makeover at the University of Hawaii's School of Ocean and Earth Science and Technology, we planned early and often, and still got bitten by last-minute surprises and devilish details that cost us time and money. We'll do it a little differently next time. You too can learn from our mistakes.

Our little room in the Hawaii Institute of Geophysics, HIG 319, was no stranger to servers, though it only had a casual acquaintance with them. When we started the project, the room had six racks installed, one with an 80kW APC InfraStruXure UPS being used at 40kW capacity, and most of the rest of the racks only partially populated with servers for the various SOEST departments.

SOEST needed the new datacenter to house a number of new server clusters for use by the research labs. An initial estimate called for adding three clusters comprising a mix of traditional servers and blade servers housed in new racks. Accommodating this upgrade would require doubling HIG 319's square footage, adding 250 amps of electrical power on a new breaker panel, and completely revamping the cooling system, which at the start of the project consisted of three wall-mounted window-style air conditioners that were already giving their all, to little effect.
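It helps to translate an electrical figure like that into a cooling requirement early, because nearly every watt the servers draw ends up as heat in the room. The short Python sketch below does the arithmetic; the 208V three-phase feed and the 0.9 load factor are our assumptions for illustration, not figures from the project plan.

    import math

    VOLTS = 208          # assumed three-phase service voltage (not from the plan)
    AMPS = 250           # new breaker panel capacity from the project plan
    LOAD_FACTOR = 0.9    # assumed fraction of capacity actually drawn

    # Three-phase power: P = sqrt(3) * V * I * load factor
    watts = math.sqrt(3) * VOLTS * AMPS * LOAD_FACTOR
    kw = watts / 1000

    # Nearly all IT power becomes heat, so cooling has to match the load.
    btu_per_hr = kw * 3412                 # 1 kW is roughly 3,412 BTU/hr
    tons_of_cooling = btu_per_hr / 12000   # 1 ton of cooling = 12,000 BTU/hr

    print(f"Added load: {kw:.0f} kW -> about {tons_of_cooling:.1f} tons of cooling")

Under those assumptions the new panel alone implies on the order of 80kW and more than 20 tons of cooling, which is why three window units never stood a chance.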

Although HIG 319 had some drawbacks in terms of location, the tight deadline precluded any more political wrangling for a more favorable position on the building's ground floor, which was occupied by several research labs. Besides, the maintenance corridor directly behind the room was a welcome advantage, and the room directly next to HIG 319 was a little-used storage room exactly the same size. Combining the rooms would give us the square footage we needed. We drew a deep breath and took the plunge.

Lesson 1: Give your physical space a good physical

A basic task list was fleshed out in February of 2007 and work began immediately, temporarily moving HIG 319's existing servers, removing whatever artifacts were being stored in HIG 319a, knocking down the wall separating the two rooms, and gutting everything else. A sexy new tile floor had been installed, the walls painted, and new lighting wired up when the campus facilities management department threw us the first curve ball.

Because the SOEST building is almost 50 years old, it's standard UH practice to have a structural engineer vet the room before anything as heavy as a new datacenter is installed -- we just didn't find out about that little detail until it was too late to go anywhere else. Further, because the building's original structural records had long since disappeared into Hawaii's tropical ether, the engineer had to start from scratch with his calculations.

This effectively paralyzed the project for a solid month, since nothing could happen until the engineer rendered his verdict. Four weeks later, the engineer announced that the floor was stable... barely. The two rooms could house a datacenter, but it would have to be a lightweight one: most of the rack positions would be limited to an 800-pound maximum load, the few exceptions being certain areas over the support beams. That was a nasty kick in the nethers, given that a fully loaded cluster-running rack can weigh as much as 2,000 pounds and we had planned on using six of the 12 racks in the new datacenter for Beowulf clusters. Strike one -- back to the drawing board.
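To make the mismatch concrete, here is that arithmetic as a few lines of Python; the 800-pound limit, the 2,000-pound rack weight, and the six planned cluster racks are the figures quoted above.

    RACK_LIMIT_LBS = 800        # engineer's per-rack ceiling for most floor positions
    CLUSTER_RACK_LBS = 2000     # weight of a fully loaded cluster-running rack
    PLANNED_CLUSTER_RACKS = 6   # racks we had earmarked for Beowulf clusters

    # The limit applies position by position, so averaging across the room
    # doesn't help: every cluster rack individually busts the ceiling.
    ratio = CLUSTER_RACK_LBS / RACK_LIMIT_LBS
    excess = CLUSTER_RACK_LBS - RACK_LIMIT_LBS
    print(f"Each cluster rack is {ratio:.1f}x the allowed load ({excess} lbs over).")
    print(f"Planned cluster weight in the room: {PLANNED_CLUSTER_RACKS * CLUSTER_RACK_LBS:,} lbs")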

A flurry of tropical meetings later, we had what looked like an effective workaround. The four server clusters would move to another location, while the HIG datacenter would instead house departmental servers from the various SOEST departments in 12 APC InfraStruXure racks. This would effectively make HIG 319 the central datacenter for all these departments while freeing up space for the clusters at the other locations. Not an optimal solution, but a necessary move if the college intended to install the new server clusters it wanted.
