Data Center Gets Star Treatment

Net Gains

The LDAC project gave Plumer's team a unique opportunity to rebuild its IT infrastructure from scratch. The team started by interviewing users about their needs, says Gary Meyer, systems engineer and project manager. From there, the team developed a narrative description of the technical infrastructure and handed it off to the design teams.

"The biggest key is the networking infrastructure," says Plumer.

"This industry tends to be a good 10 years ahead of general business in terms of critical network-capacity needs and capability," says Rob Enderle, principal at Enderle Group in San Jose. ILM "will probably be passed relatively quickly, given [that] this need crisscrosses their industry."

The architecture consists of three networks: a new voice-over-IP telephone network and two separate 10Gb network cores. One core carries video in the media data center; the other serves the main data center, which handles the render server farm and back-end business systems. A 10Gb fiber backbone runs from the data centers to each building and out to the distribution closets. All employees now have 1Gb/sec. connections, up from 100Mb/sec. in the old facilities. ILM also pulled fiber to each artist workstation. "Putting the fiber in gives us the ability to go to 10 Gb or greater to the desktop," Meyer says.

Meyer won't be surprised if ILM's artists max out their 1Gb connections within a year. Between downloading very large files and streaming high-definition video to the desktop, they could start to fill up the pipe, he says.
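A back-of-the-envelope calculation shows why that's plausible; the frame dimensions, bit depth and frame rate below are illustrative assumptions, not figures from ILM.

```python
# Back-of-the-envelope check (assumed figures, not ILM's): one uncompressed
# 8-bit 1080p stream at film frame rate.
width, height = 1920, 1080      # HD frame dimensions
bytes_per_pixel = 3             # 8 bits each for R, G and B
fps = 24                        # film frame rate

bytes_per_second = width * height * bytes_per_pixel * fps
gbps = bytes_per_second * 8 / 1e9
print(f"{gbps:.2f} Gb/s")       # ~1.19 Gb/s -- already more than a 1Gb/sec. link
```

A single uncompressed HD stream, in other words, is enough to saturate a 1Gb desktop connection on its own.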

Those kinds of anticipated bandwidth demands resulted in very strict requirements for network equipment, says Runge. "We spent months doing a bake-off between several vendors," he says. Foundry won because it dropped the fewest packets, a critical metric for an organization that needs to run multiple high-definition video streams.

While the new buildings gave Plumer a blank slate for a new data center, the need for more space wasn't the biggest issue. "It's more about power and cooling," he says. During the data center's design phase, heat and power-density requirements for IT equipment rose faster than anyone expected. The original design called for 200 watts per square foot.

"Partway through the process, we threw up a flare and said, 'We think we've made a mistake. We think we should design for 400 watts per square foot.' And we were basically laughed out of the room," says Meyer. Today, the room supports 330 to 340 watts per square foot and could easily consume 400, he says.

One major reason for the increase was the server farm used to render movie images frame by frame. As ILM has adopted blade servers, power density has risen from 10 kilowatts per rack a few years ago to nearly 20 kilowatts per rack today. ILM adjusted the original data center design but still has had to spread out blade servers to dissipate heat. "It's a constant job of balancing the room," Plumer says.
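The per-rack figure maps roughly onto the per-square-foot numbers above; the floor-space allocation per rack in this sketch is an assumption, not ILM's actual layout.

```python
# Rough illustration; the floor-space allocation per rack is an assumption,
# not ILM's actual layout.
rack_kw = 20            # nearly 20 kW per blade rack, per the article
sq_ft_per_rack = 60     # assumed rack footprint plus its share of aisle space

watts_per_sq_ft = rack_kw * 1000 / sq_ft_per_rack
print(f"{watts_per_sq_ft:.0f} W/sq ft")   # ~333 W/sq ft -- near the 330-340 the room supports
```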

Data on the Move

Handling storage needs during the transition was another challenge. ILM had 18 Network Appliance R200 filers connected to 68TB of storage in San Rafael. Those arrays needed to be online around the clock in order to feed files to the render server farm.

ILM was also using SpinFS from Spinnaker Networks, a distributed file system that virtualizes storage and establishes a single, unified namespace that all of the filers use. SpinFS eliminated a performance bottleneck that resulted when many machines in the render farm requested the same data at the same time.

ILM uses the technology to distribute the data across multiple disk arrays, says systems developer Mike Thompson. ILM also used it to migrate data between San Rafael and the LDAC.
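The sketch below is a toy illustration of that idea, not SpinFS or its API: clients address one namespace, and placement across several hypothetical backend filers is handled underneath, so simultaneous render-farm requests don't all land on the same array.

```python
# Toy illustration only -- this is not SpinFS or its API. It sketches a single
# namespace whose files are spread across many backend filers, so that
# simultaneous requests from render nodes don't all hit one array.
import hashlib

class UnifiedNamespace:
    def __init__(self, arrays):
        self.arrays = arrays    # hypothetical filer names

    def locate(self, path):
        """Map a path in the single namespace to the array that stores it."""
        digest = hashlib.md5(path.encode()).hexdigest()
        return self.arrays[int(digest, 16) % len(self.arrays)]

ns = UnifiedNamespace([f"filer{i:02d}" for i in range(1, 11)])
for frame in ("/shots/seq01/plate.0001.dpx", "/shots/seq01/plate.0002.dpx"):
    # Clients see one file system; placement across arrays happens underneath.
    print(frame, "->", ns.locate(frame))
```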

Thompson added 78TB of near-line storage and deployed another 10 R200s running SpinFS in the LDAC. Then he connected them over the 10Gb link to the arrays in San Rafael. "No matter which [end] you are on, you see all the storage," he says. Using the near-line storage as a buffer, Thompson pulled arrays out of the storage pool in San Rafael and reconnected them in the LDAC without disrupting operations. The near-line storage now holds completed projects until the data is ready for migration to tape.
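For a sense of scale, here is a rough sizing of the raw transfer involved; the article doesn't give transfer times, and this ignores protocol overhead and the production traffic sharing the link.

```python
# Rough sizing only -- the article doesn't give transfer times. How long would
# 68TB take to cross the San Rafael-to-LDAC 10Gb link at full line rate?
total_tb = 68
link_gbps = 10

seconds = total_tb * 1e12 * 8 / (link_gbps * 1e9)
print(f"{seconds / 3600:.1f} hours")   # ~15 hours, before protocol overhead
                                       # and ongoing production traffic
```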

Once the last staffers and equipment from the three organizations are finally moved in, the data center will be at about 60% of capacity, Plumer says. The infrastructure design, as deployed, is supposed to last five years. Already, however, the IT staff is anticipating new needs.

"We're migrating production to 64-bit," says Plumer, which means swapping out older servers for units with dual-core Opteron processors. And the film industry could be moving to 4K frames, which would double the storage requirements.

"We'll stay at 68TB for a year or two," Thompson predicts. "But as shots get more complex ... it's hard to tell."
