Elastic IT resources transform data centers

Several IT trends converge as data centers evolve to become more adaptable, Gartner says

The enterprise data center of the future will be a highly flexible and adaptable organism, responding quickly to changing needs because of technologies like virtualization, a modular building approach, and an operating system that treats distributed resources as a single computing pool.

The move toward flexibility in all data center processes, discussed extensively by analysts and IT professionals at Gartner's 27th annual data center conference, comes after years of building monolithic data centers that react poorly to change.

"For years we spent a lot of money building out these data centers, and the second something changed it was: 'How are we going to be able to do that?'" says Brad Blake, director of IT at Boston Medical Center. "What we've built up is so specifically built for a particular function, if something changes we have no flexibility."

Rapidly changing business needs and new technologies that require extensive power and cooling are necessitating a makeover of data centers, which represent a significant chunk of an organization's capital costs, Blake notes.

For example, he says, "When blade servers came out, that completely screwed up all of our metrics as far as the power we needed per square foot and the cooling we needed, because these things sucked up so much energy and gave off so much heat."

Virtualization of servers, storage, desktops and the network is the key to flexibility in Blake's mind, because hardware has long been tied too rigidly to specific applications and systems.

But the growing use of virtualization is far from the only trend making data centers more flexible. Gartner expects to see today's blade servers replaced in the next few years with a more flexible type of server that treats memory, processors and I/O cards as shared resources that can be arranged and rearranged as often as necessary.

Instead of relying on vendors to decide what proportion of memory, processing and I/O connections are on each blade, enterprises will be able to buy whatever resources they need in any amount, a far more efficient approach.

For example, an IT shop could combine 32 processors and any number of memory modules to create one large server that appears to an operating system as a single, fixed computing unit. This approach also will increase utilization rates by reducing the resources wasted because blade servers aren't configured optimally for the applications they serve.
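To make the idea concrete, here is a minimal sketch of how such a shared-resource chassis might be modeled. The names (ResourcePool, compose_server) and the pool sizes are hypothetical illustrations of the disaggregated-server concept Gartner describes, not any vendor's actual API:

```python
# Hypothetical sketch of a composable ("disaggregated") server chassis:
# processors, memory modules, and I/O cards sit in shared pools and are
# combined on demand into a logical server. All names and sizes here are
# illustrative; no real vendor API is implied.

from dataclasses import dataclass

@dataclass
class ResourcePool:
    cpus: int         # spare processors in the chassis
    memory_gb: int    # spare memory, in GB
    io_cards: int     # spare I/O cards

@dataclass
class LogicalServer:
    cpus: int
    memory_gb: int
    io_cards: int

def compose_server(pool: ResourcePool, cpus: int,
                   memory_gb: int, io_cards: int) -> LogicalServer:
    """Carve a logical server out of the shared pool, sized exactly to the request."""
    if cpus > pool.cpus or memory_gb > pool.memory_gb or io_cards > pool.io_cards:
        raise RuntimeError("insufficient free resources in pool")
    pool.cpus -= cpus
    pool.memory_gb -= memory_gb
    pool.io_cards -= io_cards
    # To the operating system this looks like one fixed machine,
    # e.g. a single 32-processor, 512GB server.
    return LogicalServer(cpus, memory_gb, io_cards)

def release_server(pool: ResourcePool, server: LogicalServer) -> None:
    """Return a decommissioned logical server's parts to the pool for reuse."""
    pool.cpus += server.cpus
    pool.memory_gb += server.memory_gb
    pool.io_cards += server.io_cards

pool = ResourcePool(cpus=64, memory_gb=1024, io_cards=16)
big = compose_server(pool, cpus=32, memory_gb=512, io_cards=4)  # the 32-processor example above
print(pool)  # the remaining capacity can back other workloads instead of sitting idle
```

The utilization gain falls out of the arithmetic: with fixed blades of, say, 16 cores and 64GB each (illustrative sizes), an application needing 8 cores and 96GB forces the purchase of two blades, which is 32 cores and 128GB, stranding the excess. Drawing from a shared pool, the same application takes exactly 8 cores and 96GB, and the remainder stays available to other workloads.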
