Addressing IT efficiency in a recession

When surveying IT infrastructure for consolidation or efficiency opportunities, what should catch an IT executive's eye?

With increasing frequency these days, articles are being published about the coming economic downturn and its effect on corporate IT. In one sense, IT organizations have been preparing for a downturn for some time, given the considerable pressure over the past several years to curb the growth of IT spending. Consolidation efforts have become commonplace: data center consolidation initiatives are occurring in most large organizations, and server consolidation through virtualization and blade technologies seems to top almost everyone's to-do list. Green initiatives within data centers represent another dimension of the ongoing effort to drive efficiency.

So when surveying the IT infrastructure landscape for consolidation or efficiency opportunities, what else might catch an IT executive's eye? Certainly storage has to come to mind. Given data growth rates and the inherent storage multiplier factor (every byte of new data typically consumes 10 to 50 bytes of capacity once mirrors, snapshots, backups, replicas, and development copies are counted), the question is not whether storage can be consolidated, but by how much.
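As a back-of-the-envelope illustration of how that multiplier accumulates, consider the sketch below. Every factor in it is an assumption chosen for illustration, not a measured benchmark; actual multipliers depend on protection policies, retention schedules, and how many secondary copies an organization keeps.

```python
# Back-of-the-envelope storage multiplier. Every factor below is an
# illustrative assumption, not a benchmark.
new_data_gb = 1.0

copy_factors = {
    "RAID/mirroring overhead": 1.0,      # e.g., RAID-1 doubles raw consumption
    "local snapshots": 0.5,              # assumed changed-block overhead
    "DR replication": 2.0,               # remote copy plus its own protection
    "backups (30-day retention)": 4.0,   # assumed fulls plus incrementals
    "dev/test copies": 2.0,              # two full copies of production data
}

total_gb = new_data_gb * (1 + sum(copy_factors.values()))
print(f"1 GB of new data -> ~{total_gb:.1f} GB of consumed capacity")
```

Even with these conservative assumptions, one gigabyte of new data drives more than ten gigabytes of consumed capacity, which is the low end of the range cited above.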

Are there ways to readily gauge the storage-consolidation potential within an organization? Here are some basic factors to consider in weighing consolidation or efficiency improvement potential (a rough sketch of how several of them might be computed follows the list):

Utilization -- An obvious but important starting point, utilization metrics, particularly when analyzed in combination with configuration and allocation data, begin to paint a picture of overall storage efficiency as well as the effectiveness of capacity planning and provisioning processes.

Tiered storage distribution -- Assuming that a tiered storage architecture is in place, the distribution of capacity across the various tiers can indicate the level of efficiency. Ideally, one would expect a pyramid model with the greatest capacity at the lowest tier. An inverse pyramid with the preponderance of storage in the top tier represents an opportunity.

Allocation -- How and where storage gets allocated can offer insights as well. For example, is storage for development and test instances regularly allocated from the same tier as production?

Complexity -- Complexity doesn't always mean inefficiency, but overcomplexity can be a contributing factor. So what constitutes overcomplexity? One quick indicator is the number of different technology platforms and management tools within the storage infrastructure.

SAN -- The design of the SAN infrastructure and related port-usage data are also helpful efficiency indicators. Host-to-target port and host-to-interswitch link port ratios combined with port-utilization metrics can point to aggregation opportunities (or, conversely, to oversubscription-related bottlenecks).
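To make several of these indicators concrete, here is a minimal sketch, in Python, of how they might be computed from an inventory extract. Every record, field name, and threshold in it is a hypothetical assumption for illustration; in practice the data would come from a storage resource management (SRM) tool or from array and switch reporting, and the "right" ratios depend on workload.

```python
# Hypothetical inventory records; real data would come from an SRM tool
# or array/switch reporting, not hand-typed dictionaries.
volumes = [
    {"tier": 1, "env": "prod", "allocated_gb": 500, "used_gb": 210},
    {"tier": 1, "env": "dev",  "allocated_gb": 300, "used_gb": 90},
    {"tier": 2, "env": "prod", "allocated_gb": 400, "used_gb": 260},
    {"tier": 3, "env": "prod", "allocated_gb": 200, "used_gb": 150},
]

# Utilization: used capacity versus allocated capacity.
allocated = sum(v["allocated_gb"] for v in volumes)
used = sum(v["used_gb"] for v in volumes)
print(f"Utilization: {used / allocated:.0%}")

# Tiered distribution: a healthy pyramid has the most capacity at the
# lowest (cheapest) tier; an inverted pyramid flags an opportunity.
by_tier = {}
for v in volumes:
    by_tier[v["tier"]] = by_tier.get(v["tier"], 0) + v["allocated_gb"]
for tier in sorted(by_tier):
    print(f"Tier {tier}: {by_tier[tier] / allocated:.0%} of capacity")
if by_tier.get(1, 0) > by_tier.get(max(by_tier), 0):
    print("Inverted pyramid: consider demoting data from tier 1")

# Allocation hygiene: dev/test capacity sitting on the production tier.
misplaced = sum(v["allocated_gb"] for v in volumes
                if v["env"] != "prod" and v["tier"] == 1)
print(f"Dev/test capacity on tier 1: {misplaced} GB")

# SAN fan-in: host ports per inter-switch link (ISL) port. The threshold
# below is purely illustrative; acceptable ratios vary by workload.
host_ports, isl_ports = 96, 8
ratio = host_ports / isl_ports
print(f"Host-to-ISL port ratio: {ratio:.0f}:1"
      + (" (possible oversubscription)" if ratio > 10 else ""))
```

None of these checks is definitive on its own; their value comes from reviewing them together, alongside the utilization and configuration data discussed above.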

Of course, just identifying inefficiency is not enough. The big challenge in storage is actually realizing the identified improvements. Service-disruption concerns and operational challenges often mean that improvements are implemented only during technology refreshes. The fiscal constraints of a downturn make it all the more important for storage managers to ensure that limited "efficiency improvement" dollars are well spent.

Jim Damoulakis is chief technology officer at GlassHouse Technologies Inc., a leading provider of independent storage services. He can be reached at jimd@glasshouse.com.
