Accounting for virtualization in chargeback

We will face issues like whether to charge by logical (virtual) gigabyte or physical de-duplicated gigabyte

A recent technology discussion among colleagues unexpectedly turned to the operational challenges introduced by new technologies such as server virtualization, dynamic (a.k.a. thin) provisioning, data de-duplication and grid-based clustered file servers. Although each of these technologies is at a different stage of market acceptance, each is influencing how IT infrastructure is planned and architected. And while there is often a groundswell push for adoption, many organizations haven't given adequate consideration to the organizational changes that will invariably accompany their introduction.

One area that is a fairly hot topic of conversation in the server virtualization world is how to account for infrastructure costs in a virtualized world -- who pays for the underlying physical servers and how. Traditionally, organizations funded server (and storage) acquisition as part of the new project process -- the budgeting for these items was included within the project. When it came to chargeback, whether formal or informal, the system was at least conceptually straightforward: Server costs were associated with an application that was owned by a business unit and storage costs were similarly apportioned usually on a per gigabyte basis.

But virtualization breaks this model; now it is possible to provision new servers in minutes rather than weeks without acquiring physical devices. In this circumstance, how should users be charged, and, more importantly, when the next physical server is needed, who pays for it? This can get particularly ugly when considering things like the consolidation of servers at various life-cycle stages and factoring in depreciation costs. While there are shared resource models in place in IT for areas like mainframes and networks, the measurement and accounting mechanisms for open systems servers and storage haven't really caught up to the new reality.

In designing an accounting mechanism to support these new technologies, two factors must be determined: the resource metrics on which chargeback will be based, and how to account for the excess capacity required to support a dynamic, shared-usage model. For servers, resources like CPU, memory and I/O come to mind. Excess capacity must be folded into the per-unit cost and is therefore dependent on the organization's ability to plan and forecast.
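To make that concrete, here is a minimal sketch of how a per-unit rate might fold planned headroom into the cost of a shared pool and then bill a virtual machine by its allocated resources. All of the costs, capacities and headroom fractions below are hypothetical, purely for illustration; they are not drawn from any particular product or organization.

```python
# Minimal chargeback sketch: the per-unit rate recovers the full cost of a
# shared pool, including capacity held in reserve, spread over the capacity
# that is actually expected to be billed. All figures are hypothetical.

def unit_rate(pool_cost, usable_capacity, headroom_fraction):
    """Cost per consumed unit when a fraction of the pool is reserved as headroom."""
    billable_capacity = usable_capacity * (1 - headroom_fraction)
    return pool_cost / billable_capacity

# Hypothetical monthly pool costs and capacities
cpu_rate = unit_rate(pool_cost=40_000, usable_capacity=512, headroom_fraction=0.25)      # per vCPU
mem_rate = unit_rate(pool_cost=25_000, usable_capacity=4_096, headroom_fraction=0.25)    # per GB RAM
io_rate  = unit_rate(pool_cost=10_000, usable_capacity=200_000, headroom_fraction=0.30)  # per IOPS

def vm_charge(vcpus, mem_gb, iops):
    """Monthly charge for one virtual machine based on allocated resources."""
    return vcpus * cpu_rate + mem_gb * mem_rate + iops * io_rate

print(round(vm_charge(vcpus=4, mem_gb=16, iops=500), 2))
```

The key design choice here is that the headroom fraction is a forecast: if the organization over- or under-estimates the excess capacity it needs, the rate either overcharges business units or fails to recover the cost of the next physical server.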

As newer technologies like de-duplication become widely adopted, we will also face questions like whether to charge by logical (virtual) gigabyte or by physical, de-duplicated gigabyte, and how to predict or plan for that. Certainly, server virtualization is leading the way in forcing consideration of these issues. At least one company, V-Kernel Corp., has introduced a tool specifically targeting virtualization chargeback. But one doesn't need a crystal ball to envision a future of large pooled storage farms consisting of dynamically provisioned storage arrays and banks of commodity storage grids, and a day when planning, forecasting and provisioning IT infrastructure resources become more akin to metering by the power company.
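The logical-versus-physical question is easy to illustrate with arithmetic. The sketch below assumes a 5:1 de-duplication ratio for one business unit and a forecast pool-wide ratio of 4:1; the ratios and the per-gigabyte rate are invented for illustration, not measured values.

```python
# Hypothetical illustration of charging by logical vs. physical (de-duplicated) GB.
# A business unit writes 10 TB of logical data that de-duplicates to 2 TB physical.

logical_gb = 10_000
dedupe_ratio = 5.0
physical_gb = logical_gb / dedupe_ratio

physical_rate = 0.50  # hypothetical fully loaded cost per physical GB per month

# Charging by physical GB bills only what is actually stored...
charge_physical = physical_gb * physical_rate

# ...while charging by logical GB requires a blended rate built on a forecast
# pool-wide de-duplication ratio, which must be planned in advance.
assumed_pool_ratio = 4.0
logical_rate = physical_rate / assumed_pool_ratio
charge_logical = logical_gb * logical_rate

print(f"physical-GB charge: ${charge_physical:,.2f}")
print(f"logical-GB charge:  ${charge_logical:,.2f}")
```

The two figures diverge whenever a business unit's data de-duplicates better or worse than the pool-wide forecast, which is exactly the planning problem the column describes.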

Jim Damoulakis is a chief technology officer at GlassHouse Technologies, a provider of independent storage services. He can be reached at jimd@glasshouse.com.
