Demystifying deduplication

Deduplication can be applied anywhere there is a significant amount of data commonality

Of the assortment of technologies swarming around the storage and data protection space these days, one that can be counted on to garner both lots of interest and lots of questions among users is deduplication. The interest is understandable since the potential value proposition, in terms of reduction of required storage capacity, is at least conceptually on a par with the ROI of server virtualization. The win-win proposition of providing better services (e.g. disk-based recovery) while reducing costs is undeniably attractive.

However, while the benefits are obvious, the road to get there isn't necessarily as clear. How does one decide to adopt a particular technology when it manifests itself in so many different forms? Deduplication, like compression before it, can be incorporated into a number of different product types. While by no means a complete list, the major options for our purposes include backup software, NAS storage devices, and virtual tape libraries (VTLs).

Even within these few categories, there are dramatic differences in how deduplication is implemented, and each approach has its own benefits. The scorecard of feature tradeoffs includes:

  • Source vs. target deduplication

  • Inline vs. post-processing

  • Global vs. local span

  • Single vs. multiple head processing

  • Indexing methodology

  • Level of granularity

As with any set of products, these tradeoffs reflect optimization for specific design or market targets: high performance, low cost, enterprise, SMB, etc. For more detail on the range of deduplication options and their implications, you may want to check out my colleague Curtis Preston's Backup Central blog.
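
To make a couple of these tradeoffs concrete, here is a minimal sketch in Python of a chunk-and-fingerprint store, contrasting inline deduplication (duplicates are caught before data lands on disk) with post-processing (everything lands first and a later sweep collapses the duplicates). The chunk size, hash choice, and function names are illustrative assumptions, not any particular vendor's implementation.

```python
import hashlib

CHUNK_SIZE = 4096  # "level of granularity": smaller chunks catch more duplicates but grow the index


def fingerprint(chunk: bytes) -> str:
    """Identify a chunk by its SHA-256 digest (one possible 'indexing methodology')."""
    return hashlib.sha256(chunk).hexdigest()


def split(data: bytes, size: int = CHUNK_SIZE):
    """Fixed-size chunking; real products may use variable-length chunking instead."""
    for i in range(0, len(data), size):
        yield data[i:i + size]


def inline_dedup(stream: bytes, index: dict) -> list:
    """Inline: consult the fingerprint index before anything is written."""
    refs = []
    for chunk in split(stream):
        fp = fingerprint(chunk)
        if fp not in index:      # unseen data is stored exactly once
            index[fp] = chunk
        refs.append(fp)          # duplicates become cheap references
    return refs


def post_process_dedup(staged_chunks, index: dict) -> list:
    """Post-processing: data has already landed in full; a later sweep collapses duplicates."""
    refs = []
    for chunk in staged_chunks:
        fp = fingerprint(chunk)
        index.setdefault(fp, chunk)   # keep the first copy, drop the rest
        refs.append(fp)
    return refs
```

Whether that index is kept per appliance or shared across multiple heads is essentially the global-versus-local and single-versus-multiple-head question from the list above.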

Until recently, one aspect of deduplication that was generally unquestioned was its focus: secondary data, particularly backup. However, there are growing signs that this too is changing. In theory, deduplication can be applied anywhere there is a significant amount of data commonality, which is why backup is such a good fit.
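
As a rough illustration of that commonality, consider fingerprinting two successive full backups of the same data set when only a single block has changed overnight; the second pass contributes almost nothing the index has not already seen. The chunk size and data volumes below are made up purely for illustration.

```python
import hashlib
import os

CHUNK = 4096


def fingerprints(data: bytes) -> set:
    """Fingerprint each fixed-size chunk of a backup stream."""
    return {hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)}


# Two synthetic "full backups": the second differs by a single changed block.
night_one = os.urandom(1024 * 1024)          # 1 MB of data backed up on night one
night_two = bytearray(night_one)
night_two[0:CHUNK] = b"x" * CHUNK            # one block modified before night two

seen = fingerprints(night_one)
new = fingerprints(bytes(night_two)) - seen  # only chunks the store has never seen

print(f"second backup adds {len(new)} new chunk(s) out of {len(night_two) // CHUNK}")
# Expected output: second backup adds 1 new chunk(s) out of 256
```

Repeat that night after night and the commonality, and therefore the capacity savings, compounds.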

However, if we look around for more examples of high data commonality, one area that comes to mind is virtualized server environments. Consider the number of nearly identical virtual C: drives in a VMware server cluster, for example. Recently, NetApp has been leading the way among storage vendors in suggesting deduplication for primary storage in these environments. In fact, the company has been steadily expanding its support of deduplication, initially offering it on its secondary NearStore platforms, then on its primary FAS line, and as of last week on its V-Series NAS gateways, where it can deduplicate the likes of EMC, HDS, HP, and other vendors' storage.

Of course, for many, this is uncharted territory, and the performance and management impacts need to be better understood. But given the higher cost of primary storage versus secondary, the potential to achieve 20:1 storage savings, even for just a portion of the environment, is quite tempting.

Jim Damoulakis is chief technology officer of GlassHouse Technologies, a leading provider of independent storage services. He can be reached at jimd@glasshouse.com.
