First will come virtualization, then utility storage. That's long been the vision of how enterprise storage will evolve as IT grows increasingly dynamic and on-demand becomes business as usual.
The good news is that storage virtualization's day has finally arrived, the uptick in interest fueled by the success of server virtualization and reports of noteworthy results from early adopters. Pioneers cite big gains in storage utilization and fewer device-level management headaches. They also share how they've avoided the spending sinkhole despite contending with ever-increasing data volumes.
"We've gone from 40 per cent utilization of storage on the back end to over 85 per cent. Virtualization has saved us from going out and getting three times more disk than we needed, and we've gained a single point to manage connectivity between storage and the hosts," says Drew Kreisa, storage administrator at Mercury Marine, a recreational propulsion-engine maker.
Early adopters also have discovered, however, that the distance between storage virtualization and a utility-storage utopia is vast and full of what seem to be insurmountable challenges. Confusion - about the large number of architectural options, the lack of interoperability among different vendors' products and poor storage-resource management (SRM) tools for virtual environments - has muddied user expectations. As they wait for the industry to sort itself out, IT executives are left to relish the gains storage virtualization has brought them while pushing off their grand utility visions further into the future.
The wow of now
When Mercury Marine started looking at storage virtualization options five years ago, choices were limited. Not so today. The options for how to architect the abstraction of the physical layer are so abundant that IT executives should be wary, Kreisa says.
"There's a lot of confusion in the market. There are too many companies offering completely different ways to architect your network, which means you have to be careful not to bring a new component into your network that will blow away what you're already doing," he says.
In theory, storage virtualization lessens the complexity of managing, backing up, archiving and migrating data among pooled storage devices. With the technology, IT executives shouldn't have to get mired in details about the physical devices.
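That abstraction can be pictured with a short sketch: hosts see only logical volumes, while the virtualization layer carves them out of whatever pooled physical arrays have free capacity. Everything here (class names, array names, sizes) is hypothetical and purely illustrative, not any vendor's product.

```python
class PhysicalArray:
    """A physical storage device in the pool (illustrative only)."""
    def __init__(self, name, capacity_gb):
        self.name = name
        self.free_gb = capacity_gb

class VirtualizationLayer:
    """Maps logical volumes onto pooled physical capacity."""
    def __init__(self, arrays):
        self.arrays = arrays
        self.volumes = {}  # volume name -> list of (array name, GB) extents

    def create_volume(self, name, size_gb):
        # Allocate extents across arrays until the request is satisfied;
        # the host never learns which physical devices were used.
        extents, remaining = [], size_gb
        for array in self.arrays:
            if remaining == 0:
                break
            take = min(array.free_gb, remaining)
            if take:
                array.free_gb -= take
                extents.append((array.name, take))
                remaining -= take
        if remaining:
            raise RuntimeError("pool exhausted")
        self.volumes[name] = extents
        return extents

pool = VirtualizationLayer([PhysicalArray("array-a", 100),
                            PhysicalArray("array-b", 200)])
# A 150 GB logical volume silently spans both arrays.
print(pool.create_volume("vol1", 150))  # [('array-a', 100), ('array-b', 50)]
```

This is also why utilization climbs: capacity stranded on one array can be combined with free space on another instead of sitting idle.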
Picking the right approach to storage virtualization can be critical. First, a company must decide where it wants the storage virtualization to take place. For instance, it could choose a host-based system from such companies as Brocade Communications and Symantec. However, as these environments grow, they require their own operating system, host virtualization licenses, maintenance and software overhead.
A company also could deploy storage virtualization as part of the fabric with an appliance, such as IBM's SAN Volume Controller or with software that runs on the switch, such as EMC's Invista. Arun Taneja, founder of The Taneja Group consultancy, says the appliance-based approach is hot right now, while the switch-based approach doesn't have as much traction because of its higher cost.
A company that decides to go with a fabric-based strategy also must consider whether it's going to perform virtualization with in-band, out-of-band or split-path technology. In-band products, such as those from DataCore Software, FalconStor Software and IBM, place both data and control information in the direct path between the host and the storage controller. With out-of-band solutions, such as those from LSI, data flow is separated from control flow.
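The in-band/out-of-band distinction can be sketched roughly as follows. In the in-band case, every I/O passes through the virtualization device, which resolves the logical-to-physical mapping itself; in the out-of-band case, the host looks up the mapping on a separate control path and then reads directly from storage. All names below are hypothetical, assumed for illustration only.

```python
# Hypothetical logical-to-physical mapping table held by the
# virtualization layer (in-band) or a metadata service (out-of-band).
MAPPING = {"vol1": "array-a:lun-7"}

def in_band_read(volume):
    """In-band: the I/O request travels through the virtualization
    appliance, which resolves the mapping in the data path and
    forwards the request to the physical device itself."""
    physical = MAPPING[volume]  # resolved inside the data path
    return f"read {physical} via appliance"

def out_of_band_read(volume):
    """Out-of-band: the host first queries a metadata service for the
    mapping (control path), then transfers data directly to and from
    the physical device (data path)."""
    physical = MAPPING[volume]  # separate control-path lookup
    return f"read {physical} directly"

print(in_band_read("vol1"))   # read array-a:lun-7 via appliance
print(out_of_band_read("vol1"))  # read array-a:lun-7 directly
```

The trade-off this sketch hints at: in-band devices see all traffic, which simplifies deployment but puts them on the performance-critical path; out-of-band designs keep data transfers direct at the cost of host-side lookup logic.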