Computerworld

Storage virtualization buying guide

What is storage virtualization? Read our how-to guide on the ways storage virtualization can help address persistent data management issues
Edith Cowan University's Angus Griffin.

Over the last five years, the terms 'server' and 'desktop virtualization' have permeated almost every conversation in the IT arena, with much being said about the wealth of benefits both can offer enterprises.

While IDC statistics indicate Australia has the highest rate of server virtualization penetration in the world, the country is yet to warm to storage virtualization. That’s surprising because, carefully implemented, the technology can help address persistent data management issues.

The case is even stronger given that IDC statistics also suggest data will grow some 44 times larger than it is today over the next decade, while, alarmingly, the number of storage administrators managing that data will grow only 1.4 times, leaving each administrator responsible for roughly 30 times more data than today.

Clearly, it’s time more attention was paid to storage virtualization. But, as IBRS advisor, Kevin McIsaac tells it, a lot of the blank stares usually encountered when the technology is mentioned could be due to the fact that storage virtualization isn’t necessarily the easiest concept to grasp. For one thing, there is no clear product segment for the technology and, in a way, every aspect of storage is some form of virtualization.

“I don’t see storage virtualization as a product,” he says. “I see it as a bunch of features that add different types of storage virtualization to an existing product that you need to evaluate.”

IDC senior program manager, Matt Oostveen, agrees, describing storage virtualization as a “spectrum of technologies that roll up into virtualization”. That spectrum encompasses the likes of thin provisioning, de-duplication, hypervisors and forms of middleware located between the storage area network (SAN) and servers.
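By way of illustration, the sketch below models thin provisioning, one of the technologies on that spectrum: a volume advertises a large logical size but only draws physical blocks from a shared pool when data is actually written. It is a toy example, and all of the names in it are hypothetical rather than any vendor's API.

```python
class ThinVolume:
    """A logical volume that advertises a large size but consumes
    physical blocks only when data is actually written."""

    def __init__(self, logical_blocks):
        self.logical_blocks = logical_blocks  # size the host sees
        self.mapping = {}                     # logical block -> physical block

    def write(self, logical_block, data, pool):
        if logical_block not in self.mapping:
            # Allocate from the shared physical pool on first write only.
            self.mapping[logical_block] = pool.pop()
        # A real system would now write `data` to the mapped physical block.

    def consumed(self):
        """Physical blocks actually in use, typically far below the logical size."""
        return len(self.mapping)


pool = list(range(1000))                    # shared physical capacity
vol = ThinVolume(logical_blocks=100_000)
vol.write(42, b"payload", pool)
print(vol.logical_blocks, vol.consumed())   # 100000 advertised, 1 consumed
```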

Early adopters

Any way you slice it, storage virtualization, for those that have made the leap, has enabled some pretty dramatic changes. With some 350,000 customers worldwide, Australian online hosting company Melbourne IT is no stranger to complex storage networks. The company’s dynamic storage environment has around 800 terabytes (TB) of available content, which is projected to exceed a petabyte at some stage this year.

The organisation turned to virtualization early, taking its storage virtual in 2007 with the implementation of IBM’s SVC (SAN Volume Controller). The rollout virtualised 26 SAN controllers across multiple data centres into a single view of its storage infrastructure.

Initially, the technology acted as a migration tool, enabling 300TB of content to be migrated across during business hours. Three years on, the company hasn’t looked back, though it is 12 months into a migration of its storage from IBM to EMC, with completion due in the second quarter of this year.

In addition to being an essential data migration tool, storage virtualization technologies can also help organisations save dollars and increase efficiency, availability and flexibility.

Melbourne IT’s chief technology officer, Glenn Gore, says the technology serves a significant purpose in his organisation, allowing staff to manage the storage and data centres as a single “blob” rather than individual and ultimately more complex silos.

With the advent of virtualization technologies, Gore says he now has the ability to move data between SANs without an outage and can move that storage across different LUN (logical unit number) or RAID (redundant array of independent disks) groups. Storage can also be assigned across different performance or availability tiers to the applications when needed, providing the on-demand capabilities customers require in a Cloud-enabled world.
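The capability Gore describes rests on a layer of indirection: hosts address a stable virtual volume while the virtualization layer maps it to whatever backend LUN and tier currently hold the data. The sketch below is a hypothetical, simplified model of that idea, not IBM SVC's or EMC's actual interface.

```python
class VirtualVolume:
    def __init__(self, name, backend_lun, tier):
        self.name = name                # stable identifier the host sees
        self.backend_lun = backend_lun  # where the data really lives
        self.tier = tier                # e.g. "ssd", "fc", "sata"


class VirtualizationLayer:
    def __init__(self):
        self.volumes = {}

    def provision(self, name, backend_lun, tier):
        self.volumes[name] = VirtualVolume(name, backend_lun, tier)

    def read(self, name):
        # The host only ever asks for the virtual name.
        vol = self.volumes[name]
        return f"reading {name} from {vol.backend_lun} ({vol.tier})"

    def migrate(self, name, new_lun, new_tier):
        # Data would be copied in the background; once complete, only the
        # mapping changes. The handle the host holds never does.
        vol = self.volumes[name]
        vol.backend_lun, vol.tier = new_lun, new_tier


layer = VirtualizationLayer()
layer.provision("customer-db", backend_lun="arrayA/lun7", tier="fc")
print(layer.read("customer-db"))
layer.migrate("customer-db", new_lun="arrayB/lun3", new_tier="ssd")  # no host outage
print(layer.read("customer-db"))
```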

“Being a hosting company, we always have new customers coming on board,” Gore says. “We also have existing customers who are always looking to grow, are requesting more storage or installing different applications with different performance requirements.

"There are occasions where customers no longer need their applications as they move or upgrade to newer services, meaning there’s always new storage being provided, old storage coming offline and being recycled for new customers.”

For organisations with disparate storage spread across two or more locations, storage virtualization is a no-brainer, says Gore, as it simplifies the management of storage across the locations, adding freedom and agility in the same way as server virtualization.

He notes that since going virtual, both density and utilisation are up, total cost of ownership models have vastly improved and the cost per gigabyte of storage to the customer has decreased.

“When customers need that high performance level, which unfortunately does come at a higher cost, they only have to use it for the periods they require, so there’s some very real savings that can be realised particularly in the larger storage environments,” he says.

In the case of Western Australia’s Edith Cowan University (ECU), flexible storage has consistently been a key priority, says manager of IT infrastructure, Angus Griffin. The university's many systems change regularly, requiring the institution to scale storage according to project load and specifications.

Edith Cowan also went virtual with its storage in 2007, with a 15-month deployment of IBM’s SVC replacing an ageing, outsourced EMC CLARiiON system.

“We were looking for a system that gave us flexibility and the capability to move storage around live while our systems were running," Griffin says of the upgrade.

The technology allows the university to isolate the back-end disk system from the servers, enabling movement of storage from slower to faster disks and vice versa without interrupting the servers. It also enables replication between the different campuses and the ability to take data snapshots for disaster recovery.
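For a rough sense of how point-in-time snapshots support disaster recovery, the toy example below freezes a volume's block map when a snapshot is taken and rolls back to it on restore. It is an illustrative sketch only, not a description of ECU's actual setup.

```python
class SnapshottingVolume:
    def __init__(self):
        self.blocks = {}     # block number -> data
        self.snapshots = []  # frozen block maps, one per snapshot

    def write(self, block, data):
        self.blocks[block] = data

    def snapshot(self):
        # Freeze the current map; cheap because only the map is copied.
        self.snapshots.append(dict(self.blocks))
        return len(self.snapshots) - 1

    def restore(self, snap_id):
        # Disaster recovery: roll the live map back to the frozen one.
        self.blocks = dict(self.snapshots[snap_id])


vol = SnapshottingVolume()
vol.write(0, "student records v1")
snap = vol.snapshot()
vol.write(0, "student records v2 (corrupted)")
vol.restore(snap)
print(vol.blocks[0])  # "student records v1"
```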

“Without storage virtualization the end user would constantly experience service interruptions caused by a poorly performing application as a result of the application being on an inappropriate tier of storage,” Griffin says. “To shift the application to the correct tier, the student record system would need to be switched off each time, causing a disruption. Virtualised storage technology allows the application to be moved while the system is live.”

When to go virtual

As is the case with most emerging technologies, storage virtualization isn’t a cure-all for every organisation, and it can bring a host of unexplored risks and challenges for the unwary.

Firstly, organisations must assess the cost and complexity involved in deploying virtualization technologies to ensure a given offering is the correct solution for them, warns IBRS' McIsaac.

“In my experience, unless you’re an organisation with very large data sets like a Melbourne IT or a Telstra and you’re constantly moving data around and every six months you’re disposing of an array and buying a new one, you’re vastly better off buying one or two arrays from a vendor that will hold all your data rather than trying to knit together a quilt of different arrays from different places,” he says.

IDC’s Oostveen advises end users to make sure that each option is compared against their particular set of requirements and that they evaluate price based on dollars per gigabyte as well as dollars per IOPS (input/output operations per second) and IOPS per gigabyte. Doing this will ensure they are buying all necessary aspects of storage and not just passive capacity.
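Oostveen's comparison comes down to simple arithmetic, as in the sketch below, which computes dollars per gigabyte, dollars per IOPS and IOPS per gigabyte for two hypothetical arrays; the figures are invented purely for illustration.

```python
# Invented figures for two hypothetical candidate arrays.
candidates = [
    {"name": "Array A", "price": 250_000, "capacity_gb": 500_000, "iops": 50_000},
    {"name": "Array B", "price": 180_000, "capacity_gb": 800_000, "iops": 20_000},
]

for c in candidates:
    dollars_per_gb = c["price"] / c["capacity_gb"]
    dollars_per_iops = c["price"] / c["iops"]
    iops_per_gb = c["iops"] / c["capacity_gb"]
    print(f'{c["name"]}: ${dollars_per_gb:.2f}/GB, '
          f'${dollars_per_iops:.2f}/IOPS, {iops_per_gb:.3f} IOPS/GB')
```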

Melbourne IT’s Gore echoes this sentiment, noting that anyone with more than one SAN environment operating will reap the benefits while a smaller environment with a single SAN will see none.

“You can’t virtualise one SAN but the moment you’ve got two or more in operation, virtualization benefits start kicking in and the more controllers you have the more those benefits start coming in,” he explains.

However, even in situations where virtualization is feasible, the skills required to manage it remain scarce in Australia. Part of the rationale for moving to IBM at Edith Cowan, according to Griffin, was the desire to in-source management of the university's storage, which had previously been "tightly managed" by EMC.

While the solution itself is slightly more complex — and the learning curve a challenge — Griffin says in-house management has made day to day maintenance much simpler.

“Being able to have the staff really closely acquainted with how the gear works helps because it is quite simple to operate and they can really get some benefit out of understanding it which helps to prevent any major hiccups,” Griffin says.

“We can measure whether storage is performing appropriately and how a project runs and then adjust how the storage is provisioned in the backend as the project evolves or progresses."

Melbourne IT’s Gore claims there isn’t a huge degree of risk in migrating the data over to virtualised storage, despite spending the last 12 months conducting rigorous tests on the new platform before it goes into production. He does, however, stress the importance of training staff to manage the environment.

Staff must have the skills required to manage virtual storage, which Gore notes differ slightly from those for traditional storage, as do the risks: should an operational issue arise at the virtualization layer, it could potentially take out an entire storage infrastructure.

According to Gore, just as is the case with server virtualization, storage virtualization involves a different thought process, as the data is transient and no longer locked to a single piece of hardware. Staff must be aware of the potential impacts of moving storage and invest in tools that let them view their storage environment and its performance.

For Griffin, there's no going back.

“We deliberated over whether it was the right thing to do, but when you look at the benefits we get out of it, it certainly turned out to be the right choice. We went through a process internally where we looked at all the technical options that we wanted to put into the request for tender we put out, and it became very clear that with that simplified, standardised and virtualised strategy, we needed to put this technology in place to get those benefits,” he says.

Storage virtualization tricks and tips

When it comes to storage virtualization, it’s important to remember that not everyone will benefit from the technology. Like the Cloud, storage virtualization doesn't always live up to the hype.

“It’s such a muddy, ugly, messy, stupid area,” IBRS’ McIsaac says. “I think the problem is that vendors are trying to come in and pitch a concept that doesn’t exist and the storage managers then have to wade through the architecture to really see what they’re selling.

“They need to stop talking about storage virtualization and all of its benefits and say ‘let’s start talking specifically about server feature functions and how they translate into real business benefits’, because otherwise all you get is a real high-level pitch.”

The solution is to plan from the data, rather than the technology itself, McIsaac advises. Create a high-level map of the data, looking at the major applications and their data sets. Also examine service levels, including hours of operation or availability, and the recovery time objective for each.
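One hedged way to begin that exercise is a plain inventory of major applications, their data sets, availability hours and recovery time objectives, as in the illustrative sketch below; every entry shown is made up.

```python
# Every entry below is invented for illustration.
data_map = [
    {"application": "student records", "data_set_tb": 4,
     "availability": "24x7", "rto_hours": 2},
    {"application": "email archive", "data_set_tb": 30,
     "availability": "business hours", "rto_hours": 24},
]

# Sorting by the strictest recovery time objective shows which data sets
# should drive the storage (and virtualization) decisions.
for entry in sorted(data_map, key=lambda e: e["rto_hours"]):
    print(entry["application"], "->", entry["rto_hours"], "hour RTO")
```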

On the other side of the spectrum, organisations should also be mindful that there is greater danger in not looking for a solution to assist with data growth.

“To be able to manage and cope with the amount of data that’s accumulating as we head into the future, organisations need to be able to rationalise and consolidate and virtualization techniques are the only way we’re going to be able to keep up with this data growth,” IDC’s Oostveen says.

There is more risk, he says, in sitting back and letting data explode within the data centre, than in moving to virtualization, because the organisation will rapidly run out of both budget and storage administrators.

“Start looking at what key problems you’ve got and then look at what the vendor is offering and ask yourself: does it make it simpler, more flexible and cheaper per terabyte?” McIsaac says.

Though storage virtualization remains the much-maligned cousin of its desktop and server counterparts, Melbourne IT’s Gore maintains higher uptake is inevitable.

“Storage will go the same way and the reason for that is because of these fantastic opportunities and technologies that you get from storage virtualization.”