Grid computing poised to shake up storage

The concept of grid computing has been a hot topic for some time, and as the year has unfolded, the future application of grids has become clearer. Sandial Systems Inc.'s Director of Technical Marketing Rob Strechay spoke with Scott Tyler Shafer about how grid computing will move from the domain of high-performance computing into the enterprise.

Q: How do you define "grid computing"? I define it as the decoupling of networking, storage, processing, and memory.

Strechay: I'm a member of the Global Grid Forum and the co-chair for a proposed research group for policy for grid networking, and that is how we approach it as well.

Q: What is the Global Grid Forum?

Strechay: The Global Grid Forum is the standards body looking at how to develop standard ways of providing interfaces and an application infrastructure for grid computing. That means making all the different pieces, such as servers, networks, and storage, talk together in a coherent way, irrespective of who is writing the underlying infrastructure. Basically, it means making sure that the networking pieces, the storage resources, and the compute resources can each be provisioned in a consistent manner.

Q: What needs to happen on the hardware side to make grid networking in the enterprise a reality?

Strechay: From a hardware perspective, you have to look at the intelligence in the network. It has to increase, and that means standard ways of interfacing. Most of that work is converging on CIM (the Common Information Model), which came out of the DMTF (Distributed Management Task Force), as the interface and API. The Global Grid Forum is partnered with the DMTF, as is SNIA (the Storage Networking Industry Association).

Q: Do we need a standard to connect these pieces?

Strechay: No, we need a standard way to exchange information. A lot of what is happening with T11.5 (a task group within Technical Committee T11 responsible for storage management interfaces) and intelligent switching is very complementary to what is going on with grids. You can look at grids in two different ways: There is the stuff the research community is going to use, called "the grid," and then there is the stuff that businesses are going to use, called "grids." Business grids are going to be architected much differently than the research grids. You are going to have fault-tolerant hardware, and you're going to have much more redundant backbone network infrastructures to provide higher availability, visibility, and control. That way, interconnecting different servers and storage, from a compute-resource and a storage-resource perspective, becomes much more manageable. For instance, controlling bandwidth and quality of service becomes extremely important, because you have to be able to go through and provision out how much resource is needed for a certain job. When you start sending I/Os all over the place, you really have to be able to manage that process so you are not wasting bandwidth or starving another process out.
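To make that concrete, here is a minimal sketch of per-job bandwidth provisioning on a shared backbone, in the spirit of what Strechay describes. The names (GridFabric, Job) and the admission logic are illustrative assumptions, not any vendor's API.

```python
# Hypothetical sketch: admit a job only if the fabric can reserve its
# bandwidth, so one job cannot starve another. All names are illustrative.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    bandwidth_mbps: int   # bandwidth the job needs across the fabric
    priority: int         # higher number = higher QoS priority

class GridFabric:
    """Tracks a shared backbone's capacity across provisioned jobs."""

    def __init__(self, capacity_mbps: int):
        self.capacity_mbps = capacity_mbps
        self.allocations: dict[str, int] = {}

    def available(self) -> int:
        return self.capacity_mbps - sum(self.allocations.values())

    def provision(self, job: Job) -> bool:
        # Admit only if the fabric can honor the reservation; a real
        # scheduler would queue or preempt by priority instead of refusing.
        if job.bandwidth_mbps > self.available():
            return False
        self.allocations[job.name] = job.bandwidth_mbps
        return True

    def release(self, job: Job) -> None:
        self.allocations.pop(job.name, None)

fabric = GridFabric(capacity_mbps=2000)
batch = Job("overnight-batch", bandwidth_mbps=800, priority=1)
if fabric.provision(batch):
    print(f"{batch.name} admitted; {fabric.available()} Mbps left for other jobs")
```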

Q: Is this stuff all still theoretical?

Strechay: Well, right now the major pharmaceuticals have all deployed them: Pfizer and GlaxoSmithKline, for example. Even the automobile industry uses them.

Q: Do they use grids for their entire IT operations?

Strechay: No, no, I don't think you'll ever see an entire IT operation go to a pure grid, or at least not in the next five to six years. It's just not practical. They have departmental grids, and that is how it is going to grow out. They'll interconnect those departmental grids to backbone grids that join the different resources together. It is kind of how everything evolves -- from workgroup to departmental to enterprise.

Q: Has the decoupling of all the elements begun yet?

Strechay: Absolutely. If you look at what is coming out from HP and IBM on the server side, they are already decoupling using blade servers. That means separating the storage and memory from the I/O.

Q: Does that mean enterprises aren't buying the storage that comes with the blade servers?

Strechay: Not a lot are. Or if they are, they may just put an OS running on the disk for emergency purposes or just buy disk-less. Then they just point their blade server at a LUN (logical unit number) on an array and it assumes the personality of that OS. That way the OS boots off that array and it can pick up the applications and the personality of that server, that application. (And) if a blade server goes down, all you have to do is reposition a new blade server to take that over.
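The failover Strechay describes might look like the following sketch, assuming diskless blades that boot from LUNs on an array. BladePool, Blade, and Lun are hypothetical names for illustration, not a blade vendor's management API.

```python
# Hypothetical sketch of diskless-blade failover: a spare blade "assumes the
# personality" of a failed one by repointing at the failed blade's boot LUN.
from dataclasses import dataclass

@dataclass
class Lun:
    lun_id: str           # logical unit number on the array
    os_image: str         # OS plus application personality stored there

@dataclass
class Blade:
    slot: str
    boot_lun: Lun | None = None   # None means an idle, diskless spare
    healthy: bool = True

class BladePool:
    def __init__(self, blades: list[Blade]):
        self.blades = blades

    def fail_over(self, failed: Blade) -> Blade | None:
        # Find an idle spare and repoint it at the failed blade's boot LUN.
        spare = next((b for b in self.blades
                      if b.healthy and b.boot_lun is None), None)
        if spare is not None and failed.boot_lun is not None:
            spare.boot_lun, failed.boot_lun = failed.boot_lun, None
        return spare

pool = BladePool([Blade("slot-1", Lun("lun-7", "linux-oracle")),
                  Blade("slot-2")])
pool.blades[0].healthy = False
spare = pool.fail_over(pool.blades[0])
if spare and spare.boot_lun:
    print(f"{spare.slot} now boots {spare.boot_lun.os_image} "
          f"from {spare.boot_lun.lun_id}")
```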

Q: Where does Sandial fit in?

Strechay: We see Sandial as providing the plumbing for grid networking by providing a backbone for storage clustering or server clustering. You have to have a highly reliable system between your compute and your storage resources as you start to deploy them.

Q: What will interconnect them?

Strechay: Fibre Channel, Ethernet, InfiniBand. It comes down to being able to account for everything that is going over the network -- we need to be able to tag resources by element: processors, memory, bandwidth, and storage. For example, you might have an extremely large batch-oriented process staged in memory somewhere, with a processor going against it, reading it, and returning a result, which could be very large. It is about being able to understand how much of those four different resources you are actually using to complete a job. It is kind of like what ASPs were to applications; this is like ASPs for servers. IBM is doing this in an ASP-type format. We'll see HP do this, too, and Gateway is already doing this.
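A simple sketch of tagging a job's usage by those four resource elements follows. The metering-style rates and all names here are assumptions made for illustration of the ASP-style accounting Strechay describes, not a real billing model.

```python
# Hypothetical sketch: meter a job across the four resource elements
# (processors, memory, bandwidth, storage) and sum the charges.
from dataclasses import dataclass

@dataclass
class ResourceUsage:
    cpu_hours: float          # processor time consumed
    memory_gb_hours: float    # memory held, GB x hours
    bandwidth_gb: float       # data moved across the network
    storage_gb_hours: float   # disk held, GB x hours

# Illustrative unit rates, in the spirit of an ASP-style metered model.
RATES = {
    "cpu_hours": 0.50,
    "memory_gb_hours": 0.05,
    "bandwidth_gb": 0.10,
    "storage_gb_hours": 0.02,
}

def job_cost(usage: ResourceUsage) -> float:
    """Sum metered charges across all four resource dimensions."""
    return sum(getattr(usage, name) * rate for name, rate in RATES.items())

batch = ResourceUsage(cpu_hours=120, memory_gb_hours=480,
                      bandwidth_gb=250, storage_gb_hours=1000)
print(f"metered cost: ${job_cost(batch):.2f}")
```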

Q: What are the software needs?

Strechay: You're going to need software that understands the four different pieces and what is being used. There are DRMs (distributed resource managers), which are a combination of software and hardware that understands what is being used and lets you go in and deal with the security, usability, and accessibility aspects of the different resources, like a broker.
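A minimal broker sketch in the spirit of that description might check a requester's entitlement first (security), then match the request against what is free (accessibility). ResourceBroker and its logic are assumptions for illustration, not an actual DRM product's interface.

```python
# Hypothetical broker sketch: enforce an ACL, then allocate from inventory.
class ResourceBroker:
    def __init__(self, inventory: dict[str, int], acl: dict[str, set[str]]):
        self.inventory = inventory   # e.g. {"cpu": 64, "memory_gb": 256}
        self.acl = acl               # user -> resource types they may use

    def request(self, user: str, resource: str, amount: int) -> bool:
        # Security: is this user entitled to this resource class at all?
        if resource not in self.acl.get(user, set()):
            return False
        # Accessibility: is enough of the resource currently free?
        if self.inventory.get(resource, 0) < amount:
            return False
        self.inventory[resource] -= amount
        return True

broker = ResourceBroker({"cpu": 64, "memory_gb": 256},
                        {"alice": {"cpu", "memory_gb"}})
print(broker.request("alice", "cpu", 16))   # True: entitled and available
print(broker.request("bob", "cpu", 4))      # False: no entitlement
```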

Q: Is this software mature?

Strechay: Well, the batch-processing stuff is very mature. That question really gets into how soon we will see these widely deployed. Right now grids are extremely efficient for batch processing, but you are not seeing any interactive applications. You are not going to see SAP deployed on a grid anytime soon. Oracle is pushing down this route, but until those apps become grid-compliant, you're basically still talking about large clusters.

Q: Where does clustering fit in when you talk about grids?

Strechay: When you talk about grids, it's like clusters on steroids. It is going to start out as a mass of servers with no disks in them that are time-shared. Then it is going to move into large frames with 18 boards of memory and 18 boards of processors, where you don't really care where or how the task gets done; you just know what the resources are. Then it communicates back to the disk sitting in a large array, or a bunch of large or small arrays.

Q: What needs to be done to the current storage architectures to make storage part of the grid?

Strechay: Look at the T11.5 stuff, with some of the virtualization that is going on there. That is going to become extremely important. As opposed to moving data between servers over Ethernet -- which takes the data off the disk, sends it to a server, then to another server, and back to another array -- it would be a lot easier if the SAN could provision itself. For instance, I create a zone for a specific grid application for a specific time period. I can account for that, have access to that LUN on that disk array, and pull the data across my network at wire speed. That is what becomes very attractive for interactive applications. It will take visibility and control being in the fabric of the network.
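The time-bounded, self-provisioned zone Strechay describes might be modeled as below. This mimics fabric-zoning logic in plain Python purely for illustration; ZoneManager and its methods are assumptions, not a switch vendor's API.

```python
# Hypothetical sketch: a zone grants an application access to a LUN for a
# fixed window, after which the grant silently expires.
from datetime import datetime, timedelta

class ZoneManager:
    def __init__(self):
        # app name -> (LUN it may reach, expiry time of the grant)
        self.zones: dict[str, tuple[str, datetime]] = {}

    def create_zone(self, app: str, lun: str, duration: timedelta) -> None:
        # Grant the application access to the LUN until the window closes.
        self.zones[app] = (lun, datetime.now() + duration)

    def can_access(self, app: str, lun: str) -> bool:
        entry = self.zones.get(app)
        return (entry is not None
                and entry[0] == lun
                and datetime.now() < entry[1])

zones = ZoneManager()
zones.create_zone("risk-model", lun="lun-12", duration=timedelta(hours=4))
print(zones.can_access("risk-model", "lun-12"))   # True within the window
print(zones.can_access("risk-model", "lun-99"))   # False: not zoned to it
```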

Q: What does this all mean for the storage array vendors?

Strechay: I think you are starting to see it now. EMC, Veritas, HDS, IBM, and others are looking to move their software out into the fabric and onto an appliance model. And you have the appliance vendors looking to build a standard platform for that software. I think the virtualization of storage is the underlying foundation that needs to happen. Smart array vendors are going to figure out how to move their value-add out of their boxes while keeping the hardware smart and more provisionable from the outside.
