Pimp your apps

Watch application performance hit the metal as traditional acceleration technologies merge and end-to-end optimization becomes a reality

Tools for speeding up sluggish applications traditionally fall into two categories: application-delivery controllers designed to ease the load on Web servers, and WAN-optimization devices aimed at mitigating network latency and bandwidth constraints. Some say it's time for these two to consolidate.

"I'd like to see convergence of traditional data-center load-balancers and general WAN-optimization devices. It has always confused me that a convergence of those boxes has not occurred," says Michael Morris, network architect at a US$3 billion high-tech company.

The two product categories tackle different performance-related problems. Companies deploy load-balancers and traffic-management devices in the data center primarily to improve the performance of Web applications that users access over the Internet. WAN devices, on the other hand, are deployed symmetrically (at both ends of WAN links) and generally use such techniques as caching, compression and protocol acceleration to improve the performance of business applications that internal users access over dedicated WAN links.
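To make the symmetric deployment concrete, here is a minimal sketch of the WAN-optimization idea: an appliance at each end of the link transparently compresses and decompresses traffic so the application sees the original bytes. Real appliances layer on caching, deduplication and protocol acceleration; the function names and the use of plain zlib compression here are illustrative assumptions, not any vendor's implementation.

```python
# Minimal sketch of symmetric WAN optimization: one appliance at each end
# of the WAN link. Illustration only; real products add caching/dedup and
# protocol acceleration on top of compression.
import zlib

def wan_egress(payload: bytes) -> bytes:
    """Branch-side appliance: compress before the data crosses the WAN."""
    return zlib.compress(payload, level=6)

def wan_ingress(wire_data: bytes) -> bytes:
    """Data-center-side appliance: decompress so servers see the original bytes."""
    return zlib.decompress(wire_data)

if __name__ == "__main__":
    original = b"SELECT * FROM orders WHERE region = 'APAC';" * 100
    on_the_wire = wan_egress(original)
    assert wan_ingress(on_the_wire) == original
    print(f"{len(original)} bytes reduced to {len(on_the_wire)} on the WAN link")
```

The point of the symmetry is that both boxes must agree on the transformation; a load-balancer in front of Web servers, by contrast, works alone because there is no peer device at the far end of an Internet connection.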

Over time, however, the lines have blurred, and users are accessing business-critical applications - Microsoft SharePoint and SAP software, for example - across public and private networks. In addition, data-center gear and WAN appliances have grown to include some common features, such as compression and SSL optimization.

So, should the two categories be merged into a single product? Or if not merged, should they at least be better integrated so IT staff could take advantage of their respective acceleration talents to optimize applications from the data center to the desktop?

Morris makes a case for merging them. "It makes perfect sense that the same device that is essentially handing out the connections from the servers holds the data and then does everything it can to optimize that traffic down to the clients, which are generally around the world," he says.
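A conceptual sketch of the converged device Morris describes might look like the following: one box hands out connections from the server pool (load balancing), holds the data (caching), and then does what it can to optimize the response toward the remote client, with compression standing in for the broader toolkit. The server names, cache structure and round-robin choice are assumptions for illustration; no actual product is implied.

```python
# Conceptual sketch of a converged delivery/optimization device:
# pick a back-end server, hold the response, optimize it toward the client.
import itertools
import zlib

SERVERS = ["app-01", "app-02", "app-03"]   # hypothetical server pool
_rr = itertools.cycle(SERVERS)             # simple round-robin selection
_cache = {}                                # responses the device "holds" (url -> bytes)

def fetch_from_server(server: str, url: str) -> bytes:
    """Stand-in for the real back-end request."""
    return f"response for {url} from {server}".encode()

def handle_request(url: str) -> bytes:
    if url not in _cache:                  # load-balance only on a cache miss
        _cache[url] = fetch_from_server(next(_rr), url)
    return zlib.compress(_cache[url])      # optimize traffic toward the client

if __name__ == "__main__":
    wire = handle_request("/catalog/prices")
    print(zlib.decompress(wire).decode())
```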

At a minimum, if the devices remain separate edge and data-center boxes, Morris would like to see them share information about application and network conditions. "They could at least have some sort of communication going on, saying 'this is what I'm seeing, this is what you're seeing,' and optimize traffic that way," he says.
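What that exchange might look like is sketched below: each device reports what it is seeing, and the optimization plan is derived from the combined view. The message fields, thresholds and feature names are illustrative assumptions, not a real protocol between any shipping products.

```python
# Hypothetical sketch of the "this is what I'm seeing, this is what you're
# seeing" exchange: combine both devices' observations and decide which
# features each box should apply. Fields and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class ConditionReport:
    device: str      # "data-center-adc" or "branch-wan-optimizer"
    rtt_ms: float    # round-trip time this device measures
    loss_pct: float  # packet loss this device observes
    link: str        # "private-wan" or "internet"

def negotiate_optimizations(dc: ConditionReport, edge: ConditionReport) -> dict:
    """Combine both views and decide where each feature should run."""
    high_latency = max(dc.rtt_ms, edge.rtt_ms) > 100
    lossy = max(dc.loss_pct, edge.loss_pct) > 1.0
    return {
        "protocol_acceleration": high_latency,        # e.g. TCP tuning on the WAN pair
        "forward_error_correction": lossy,
        "compression": edge.link == "private-wan",    # symmetric peer exists
        "ssl_offload_at_adc": edge.link == "internet", # no peer at the far end
    }

if __name__ == "__main__":
    plan = negotiate_optimizations(
        ConditionReport("data-center-adc", rtt_ms=180, loss_pct=0.2, link="private-wan"),
        ConditionReport("branch-wan-optimizer", rtt_ms=185, loss_pct=0.4, link="private-wan"),
    )
    print(plan)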

Choose your platform

At a high level, setting application-delivery policies that span data-center and network devices has merit, as does taking into account where a request is coming from, says Rob Whiteley, principal analyst and research director at Forrester Research.

"It makes sense to be able to control a policy that says, 'OK, do as much as you can in the load-balancer, especially if the endpoint I'm serving this to is across an extranet or across some kind of public link where I don't own the endpoint. And if it's going out across my private network, then turn off whatever feature I would use on the load-balancer and turn on a more robust version at the data-center perimeter in the WAN-optimization box,'" Whiteley says.
