Testing crucial to avoid performance hit: consultant

Performance testing of IT applications can be disastrously neglected in an effort to satisfy "all the other -ilities", says a consultant.

In the quest for desirable qualities like flexibility, reliability and scalability, developers often underestimate the adverse impact these can have on the speed with which the system processes transactions and the response times it provides.

Richard Leeke, of Equinox, addressing a meeting of the New Zealand chapter of the Worldwide Institute for Software Architects last week on "architecting for performance", says a quest for flexibility, for example, can mean constructing a highly table-driven system. This not only counts against performance but can handicap testability. He cited one Java-based package whose user interface used dynamically generated HTML. The HTML field names were generated at runtime based on the order in which system functions executed, so the field identified as FieldID54 on one run could become FieldID99 on the next, making testing under a simulated load extremely difficult.
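To make the problem concrete, here is a minimal Python sketch of the workaround such a load-test script needs. The HTML fragments, labels and FieldID values below are hypothetical, but they illustrate why a script that hard-codes FieldID54 breaks on the next run, while one that resolves field names at runtime from something stable, such as the visible label, does not.

```python
import re

# Hypothetical HTML fragments from two runs of the same screen: the
# field *labels* are stable, but the generated FieldID numbers are not.
RUN_1 = '<label>Surname</label><input name="FieldID54">'
RUN_2 = '<label>Surname</label><input name="FieldID99">'

def field_name_for_label(html: str, label: str) -> str:
    """Scrape the input name that follows a given label, rather than
    hard-coding a FieldID into the test script."""
    pattern = re.compile(
        rf'<label>{re.escape(label)}</label>\s*<input name="(FieldID\d+)"'
    )
    match = pattern.search(html)
    if match is None:
        raise ValueError(f"label {label!r} not found")
    return match.group(1)

# A script that posts to "FieldID54" works on the first run only;
# resolving the name on each run keeps the simulated load valid.
print(field_name_for_label(RUN_1, "Surname"))  # FieldID54
print(field_name_for_label(RUN_2, "Surname"))  # FieldID99
```

Every simulated user must pay this scraping cost on every request, which is part of why Leeke describes load testing such a system as extremely difficult.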

Scalability is sometimes overestimated, with developers building in the ability to expand to workloads that will never be reached in practice. This, again, can hurt the performance of the current system. Leeke cited another "war story" of a system with complex data structures and a rich user interface, where the architect decided on a three-tier architecture with data access in a middle tier built on Microsoft Transaction Server (MTS).

All test runs showed excessive response times and typically ended with MTS or SQL Server collapsing under the load. While drawing database connections from an MTS pool offered a scalability advantage, it came at the cost of the repeated object creation and destruction each call required.

The system was restructured as two tiers, with the data access layer on the client and a persistent connection between client and database. Response times dropped from a maximum of 35.8 seconds to a comfortable sub-second range.
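The trade-off is easy to caricature. The following Python sketch is not the original system (which would have been COM objects under MTS) but a hypothetical timing comparison that mimics its shape: creating and destroying a middle-tier object on every call, versus holding one persistent connection as the restructured two-tier design did.

```python
import time

class PooledConnection:
    """Stand-in for a database connection handed out by an MTS-style pool."""
    def query(self) -> None:
        time.sleep(0.0001)  # pretend database round trip

class MiddleTierObject:
    """Stand-in for the per-call middle-tier object created and destroyed
    on every request."""
    def __init__(self, pool):
        time.sleep(0.001)   # pretend object creation + pool acquisition cost
        self.conn = pool()
    def fetch(self) -> None:
        self.conn.query()

def per_call_objects(n: int) -> float:
    """Three-tier shape: build and tear down an object for every call."""
    start = time.perf_counter()
    for _ in range(n):
        MiddleTierObject(PooledConnection).fetch()
    return time.perf_counter() - start

def persistent_connection(n: int) -> float:
    """Two-tier shape: open one connection and keep reusing it."""
    conn = PooledConnection()
    start = time.perf_counter()
    for _ in range(n):
        conn.query()
    return time.perf_counter() - start

print(f"per-call objects:      {per_call_objects(1000):.2f}s")
print(f"persistent connection: {persistent_connection(1000):.2f}s")
```

The simulated costs here are arbitrary, but the shape of the result, with per-call construction dwarfing the actual query time, matches the behaviour Leeke described.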

Leeke suspects that the choice of MTS was partly "CV-ware" — a desire by the developers to build their experience in marketable skills.

The fault is, of course, not always with the architecture; in another case the performance bottleneck proved to be an incorrect Ethernet switch setting. Leeke tested this by "pulling out [the client's] huge Cisco switch and putting in an $80 Dick Smith hub", just for the duration of the test, he emphasised.

But whatever the reason for inefficiency, performance testing should be done as early as possible, preferably at the end of the first construction iteration, to give architects and developers time to change the system if necessary. Otherwise, the cost of fixing performance problems mounts as the project proceeds.

"People don't build systems to be tested," he says. But if the design makes it too hard to test performance, developers will pay a time and cost penalty.

Keep the big picture in view, he counsels: make sure database specialists do not get lost in their own corner of the system and forget how it interfaces with the territory of the network specialists.
