The Need For Performance Testing
Performance is a "must have" feature. No matter how functionally rich your product is, if it fails to meet your customer's performance expectations the product will be branded a failure.
Application architecture decisions may be greatly influenced by the importance your customer places on one or more specific requirements. Incorrect design decisions, made at the outset of a project on the basis of invalid assumptions, may be impossible to remedy downstream. (Remember: what we 'ASSUME' can often make an '***' out of 'YOU' and 'ME'.)
Set Performance Testing Objectives
It is useful to begin performance testing by setting clear objectives. More often than not, your performance tests will seek to achieve one or more of the following:
Identify system bottlenecks.
Verify current system capacity.
Verify scalability of the system.
Determine optimal hardware/software configuration for your product.
Identifying system bottlenecks is a good place to start. The scalability and capacity of your system will often be directly constrained by a bottleneck, although identifying and removing one bottleneck often reveals another, so be prepared for the long haul.
Determine Customer Requirements Early
It is extremely important that you fully understand, as early as possible, your customer's intentions and requirements regarding software performance, i.e. the operating environment (both hardware and software) in which your product will be deployed and the manner in which it will be used.
To begin to identify your customer's requirements you must determine:
Transaction mix.
Usage patterns.
Data volumes.
Maximum allowable response times.
Minimum transaction throughput rates.
Note: Determining the above information is particularly important when a customer is seeking your advice on suitable hardware and software specifications to purchase in order to best deploy your product. As there will undoubtedly be lead times for purchasing and configuring such items, your customer is likely to want this information at the earliest opportunity.
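For illustration only, the information gathered above might be recorded in a simple structured form along the following lines. This is a sketch; the field names and figures are assumptions, not taken from any real project.

    # A minimal sketch of a performance requirements record.
    # All field names and figures below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class PerformanceRequirements:
        transaction_mix: dict         # transaction name -> share of total load (0.0 - 1.0)
        peak_concurrent_users: int    # expected maximum number of concurrent users
        annual_data_growth_rows: int  # forecast records created per year
        max_response_time_s: float    # maximum allowable response time (seconds)
        min_throughput_tps: float     # minimum transaction throughput (transactions/second)

    requirements = PerformanceRequirements(
        transaction_mix={"search": 0.6, "place_order": 0.3, "reporting": 0.1},
        peak_concurrent_users=500,
        annual_data_growth_rows=2_000_000,
        max_response_time_s=2.0,
        min_throughput_tps=50.0,
    )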
Determine The Transaction Mix
The "transaction mix" that your product must cope with is determined by the number of functions your product implements and the way in which those functions are executed by users as part of the activities they each perform in relation to their individual roles. Try to identify key user groups and list the activities associated with each user role.
In addition to the activities performed by users in different roles, the transaction mix is also dependent upon the number of concurrent users and the frequency of their activities.
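As a rough sketch of how these factors combine (the role names, user counts and activity rates below are assumptions for the example), the transaction mix can be estimated by multiplying the number of users in each role by the frequency of their activities:

    # Illustrative estimate of a transaction mix from user roles.
    # Role names, user counts and activity rates are assumptions for the example.
    roles = {
        # role           (users, transactions per user per hour, by activity)
        "call_centre":  (200, {"search": 30, "update_record": 10}),
        "back_office":  (50,  {"reporting": 4, "update_record": 20}),
    }

    mix = {}
    for role, (users, activities) in roles.items():
        for activity, per_user_per_hour in activities.items():
            mix[activity] = mix.get(activity, 0) + users * per_user_per_hour

    total = sum(mix.values())
    for activity, per_hour in sorted(mix.items()):
        print(f"{activity}: {per_hour} tx/hour ({per_hour / total:.0%} of the mix)")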
Determine Usage Patterns
To accurately simulate system usage it is important to understand the intended usage patterns for your product. By studying user roles and quantifying the frequency and concurrency of user activities, it becomes possible to predict user behaviour and usage patterns that can be simulated in the test environment. Your test cases must simulate real usage patterns to be meaningful.
Try to determine the different levels of usage occurring over time, such as normal usage and peak usage. From this information you can estimate double-peak usage levels and design appropriate stress tests to push your product to limits it might not normally encounter.
Tip The number of virtual user licenses required by your automated testing tools need not match the size of the intended user base. In most cases a scripted virtual user can drive transactions far faster than an individual real user, and user activities need not all run concurrently, so a small number of virtual users can simulate the load of a much larger number of 'real' users. A little effort with your calculator can help save on the cost of expensive virtual user licenses.
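As a rough illustration of that calculation (all figures below are assumptions): if a real user submits a transaction once every 60 seconds including think time, but a scripted virtual user completes one every 5 seconds, each virtual user generates roughly the load of twelve real users.

    # Rough estimate of the virtual user licenses needed (all figures are assumptions).
    real_users = 600              # size of the intended user base
    real_user_interval_s = 60     # seconds between transactions for a real user, incl. think time
    virtual_user_interval_s = 5   # seconds a scripted virtual user needs per transaction

    target_tps = real_users / real_user_interval_s                 # 10 transactions per second
    virtual_users_needed = target_tps * virtual_user_interval_s    # 50 virtual users
    print(f"Target load: {target_tps:.1f} tx/s -> roughly {virtual_users_needed:.0f} virtual users")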
Data Volumes
If your application creates data then you need to consider the impact on application performance of data volumes that grow over time. You will need to forecast data volume sizes based on usage patterns and then create test data volumes to simulate future scenarios.
Creating or obtaining large data volumes can be a problem: large volumes of test data can take considerable time to create and may introduce unexpected hardware requirements during development.
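A small script can often generate representative volumes far faster than manual data entry. The sketch below (the table name, columns and row count are assumptions) bulk-loads synthetic rows into a SQLite database purely for illustration:

    # Illustrative bulk generation of synthetic test data (schema is an assumption).
    import random
    import sqlite3
    import string

    conn = sqlite3.connect("perf_test_data.db")
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(id INTEGER PRIMARY KEY, customer TEXT, amount REAL)")

    def random_customer():
        return "".join(random.choices(string.ascii_uppercase, k=8))

    rows = ((i, random_customer(), round(random.uniform(5, 500), 2))
            for i in range(1, 1_000_001))
    conn.executemany("INSERT INTO orders (id, customer, amount) VALUES (?, ?, ?)", rows)
    conn.commit()
    conn.close()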
Expect to have to repeat your test cycles many times. Automated testing tools are therefore essential along with a fast means of backing up and restoring your test data and environments.
Design Tests Carefully
To design credible tests requires an intimate understanding of the system transaction mix, i.e. user activities and behaviours. Even armed with this knowledge, time and cost constraints mean it will not be possible to test every conceivable scenario, so tests need to be chosen carefully.
All tests must simulate real user activities under a variety of circumstances and provide sufficient data to allow meaningful graphs to be plotted. Unfortunately this is easier said than done, so expect to spend a reasonable amount of time designing tests and their associated pass/fail criteria.
Tip Beware of false (variable) results caused by a hot data cache, or one-off costs caused by JIT (just-in-time) compilation. To overcome such issues you may wish to avoid cyclic use of primary key values or allow an operational warm-up period before you start gathering metrics.
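One way to apply the warm-up advice is sketched below (the perform_transaction callable and the timings are assumptions): drive the workload for a warm-up period, discard those measurements, and only record response times once the period has elapsed.

    # Illustrative warm-up period before gathering metrics
    # (the perform_transaction callable and the timings are assumptions).
    import time

    WARM_UP_SECONDS = 60
    MEASUREMENT_SECONDS = 300

    def run_test(perform_transaction):
        response_times = []
        start = time.monotonic()
        while time.monotonic() - start < WARM_UP_SECONDS + MEASUREMENT_SECONDS:
            began = time.monotonic()
            perform_transaction()
            elapsed = time.monotonic() - began
            # Discard results gathered while caches warm up and JIT compilation settles.
            if began - start >= WARM_UP_SECONDS:
                response_times.append(elapsed)
        return response_times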
Keep Tight Control Of Your Test Environment
For performance tests to be meaningful, your test environment must be kept under strict control, with no unauthorised changes (such as installing service packs or tweaking configuration settings) that might skew test results and lead to incorrect conclusions.
Take steps to isolate your test environment from sources of variance such as network traffic or scheduled tasks so that test results are repeatable. Only variations you choose to make as part of your test strategy should be allowed. (Beware memory leaks, which can prevent repeatable test results!)
Your test environment (hardware and software) should mimic the intended deployment environment of your customer as closely as possible. The manner in which you install and configure test software builds of your product should also mimic the way your customer intends to install and configure your product.
Collect Metrics As You Go
During performance testing, be sure to take and record precise measurements in a controlled environment. Treat each test as a controlled experiment. Don't just measure response times, the number of users or transaction throughput rates; take note of system performance counters such as processor usage, memory usage, network traffic and disk input/output, as these can provide developers with valuable clues about the cause of a bottleneck or failure.
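As a sketch of how such counters might be captured alongside a test run, the example below assumes the third-party psutil package is available; the sampling interval and file name are arbitrary choices.

    # Illustrative sampling of system performance counters during a test run
    # (assumes the third-party 'psutil' package is installed).
    import csv
    import time
    import psutil

    def record_counters(duration_s=300, interval_s=5, path="counters.csv"):
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "cpu_percent", "memory_percent",
                             "disk_read_bytes", "disk_write_bytes"])
            end = time.time() + duration_s
            while time.time() < end:
                disk = psutil.disk_io_counters()
                writer.writerow([time.time(),
                                 psutil.cpu_percent(interval=None),
                                 psutil.virtual_memory().percent,
                                 disk.read_bytes,
                                 disk.write_bytes])
                time.sleep(interval_s)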
Performance rarely degrades gracefully. More often than not performance degrades drastically when circumstances suddenly change. Under such circumstances, the more operational data you have at your fingertips, the sooner you are likely to diagnose problems which cause sudden performance degradation.
Performance counter information can also be used to derive the possible impact of vertical scaling on performance i.e. how upgrades to an individual computer such as adding additional memory, processors or faster disks might improve performance.
Conclusion
Performance testing requires a different mindset and skill set from functional testing, and is best started as early as possible in the development life cycle.
Understanding customer requirements and expectations, as well as user activities and behaviours, is key to designing suitable tests.
Ensure tests represent realistic usage of the application.
Test environments must be carefully controlled to prevent unauthorised modifications which might falsify test results.
Automated test tools coupled with fast backup and restore mechanisms are essential due to the need to repeat tests many times.
System bottlenecks can rapidly become very technical in nature and take considerable resources and effort to diagnose. Resolution may require significant re-work, or even re-design, of your product.
Even after performing a significant number of tests and gathering a considerable amount of data and test results there is still a possibility that the wrong conclusions may be drawn by developers inexperienced in performance testing.
Seeking advice from specialist consultants can offer a cost effective means of designing tests, diagnosing bottlenecks, interpreting test results and resolving performance problems quickly.