Lightweight Performance Testing Rikard Edgren
If performance is crucial for product success, you probably need fairly advanced tools to measure various aspects of your product, to find all bottlenecks and time thieves. For all other software, performance is just very important, and you might get by with lightweight test methods. You may or may not have quantified performance requirements, but you should test performance to some degree anyway; for the whole, but also for each detail (when appropriate.)
In TheTestEye’s classification, performance consists of:
Performance: Is the product fast enough?
- Capacity: the many limits of the product, for different circumstances (e.g. slow network.)
- Resource Utilization: appropriate usage of memory, storage and other resources.
- Responsiveness: the speed with which an action is (perceived as) performed.
- Availability: the system is available for use when it should be.
- Throughput: the product's ability to process many, many things.
- Endurance: can the product handle load for a long time?
- Feedback: is the feedback from the system on user actions appropriate?
- Scalability: how well does the product scale up, out or down?
Be aware of different definitions of performance testing: some include reliability, stress handling, and robustness, and what stakeholders believe is most important might differ (even when they use the same words…)
Ongoing Violation Awareness
The number one lightweight method starts by finding out which of these characteristics are relevant for your product. Then keep them in the back of your head, and whenever you see something fishy, investigate further and communicate. Often the OK zone is easy to reach, but testers should notice when violations occur. When appropriate, apply the destructive principle: increase the amount of everything that can be increased.
Perceived performance is what matters for end users (but maybe not for a product comparison checklist), so think about how it feels, and try using a stopwatch. You might get pretty far by load testing with colleagues, with several instances each.
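The stopwatch idea can be as simple as timing a repeated user action in a script. A minimal sketch in Python, where the callable standing in for the user action is a hypothetical placeholder:

```python
import time

def measure(action, repeats=5):
    """Time an action several times; return all samples in seconds.

    `action` is any callable standing in for the user-visible
    operation you want to clock (a placeholder assumption here).
    Individual samples matter: one slow outlier is exactly the
    kind of violation worth investigating.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append(time.perf_counter() - start)
    return samples

# Example: a short sleep stands in for a real user action.
samples = measure(lambda: time.sleep(0.01), repeats=3)
print(f"best: {min(samples):.3f}s, worst: {max(samples):.3f}s")
```

Keeping the raw samples, rather than only an average, lets you spot the occasional slow response that users actually perceive.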
There exist limiters for CPU, RAM, bandwidth etc., and many of them are free (and some of them become obsolete.) A task manager/resource utilization tool can give you hints on memory, CPU, disk, network and more. Scripting your product to run over the weekend is good for endurance and stability testing. JMeter is free and often quick to get running.
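An endurance script can combine repetition with a simple resource check. A sketch, assuming Python and using the standard library's tracemalloc to watch heap growth (the action and iteration counts are illustrative, not prescribed by the article):

```python
import tracemalloc

def endurance_run(action, iterations=1000, report_every=250):
    """Repeat an action many times, recording peak memory at intervals.

    A peak that keeps climbing across reports hints at a leak,
    which is the kind of thing a long weekend run should surface.
    """
    tracemalloc.start()
    peaks = []
    for i in range(1, iterations + 1):
        action()
        if i % report_every == 0:
            current, peak = tracemalloc.get_traced_memory()
            peaks.append(peak)
            print(f"iteration {i}: current={current}B peak={peak}B")
    tracemalloc.stop()
    return peaks

# Example: a throwaway allocation stands in for the real operation.
peaks = endurance_run(lambda: [0] * 100, iterations=1000)
```

For a real weekend run you would raise the iteration count (or loop on wall-clock time) and log to a file instead of printing.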
Summarizing performance test results is difficult. Aggregations of measurements don’t tell the full story, and the whole story takes a long time to tell. Communicate what is important, which is easier if you have asked stakeholders beforehand.
Warning: For some products, users aren’t as interested in Performance as the developers…