Inside the Capability Characteristic
Rikard Edgren
I think quality criteria/factors/attributes/characteristics are extremely powerful.
They help you think in different ways, and make it easier to get broader coverage of your test ideas.
See Software Quality Models and Philosophies for the McCall, Boehm, FURPS, Dromey, and ISO 9126 models, or CRUSSPIC STMPL for a version without focus on measurability.
The granularity of this (and any other) categorization can be debated, so here are suggested sub-categories for Capability, with some thoughts for inspiration:
Capability – The set of bigger and smaller things you can accomplish with the software. This is usually covered by requirements or similar, and with some help from developers pointing out small or hidden features, you can cover it with thorough, hard work. I suspect some test efforts stop here.
The most lightweight testing approach is to ignore this entirely: it has already been considered by others, and you will cover parts of it in more interesting ways when performing other testing.
Completeness – Is the functionality really enough? Or are there small things in between, and on the edges, that are needed to create a killer app? As a tester you won’t decide the scope for a release, but you can identify small things that can be implemented almost for free, and since you have knowledge of the system you can identify bigger things that someone else can put up on the list.
Lightweight method: think about what is missing while performing system testing.
Correctness – This could be seen as an obvious part of capability, but if you think about precision for real/double, or other corner cases, there is a lot to investigate, and probably a lot to ignore as well…
I don’t know of any lightweight technique for this, except asking detailed questions about what’s important.
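To make the real/double corner cases concrete, here is a minimal Python sketch (the values are my own illustrations, not from the text) of the kind of precision behavior that rewards investigation:

```python
# 0.1 has no exact binary representation, so repeated addition drifts.
total = sum(0.1 for _ in range(10))
print(total == 1.0)              # False: accumulated rounding error
print(abs(total - 1.0) < 1e-9)   # True: compare with a tolerance instead

# Mixing very large and very small magnitudes: the small value is lost,
# because doubles carry only about 15-16 significant decimal digits.
print(1e16 + 1.0 == 1e16)        # True: adding 1.0 changes nothing
```

Which of these matter depends entirely on the product; asking those detailed questions about what’s important tells you where a tolerance is acceptable and where exact results are required.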
Efficiency – Does the product do what it is supposed to do in an effective manner, without doing what it isn’t supposed to do?
Lightweight: keep your eyes open and look at more places than the apparent ones.
Interoperability – In a requirements document you will find a lot of things the software should be able to do, but you will not get a list of all the important combinations and interactions that surely exist. Pairwise testing is a theoretical solution to this dilemma (and I guess it is effective in some situations), but with knowledge of the product you will see that some combinations are more error-prone than others.
A lightweight testing solution consists of two test ideas: 1) turn on everything 2) turn off everything
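The pairwise idea can be sketched in a few lines of Python. For three hypothetical on/off settings, exhaustive testing needs 2**3 = 8 combinations, but a well-chosen 4 rows already exercise every pair of values (the rows below are a classic strength-2 covering array, not from the original text):

```python
from itertools import combinations, product

# Each row is one test: values for three hypothetical on/off settings.
rows = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def covers_all_pairs(rows, n_params=3, values=(0, 1)):
    # For every pair of parameter positions, every combination of values
    # must appear together in at least one row.
    for i, j in combinations(range(n_params), 2):
        seen = {(r[i], r[j]) for r in rows}
        if seen != set(product(values, repeat=2)):
            return False
    return True

print(covers_all_pairs(rows))       # True: 4 tests instead of 8
print(covers_all_pairs(rows[:3]))   # False: drop a row and a pair goes missing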
Concurrency – Can the software or functions therein operate simultaneously? How many concurrent actions? What if they are dependent on each other?
Lightweight testing: start more operations now and then while system testing.
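For concurrency in code, the same "start more operations at once" idea can be sketched with threads; this is a minimal Python illustration (the shared counter is a hypothetical stand-in for any shared resource), showing the kind of protection that makes simultaneous operations safe:

```python
import threading

# Hypothetical shared resource touched by several concurrent "operations".
counter = 0
lock = threading.Lock()

def operation(n_increments):
    global counter
    for _ in range(n_increments):
        with lock:          # without the lock, concurrent updates can be lost
            counter += 1

# Start several operations simultaneously, then wait for all of them.
threads = [threading.Thread(target=operation, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 80000: deterministic only because of the lock
```

Removing the lock turns this into exactly the kind of test idea above: intermittently losing updates is the symptom you are fishing for when you start extra operations now and then.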
Extendability – All the features customers want can’t be implemented, so it can be nifty to have an API that allows extensions of various types.
Lightweight testing: get hold of an API implementation, use it and change it.
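As a concrete (entirely hypothetical) shape for such an API, here is a minimal plugin-style extension point in Python; "use it and change it" then means writing an extension against the registry and seeing how the product copes when the extension is replaced:

```python
# Hypothetical extension point: the product exposes a registry, and
# third parties add behavior without touching core code.
PLUGINS = {}

def register(name):
    def wrapper(func):
        PLUGINS[name] = func   # later registrations silently override earlier ones
        return func
    return wrapper

@register("shout")
def shout(text):
    return text.upper() + "!"

# An extension written against the API -- and then changed, as the
# lightweight testing idea suggests: does the product cope with overrides?
@register("shout")
def shout_twice(text):
    return (text.upper() + "! ") * 2

print(PLUGINS["shout"]("hello"))  # HELLO! HELLO!
```

Questions like whether overriding should be silent, an error, or a warning are exactly the kind of findings this lightweight approach turns up.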
What is wrong, what is missing?