Testing Clichés Part III: “We can’t test those requirements” Rikard Edgren

It is good to strive for better requirements by critical analysis (and looking for what’s missing), but there is a danger in complaining about untestable requirements.
If those vague requirements are changed (made too specific) or removed, the words in the requirements document have less meaning, and less chance of guiding towards great software.

And there is no such thing as an untestable requirement. There are requirements that can’t be verified, that can’t get a true/false stamp, but you can definitely test the software and look for things that don’t match the essence of the vagueness.

An example: “the feature should be easy to operate” is difficult to prove right or wrong, but very easy to evaluate subjectively after doing some manual testing.
If the requirement is changed to “minimum no. of mouse-clicks to perform common operations”, you might catch some issues, but other, more important things might be lost in translation.
And if the requirements are split into many, many smaller pieces, you might lose less information, but end up with an overly complex document that is very expensive to create and maintain.
It’s not a bad thing to be specific, but that’s not feasible for everything.
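
To make the trade-off concrete, here is a minimal sketch (my own illustration, not a real check; the count_clicks_for helper is hypothetical) of what the re-worded requirement could look like as an automated test: the click limit gets a clear pass/fail verdict, but the original “easy to operate” intent does not.

# Sketch only: once the requirement is re-worded as "at most N mouse-clicks
# for common operations", it can be checked mechanically.
# count_clicks_for() is a hypothetical helper that would drive the UI (or
# replay a recorded workflow) and return the number of clicks used.

COMMON_OPERATIONS = ["create document", "save document", "print document"]
MAX_CLICKS = 3  # the specific limit a re-written requirement might state

def count_clicks_for(operation: str) -> int:
    """Hypothetical helper: perform the operation via UI automation and count clicks."""
    raise NotImplementedError("replace with real UI automation")

def test_click_count_requirement():
    for operation in COMMON_OPERATIONS:
        clicks = count_clicks_for(operation)
        # This check gives a clear pass/fail verdict on the click limit...
        assert clicks <= MAX_CLICKS, f"{operation!r} needed {clicks} clicks (limit {MAX_CLICKS})"
    # ...but it says nothing about whether the feature is actually "easy to
    # operate"; that judgement still needs a human doing manual testing.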

There’s an underlying assumption I should tell you about:
I do not think requirements should be contractual; rather, they should be aiding – they should help the development team produce good software.
Since requirements can be neither complete nor perfect, we should rather take advantage of opportunities that arise, and create something that can solve problems. If the essence of the unspoken requirements is captured, it might not matter that a few specifics aren’t met.

Testers should keep in mind that there’s a greater whole we’re aiming for, and do our best with what we have, even if that means unverifiable requirements.

12 Comments
Henrik Emilsson April 7th, 2010

Great thoughts Rikard, and I agree with (most of) what you say.
One problem with this, though, is when the requirements in fact are contractual. Then it is a big risk to test according to the “essence of the vagueness”. Since the requirement is made by someone and is an interpretation of something, and the tester is making an interpretation of that interpretation, you could easily end up in a situation where testing is in fact focusing on the wrong things. And remember that requirements aren’t only written for software produced in-house; there are a number of situations where requirements are produced where you don’t have any (control of) new software to be produced, e.g. evaluating a third-party application, implementing an IT system, etc. In these situations you cannot say that they are “aiding”; they must be “correct”.
So, would you go on and test vague requirements for a multi-million dollar project that is based on a contractual agreement to meet the requirements? If the contract is lost (or you don’t meet the requirements), will you then blame the requirements for being too vague? How much are you willing to risk?
I am not trying to be the devil’s advocate here, but there are several situations where it is essential to criticize the requirements for not being testable.

James Bach April 8th, 2010

Hi Henrik,

If you think of the tester as an arbiter, who is settling things, then indeed it’s a problem to have contractual requirements that are vague in any way.

But testers don’t settle things. “Testability” does not have to mean “decidability” because testers are not umpires. Testers collect evidence and offer inferences. That may be enough.

Whether requirements are satisfied or not is a question for the people who run the project, or those who pay for it. The tester sells ammunition to the parties that may wish to have a pistol duel over that.

Besides, if the tester considers the requirements vague, that does not mean that the parties who signed the contract consider them vague. Or even if they do, it may actually be a strategy, on both sides, to allow the requirements to be vague. The problem with specific requirements is sometimes that you KNOW you aren’t going to get what you want. Sometimes it’s better not to argue about it at the time the requirements are written, but to postpone that debate until later, when one side may feel it will have the advantage in the contest.

Rikard Edgren April 8th, 2010

I will not take the easy way out and say that this only applies to in-house software production; there might be situations where you know everything in advance, so contractual requirements could work (e.g. choosing between existing systems for a very specific usage.)

In the “producing new software” situation, you should have enough trust between involved parties so you know you will sort things out along the way (maybe not so realistic…)
Would you really dare to rely on contractual requirements for a multi-million dollar project?
2010 Volume 1 of Professional Tester magazine (http://www.professionaltester.com/files/PT-issue1.pdf pages 13-14) tells the story of the British NHS patient records system, where nobody entered data into the system because it was too cumbersome to use.
I’m confident that the requirements were of the contractual type, and that they did a thorough (but incomplete) risk analysis as well.
I guess the most important thing is to know that just fulfilling the requirements doesn’t mean it will be a good, or great, system.

I also want to mention that there should be several testers, and also other people involved in giving input to test ideas (reducing the risk of focusing on the wrong things.)

And probably there is some merit in all clichés…

Henrik Emilsson April 8th, 2010

@James:
I think that a tester has the right to express her feelings to the project manager about issues that she finds when trying to do her job, namely to provide quality-related information about the product under test and “… collect evidence and offer inferences”. So, I didn’t mean that a tester should be obstinate and refuse to test something; I meant that a tester should flag it if she suspects that the information she could gather from testing the requirement might be misleading. (I interpreted Rikard as saying that a tester should go on and just test the requirement no matter what.)
This is in fact one type of ammunition that the tester could sell to the people running the project, even if their strategy is to have vague reqs.
I do agree that the term “testable” isn’t the correct word since it is never a boolean choice whether it is “testable” or not; it is always possible to test the requirement in some way.

Rikard Edgren April 8th, 2010

Henrik, I should clarify that I don’t think that testers should test the requirements no matter what.
I think testers should review requirements critically and creatively, but they should be aware that there is a danger in routinely saying that requirements must be possible to verify.
For some requirements, they should be made more specific, and for others they are better kept as is.
And for the testing part, the existing requirements should be used together with a lot of other important information.

I am against “all requirements must be possible to verify”.
I am for “all requirements should aid in the construction of the software”.

Henrik Emilsson April 8th, 2010

Rikard, I understand. Then I totally agree with your post! 🙂

Saam April 8th, 2010

Regarding “all requirements should aid in the construction of the software”.
Perhaps the word that summarizes this is “useful”? All requirements shall be useful.
Meaning that someone can and will use it to perform good work. In some cases we might find “contractual” or “verifiable” types of requirements to be most useful, and in other cases it might instead be the “essence of the vagueness” type of requirements, as you point out.
The important thing, however, which you highlight in your post, is that we must not just state on auto-pilot that “requirements shall be verifiable” (because we read it in some book in school) but actually consider what will give us the best end result.

Martin Jansson April 9th, 2010

Suppose the requirements cover the product at an extremely low level, while I am assumed to test at the highest system level. When I am asked to test the feature with reference to those really low-level requirements, I could say “I can’t test those requirements”. I would be able to test, but not those requirements. I could create my own requirements or create use cases, but it would not be those low-level requirements, which are valid for a low-level integration test but invalid for a high-level system test.

If I said “Yes, I will test those requirements”, I would be doing a lousy job by not speaking up about the “missing requirements” at higher levels. I would naturally always be able to test. But if I were required to say how I covered those requirements, I would say “no coverage that is valid”.

Rikard Edgren April 9th, 2010

Saam; Yes, auto-pilot can only be used under strong supervision.
And “useful” is a better word, and together with James’ point of different people having different opinions we get:
“All requirements should be useful, at least to some person”

Geir Gulbrandsen April 12th, 2010

“All requirements should be useful, at least to some person”

And as in the case of “Quality being value to some person”, I guess it would be preferable if this person was someone who matters?

Of course, the requirements being useful only to your competitors would not necessarily make them unverifiable.

Rikard Edgren April 13th, 2010

Martin, “non-matching” requirements due to testing level is a good observation.
I think it is a problem if the focus is on coverage of only the requirements, since the potential usage is more important. And there are many other sources to use as testing inspiration; like you said, it is always possible to test.

Geir, feel free to add “that matters” if it helps.
Personally, it only gives me follow-up questions I find distracting.
For instance, the competitors you mention can be considered to matter, but not in the intended direction.

Devon Smith April 22nd, 2010

In a perfect world, requirements would be perfect. I think we should work hard to get requirements up to a good standard to make both testing and development easier.

However, in the real world, testing needs to be done even in less than perfect situations. It is also important to learn to work with the requirements we have, get clarification where we need it and work hard to sign off even in the real world, where deadlines are missed, requirements are faulty, and software still needs to launch. Great post.