Persuading about Exploratory Testing: The provocative-analogy way
Henrik Emilsson

This is my reply to the thread “Persuading about Exploratory Methods” in software-testing@yahoogroups.com.

This starts out with the problem that it is sometimes hard to persuade a manager about Exploratory Testing, when all that matters to the manager is that tests ought to be documented (in order to know what should be tested, how many tests to produce and execute, etc.). Or at least, this is often their argument in such discussions.
And it is also often the case that a tester or test lead finds it difficult to explain why exploratory methods, and exploratory testing in particular, would be suitable. Partly because managers don’t know how to test efficiently; and partly because the tester/test lead finds it hard to speak at the same level as the manager.

So here is an example of how you can use the provocative-analogy way when discussing this with your manager.


You:
“Dear Boss, I am concerned about the meetings you attend and what you say at those meetings. I am especially concerned about the coverage of the important issues that should not only be brought up in these meetings, but also solved…
So, here is what I think you should do:
a) Document everything that you should say at the meeting in advance, by first identifying all possible issues that we have, and then writing down exactly what should be said, in a specific order.
b) Write down the expected answer or solution to each issue.
c) Remember to ignore anything else that is brought up at the meeting; the only things that matter are the issues that you have prepared in advance.
d) Take notes during the meeting (but only when your issues of interest are discussed).
e) Finally, write “OK” for each of your issues where the meeting came up with the solution/answer that you had foreseen, and “Not OK” for those issues where the solution/answer disagrees with yours.”

Boss:
“Are you kidding me!?! Do you have any idea how a meeting really works?
You see, what we come up with in meetings comes through conversation and interaction.
Yes, there are important things that we know of in advance and that need to be resolved; we put these on the meeting agenda. But there are also things that come up during the meeting that we could not have foreseen, things that emerge from the interaction between the attendees.
Also, the answers or solutions to the issues are sometimes very hard to predict.”

You:
“Oh, I see… That is something I can somewhat understand and relate to.
As I understand it, this somehow resembles how we do testing:
We have a conversation with the product and interact with it to get to know more and more. You see, what we come up with in testing comes through conversation and interaction. And yes, there are important things that we know of in advance and that need to be tested; we put these on the testing agenda.
But there are also things that come up during the testing that we could not have foreseen, things that emerge from the interaction with the product. Also, what the actual result should be is sometimes very hard to predict.”

Boss:
“Oh, I see… That is something I can somewhat understand and relate to. I have not thought of it in that way. I will never ever tell you to do scripted testing again.
Do the brain-stuff! I am your friend!”

Disclaimer: The last comment might not be said, but hopefully you get my point. 🙂

———-

Another, less provocative, way is to simply use the analogy between testing and meetings/discussions/dialogues in your efforts to explain why exploratory methods should be used and promoted.

Test Plan – an unnecessary artefact?
Henrik Emilsson

Well, it is always controversial to criticise the making of the Test Plan (http://en.wikipedia.org/wiki/Test_plan). But here is an attempt that will leave some open questions for further discussion.

In my experience, a test plan is a mandatory document that test managers and test leads often promote but seldom question.
Sometimes it is promoted and actually needed; sometimes it is promoted but not needed; sometimes it is not promoted and not needed but still produced anyway.
Why is this happening?
Is it because this is something we can do (by can I mean something we have done several times and therefore have developed a skill for)?

A test plan is usually produced at the beginning of the project and is an old remnant of the V-model and Waterfall methods, where all activities are specified and planned in advance (hence the word plan).
But if these project models are out of date, why are we still producing the test plan as if nothing has happened?
Some might say that this, nowadays, is a living document that should change. But isn’t it about time that we call this document something else, then? Or break it up into separate documents/artefacts/etc.?

One of the main reasons for having a test plan is the ability to communicate all the how’s, why’s, when’s, what’s, who’s, do’s, don’ts, etc., that concern the testing effort during a project. This is great, but since there are so many things to cover, it is easy to forget some important aspect or piece of information. Especially if this is something that should be produced in the initial project phase.
And also, what if these things change over time (ever been in such a situation?). Is it then fair to expect that the test plan is always up to date, at any given time?
Is it also fair to expect that all stakeholders keep themselves up to date by reading this document, say, once a week? (These documents tend to grow large after a while.)
And is it fair to say that different stakeholders have different interests in the testing effort? Might they have different quality criteria for the information?

I would like to propose that in many (most?) cases it is good enough to have a Test Strategy to deal with what the testing mission is about; a wiki for documenting the latest useful information; and the iteration plan, sprint backlog, or similar to keep track of current tasks and work. This way we can address the right stakeholders with information served in the way they expect it.
Is there something important that we would miss if we did it this way instead?

The irony of TMM
Henrik Emilsson

At one time, I worked as a tester at a company that claimed to be at TMM level 4 (perhaps 5).
Two things struck me:
Firstly, the quality of the actual testing performed at this company was not as good as one could expect from a company that has produced software for over 20 years.
Secondly, at the project post-mortem, the testers complained about the lack of respect from the rest of the organisation.

One of the issues with TMM is that it says nothing about the quality of the testing.
A maturity level in TMM is looked upon as “a degree of organisational test process quality”. But it does not actually say anything about the degree of test process quality; or, perhaps more importantly, about test quality.
I think this has to do with the fact that many who promote and implement TMM have little knowledge of actual testing. But if you are to improve the actual testing, or testers’ ability to adhere to different test processes, you need to have knowledge about both testing and people.
Or, put another way: it is easier to promote a single model than to improve testers’ skill.

Another issue with TMM is that when it is implemented, you might think that it provides some sort of assurance that proper work is constantly being done. This is not correct.
Since it says nothing about the quality of the actual work performed, your colleagues will not judge you on the basis of how high you have climbed the TMM ladder. Since “quality is value to someone”, you will be judged by your actual work – your performance. Therefore you cannot expect to be respected only because your company has reached TMM level 4.
That is, respect is something you have to earn.

——

The maturity levels, according to the TMMi Foundation, can be found at:
http://www.tmmifoundation.org/downloads/resources/TestMaturityModel.TMMi.pdf