ISTQB Certification is not a qualification Henrik Emilsson 28 Comments

Let me begin by saying that I have held these beliefs ever since I took the ISEB/ISTQB certification. But thinking about this recently, I felt I needed to make a statement and try to help all those who are rejected because they are not certified.

————————————–

In the search for a qualifying certification, many European companies have chosen to use the ISTQB certification, and specifically the Foundation Level, as a qualifier. But several voices have been raised against the validity of this certification; many prominent experts do not agree with the content of the syllabus. The problem gets worse because the more people get certified, the more this certification becomes “valid”.

The certification is based on an examination that you can take after reading the syllabus by yourself, or after paying to attend a two-day course. Almost anyone can do this! And it says nothing about your testing skills. So this certification should not (and cannot) be used as a qualifier of how good a person is as a tester.

So what it really means is that a person who holds this certification has managed to get at least 30 multiple-choice questions right out of 40.
Think about that before you reject job applicants because they aren’t certified (which seems to be the case too often in Europe).

Let us treat software testing as a serious profession and throw out this certification as soon as possible.

Not all testing is software testing Martin Jansson 4 Comments

In many discussions about testing methods, courses, techniques, approaches etc., it is usually software testing that is in focus. I cannot see why the limit is set at just software. For instance, the excellent course Rapid Software Testing suggests, by its name, that it is meant for personnel who perform testing of software. It could perhaps be called Rapid Testing, because the essence of the course is not limited to just software. Does the name limit what non-software testers and their managers think of this course?

I’ve seen similar limited thinking when someone was building production test equipment to be used on each unit created in the factory. For production testing there are some important factors for the tests performed, namely the speed of the tests and a high yield. The production test equipment should also be user friendly, because the person running the test should be able to do it correctly and quickly so that you get an even higher yield. You can use the concept of a smoke test, then borrow some ideas from how unit testing is done, but there are several other areas that can be considered when creating the tests in the test station. Naturally, you can use your entire toolbox of techniques when testing the test station/production test equipment itself.

When testing hardware you do many checks (as Michael Bolton calls them), measuring power, temperature and so on, but you can do these with an exploratory approach. There are usually several well-defined tests found in various standards, with test scripts on how to execute them. Usually it is the designer who does this task. Do they use any of the methods and techniques that are created and discussed in software testing? I think that would be very rare.

A colleague and I used an exploratory approach and pairwise testing when testing a radar used to trigger a blinking light. There is some software included in all this, but I would not say I was doing software testing; rather system testing, or just testing. I was not doing any verification or validation, since we had no specs and the creators had given us too little information on how the system was supposed to work. So we explored it and found bugs that did not fit our model of thinking. Some of the issues found were indeed bugs; others were not.

Do you say that you do software testing when testing a car? Twenty years ago software was hardly used in cars at all, but the test ideas and test techniques from that time might apply today to the cars that do have software controlling all major systems.

Before you are too quick to say that your approach, technique, course or thingie focuses on software testing, do think again.

Grounded Test Design Rikard Edgren 9 Comments

For quite some time I have felt that the classic test design techniques don’t measure up to the needs of software testing that tries to find most of the important information.
At EuroSTAR 2009 it dawned on me that it is time to describe the method that I, and many, many others, have been using for a long time.
By talking more about this, I hope we can spread the usage of this test design, improve the way we do it, and make it more understandable to other people, avoiding the comment: “oh, you’re error guessing.”
What I will describe could be called a method, activity or skill, but I like to think of it as a test design technique, one I see as a tester’s most powerful weapon, much better than equivalence partitioning, decision trees etc.

Grounded Test Design: learn a lot, and synthesize important test ideas.

To explain this further: software testers should follow the thinking of Grounded Theory from social science; we should use many sources of information regarding the subject (requirements, specifications, prototypes, code, bugs, error catalogs, support information, customer stories, user expectations, technology, tools, models, systems, quality objectives, quality attributes, testing techniques, test ideas, conversations with people, risks, possibilities); we should group things together, use our creativity, and create a theory consisting of a number of ideas capturing what is important to test.
Grounded Test Design is for those who want to test a lot more than the requirements, who understand the need to combine things of different types, to look at the whole and the details at the same time.
This might seem heavy, but people have phenomenal capacity.
It has a lot more to it than the closest technique, error guessing; for instance, it is not only about errors, it can also be about investigations, or questions regarding dubious behavior.
It isn’t guessing, it is a qualified synthesis of knowledge.
And also: it isn’t (and doesn’t sound like) something done lightly, something unstructured, or unprofessional.

Being a technique, it is something that can be applied in many different situations. The most informal Grounded Test Design might happen extremely fast in the expert tester’s head when thinking of a completely new idea while performing exploratory testing; or it could be the base for test scripts/cases/scenarios, that are carefully investigated and designed in advance. Good software testing uses many different methods.
A formal Grounded Test Design (to be defined, tried and evaluated…) might be best used by someone completely new to a product, or maybe most efficiently performed in diversified pairs.

Grounded Test Design has a loose scientific, theoretical base in Grounded Theory (Strauss/Corbin), used in social science as a qualitative, thorough analysis of a phenomenon.
The scientists examine the subject very carefully, gather a lot of material, document codes and concepts, combine them to categories, from which a theory is constructed.
There is no hypothesis to start with, as in traditional natural science, and that is quite similar to software testing; it would be dangerous if we thought we knew the important questions at once.
I think it is good to be slightly associated with this theory, especially since it emphasizes that the researcher must use her creativity.

On the other hand: do we really need this word, when we maybe already have enough words about how to do good software testing?
Or is it a good word to have in reserve when explaining to managers why testing is difficult, takes a lot of time, and must include manual work?
Or should we go further with this and try to develop a framework for performing structured analysis of everything that is important for testing?

Opinions are very welcome.

In search of the potato… Rikard Edgren 4 Comments

When preparing my EuroSTAR 2009 presentation I drew a picture to try to explain that you need to test a lot more than the requirements, but that we don’t have to (and can’t) test everything; the qualitative dilemma is to look for, and find, the important bugs in the product.
Per K. instantly commented that it looked like a potato, judge for yourself:

Software Potato


The square symbolizes the features and bugs you will find with test cases stemming from requirements (that can’t and shouldn’t be complete.)
The blue area is all bugs, including things that maybe no customers would consider annoying.
The brown area is all important bugs, those bugs you’d want to find and fix.

So how do you go from the requirements to “all important bugs”?
You won’t have time to create test scripts for “everything”.
So maybe you do exploratory testing (thin lines in many directions), and hope for the best.
Or maybe you test around one-liners (thicker horizontal lines), that are more distinct, that are reviewed, and have a better chance of finding what’s important.
With either option, some amount of luck and a large portion of hard work are needed.
But I think you have a much better chance if you are using one-liners, especially if it’s a larger project.

Later I realized that one-liners aren’t essential; this problem has been solved many times, at many places, with many different approaches.
What is common could be that testers learn a lot of things from many different sources, combine things, think critically and design tests (in advance or on-the-fly) that will cover the important areas.
Maybe we need a name for this method; it could be Grounded Test Design.

Notes from EuroSTAR 2009 Rikard Edgren 5 Comments

It was Stockholm again this year. Good not to have to travel far, but since you are travelling anyway I wouldn’t object to somewhere more exotic, and warmer. Next year it is Copenhagen, again.
I had a fully packed program with 4 days of tutorials, workshops, tracks, short talks, test-labbing and conversations, so in total it amounts to quite a number of ideas, since new things appear when you combine what you hear with your own reality and thoughts on testing. I am a bit exhausted after all this, which is a very good thing!

Monday offered the Exploratory Testing Masterclass with Michael Bolton. Even though I would have expected a slightly more advanced level, it was very good; here are some highlights:
it’s the scripted guys that sit and play with the computer; most of what we look for is implicit, it is tacit; develop a suspicious nature, and a wild imagination; checks are change detectors; some things are so important that you don’t have to write them down; codingqa.com episode 28 is important (I listened to it, but didn’t understand what was so important); managers fear that Exploratory Testing depends on skill, is unstructured, unmanageable, unaccountable; we need to build a “management case” (or should it be middle-management case?)
Michael showed an improved Boundary Value Analysis for a more complex example, where there are many boundaries; whatever focus for coverage you have, you will get other coverage for free; visualizing Test Coverage with sticky notes on a model is a good way of creating charters for Session-Based Test Management.
Go beyond use cases, create rich scenarios; emphasize doing, relax planning; Test Coverage Outline and Risk List to guide future sessions; don’t try to find bugs in the beginning, it takes time away from building a model; HICCUPPS + F (Familiar Problems); you learn best when you are in control of the learning process (and have fun); who said something valuable should be that easy?
Reports to make number people happy; SBTM debriefs are important for keeping quality of the testing and the report (and good for coaching and mentoring); the principal interrupter of testing work is bugs; Weinberg: “everything is information”; Dr. Phil: “How’s that working for you?”
He also had good exercises, and a nice movie, a Detective Story.
I haven’t been to Michael’s tutorials before, so it was about time.

Tuesday started with Tim Koomen’s tutorial “From Strategy to Techniques”; there’s a gap between the test strategy and the actual tests.
He is very knowledgeable, and walked through the basic testing techniques that every tester should have in his toolbox: Equivalence Partitioning, Boundary Value Analysis, Classification Tree Method, Pairwise, Path Coverage, Condition/Decision Coverage, Input/Output Validation, CRUD, Operational profiles, Load profiles, Right/Fault paths, Checklist.
The examples are focused on functionality, and the magazine discount example is shallow; it doesn’t consider whether the person is just about to turn 20 or 65 years old, or whether the age is unknown, or whether an incorrect age is later corrected. And then we haven’t even considered everything else that interacts with this small piece of functionality.
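To make the boundary discussion concrete, here is a minimal sketch in Python of boundary-value checks for a hypothetical version of the discount rule (the real rules of the example are not reproduced here; I’m assuming that subscribers younger than 20, or 65 and older, get a reduced price):

```python
# Hypothetical discount rule, for illustration only.
def discount(age):
    if age is None:
        raise ValueError("age unknown")   # what *should* happen here is itself a test question
    if age < 0:
        raise ValueError("invalid age")
    return 0.30 if age < 20 or age >= 65 else 0.0

# Classic boundary values around 20 and 65, plus a couple of extremes:
cases = [(19, 0.30), (20, 0.0),    # boundary at 20
         (64, 0.0),  (65, 0.30),   # boundary at 65
         (0, 0.30),  (120, 0.30)]  # extreme but "valid" ages
for age, expected in cases:
    assert discount(age) == expected, (age, expected)

# What the table alone doesn't raise: an unknown age, an age corrected later,
# a subscriber turning 20 or 65 mid-subscription, interactions with other rules.
```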
Every time I see this list, I think that these techniques don’t add up to the ones I actually use when I design tests.
So my highlight was this feeling combined with my shallow knowledge about Grounded Theory; maybe we could have a super-advanced error guessing test technique, that describes the really, really good test design that happens all over the world, where we are looking at a lot more things than the requirements (more to come on this…)
Tim showed the PICT tool (consider 1-wise!), and the audience mentioned that Mercedes-Benz also has a free tool (see pairwise.org for a long list of tools).
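For readers who haven’t tried pairwise testing, here is a minimal brute-force sketch of the idea: a greedy 2-wise generator for small models (real tools like PICT are far more capable); the parameter names and values below are made up.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy 2-wise (pairwise) test case generator for small models.
    parameters: dict mapping parameter name -> list of values."""
    names = list(parameters)
    # Every value pair across two different parameters must be covered.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    suite = []
    while uncovered:
        # Brute force: pick the full combination covering the most uncovered pairs.
        best, best_gain = None, -1
        for values in product(*(parameters[n] for n in names)):
            case = dict(zip(names, values))
            gain = sum(1 for (a, va), (b, vb) in uncovered
                       if case[a] == va and case[b] == vb)
            if gain > best_gain:
                best, best_gain = case, gain
        suite.append(best)
        uncovered = {((a, va), (b, vb)) for (a, va), (b, vb) in uncovered
                     if not (best[a] == va and best[b] == vb)}
    return suite

# Hypothetical model: 3 * 3 * 2 = 18 exhaustive combinations,
# but all pairs are covered by roughly 9-10 test cases.
params = {"browser": ["Firefox", "IE8", "Chrome"],
          "os": ["WinXP", "Win7", "Ubuntu"],
          "document": ["small", "huge"]}
for case in pairwise_suite(params):
    print(case)
```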
I learned a new thing: the modified conditional coverage, where you omit tests that aren’t likely to catch errors.
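This sounds like what is usually called modified condition/decision coverage (MC/DC); a minimal sketch under that assumption, for a hypothetical decision A and (B or C): exhaustive multiple-condition coverage would need 2^3 = 8 tests, while showing that each condition independently flips the outcome needs only four.

```python
# Hypothetical decision, for illustration only.
def decision(a, b, c):
    return a and (b or c)

# Four vectors instead of eight: each annotated pair differs in exactly one
# condition and flips the outcome, showing that the condition matters on its own.
mcdc_tests = [
    (True,  True,  False),   # baseline                     -> True
    (False, True,  False),   # only A changed vs. baseline  -> False (A matters)
    (True,  False, False),   # only B changed vs. baseline  -> False (B matters)
    (True,  False, True),    # only C changed vs. previous  -> True  (C matters)
]
for a, b, c in mcdc_tests:
    print(a, b, c, "->", decision(a, b, c))
```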
Sometimes I wonder how many of the tests from the classic test techniques would preferably be automated as unit tests.

The actual conference started with Lee Copeland talking about nine innovations you should know about: Context-Driven School (the search for best practices is a waste of time); Test-Driven Development (helps you write clean code); Really Good Books (too few testers read the books!); Open Source Tools; Testing Workshops (specialized focus, participatory style); Freedom of the Press (he is no fan of Twitter, but likes blogs); Virtualization (rapid setup, state capture, reduced cost); Testing in the Cloud (rent an awful lot of machines, very cheaply); Crowdsourced Testing (Lee did not mention the ethical payment dilemma).
“sincerity is the key – once you learn to fake it…”
Keys for future innovation: creative, talented, fearless, visionary, empowered, passionate, multiple disciplines. Do we have all of these???

Johan Jonasson explained Exploratory Testing and Session-Based Test Management, but since this was a short track, there wasn’t so much time left for the real juice. “ET has specific, trainable skills” (Bolton)
Julian Harty, Google (where the testers seem to have huge responsibility areas) explained the concept of Trinity Testing, 30-90 minutes walkthrough(s) by Developer, Tester, Domain Expert. Not radically new, but it felt very fresh and effective. Julian was the only one I saw that brought a hand-out, one paper with the essentials.
Geoff Thompson talked about reporting, that “it’s the job of the communicator to communicate.” 1/10 of men (1/50 for women) are color-blind, and maybe you want everyone to understand the report? (I saw two other presentations, where red-green was used to highlight important differences.) “Know your recipients, what information do they want?”, “honesty is always the best option.”
Michael Bolton had a short session on “Burning Issues of the Day” that is available here. Very funny, very thought-worthy, very good.
Jonathan Kohl talked about Agile having lost a lot of its original value; it has become re-branded old stuff and turned into business. Process focus can distract from skill development; the point is: focus your work on creating value.
I asked Jonathan afterwards about Session Tester (where not much has happened lately), and he said that the programmers are too busy, but that things will happen pretty soon.

Wednesday’s first keynote was Naomi Karten on change; change that represents a loss of control, change that we often respond to in an emotional and visceral way.
Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Regularly communicate the status of the change, also when you don’t have any news, or when you’re not allowed to tell the news (say that you can’t say anything!)
Listening and empathy are the most important change management tools.
The biggest mistake is to forget the chaos; and in chaos: don’t make any irreversible decisions.
This was my favorite keynote, and as I’m writing this I understand there was some really important information in the presentation.
Mike Ennis talked about software metrics, that help you manage the end game of a software project.
The end game term is taken from chess, where the outcome is almost decided and it is just a matter of technique, primarily about not making mistakes. Mike used the analogy that if you can anticipate what will happen, you know what to do next.
He defined example release criteria, which often aren’t met, but business decisions can overrule the criteria.
40% of the code is about positive requirements, “not a huge fan of exploratory testing, do it if you have time, after the standard tests have been run”.
He used a Spider Chart (aka Radar Plot) to visualize The Big Six Metrics (Test Completion Rate, Test Success Rate, Total Open Defects, Defects Found this week, Code Turmoil, Code Coverage.)
A question was raised that there is a risk of over-simplifying things, and the answer was: “Yes, but these are indicators only.”
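As a side note, a spider chart like the one Mike showed is easy to sketch; here is a minimal matplotlib example using the Big Six names from above with made-up values (everything numeric below is hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

# The Big Six, with made-up values normalized to 0..1.
metrics = ["Test Completion Rate", "Test Success Rate", "Total Open Defects",
           "Defects Found This Week", "Code Turmoil", "Code Coverage"]
values = [0.8, 0.7, 0.4, 0.6, 0.3, 0.75]

angles = np.linspace(0, 2 * np.pi, len(metrics), endpoint=False).tolist()
ax = plt.subplot(polar=True)
# Repeat the first point to close the polygon.
ax.plot(angles + angles[:1], values + values[:1])
ax.fill(angles + angles[:1], values + values[:1], alpha=0.2)
ax.set_xticks(angles)
ax.set_xticklabels(metrics, fontsize=8)
plt.show()
```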
Erik Boelen talked about risk-based test strategy; if you do it with different roles it is like Läkerol, it makes people talk.
He likes games; the we-versus-them game with developers is good. At his place, developers with many bugs buy drinks for the testers, and testers aren’t allowed to say one word for a week; last week that word was testing…
A very interesting and nice thing about the presentation was that he explained their (very good, but for some, I assume, very provocative) test method in a natural and obvious way:
They take the entry paths from the Risks and perform Exploratory Testing. For High and Medium risk they document test cases as they explore, and for Low risks they just report the results.
“Eventually testing will rule the world.”
Shrini Kulkarni talked about dangerous metrics, and that software development must consider where it is suitable with measurements. (Shrini hates SMART by the way, so I like him.)
A root cause is that metrics/measurements represent rich multi-dimensional data; there is inevitable information loss.
People might say “we can’t improve without metrics”, but you could use metrics as clues to solve and uncover deeper issues.
We can report with stories attached to the numbers, but still, we are losing information.
Susan Windsor had a double session on communication styles where time flew. In the audience, everyone said No to “Exploratory Testing adds no value”.
Art of Storytelling involves: Random, Intuitive, Holistic, Subjective, Looks at wholes (two of my favorite adjectives!)
Research shows that interviewing is the most ineffective method when hiring.
She noted that a high proportion of testers also do creative things like music, poetry (which seems natural, it is good to have trained a lot at being creative.)
We looked at four different Personal Communication Styles (why is it always 4 different types of persons??): Strategist, Mediator, Presenter, Director.
Gitte Ottosen had the ending keynote of the day with a presentation about combining Agile and Maturity Models (“CMM = Consultant Money Maker”)
“Metrics, I know they are dangerous, but also necessary.”
Manual Testing involves using the story to do exploratory testing (“continuous learning as we implement the feature”).

On Thursday I was wise enough to skip 2 sessions in order to have a late breakfast and practice my presentation.
So the first presentation of the day was Zeger van Hese (he won the best paper award for the second time this year), who shared his experiences of introducing Agile, but only doing parts of the full-blown, capital-A stuff (resulting in a real-world, semi-Agile process.)
They used this strange mix of Waterfall and Agile that many, many companies have, and got a better and better situation as more members of the team sat in the same room.
But in the end they fell back to old behavior, there were many late changes, many Release Candidates, and a one month delay. But; excellent quality and stability.
3 Agile goals: better feedback, faster delivery, less waste.
They did a big Agile no-no by using manual testing, which seems like a wise deviation to me.
A quote attributed to Einstein, and several others: “In theory, there is no difference between theory and practice; In practice, there is.”
Next presentation was my favorite of the whole conference: Fiona Charles, Modeling Scenarios with a Framework Based on Data.
They built a conceptual framework at 2 levels: an overall model of the system (testing), and the tests to encapsulate in that model.
They did a structured analysis of all attributes for each framework element, and then used these attributes to build simple, and then more complex, scenarios. This is difficult to do for many testers, so careful review of this work is a way to make sure the results are good.
I think this is an example of the test design technique I thought about on Tuesday, a very advanced, structured way of designing tests that can’t be captured by the classic test design techniques (error-guessing is closest, but there’s a lot more to it.) I like to call this Grounded Test Design (more to come on this…)
“scenario testing is a nice thing to add to your repertoire”, “combine two or more models”, “don’t ever fall in love with your model”; they found 478 bugs, and all except 20 were essential to fix for the customer.
What you need to do something like this: testers with domain experience, business input and scenario review (and maybe an industry book), a model, structured analysis.
After lunch, I had a second session in the Test Lab, so I could report some of the bugs Zeger and I had found the day before. It was great to test on real stuff, but I didn’t have the time I would have liked in order to understand the product and its failures. There wasn’t time (at least for me) to discuss the findings in depth with other testers, which is something I hope to be able to do next year (I’m hoping the Test Lab will continue.)
At the last presentation slot, I did my thing on “More and Better Test Ideas”. People were tired, but looked interested, so I’m happy with the presentation. I won’t recapitulate the session, but I did talk about the potato, though I had to skip the new Find Five Faults analogy (unexpected time pressure; I’m still in doubt that I got the 38 minutes I was supposed to.)
The paper is available here, the presentation here, and it will be given as a EuroSTAR webinar on December 15th.
Good questions, and also examples of how similar approaches are used by others. A bit more than 10% of the (almost 100?) attendees use test ideas/conditions.
The next day I got a mail stating that ideas from my presentation could be used at once; the best feedback to hope for.

The Test Lab organizers (James Lyndsay and Bart Knaack) seemed happy when presenting the results, and it’s good to know that the efforts might make open-source medical product OpenEMR a bit better (there is certainly room for improvements…)
At the final panel debate half of the audience voted that certification is important; Tobias Fors shared the insightful “as a developer I was scared about code review, but then I realized it really was about my low self-esteem.”
Regarding teaching testing in school, it was said that critical thinking should be taught early.
“How do we breach the barriers and invite the developers to our world?”
Dorothy Graham (who reviewed every presentation!) ended the conference and announced next year’s programme chair, John Fodeh.

Overall it was a very nice conference, at the expo Robert from ps_testware was nice and let me win a chess game this year also.
Recurring themes were Agile/Exploratory Testing (why are they grouped together?) and now and then the importance of a Story was emphasized.
Unknown source: “The higher and more complex quality objectives you have, the more manual testing is needed.”
Attending a conference isn’t about learning truths from the experts, it’s more about getting input to be able to create your own ideas that apply to your job, and to meet people, hear stories, interact with people that share your passion: software testing.
See you next year!

/Rikard

Is our time estimation on testing valid? Martin Jansson 4 Comments

What do we actually base our time estimations on when delivering a plan to a project manager?

I know that we initially can have a vague idea of what to include and what must be done. I am sure that we can even make a rough estimate of how many resources we need in some cases. If we are testing something new, where we do not know the developers or the full extent of what is delivered, how do we know how much time and how many resources we need for testing? I have seen many plans that give an estimate, but how accurate can they be?

Once I did a resource plan for a year-long test project, where I was allowed to change the plan incrementally. I estimated that we needed quite a few testers, since the deliverables had been delayed and the original plan with early incremental builds had not worked out. In the test team we had a few disagreements about how many we actually needed; some thought we were enough and some (including me) thought we needed more. We got 10% of the resources that I had wanted, but we managed somehow (to some extent). Decision makers seemed satisfied with the quality and the result of these tests. One of the reasons we succeeded was that we reported more bugs than the developers were able to fix. If we had been more testers we would surely have found many more bugs, but they would probably just have been postponed. The most critical ones were always fixed, but naturally everything below that could not be fixed.

So… when estimating resources and time, should we focus on getting enough of the right resources just to keep the spotlight of the resource discussion somewhere else? Let’s say that bugs are fixed faster than we find new ones; does that mean that we are ready to deliver, or that we need more testers to find even more bugs?

I think resource planning and time estimation are very hard. Michael Bolton has expressed, in an excellent way, many of the thoughts that I have wrestled with. You can find his reasoning here:
http://www.developsense.com/2009/11/why-is-testing-taking-so-long-part-1.html
http://www.developsense.com/2009/11/what-does-testing-take-so-long-part-2.html

Combine the idea of having estimated how much time testing needs with an answer to how far we have come, especially when pinning it down as a percentage. Can we say anything truthful here? Is it worth the cost in planning and administration to give an accurate picture?

The Inquisitive Tester – Part II: Question the specs the test eye No Comments

Statements in specifications try to clarify, and are inevitably an interpretation of what the author thinks needs to be more specific. That is, they try to be a more specific model than what existed before the spec. And “Essentially, all models are wrong, but some are useful” (http://en.wikiquote.org/wiki/George_E._P._Box).

Every specification you encounter is a person’s interpretation, and not necessarily true.

This means that you, as an inquisitive tester, have a lot to do in questioning the specifications. The questioning will help you form a model of the software that is better than if you had only read and accepted the spec as it was.

Specifications cannot be complete, especially regarding things that the program shouldn’t do. It is probably not stated that the software shouldn’t use too much memory or processor time for certain operations; it is not stated that the screen shouldn’t flicker, or that all text should be easy to read with all different font settings. Other typical omissions are interactions with other systems: things you expect from all applications under that operating system, internet browser, connected software etc.

You cannot expect a specification to be complete; in most (all? many?) cases, the thing produced from the specification is more important than the document about it. The hardest challenge for the inquisitive tester is to question a lot, but only about those things that are important.

——————–

Who will use the specification? What will they use it for? Will it meet their requirements?

What is it all about? Really?

What areas are left out?

Who is the writer? Does he/she usually miss certain things?

Are there many writers? Does this make the whole less tangible?

Are there many reviewers? Are they using different perspectives?

Is the writer vague, insecure and confusing about certain areas?

Is the specification consistent?

Is the specification consistent with other related specifications?

Is the specification consistent with other different features and combinations of those?

Are all functional and non-functional requirements covered?

Are there dubious thoughts about the wished-for functionality?

Are there other sources of information that can be useful?

How is the style of the language affecting the specification?

What quality attributes are the most important, e.g. how is Security weighed against Performance and Usability?

Does it match the system requirements?

Does the specification focus on what is most important?

Does the specification reflect the model of what you think is described?

Is there any new terminology? Will it affect other documentation such as help files?

Is the new terminology consistent with other specifications?

What does the Internet say about the newly chosen terminology? Will there be any misunderstandings?

—————–

If there was no specification, could it be described in a completely different way?

Introducing exploratory testing in a scripted test environment Martin Jansson 5 Comments

In many organisations it is hard to change how you are working. You might be bound to certain CM tools, to how things are expected to be planned, to documentation systems, management expectations, project management expectations and so on. In many of these traditional environments you might also use the regular test plans, test matrices, test specifications, test cases with expected results, and test records.

If you are doing scripted testing it might sometimes be hard to handle the cases that fall a bit outside the scope, the ones that mess up the planned activities. Some might say that it is better to stick to the plan and leave the vague and hard-to-reproduce things for later.

To totally change the way you work might affect many things, so you might want to start out with small changes. If you are in such an environment and are considering how you could try out exploratory testing as an approach, I have a few suggestions.

One way is to use your current test cases as a guide to where you intend to test, and then use exploratory testing in those areas. The test cases will in this case just map out lots of areas that need to be looked at; what you actually cover and what you expect is up to you. This means you will certainly skip a lot of the content of the test cases. If this is accepted, that is perfect; if not, see how you can get away with it.

Another way is to execute the test cases and, each time you find something outside the test case or something fishy along the way, create a work page/task for that issue. You then assign someone, or a small group, to dig deeper into that area using the exploratory test approach. All issues that are outside the regular plan are handled as exploratory tests.

You will report progress and results as usual; no one will know that you tried a new method in secret.

Each time you use exploratory testing, do it as a defined session; you can google how this is done. There is a multitude of good articles out there on how to run such sessions.

Exploratory Testing vs. Scripted Testing – rich terminology Rikard Edgren 2 Comments

Exploratory Testing in its purest form is an approach that focuses on learning, evolution and freedom.
Cem Kaner’s definition is to the point: “Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

ET in real life is a collection of ways of testing, sometimes used as example implementations of the approach, sometimes used for testing that doesn’t follow a script, and sometimes as a synonym for ad hoc testing (because that word has become increasingly under-rated.)

Scripted testing in its purest form is an approach that focuses on precision and control. It is yet to be defined by its proponents, but a benevolent try could be:
“With a scripted testing approach the testing effort is controlled by designing and reviewing all test scripts in advance.
In this way the right tests are executed, they are well documented, progress towards 100% execution can be controlled, and it is easy to repeat tests when necessary.
The scripted approach is not dependent on many tester heroes, and can take advantage of many types of resources for test execution, since the intelligence in the test scripts is created by test design experts.”

Scripted testing in real life mostly means designing test scripts early and executing them later; these scripts have quite detailed steps and a clear expected result.
The terminology is rich, complex and sometimes confusing, since the terms can mean at least approach, style, method, activity and technique; and these are in reality so connected and intertwined that distinctions aren’t always necessary or helpful.


So are the distinctions important?
I think they can be, especially if the words are used without details, e.g. in statements like “Exploratory Testing is the opposite of Scripted Testing” or “combining exploratory and scripted testing”.
Both statements can be true, because the first talks about the approach, and the other about methods.
By understanding the different meanings of the words it is possible to get a more nuanced debate, and to see other combinations, e.g. test scripts with an exploratory approach, or scripted approaches with elements of ad hoc testing.

The intention of the used method shows which approach you are using (inferred from Cem Kaner’s Value of Checklists… p.94):
If test scripts are used to control the testing, it is a scripted testing approach.
If test scripts are used as a baseline for further testing, it is an exploratory testing approach.


It would be nice to have a solution to this semantic mess, but I don’t think it is feasible to always attach approach or method to Exploratory Testing and Scripted Testing (or to distinguish between upper case Exploratory and lower case exploratory.)
It is extremely difficult to give life to new words, but I do have some hope for the clarification offered by testing vs. checking, and less hope for a renaissance of ad hoc testing.
A start would be if more people were aware of the different meanings, and were more precise when necessary; eventually the problem will dissolve, 25 years from now.

The Quality Status Reporting Fallacy Henrik Emilsson 4 Comments

A couple of weeks ago I had a discussion with someone who claimed that testers should (and could) report on quality. In particular, he promoted the GQM approach and how it could be designed to report the quality status. When I asked how that person defined quality, he pointed to ISO 9000:2000, which defines quality as “Degree to which a set of inherent (existing) characteristics fulfils requirements”.

But wait a minute!

If testers can report the current quality status based on the definition above, it means that test cases correspond to the requirements, and that bugs found are violations where the product characteristics do not satisfy the requirements. If so, then you must have requirements for which a couple of truths hold:

  • Each requirement exhibits the characteristics: Correct, Feasible, Necessary, Prioritized, Unambiguous and Verifiable.
  • The set of requirements covers all aspects of people’s needs.
  • The set of requirements captures all people’s expectations.
  • The set of requirements corresponds to the different values that people have.
  • The set of requirements contains all the different properties that people value.
  • The set of requirements is consistent.

(The word People above includes: Users, customers, persons, stakeholders, hidden stakeholders, etc.)
At the same time, we know that it is impossible to test everything; you cannot test exhaustively.

But assume, for the sake of argument, that all requirements were true according to the list above; and the testing was really, really extensive; and the test effort was prioritized so that all testing done was necessary and related to the values that the important stakeholders and customers cared about.
If this would be the case, then how can you compare one test case to another? How can you compare two bugs? Is it possible to compare two bugs even if you have 20 grades of severity?

We, as testers, should be subjective; we should do our best to try to put ourselves in other people’s situation; we should find out who the stakeholders are and what they value; we should try to find all problems that matter.
But we should also be careful when we try to report on these matters. And it is not because we haven’t got any clue about the quality of the product; we should be careful because many times we report on those things we do that can be quantified, and take these as strong indicators of the quality of the product. E.g., number of bugs found, number of test cases run, bugs found per test case, severe bugs found, severe bugs found per test case per week, etc. You know the drill…
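To make the danger concrete, here is a deliberately naive sketch with made-up numbers: two very different testing weeks collapse into figures that invite exactly the wrong comparison.

```python
# Hypothetical numbers, for illustration only.
week_a = {"test_cases_run": 200, "bugs_found": 10, "severe_bugs": 1}  # broad scripted sweep
week_b = {"test_cases_run": 20,  "bugs_found": 10, "severe_bugs": 1}  # deep exploratory work

def bugs_per_test_case(week):
    return week["bugs_found"] / week["test_cases_run"]

print(bugs_per_test_case(week_a))  # 0.05
print(bugs_per_test_case(week_b))  # 0.5 -- "ten times worse", or ten times more effective testing?
```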

If you are using quantitative measurements, you need to figure out what they really mean and how they connect to what really should (or could) be reported.

If you think that “non-technical” people are pleased by getting a couple of digits (hidden in a graph) presented to them, it is like saying: “Since you aren’t a technical person we have translated the words: Done, Not quite done, Competent, Many, Problems, Requirements, Newly divorced, Few, Fixed, Careless, Test cases, Dyslexic, Needs, Workaholic, Lines of code, Overly complex code, Special configuration, Technical debt, Demands, etc., to some numbers and concealed it all in one graph that shows an aggregate value of the quality”.

Quality_is_a_number

I think that it is a bit unfair to the so-called non-technical…

Instead, we should use Jerry Weinberg’s definition “Quality is value to some person” in order to realize that quality is not something easy to quantify. Quality is subjective. Quality is value. Quality relates to some person. Quality is something complex, yet it is intuitive in the eyes of the beholder.

