The details and the whole (Rikard Edgren)
Testers are often in a unique position because we know a lot about the system as a whole, but also a lot about the details of the operating software.
There are interesting dynamics between the small and the large, and with a human mind in between, a lot of important information will emerge.
“The distinction between micro and macro is an artificial one.” [1]
The whole system consists of the many details; the perceptions of the details are based on the expectations of the whole system.
“Everything is connected” [2], “The devil is in the details” [3]
Enough fluff; how can a tester take advantage of this artificial dynamic?
1) Become an expert on your macro
Find out (for real!) what the users want to accomplish with your software. Learn the even bigger box: what value, and delivered in which ways, do your customers’ customers expect?
2) Become an expert on your micros
You can take the technical path: learn the details of how a part of your system works and of the surroundings it interacts with, and acquire (or create) all the tools you need to use this to your full advantage.
Or take the quality path: delve into the details of a quality characteristic, learn about the types of deviations that can occur, learn which sub-aspects are really important for your software, and keep this in the back of your head whatever you are testing.
Or, if possible, do all of the above, and provide value: super-fast and all the time.
Growing test teams: Uncertain team composition (Martin Jansson)
This is a follow-up to previous articles on growing test teams, based on the ideas from Peopleware by Tom DeMarco and Timothy Lister.
Uncertain team composition
If you are newly assigned to be a team leader there is a big chance that you also have a team, but that is not always the case. Just as you want to start organizing the team, steer toward a set of goals, and begin promising your stakeholders what you will be able to accomplish… you begin to wonder… why am I sitting alone in this team meeting? Where are they?
The test team will have a harder time growing when …
- you do not know who is in the team
- you do not know how much time in the resource plan each member is allocated to the team
- you do not know how much time in reality each member is allocated to the team
- your team members spend time on other assignments that are not communicated
- your test manager does not see you as a team, but rather as a resource pool from which any member can be plucked at any time
- your team members do not see themselves as part of your team and continue to work on their own agendas
- you have hidden members that should be part of the team, but are safely tucked away in a hidden project
When you are uncertain about your team composition it will block you from going forward. You will be handicapped as a team leader and will most certainly fail, or burn out trying.
Is your testing saturated? (Rikard Edgren)
There are many names for software testing strategies/activities/approaches/processes; they can be risk-based, coverage-focused, exploratory, requirements-based, Super-TPI, TMM 5, et al.
The names generally come from how the testing is performed or initiated, so I thought we should look at it from another angle, from the end of testing, from the results that we might know about a year later.
I like the content of Good Enough Testing, but the words are easily misunderstood; we don’t have definitions for Crappy or Decent Testing, and since that often isn’t enough, the notion of Saturated Testing might be useful. (Perhaps this has been elaborated elsewhere under a different name?)
The term is borrowed from Grounded Theory (social science qualitative analysis), and means that further research in an area doesn’t give any important, new information, and therefore isn’t worth the effort.
The same can happen in ambitious software testing (if the test time isn’t radically shortened…): when important bugs are no longer found, all important test areas have been covered, and no code changes are made, there is no need to continue testing.
I see three hypothetical levels of Testing Saturation:
1) more testing will not find information that would change the release decision
2) more testing will not find issues that later would be patched
3) more testing will not find bugs that we would like to have fixed
If you use a mix of these as your goal, I have three warnings:
* Have you used all relevant information?
* Have you tried all relevant test approaches?
* Have enough people with different views been involved in (at least) generating test ideas?
To get a result you are satisfied with, you need to think about things like this early, you need a lot of skill and hard work, and a fair share of luck.
Exploratory Testing is not a test technique (Henrik Emilsson)
Well, to many people this is nothing new. But still, there are a lot of testers, and indeed test leads, who think that Exploratory Testing is a technique that can be used in testing. To some extent, it has to do with the fact that both Cem Kaner and James Bach have listed this term among other techniques (e.g., in the BBST course material). But they have since changed and updated their presentations as much as possible.
According to Cem Kaner nowadays, the definition of exploratory testing is “a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”
And this is important. You can come a long way toward the style of Exploratory Testing just by treating testers as intelligent people, which is one of the most important factors in the definition above. In contrast to Exploratory Testing you have Scripted Testing, which, in my opinion, treats testers as dumb people or even dumb machines. I think that this approach is devastating for our profession (even though I can somehow see the need for Scripted Testing in some places).
A technique is a recipe for solving a problem, whereas a style (or approach) is a way of thinking around a theme that stretches far beyond solving a particular problem.
So when we talk about selling Exploratory Testing to managers or project stakeholders, it is not a technique that we are selling; it is rather an acceptance of a mindset where testers are treated as professional and intellectual human beings who are able to perform Sapient Testing, particularly in an Exploratory way. It is not about stakeholders investing in a technique; it is about them showing that they have as much trust in testers as they have in the other intelligent co-workers of the project.
Exploratory Testing is not a controlled process (Rikard Edgren)
Exploratory Testing is not as widely used as it could be, because management doesn’t want it.
The stated reasons could be that it is unaccountable, unstructured, sloppy, non-scientific, etc.; reasons that can be refuted by communication.
But I think the real reason is something Exploratory Testing can’t have: a controlled process.
Management/Companies want to have a plan with dates and costs; they want a test manager to be able to say how many percent of the testing is completed; they incorrectly think that software development is a lot like manufacturing.
Exploratory Testing can’t have this, because the testing will change as new information is uncovered.
Testing might be quick, might take a long, long time, might not be close to complete, might look outside the requirements, and might discover information so important that the whole project needs to be rethought.
These are not good things for some managers; they want control and precision.
They want to see everything go smoothly, and that on the promised date we deliver what we said we would deliver.
The focus is on the control, not the result.
That is why managers might prefer to outsource testing to a company that runs the same scripts over and over until they all pass, rather than to a company (or ideally, in-house testers) that will change their testing strategies along the way, and leave the project with a bunch of fixed and unfixed bugs.
Exploratory testing is about learning, discovering new things and changing our mind.
And for software testing, which can’t be complete, this is a very good thing.
It enables better products, in a way that can be managed, but not controlled.
This is why it is so hard to sell Exploratory Testing:
First you must sell trust, which is priceless.
Testing Clichés Part III: “We can’t test those requirements” (Rikard Edgren)
It is good to strive for better requirements by critical analysis (and looking for what’s missing), but there is a danger in complaining about untestable requirements.
If those vague requirements are changed (made too specific) or removed, the words in the requirements document have less meaning, and less chance of guiding towards great software.
And there are no such things as untestable requirements. There are requirements that can’t be verified, that can’t get a true/false stamp, but you can definitely test the software and look for things that don’t match the essence of the vagueness.
An example: “the feature should be easy to operate” is difficult to prove right or wrong, but very easy to evaluate subjectively after doing some manual testing.
If the requirement is changed to “minimum no. of mouse-clicks to perform common operations”, you might catch some issues, but some other, more important things, might be lost in translation.
And if the requirements are split into many, many smaller pieces, you might lose less information, but end up with a too complex document that is very expensive to create and maintain.
It’s not a bad thing to be specific, but that’s not feasible for everything.
There’s an underlying assumption I should tell you about:
I do not think requirements should be contractual, they should rather be aiding – they should help the development team produce good software.
Since requirements can be neither complete nor perfect, we should rather take advantage of opportunities that arise, and create something that can solve problems. If the essence of the unspoken requirements is captured, it might not matter that a few specifics aren’t met.
Testers should keep in mind that there is a greater whole we are aiming for, and do our best with what we have, even if that means unverifiable requirements.
Where are you going with testing? (Martin Jansson)
In order to determine where you are heading with your test department it is good to understand where you are currently standing as a group and as individuals in the group.
Understand which way of working with quality you tend to lean toward the most. Use Bret Pettichord’s Four Schools of Software Testing [1] as a starting point for discussion. Did this rattle your thoughts on testing in any direction, or in many directions? Do you have conflicting ideological beliefs about testing within the test group? Has this set the test group in motion in any direction?
What view do you have of testers in your organization? Can anyone be a tester? Can anyone in the organization assist testing? Are you expected to create test cases that anyone can use when performing a test? Perhaps it is so that many in the organization think that the intelligence is in the test script, not in the tester executing the script? Cem Kaner’s The Ongoing Revolution in Software Testing [2] might shed some light on the subject. If you are viewed as a group of professional testers, that is great; if not… what do you intend to do about it?
What view does the test manager have? Is he/she a former tester with experience from your business? Is he/she inexperienced with testing and more of an administrator? Does he/she make decisions about the test process and test strategies? Perhaps it is time you got him/her involved in what you do and where you think the test department should go?
If you are going in a specific direction, what is pushing or pulling you there? In some cases the company/organization is moving toward a specific goal or a new way of working; does this mean that your test department must change or move toward the same new goals? What platform does your company [3] lean towards? Is it the fundamentals of Scientific Management [4], or management in the style of Drucker [5]?
If the company is trying to go towards Lean/Agile, what obstacles do you see if the organization is based on Taylor’s ideas? Do organizational structures, internal tools (resource allocation, time reporting, etc.), project models, general thoughts and expectations stand between you and those goals? Do you think it will be a bumpy ride [6] or not [7], or perhaps something else [8]?
What is the focus on education for the testers? Is there focus on ISTQB or Exploratory Testing, perhaps a bit of both? Does the basis for education align with where you are going or want to go?
Now… where do you want to go next?
References:
[1] http://www.testingeducation.org/conference/wtst_pettichord_FSofST2.ppt
[2] http://www.kaner.com/pdfs/TheOngoingRevolution.pdf
[3] http://www.usnews.com/usnews/biztech/articles/030224/24manage.htm
[4] http://www.ibiblio.org/eldritch/fwt/taylor.html
[5] http://en.wikipedia.org/wiki/Peter_Drucker
The 100th thought from the test eye (the test eye)
Today we celebrate our 100th post on this blog!
It has been an interesting journey for us so far; and we realize that we have only begun this ride, a ride with no destination but to enrich ourselves with wisdom and knowledge through discussions and by sharing thoughts. And you, our readers, are a very important part of this process.
Our decision to write posts in English has been a good strategy as we see it; not only do we reach people all over the world, but we can also discuss matters with peers in a community that has no country borders. But you have to bear with us sometimes if you think that our English isn’t perfect; we are working on it… We do hope that the message gets through, though.
The fact that we are three persons sharing a passion for testing, and a blog to express it, has been a really good thing as we see it. Not only is it more fun, but it also gives some assurance that the blog will keep up a good tempo, since there is always one of us who has the inspiration and time to write. It also helps guarantee that the topics we choose to write about keep evolving and are often diversified. Over the years we have noticed that we agree on many things, but disagree, or have different perspectives, on just as many.
We are very happy to see that we now have a really large group of readers from all over the world, and we have had a huge increase of new readers over the last two years. We hope that you will keep finding us interesting and continue reading in the future; and we invite you to contribute even more with your comments on our thoughts.
Do you have any thoughts on where you think we should go from here on?
Best regards,
Henrik, Martin & Rikard
the test eye
Teaching testing: scripted vs exploratory testing (Martin Jansson)
Let us assume you are a test lead and you have a group of testers. Some are totally new to the profession and some are old and experienced.
In the scripted test environment you might set up a test matrix, plan test cases and allocate them among the testers. Some of the testers might have been involved in creating the test cases. Some might create test cases along the way as they do the testing, but in the extreme scripted test environment you plan the tests before you get the build. Those who were not involved in creating the test cases will just execute them.
Where is the learning process in this? Is it limited to just a few of the testers, or does it in fact reach the whole group?
If you introduced pair testing combined with the scripted testing, there might be some collaboration that would stimulate learning; still, if you are to follow the script, there is no room for going outside the path.
When you do session-based testing, you might identify risks and outline which missions to do. During the test session, testers practise expressing what they test, how they do it and what they find. Then, during the debriefing, there is a natural way of giving feedback. Over time you can look back on what you did and how you have evolved.
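To make this a little more concrete, here is a minimal sketch of what a session charter record might capture before and during a session; the field names and the 90-minute timebox are my own assumptions for illustration, not taken from any particular session-based test management tool.

```python
# A minimal, hypothetical sketch of a session charter record for session-based testing.
# All field names and defaults are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionCharter:
    mission: str                 # what to explore and why, derived from identified risks
    timebox_minutes: int = 90    # an assumed typical session length; adjust to your context
    notes: List[str] = field(default_factory=list)   # what was tested, how, and what was observed
    bugs: List[str] = field(default_factory=list)    # problems found, raised at the debriefing
    issues: List[str] = field(default_factory=list)  # questions and obstacles, also for the debriefing

# Example use during a session:
charter = SessionCharter(
    mission="Explore the import feature with malformed files to assess error handling"
)
charter.notes.append("Tried a truncated file; the error message pointed at the wrong row")
charter.bugs.append("Import of an empty file leaves the progress dialog hanging")
```

Looking back over a stack of such records is one way to see how your testing, and your way of expressing it, has evolved.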
If I look at a group of testers who have been running test sessions, I see enthusiasm, and sometimes a passion that keeps them from stopping testing. Among those who have been executing predefined test cases, by contrast, I see a lack of will and boredom. The learning will be affected by this.
A few times, when giving out missions for test sessions to a group of testers who were inexperienced at exploratory testing but very familiar with how test scripts were run, I have seen a lack of courage in going into uncharted territory. I heard objections such as “But we have no test cases for this area, we do not know how this works.” Can it be that exploratory testing also brings courage and decreases the fear of learning new things?
I have written tons of test cases over the years, and I would say that you do learn by designing a test and preparing it to be run by yourself or by someone else. Still, it is not the same kind of learning, and not the same kind of satisfaction (at least for me), as when writing your own charter.
What if scripted testers began making charters the same way an exploratory tester does in session-based testing? Would they also increase their learning curve? What other changes would we see in the testers’ behavior? If the script was the mission and going outside it was an opportunity, would we in fact see even better testing than what we see today from exploratory testing without the script? Is one of the keys the charter/opportunity, that is, the fact that it is OK not to do the script?
The Testing vs. Checking Paradox (Henrik Emilsson)
If you haven’t read the excellent articles by Michael Bolton regarding Testing vs. Checking yet, now is a good time to do it:
http://www.developsense.com/blog/2009/08/testing-vs-checking/
http://www.developsense.com/blog/2009/09/transpection-and-three-elements-of/
http://www.developsense.com/blog/2009/09/pass-vs-fail-vs-is-there-problem-here/
Done?
One thing that struck me with this is that doing more testing will result in less testing and more checking. I.e., the more you test, the more you know, and the more heuristics you will develop; and so the more you become a checker of things that you assume or already know. The experience and skill that you have gathered in order to be a good Exploratory Tester actually gives you test ideas that you really just need to check.
It is a paradox in theory, but think of it:
Let’s say that you are the best Exploratory Tester that exists on planet Earth, and you have experience from all kinds of software development and teams/companies. Then you would have so much knowledge about products, people, software, user interaction, etc., that you eventually would sit on a gold mine of experience and knowledge. And there would be nothing more to learn, since you already know everything. This would mean that all tests you do are merely checks to see whether they meet the knowledge (assumption) or not.
Michael defines checking: “Checking is something that we do with the motivation of confirming existing beliefs” and “Checking is a process of confirmation, verification, and validation”.
Well, the more you know, the more you are then confirming existing beliefs.
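To make the distinction concrete, here is a minimal sketch of a check in this sense: an automated confirmation of something we already believe. The function and the expected value are invented for the example.

```python
# A minimal, hypothetical example of a "check": it can only confirm (or refute)
# an existing belief; it cannot notice anything we did not think to ask about.
def vat_inclusive_price(net_price: float, vat_rate: float = 0.25) -> float:
    """Return the price including VAT, rounded to two decimals (invented example)."""
    return round(net_price * (1 + vat_rate), 2)

def check_vat_inclusive_price() -> None:
    # We already believe 100.00 at 25% VAT should be 125.00; this merely confirms it.
    assert vat_inclusive_price(100.00) == 125.00

check_vat_inclusive_price()
```

The more such beliefs you accumulate, the more of your testing can be expressed as confirmations like this one.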
Update 2010-03-29:
Thanks for all interesting and thoughtful comments!
I now realize that the paradox isn’t true, and cannot ever be. It was an intriguing thought that ended up being something not really true. Thanks for your help in clarifying that! Still, this post and comments have at least made me think around the subject! And it has made me wiser.