Working with the testing debt – part 1 Martin Jansson 2 Comments

Jerry Weinberg made an excellent comment on my previous article Turning the tide of bad testing [1], where he asked for more examples and experience behind the tips. It is sometimes a bit too easy to offer a tip that lacks context, or an explanation of how you used it in a situation where it worked for you. There are no best practices, just good ones that might work in certain contexts; still, you might get ideas from them and shape something that works for you.

Tip 1: Exploratory test perspective instead of script-based test perspective. This can be roughly summarized as giving more freedom to the testers, while at the same time holding them accountable for what they do (see A tutorial in exploratory testing [2] by Cem Kaner for an excellent comparison). Most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects many things. For instance, where you work: can any tester be replaced by anyone in the organisation? Does that mean your skill and experience as a tester are not really that important? If this is the case, you need to talk to those who hold this view and show what you can do as a tester who makes his/her own decisions. Most organisations do not want robots or non-thinking testers affecting the release.

In one organisation I worked in, there was initially a typical scripted test approach. There were domain experts, test leads and testers, among other roles. The domain experts and test leads wrote test specifications and planned the tests in a test matrix to be executed. The test matrix was created early and was not changed over time. Testers then executed the planned tests. Management tracked progress by test cases: how many were executed since the last report.

At one time I was assigned to run 9 test cases. I tested in the vicinity of all of them, but I did not execute any of them. I found a lot of showstopper bugs and a few new, totally uncovered areas that seemed risky. To the project and to the test lead this meant no progress [3], because no test cases were finished. The test matrix had been approved and we were not to change the scope of testing, so the new risky areas were left untouched. A tester did not seem to have any authority to speak up or propose changes in scope. As I saw it, we were viewed as executors of test scripts; the real brain work had been done before us. During this time, management assigned some new temporary personnel to the team who had little domain knowledge and no testing expertise. They did not think any experience or skill was needed. Because the assignment was so short, the new personnel never got up to speed with the testing tasks. Some of the testers said the extra hands just weighed them down. When we executed test scripts we were usually assigned a few and ran them over a few days; cooperation between testers was not very clear.

After this assignment I became test lead myself and had all the previous testers allocated to me. I started to introduce the exploratory test approach by giving the testers more freedom, while at the same time making them responsible for what they did. As a transition, we used test sessions alongside scripted testing. We adapted the test matrix over time based on new risks. Temporary personnel were still allocated to us without us having a say in the matter; still, we made the best of it by educating and mentoring them. Management's view was still the same, but we tried to communicate testing status in new ways, such as using a low-tech dashboard.

A bit later, management changed how they worked with the test leads. We were still allocated temporary personnel, but we were able to say no to those we knew were not fit for testing. The cooperation between test lead and testers was very different. Everyone in the team took part in making the plan and changing it over time. We found more bugs than before and took pride in our testing, test reports and bug reports. Each artifact we delivered needed to be in good shape so that we could counter the ever-present perception that testers did not know what they were doing or did not care. In the test team we were all testers; some just had a bit more administration. There was no clear hierarchy.

How does this affect the testing debt?

Having a scripted test approach:

  • Does not fit well in an agile, fast-moving team or organisation. What are you contributing to the team? It is unclear if you need someone else to write tests for you. There is a big chance you develop the attitude of waiting for others before you can do your own job. This makes you a bottleneck and a weight for the organisation; most of the time you just cost money and are therefore a burden.
  • Being viewed as just an executor is demeaning. Not having any authority to affect your working situation will eventually mean you give up or stop caring. When you stop caring you take shortcuts and ignore areas. A group of testers who stop caring will affect each other and do an even worse job. This is a catalyst for Broken Windows [4] and a tipping point towards the negative.
  • When you get new temporary personnel who just weigh you down, it only looks as if you have enough people to do the job. In reality you are working even slower, and with fewer actual testers.
  • When progress is measured by counting test cases run from one period to another, you are missing the point of what is valuable. If the test team is unaware of this, no harm might be done; but if they are aware of it and dislike the situation, it will cause friction and distrust.

The situations above are extreme, but they are not uncommon, as I see it.

Having an exploratory test approach:

  • You are used to having unfinished plans and uncharted terrain in front of you, living in a chaotic world where you are able to adapt and be flexible towards your team and the organisation. If there is a build available, you test and make the plan as you go; you do not wait. You will rarely be seen as the bottleneck, unless you have too few personnel to do your job. Being seen as quick and agile will improve the view of your test team and make it easier when you start new projects, thus decreasing the testing debt in the areas of team composition and flexibility.
  • Progress is viewed as what you spend time on. You then need to justify why you tested that area, but if you do, you gain progress. You know that you can never do all testing, but you might be able to do a lot of what is most important and most valuable to the project. By doing it this way, you and the team will gain momentum in your work. You will, if possible, fix Broken Windows, or at least not create new ones in this area.
  • When you run a test session you know that it is ok to test somewhere else if you think that is valuable. If you find new risks you add them to the list of charters or missions. Your input as a tester is important; you contribute by identifying new risks.
  • In an exploratory test team, every tester is viewed as an intelligent individual bringing key skills and knowledge. You have no one telling you exactly what to do, but you will have coaches and mentors whom you debrief to. There is built-in training and a natural way of getting feedback. You will be able to identify testers who do not want to test or do not want to be testers. The team will grow and become better and better. The debriefing will also assist in identifying new risks, keeping the group well aware of current risks and important issues. This decreases the testing debt by having a focused, hard-working team of testers doing valuable testing, as they themselves see it.

References:

[1] Turning the tide of bad testing – http://thetesteye.com/blog/2010/11/turning-the-tide-of-bad-testing/

[2] A Tutorial in Exploratory Testing – http://www.kaner.com/pdfs/QAIExploring.pdf

[3] Growing test teams: Progress – http://thetesteye.com/blog/2009/10/growing-test-teams-progress/

[4] Broken Windows theory – http://en.wikipedia.org/wiki/Broken_windows_theory

Flipability Heuristic Rikard Edgren 8 Comments

Credit cards are taking over from notes and coins.
This has benefits, but you cannot toss a coin with a credit card.

Bob van de Burgt coined (!) the term flipability during the coin exercise in Michael Bolton's tutorial at EuroSTAR 2010.
It is a lovely word, and can be used more generally to describe how products can be valuable in ways other than their intended purpose; it is part of a product's versatility.

If you ask your customers, I bet you will be surprised by a couple of ways they benefit from your software. These might be exploitations of bugs that it would be a bad idea to fix.

As you’re testing software, you can look for other usage that might be valuable. It is probably not your first test idea, but it could be the start of next great feature, or the beginning of a cool story; hence the Flipability Heuristic.

Competitor Charisma Comparison Rikard Edgren Comments Off

In many cases, it is worthwhile to take a look at how your competitors do similar things. Among competitors I include products you're trying to beat, in-house solutions (e.g. handmade Excel sheets) and analogue solutions that solve the problem without computer products.

Charisma is difficult to test, but competitor comparison is one way to go. You can ask others, or look for yourself: where is, and where could be, the charisma of these solutions?
For your typical customer, your competitors might tell you which aspects of software charisma are relevant.
Try using the U.S. SPACEHEADS mnemonic:

Charisma. Does the product have “it”?
Uniqueness: the product is distinguishable and has something no one else has.
Sex appeal: you just can’t stop looking at or using the product.
Satisfaction: how does it feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product “good-looking”?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?

Be aware that there is nothing more un-charismatic than a bleached copy of the original.
Rather find which characteristics are needed, talk about them, and see what your product team can do.
And put focus on finding your product’s own, original charisma.

Trilogy of a Skilled Eye Rikard Edgren No Comments

I have completed a trilogy on the theme The Eye of a Skilled Software Tester.

edition 1: Lightning Talk, Danish Alliance, EuroSTAR 2010

edition 2: Article, The Testing Planet, March 2011 – Issue 4

edition 3: Presentation, Scandinavian Developer Conference, April 2011

Some things have changed over time; in the first two I didn't focus on the most important part, "look at many places": besides specifications we need to know about business usage, technology, environments, taxonomies, problem history, standards, test analysis heuristics, quality characteristics, and more…

I also made the necessary switch from errors/bugs to problems, because it is broader and better pinpoints the psychological paradox: "want to see problems", things that push Done further away.

While at it, I uploaded a presentation from SAST Öresund yesterday.
77 Test Idea Triggers, a presentation I’d be happy to give again!

Testers' greatest nemesis Martin Jansson 19 Comments

Background

When I first got in contact with software testers, I worked as PM and developer for a language tool. Our CEO said he had hired two testers easily, since you can just pick them from any street corner. Sadly, they had no clue what to do and did not find any bugs; they just found out how the OS worked, or reported things that were built-in. After some time we were able to get a new group of testers, and now things really changed. Some of them aspired to be developers, but settled for being testers for a short time. At that time they had no knowledge of how testing should be done according to the so-called rules, but they did a good job and found bugs in our software.

Some years later I began at a product development company. During my years there, the test department got a new manager almost every year. Each one brought their own perspective on testers. Most of them accepted any personnel from any department when there was a lack of testers. During that time we got to experience a lot of different backgrounds, skills and interests among the extra personnel. We also experienced many employees who were moved, or even demoted, "down" to the test department. Many stayed in testing, where they excelled and eventually liked it. During all those years management saw us as the complaining guys from the test department; perhaps an all too common view? What we really did was express risks, bugs or any information we thought endangered the company or the products under test. I am sure their perception of us was misplaced, but naturally we were somewhat to blame for how we communicated and how we acted when communicating.

Some years later I joined a smaller company with mostly researchers and scientists. Most of them were used to working alone in development projects, so they did everything themselves. They did not see the need for testing as a discipline of its own. Eventually, when we (the testers) got something to test, we showed them there was a big difference between what we found and what they had found.

The one constant is the confused perception of what a tester is and what we should do.

How are testers perceived?

If you look at testers from a salary perspective, we very often have lower salaries than developers and project managers, but higher than documentation specialists and support personnel (at least in Sweden). For many, salary also drives career choices, so you naturally want to get out of the testing department. In Sweden, consultants can charge higher rates for test leads than for testers at many major customers. This does not motivate consultancies to grow great testers.

If you look at testers from a career perspective, you often see that tester is a pit stop on the way to becoming a developer. Or, perhaps more rarely, you see people who have been demoted from other positions: someone needs to take the role of tester, so let's take the person we need the least for other tasks. I also see personnel who are promoted from support to testing (as they express it). If you become test lead, you might be on your way to becoming project manager. Managers know that many with higher ambition will just pass through the test department, while others, less motivated, will stay behind. Still, there will always be a group of testers who love testing and want to excel at it, but some companies do not have them yet.

In the scripted test approach you most often want a domain expert to write test cases and let someone else (or sometimes the same person) execute them. In this situation the tester can be "anybody"; he/she just needs to execute the tests. When a manager seeking new resources will accept anybody as a tester, you have the potential of getting anyone, even demoted personnel, from other parts of the organisation. This is the most common view of testers, as I see it.

Certification

During my whole career I have not heard many talk about the need or requirement for certification, at places where I worked or at clients. In one case, a tester approached me when he was about to enter my test group. He said he was ISTQB certified and that his employer required all testers to be certified. I told him I was not, but that I had more than 10 years of test experience and close to 20 years of product development experience. Was that OK? I asked him about his testing skills and what he could contribute to my team. He got scared and did not want to join the team. I regret that I scared him off like that. Someone must have introduced the idea that to be a good tester you need to be certified. Or was it perhaps set up as a minimum requirement when allocating personnel to teams? Perhaps the original intention was "certified tester, or enough experience to cover it"? There is seldom context behind decisions like that. My belief is that some consultancy got them to buy in on the idea, then sold them lots of courses and certification packages.

After reading Dorothy Graham's blog posts ([1], [2] and [3]) about the intention of certification, I wonder why no one spoke up about where things were heading. The intent might have been to improve the perception of testers, but I think it has instead hurt our craft. At each conference and at most meetings there is often someone who speaks up with lots of arguments against certification. I rarely see anyone take up the discussion and meet their arguments; or perhaps I do not listen? James Bach has made a lot of good arguments [4].

There are many so-called test experts out there who say that a certification such as ISEB or ISTQB is needed to be a tester. Some companies even require it of their testers, and therefore recruiters require job seekers to have it. I think it is all a charade. What is needed are testers who take courses in testing, who read books, blogs and articles, who want to learn and who want to excel as testers. Passionate testers who want to become great! If they are certified, that is OK; perhaps they got some ideas from it, and they might have had a great teacher who stimulated them into becoming passionate themselves.

ISTQB uses multiple-choice questions on their exams, but these are quite limited. Cem Kaner has written an excellent post, Writing Multiple Choice Test Questions [5], where he makes some strong arguments. If ISTQB were altered along those lines it would be harder to pass and naturally harder to create, but it would still not solve the main issue: the content being out of date and totally wrong in many areas, as I see it. Jonathan Rees brings up other strong arguments about multiple-choice questions in his article "Frederick Taylor In The Classroom: Standardized Testing And Scientific Management" [6].

Attitude

Just because we have to work upstream does not mean we can keep having a lousy attitude. I have often seen us picture ourselves as victims because of our situation: lack of personnel, lack of time, and so on. If we are too few to test, or if we have too little time, we can only offer to do our best. We can also explain what we could do if we were more people and had more time. Combine this with the fact that we often speak in anger when we talk about quality, and it only fuels the perception that we are a bunch of idiots, angry ones at that.

When we get deliverables from developers, we are sometimes angry because of the bad quality or the lousy state of a certain build. Do we consider why it is like that, what shortcuts they needed to take, or whether someone forced the delivery of a new build? Do we really need to focus our blame on the developers? Consider their ever-increasing technical debt, which they might not get proper priority to address.

In most areas of expertise you have lots of education, at various levels of the school system, to back you up. For testing, this has only just started. At least it is no longer only a chapter in a book that you skip. There are lots of books, articles, blogs and other sources of information for gaining other people's experience of testing. Why is it OK to think you do not need to learn more about your craft? Why do so many testers with many years in the craft still state that they have not studied anything to get better at testing? That attitude damages the perception of testers by keeping you ignorant of what you claim to be an expert at. With the increasing use of agile teams, where a tester has a natural part, you are supposed to know at least something about your craft.

What do we do to affect that perception?

If we continuously provide valuable information to our stakeholders, the perception will be altered. This means that you need to know what they find valuable and what could threaten that value. You also need to consider how you communicate: in what form, whether you are going to use metrics or not, how much subjectivity or objectivity you should use, and how you act when communicating. Less drama queen and more professionalism.

We are working upstream here, so everything bad that you do will have a great impact on the perception of testers. Wherever you go, you bring your attitude and ambition. When interacting with non-testers, consider what you are saying and how it might appear to them. Consider whether you are in the correct crowd to utter your disapproval, whether you need to go somewhere else, or whether you can just go to your manager.

We need to communicate to managers that it is demeaning and demotivating to be seen as idiots or as just anybody. We need to show that having skilled, passionate and motivated testers gives a much better result. What else can you do to motivate yourselves to gain those attributes? For those who have been demoted or are demotivated, show them how creative and exciting the testing profession can be. Bring in external passionate testers to give them some new ideas. If none of this works, perhaps they need to find what they really want to do and go there.

Before accepting new testers to the team, we need to make sure they are right for the job. Do not accept demoted personnel without explaining the consequences. When you as test lead discuss having extra personnel join your team, clarify that you want to test them before accepting them into the group, and that some in the team need to be able to veto the acceptance.

We need to tell developers that we understand that they must take shortcuts, thus increasing the technical debt, but that we can help [7]. Work closer with the developers; stop building walls between you. The more the developers trust and respect you, the more information you will have before you commence your work as a tester, which will lead to better work being done. Remember: a good bug is a fixed bug.

Consider how the test organisation is built, how it markets itself and what you communicate to management. See Scott Barber's excellent blog post "What being a Context-Driven Tester means to me" [8], which can be used as a starting point for you and your test organisation. Also consider where you are going with testing [9], to understand where you come from, what your next goal is and perhaps what is pushing you in a certain direction. Are you going in the right direction?

Conclusion

I think the perception of testers is our greatest nemesis; we have to fight it every day. Certification in testing does not help us, as I see it, but it is not our main concern, just one of the bullies. There are many things that give us a bad reputation. Start changing your own ways, and influence those around you to become great, passionate testers who deliver valuable information effectively.

References

[1] Certification is evil? – http://dorothygraham.blogspot.com/2011/02/part-1-certification-is-evil.html

[2] A bit of history about ISTQB certification – http://dorothygraham.blogspot.com/2011/02/part-2-bit-of-history-about-istqb.html

[3] Certification does not assess tester skill – http://dorothygraham.blogspot.com/2011/02/part-3-certification-schemes-do-not.html

[4] Search for ISTQB at James blog – http://www.satisfice.com/blog/index.php?s=istqb or http://www.satisfice.com/blog/index.php?s=certification

[5] Writing Multiple Choice Test Questions – http://kaner.com/?p=34

[6] Frederick Taylor In The Classroom: Standardized Testing And Scientific Management – http://radicalpedagogy.icaap.org/content/issue3_2/rees.html

[7] Developers, let the testers assist with the technical debt – http://thetesteye.com/blog/2011/01/developers-let-the-testers-assist-with-the-technical-debt/

[8] What being a Context-Driven Tester means to me – http://www.testingreflections.com/node/view/8657

[9] Where are you going with testing – http://thetesteye.com/blog/2010/04/where-are-you-going-with-testing/

Do we all want black coffee? Henrik Andersson 10 Comments

We, testers, are we all alike? Looking back over the years I have been involved in testing, I would say I have met a whole bunch of different testers. Some shared my passion and fascination for testing; others were not really testers, they just happened to have the job title. However, I have always appreciated our craft for having such a diversity of people. We have testers with strong domain knowledge, programming skills and business knowledge. The value all of those bring to a test team is easy to see: they find important information about the technical implementation of the product.

Then we have the other group, the ones with capabilities in philosophy, cognition, history, economics, art, music, and, in my case, a dropout from statistics at university. We are not here because we love code or the coolest technologies; we are here because we love the way people and products interact and misunderstand each other. We are fascinated by the dance between a human and a machine. We find patterns in this dance, we find unintentional behaviors, and the unexpected surprises us. We look beyond the constraints of the application; we go where no developer intended us to go. By doing this we notice other kinds of things that might be a threat or a problem to the success of the product. Why are we doing this, you may ask? It is because the technical solution does not impress us; we do not take particular interest in, or care about, the specific way the code is written. What drives us is the question: will this solve a problem for a user, and in which ways can it fail to do so? What we bring to the table is the social and behavioral science aspect of the product, combined with the human element. At technology-facing companies I sometimes see the above overlooked or undervalued.

But what worries me more is that we, testers, are walking away from my precious diversity. Strong forces favor uniformity among testers, and the most obvious one is ISTQB. After going through their program of various certifications, everyone knows the same things, speaks the same language and uses the same templates. There is no problem replacing one ISTQB Advanced Level certified test manager with another with the same level of certification. The same goes for TMap: as long as you can read the book, you know the answer to all testing problems. It's easy!

But this is nothing new; I, and many with me, have been fighting both ISTQB and TMap for years. But here comes the kicker: have you heard of this cool new thing called "agile"? It's a really sweet thing, close to a drug in our craft. It came and it has conquered. Nowadays there are almost no companies who dare say that they are not "agile"; instead they make the most ridiculous variations of what they call "agile". Don't get me wrong here, this movement is quite appealing to me and I appreciate much of what it stands for.

Now, getting back on track, there is one thing that worries me about this agile thing: the view on us testers. To be an "agile tester" you basically need to be a developer, preferably with some basic knowledge of testing, but it is much cooler if you know Selenium, TDD, ATDD and other strange abbreviations. If a tester, according to the agile folks, is basically a developer, why do you call them a tester? Oh wait, you don't! Everyone is a team member, I'm sorry.
Now we are moving from "must have a certification" to "must have programming skills". Guys, don't you see what is happening? We are just replacing one uniformity with another. Beware of this!

We are running a great risk that all our testers find the same information and, worse, are blind in the same spots. In testing we do not know what information we will get, and we do not know up front what questions to ask the product (don't be misled to believe anything else). We have to approach the testing from various angles and have several different ears and eyes observing it. If anything, we should love diversity among testers, and we should encourage it.

This post came to mind when I once again listened to Malcolm Gladwell's brilliant TED Talk.

And no we do not all want black coffee!

A Factory of Skilled Testers Rikard Edgren 11 Comments

I do not see myself as a member of any of the Schools of Testing, and I have ethical problems with putting labels on anyone but yourself.
However, I see the schools as a fruitful tool for enhancing your understanding of views on testing.
So please join me in the following thought experiment.

The following is not a perfect match of my personal opinions, it is fiction. I try to be a bit funny and serious at the same time, and I know this is easily misunderstood.

Suppose I run a consultancy firm specialized in testing.
We take any kind of testing job (we can also do checking only, if requested) and we are very successful; we provide a lot of valuable information to our clients.
The key to our success comes from this recipe:

1. A lot of exploratory testing
ET gives better results, and by explicitly giving testers responsibility and freedom in their activities, they stay motivated and get better over time.
It happens that clients question this approach, but usually it is sufficient to point at the many Internet resources and say that they have to be modern; this is the greatest super-good practice right now.

2. Training scheme
We hire people who are fast learners, curious and ambitious. All must take the AST courses Foundation, Bug Advocacy and Test Design (the bug reporting skill is essential; it impresses our clients early on and is the reason we can offer a Money-Back Guarantee.)
Then everyone must take Rapid Software Testing with Bach or Bolton, which inspires and gives breadth to the thinking.
Finally they take the Foundation and Advanced ISTQB certifications (this comes last to avoid problems in the previous classes.)
We are no fans of ISTQB, but it is good to know what others know, and we eliminate the risk of losing a deal on a (ridiculous) client requirement.
We also do continuous training on collaboration and feedback, after all, people are the most important part of any context.

3. SBTM
We manage the testing efforts with three sessions a day; this squeezes the maximum out of the testers without wearing them out.
We log all the proposed statistics so we can show numbers and progress.
We always add 24% to the originally planned sessions, and make sure this covers unexpected things.
Planning and other processes are run context-wise, we use whatever the client is using.
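The padding arithmetic above is simple enough to sketch. The 24% figure and the three-sessions-a-day pace are the post's own numbers; the function name and the choice to round up to whole sessions are my assumptions:

```python
def padded_sessions(planned: int, buffer_percent: int = 24) -> int:
    """Add a fixed percentage buffer to the planned session count.

    Integer arithmetic with ceiling rounding, so the buffer is always
    covered by whole sessions rather than fractions of one.
    """
    return (planned * (100 + buffer_percent) + 99) // 100

# Three sessions a day over a two-week (10 working days) effort:
planned = 3 * 10
print(padded_sessions(planned))  # 30 planned sessions become 38
```

Rounding up rather than to the nearest integer is deliberate: under-booking the buffer would defeat its purpose.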

4. Standards
For all jobs, we walk the client through thetesteye’s Quality Characteristics, and a secret company standard for Multiple Information Sources.
The client decides what’s most important in dialogue with us. (We of course also do at least one test per requirement.)
Testers use HICCUPPS(F), CRUCSPIC STMP, CIDTESTD and SFDPOT in their daily work, we know it’s not perfect, but easy to remember.

5. Reasonable Resource Allocation
In a typical project we want to have 1 tester per 3 developers. We tell clients this is our standard where we know we can do a good job. We can do higher or lower ratio, but clearly communicate which expectations this should give.
This makes it easy for clients to estimate cost; it has a direct relationship with the number of developers, the development time, and the start of the test effort (the sooner the better, but clients decide).
Since we have a common background and practices, it is easy to replace our factory workers if needed. But we always keep one person from the old staffing, so crucial knowledge can be transferred.
Scaling up or down is not a problem; we have a pool of skilled resources that grows over time as we hire more talent.

I feel we have a highly manageable process that repeatedly brings good results. Sure, all projects are unique, but this scheme has proven to be very good, so far.

So what do you think, dear readers: would this be a good approach?
Is it Factory, Context-Driven, or something else?

Highlights from SWET2 the test eye 1 Comment

The delegates of the second Swedish Workshop on Exploratory Testing (Test Planning and Status Reporting for Exploratory Testing) were:
Henrik Andersson, Azin Bergman, Robert Bergqvist, Sigge Birgisson, Rikard Edgren, Henrik Emilsson, Ola Hyltén, Martin Jansson, Johan Jonasson, Saam Koroorian, Simon Morley, Torbjörn Ryber, Fredrik Scheja, Christin Wiedemann, Steve Öberg

Discussions at peer conferences can’t be summarized, but you can read the abstracts and Ryber’s notes, and here are some quote highlights:

– How many test cases do you want? 800? Then let’s say we have 800. (Jonasson)
– Think “there are problems”, not “there might be problems” (Bergqvist)
– To motivate: lead by example, avoid de-motivating (Hyltén, Andersson)
– They want to see bug curves, but don’t know whether going up or down is good or bad. (Jonasson)
– Rather ask “What are you afraid of?” (Jansson)
– Qualitative information is sensitive to filtering (Emilsson)
– Dialogue is important! (Morley)
– Exploratory test planning, adapt to reality. That’s how we all work. (Scheja)
– A test expert that doesn’t know anything (favorite slip of the tongue from Jansson)
– Has this story confirmed that anyone can test? (Bergman)
– Incorrect expectations: if we test, there won’t be any production problem (Ryber)
– Categorizations are good for learning, but throw them away to meet reality (Edgren)
– I always use low-tech testing dashboards, except in this case (Ryber)
– A tester is also test leader, test designer, test planner (Scheja)
– It is small raisins in a very large cake (Jonasson)
– I want to test until the world is on fire (Wiedemann)
– Care about what is most important (Emilsson)
– I invent many wheels every day (Scheja)
– I want more emphasis on which information we have and don’t have, and the confidence of the information (Koroorian)
– Beware of using the word “coverage”, rather say security, or risk (Birgisson)
– To me, coverage is a start of thinking about what to test (Bergqvist)
– If this isn’t reported “upwards”, you avoid the biggest risks (Andersson)
– The map and the creation of it is more important than the numbers on it (Öberg)
– It might look easy with a number, but it isn’t (Edgren)
– This weekend is evidence that standards don’t work. We can discuss forever, and that’s what is fun! (Emilsson)

These Lightning Talks (5 minutes including questions) were held:
* Martin Jansson on the tester’s greatest nemesis, the view that anyone, thus any idiot, can test.
* Henrik Emilsson on unjustified tests that open up to serendipity and new ideas.
* Robert Bergqvist had an experience report that showed that exploratory testing also needs planning.
* Azin Bergman on the common view that testers have lower status than other roles.
* Simon Morley on root cause analysis heuristic FICL: Framing, Information Gathering, Consensus, Learning.
* Rikard Edgren on Binary Disease – our tools have shaped way too computeresque theories.
* Christin Wiedemann on open-ended plans without fixed scope.
* Sigge Birgisson shared an experience report with developer collaboration under the radar.
* Steve Öberg led a brainstorm on testing analogies, with a story from the Aeneid about not following the Delphic Oracle.
* Ola Hyltén showed the Johari window that can enable better communication between leaders and team members.
* Torbjörn Ryber led a discussion about doing a context-driven conference in Sweden.

It is very easy to organize a workshop like this when you have 15 motivated, passionate testers.
You just need a venue with few distractions, a theme to focus on, an agenda and discussion rules, plus food and drinks.

Looking forward to SWET3!

Thoughts from SWET2 Torbjörn Ryber 14 Comments

Once again I have spent the weekend with the cream of Swedish testers. This time the Test Eye trio of Henrik Emilsson, Martin Jansson and Rikard Edgren were the hosts.

The theme was Exploratory Testing and Planning, and we managed to keep the discussions within that scope for most of the scheduled sessions. The informal test talk, music and drinking session from 18.30 to 03.30 encompassed a rich variety of discussions, mostly outside that theme. Robert brought a fantastic instrument called the beat box (I think?), Rikard the guitar, and the rest of us had guitar picks, maracas and musical talent to some degree. The starting act was “The Spice Song”, composed and performed by the multi-talented philosopher-baker Rikard. Other highlights were “My Sharona”, performed by many of us, later followed by interpretations of Cornelis Vreeswijk, and early in the morning sometime a try at “Bark at the Moon”. The barking was considerable, but I prefer the original version with Ozzy’s falsetto. And let us not forget the African freedom and tribal songs performed by the “missionary men”. Music is so much fun! If we ever start a band it will be called the Testicles.

Johan

But let us start with the conference proper. The first session was Johan Jonasson from House of Test. He told us of the success he had with two young girls from the help desk. I bet that sentence caught your attention! Well, it was just as exciting as it sounds – it is all about testing. Johan’s task was to manage the testing of a consumer product application, and the resources he was assigned were two support people. He pointed out that they lacked education in IT and testing but were very motivated. He had too little time to prepare any detailed instructions. The test instructions consisted of some form of user stories he had created himself, and scenarios built from these. They tested together following the instructions and were coached by him at a couple of meetings each day. They turned out to be great at taking notes and very good at finding problems. In the following open season we concluded that they did not lack education at all. They were used to taking support calls and analysing callers’ problems, so they had both product knowledge and experience in analysing and describing problems in text. Some reflections were that motivation is a very important factor, and that less detailed control of curious and motivated testers made them perform better. I can think of at least one client that uses support personnel with detailed scripts that should try a more exploratory way of working. As first sessions often do, it lasted almost four hours, and the discussions were never once uninteresting.

Tobbe

Number two on the list was myself, with an experience report on testing in a loosely controlled and volatile project. Given the role of tester on the project, I gradually moved into project management and requirements elicitation, at the expense of much less testing than planned. Some of the things I did were to create a product backlog, organise weekly scrum meetings, and create an effect map and a status graph. My overarching goal was that we actually deliver something to the customer, and whatever needs there were, somebody had to take care of them. Success factors were that I was not only allowed but encouraged by the other project members to assume new responsibilities, and that I find it interesting to take on other tasks than testing. The downside was that I had less time for testing and, to be honest, less motivation to test. Maybe I would have tested better if I had been given full time on the project, but since that was not the case I prioritised other tasks in order to move forward. It should be the tester’s goal to do whatever is needed to bring the project forward. I see similarities with the Scrum idea that we have less specialised roles. We discussed whether the artifacts I created were testing or not, but does it matter, as long as they are important and they get done?

Fredrik

Fredrik Scheja of Sogeti was the next presenter. He told us about his success with an exploratory testing approach on a large system with frequent releases. Every tester assumes responsibility for analysing, test planning and executing one or more items at a time. One of the dominating discussions in open season was the fact that he claimed he had built this approach on TMAP. Since most of the participants claim to be members of the context-driven school, this is quite a daring statement to make at a peer conference. TMAP is clearly factory school, which is the opposite of the context-driven approach. While I think none of us doubt the success of the work process used, many of us claim that it is not really TMAP-based just because you pick some parts that you find usable and then tweak them to fit what you really want to do. I think it was Johan who used the metaphor “picking very few raisins out of a very large cake”. James Bach tweeted that “If you take a nice cake and drop it in a mud puddle, don’t bother with the raisins”. Henke Andersson attacked the claim that the 12-step checklist for test planning really was the best to use for all planning in all situations, and wanted to see a local adaptation for the project that was very unTMAP (is there such a word? Well, now there is!). The discussion continued for another hour on Sunday morning for all of us except Ola Hyltén, who by mistake (?) forgot to set his alarm when he went to bed at 3.30. We thank you, Ola, for giving us an opportunity to make fun of you. :)

Lightning Talks and Conference plans

The last session on Saturday was a number of lightning talks that were entertaining, but to be honest I don’t remember much of the ten five-minute talks. If someone else has a record or wants to say something about them – be my guest. I do remember the last one, where Henke told me to remind the group of our idea to arrange a context-driven conference in Sweden within the next year. All said it was a great idea and many wanted to help out. Our first suggestions were that we need a year to plan and arrange it, and that it should take place in a larger city – probably Stockholm. The size could be 300 participants. The goal is not to make a fortune, but not to lose money either. Planning for a small profit gives us some room for unexpected costs. We would like to really focus on the context-driven community: only context-driven talks, and no sponsorship by any certification organisation. We want to have some key players from the USA and certainly from the rest of the world as well. We may want to consider some tracks only in English and some in Swedish. All thoughts welcome.

Dinner

Dinner was a seafood buffet at the toll house by the sea. Since three of our four vegetarians have redefined fish and shellfish as vegetables, they happily dug into the buckets of crabs, shrimp and langoustines (or whatever the English name for havskräfta is).

Martin said he was disappointed that there was no pool, and agreed that he could possibly assume some responsibility for that fact, since he was the one who booked. We focused on the music part instead, as told before.

Saam

Sunday morning started out with another hour of open season on Fredrik, followed by Saam, who explained his goals to change testing and reporting at the large company where he works. We spent a couple of hours discussing green/yellow/red versus 1–5 versus happy and sad faces. The goal was to move from numbers – which say very little – to a more qualitative approach. This can be quite a challenge in an international and global organisation. We all look forward to learning about the results of Saam’s intentions in the future.

The happy ending

After some hugging, handshaking and some tears we left for the mainland. Yeah, cause I forgot to tell you we stayed at Chicken Island outside Gothenburg.

Henke Andersson has promised to arrange SWET3 in Malmö this fall. He mentioned that it will focus on ET standards and the need for ET certification…or maybe not. One subject that I would like to discuss more is teaching testing.

Additional info on Twitter #SWET2

The delegates were: Christin Wiedemann, Torbjörn Ryber, Azin Bergman, Fredrik Scheja, Henrik Andersson, Johan Jonasson, Ola Hyltén, Sigge Birgisson, Simon Morley, Rikard Edgren, Henrik Emilsson, Martin Jansson, Steve Öberg, Robert Bergqvist, Saam Koroorian.

Finding low-hanging fruit Rikard Edgren 2 Comments

Now and then you hear that developers should implement better support for testability, so testers can work more efficiently.
This is all well, but what about the opposite: how can testers make developers go faster?

System testers have system (and a lot of other) knowledge, and we can see if the product turned out really useful.
We can find major problems, but also trivial ones that are easy to fix, and both are needed for ambitious projects.
We can find small Enhancements, nifty little additions that are fast to implement and test, and make the product better.
This can be called low-hanging fruit; we find them, and the developers pick them.
All you need is an environment that encourages looking at what is best for the product before looking at the requirements and to-do lists.

There are a lot of other ways developers and testers can help each other, think about it, come up with good ideas for your situation, and add a comment to this post!
The last time I helped developers finish some of their boring unit tests, it didn’t take long before a lot of energy came back in my direction.
Mutual interest and collaboration inspire each other; you spend some time, but get more back.

And to get back to the initial thought: it’s a compelling argument to suggest that logging of this and that will make developer debugging much faster.