No Flourishes and New Connections Heuristics Rikard Edgren

I used to be a bit skeptical of the word “heuristic”. It seemed needlessly advanced, and most things can be explained in other words.
But when I read Gigerenzer’s Gut Feelings about how to catch a flying ball, it all came together.
Software testing can never be complete, depends on many factors, and concerns a product that will be used with different needs and feelings; fixed techniques are not enough. Testing is about skill, and human skill is well described by a variety of heuristics.

When blogging about some heuristics I think are unpublished and worth knowing about, I’ll try to do two at a time; more bang for the buck.
With English as a second language it is difficult to give them good names, so feel free to suggest better names! (and content…)
And remember, heuristics are not rules; they are more like rules of thumb that might be useful in specific situations.

No Flourishes Heuristic

Often you can design and execute straightforward tests, without garnish, fancy tools, or incomprehensible details.
Try this when you have the chance!
See if perceived performance, manually clocked, can give good information.
Use common options instead of combinatorial.
Look at the GUI instead of automating tests.
Try to do something valuable instead of covering all paths in last year’s use case.
Write your basic test strategies in plain English, so everyone can review them.
Use what you have, and look for what’s important.
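The “common options instead of combinatorial” advice above can be sketched in code. This is a hypothetical illustration (the option names are invented, not from any real product): even a handful of settings explodes combinatorially, while a short list of configurations that real users actually run stays small and reviewable.

```python
from itertools import product

# Hypothetical product settings; the names are invented for illustration.
options = {
    "language": ["en", "sv", "de", "fr"],
    "theme": ["light", "dark"],
    "autosave": [True, False],
    "proxy": ["none", "manual", "auto"],
}

# Exhaustive combinatorial testing: every combination of every option.
all_combinations = list(product(*options.values()))
print(len(all_combinations))  # 4 * 2 * 2 * 3 = 48 configurations

# "No flourishes": a few configurations chosen by judgment of what is
# common and important, each explainable in plain English.
common_configs = [
    {"language": "en", "theme": "light", "autosave": True, "proxy": "none"},
    {"language": "sv", "theme": "dark", "autosave": True, "proxy": "auto"},
    {"language": "de", "theme": "light", "autosave": False, "proxy": "manual"},
]
print(len(common_configs))  # 3 configurations
```

The point is not that 3 beats 48 in coverage; it is that each of the three can be motivated and reviewed by anyone, which fits the heuristic of writing test strategies in plain English.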

New Connections Heuristic

How do you “discover” what is important?
I think it often is about combining different knowledge in new ways.
So you need to know a lot about the product and its context, and look for connections between the wide diversity of knowledge you have.
When reading, talking or thinking, you sometimes instantly think “but this could have a big impact if one is trying to do that!”
Or during test execution, when you suddenly get an impulse that you have to add some other things to the stew.
This might seem unstructured, and dependent on chance, but that is OK. If software testing is a sampling problem, we need different ways of discovering what is important.

Working with the testing debt – part 2 Martin Jansson

This is a follow-up to Working with the testing debt – part 1 [1]. The reason for this clarification is that it is all too easy to come up with a tip without a context or an example.

Tip 2: Focus on what adds value to developers, business analysts and other stakeholders. If you do not know what they find valuable, perhaps it is time you found out! Read Jonathan Kohl’s articles [2] on what he thinks adds value for testing.

In one project I worked with a group of experienced developers. I had not worked closely with them before, but they had received some of my bug reports and knew me by reputation (whatever that was) in the company. When I got their first delivery I started testing right away. Immediately I got the feedback that I was not testing the right stuff, and their demeanor towards me turned a bit chilly. I investigated what had happened and found out that they were not really interested in the minor bugs at that moment; I should focus on the major issues. I explained that I report everything I find, though I was not expecting everything I found to be fixed; what got fixed was up to those who prioritized the bugs. From then on, before I started testing, I asked them what they wanted me to focus on first. After that they were a lot happier, both because they knew I worked on things valuable to them and because they understood that I reported everything I found.

During another project we were two weeks from the release of the product. We were in the middle of a transition from traditional scripted testing to a hybrid of scripted and exploratory testing. Rather, we had test scripts that we used as a guideline when we explored, but we reported Pass/Fail on them. At that time project management was strict about wanting the number of test cases run as well as the Pass/Fail ratio. Earlier test leads had not communicated well why these figures held no value. When we had run all planned test cases, project management told their managers that we were done. But we were not; we continued working on our planned charters and ran sessions. We interviewed the support organisation, business analysts, product management and experts in the test organisation. Eventually we had a long list of risks and areas that we should investigate, and a long list of rumours that we intended to confirm or kill. Basically, we were far from done, and as we saw it we still had time before the release. We had also been given areas that people in the organisation found valuable to get information about. Still, we failed because we had not communicated enough to project management about what we were doing. We managed to go through most of the areas, identified lots of new issues and killed many old rumours. We failed to bring value to some, but not all.

How does this affect the testing debt?

If you continue to work on things that have no value to you or any of your stakeholders, you must take a stand and change things. Do not accept the situation as it is. If you and everyone around you think you and your test team are not doing anything of value, it will just add to your testing debt.

As I stated above, Jonathan Kohl gives a good set of questions [2] to ask yourself to get back on the path. Also consider what Cem Kaner writes about in The Ongoing Revolution in Software Testing [3], because it is still ongoing and not over.


[1] Working with the testing debt – part 1

[2] How do I create value with my Testing?

[3] Ongoing Revolution of Software Testing

The automotive industry is not the role model Henrik Emilsson

This began as an answer to Rikard’s post where the discussion on “traditional testing” came up.

I often hear comparisons between our “industry” and the automotive industry.
In that context, you could say that “traditional testing” corresponds to the methods and practices applied in the line production of large car companies, while unorthodox testing can be compared with the specialized, often smaller, custom car builders out there.

The major issue with this kind of comparison is that large car companies make thousands of the exact same type of car, while every software project and product is different. This means that the “traditional testing” approach is an attempt to apply line production methods when building custom cars. Applying “traditional testing” as if every project and product were the same is both wrong and dangerous.

And this comparison is not that far-fetched… It seems to me like “traditional testing” is promoted as a set of practices that suit many (if not all?) projects and should be followed in order to enable success. Well, good luck!

Further, many practices that we use today come from the automotive industry (at least in their latest form).
If we fail to see why the automotive industry implemented them, and just take them as good practices for being effective in line production, we are doing the opposite of what the automotive industry did. They investigated how they could improve their work. Some of that included seeing human beings as intelligent and social creatures, utilizing the diversity of a group of people, etc. And their productivity and efficiency could be measured by counting cars and assessing their quality. So by treating humans as humans they became more efficient (number of non-defective cars) and improved their work methods (by analyzing the quality of work).
When Lean development is implemented in psychiatric nursing or software development today, the focus is very often on the quantity-measuring part, which misses the whole point. Measuring patients or software as uniform units is wrong and dangerous.
It’s not that Lean or Kanban is to blame, it’s the implementation; and perhaps mostly the implementors.
The role models for Lean implementations in many Swedish healthcare institutions have been some successful nursing teams that increased the efficiency and quality of their work by using Lean development as a method. What those successful teams really did was take command of their own work and find a method that supported their initiative and commitment. (Anyone had a similar experience? Me!)
The problem is that when this is implemented by management across a whole hospital, or nationwide, the focus shifts from “quality of work” to “quantity of work”, because that is the obvious driver for management and really their only incentive to implement it. They only need to say that this is a “best practice” and then it’s OK…
I do need to emphasize that you don’t automatically get high-quality work by implementing “best practices”!

This is happening all the time; the latest flavor of the month is Agile.
If you read the Agile Manifesto and then think that you must play planning poker or have standup meetings, you have obviously not understood the Agile Manifesto.
I’m with Matt Heusser on his interpretation in his post on the Manifesto elaborated.


a word of caution Rikard Edgren

If you are a faithful reader of this blog, you have probably read some challenges to established ways of testing.
I write stuff like “anyone can do non-functional testing”, “look at the whole picture”, “test coverage is messing things up”, “you can skip all testing techniques”, “requirements are covered elsewhere, so focus on what’s truly important”, “Pass/Fail-addiction” or “be open to serendipity”.

It might be a bad idea to start with these alternative activities, people might think you are crazy.
You might have to do thorough, planned, systematic, requirements-based, to-the-bare-bones testing in order to earn respect from the rest of the organization: the respect you need for your work to be appreciated and used.
You might need to follow the existing practices, to show respect, and learn about the organization.

So even though a radically different test approach would be faster and better, you should consider “traditional testing” as an appetizer for the main course.

Binary Disease Rikard Edgren

I have long felt that something is wrong with the established software testing theories; in test design tutorials I recognize only a small part of the test design I live in.
So it felt like a revelation when I read Gerd Gigerenzer’s Adaptive Thinking, where he describes his tools-to-theories heuristic: the theories we build, and accept, are shaped by the tools we use.
The implication is that many fundamental assumptions aren’t a good match for the challenges of testing; they are just a model of the way our tools look.
This doesn’t automatically mean that the theories are bad and useless, but my intuition says there is a great risk.

Software testing is suffering from a binary disease.
We make software for computers, and use computers for planning, execution and reporting.
Our theories reflect this far more than the fact that each piece of software is unique, made for humans, by humans.

Take a look at these examples:
* Pass/Fail hysteria; ranging from the need for expected results to the necessity of oracles.
* Coverage obsession; percentage-wise covered/not-covered reports without elaborations on what is important.
* Metrics tumor; quantitative numbers in a world of qualitative feelings.
* Sick test design techniques, all made to fit computers; algorithms and classifications that disregard what is important, common, risky, error-prone.

When someone challenges authorities, you should ask: “say you’re right, what can you do with this knowledge?”

I have no final solutions, but we should take advantage of what humans are good at: understanding what is important; judgment; dealing with the unknown; separating right from wrong.

We can attack the same problems in alternative ways:
* Testers can communicate noteworthy interpretations instead of Pass/Fail.
* If we can do good testing and deem that no more testing has to be done, there is no need for coverage numbers.
* If the context around a metric is so important, we can remove the metric, and keep the vital information.
* We can do our test design without flourishes; focusing on having a good chance of finding what is important.
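As a sketch of the first alternative above, a session note that carries interpretations instead of a binary verdict might look like this. The structure and field names are my own invention, purely for illustration, not an established format:

```python
from dataclasses import dataclass, field

# A hypothetical session report: noteworthy observations and
# interpretations instead of a single Pass/Fail bit.
@dataclass
class SessionNote:
    charter: str
    observations: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # Communicate what was noteworthy, not one bit of information.
        lines = [f"Charter: {self.charter}"]
        lines += [f"- {o}" for o in self.observations]
        return "\n".join(lines)

note = SessionNote(
    charter="Explore export to PDF with large documents",
    observations=[
        "Export of a 900-page document took 4 minutes; felt slow, worth discussing",
        "Page numbers restart after embedded images; probably important",
    ],
)
print(note.summary())
```

Each observation invites a conversation about importance, which is exactly what a Pass/Fail column cannot do.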

Do you think it is a coincidence that test cases contain a set of instructions?
These theories cripple testers; and treat us like cripples.

Now you know why people are saying testing hasn’t developed in recent years: we are at a dead end.

And the worst part is that if Gigerenzer is right, my statements have no chance of being accepted until we have a large battery of tools based on how people actually think and act…

Working with the testing debt – part 1 Martin Jansson

Jerry Weinberg made an excellent comment on my previous article Turning the tide of bad testing [1], where he asked for more examples and experience behind the tips. It is sometimes a bit too easy to come up with a tip that lacks context, or that does not explain how you used it in a situation where it worked for you. There are no best practices, just good ones that might work in certain contexts; still, you might get ideas from them and make something that works for you.

Tip 1: An exploratory test perspective instead of a script-based test perspective. This can be roughly summarized as more freedom for the testers, but at the same time more accountability for what they do (see A tutorial in exploratory testing [2] by Cem Kaner for an excellent comparison). Most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects so many things. For instance, where you work … can any tester be replaced by anyone in the organisation? Does that mean your skill and experience as a tester are not really that important? If this is the case, you need to talk to those who hold this view and show what a tester who makes his or her own decisions can do. Most organisations do not want robots or non-thinking testers affecting the release.

In one organisation I worked in, there was initially a typical scripted test approach. There were domain experts, test leads and testers, among other roles. The domain experts and test leads wrote test specifications and planned the tests in a test matrix to be executed. The test matrix was created early and was not changed over time. Then testers executed the planned tests. Management measured progress by test cases: how many were executed since the last report.

At one time I was assigned to run 9 test cases. I tested in the vicinity of all of them, but I did not execute any of them. I found a lot of showstopper bugs and a few totally uncovered areas that seemed risky. To the project and to the test lead this meant no progress [3], because no test cases were finished. The test matrix had been approved and we were not to change the scope of testing, so the new risky areas were left untouched. A tester did not seem to have any authority to speak up or address changes in scope. As I saw it, we were viewed as executors of test scripts; the real brain work had been done before us. During this time, management appointed some temporary personnel to the team who had little domain knowledge and no testing expertise. They did not think any experience or skill was needed. Because the time was so short, the new personnel never got up to speed with the testing tasks. Some of the testers said the extra hands just weighed them down. When we executed test scripts we were usually assigned a few and then ran them over a few days; cooperation between testers was not at all clear.

After this assignment I became a test lead myself and got all the previous testers allocated to me. I started to introduce the exploratory test approach by giving the testers more freedom, but at the same time making them responsible for what they did. We used test sessions as well as scripted testing as a transition. We adapted the test matrix over time based on new risks. Temporary personnel were still allocated to us without us having a say in the matter; still, we made the best of it by educating and mentoring them. Management’s view was still the same, but we tried to communicate testing status in new ways, such as a low-tech dashboard.

A bit later, management changed how they worked with the test leads. We were still allocated temporary personnel, but we were able to say no to those we knew were not fit for testing. The cooperation between test lead and testers was very different. Everyone in the team took part in making the plan and changing it over time. We found more bugs than before, and took pride in our testing, test reports and bug reports. Each artifact delivered needed to be in good shape so that we could counter the ever-present perception that testers did not know what they were doing or did not care. In the test team we were all testers; some just had a bit more administration. There was no clear hierarchy.

How does this affect the testing debt?

Having a scripted test approach:

  • Does not fit well in an agile and fast-moving team or organisation. What are you contributing to the team? It is unclear whether you need someone else to write tests for you. There is a big chance that you have the attitude of waiting for others before you can do your own job. This makes you a bottleneck and a weight on the organisation. Most of the time you just cost money and are therefore a burden.
  • Being viewed as just an executor is demeaning. Not having any authority over your working situation will eventually mean you give up or stop caring. When you stop caring you take shortcuts and ignore areas. A group of testers who stop caring will affect each other and do an even worse job. It is a catalyst for Broken Windows [4] and a tipping point towards the negative.
  • When you get new temporary personnel who just weigh you down, it only looks as if you have enough people to do the job. In reality you are working even slower, and with fewer actual testers.
  • When progress is counted as test cases run from one period in time to another, you are missing the point of what is valuable. If the test team is ignorant of this fact, no harm might be done; but if they are aware of it and dislike the situation, it will cause friction and distrust.

The situations above are extreme, but they are not uncommon, as I see it.

Having an exploratory test approach:

  • You are used to having unfinished plans and uncharted terrain in front of you, living in a chaotic world where you are able to adapt and be flexible toward your team and your organisation. If there is a build available, you test and make the plan as you go. You do not wait. You will rarely be seen as the bottleneck, unless you have too few personnel to do your job. The view of being quick and agile will improve the view of your test team, and will therefore make it easier when you start new projects, thus decreasing the testing debt in the areas of team composition and flexibility.
  • Progress is viewed as what you spend time on. You then need to justify why you tested an area, but if you do, you make progress. You know that you can never do all testing, but you might be able to do a lot of what is most important and most valuable to the project. By working this way, you and the team will gain momentum. You will, if possible, fix Broken Windows, or at least not create new ones in this area.
  • When you run a test session you know that it is ok to test somewhere else if you think that is valuable. If you find new risks you add them to the list of charters or missions. Your input as a tester is important; you contribute by identifying new risks.
  • In an exploratory test team every tester is viewed as an intelligent individual, bringing key skills and knowledge. You have no one telling you exactly what to do, but you will have coaches and mentors to whom you debrief. There is built-in training and a natural way of getting feedback. You will be able to identify testers who do not want to test or do not want to be testers. The team will grow and become better and better. The debriefings also help identify new risks, keeping the group well aware of current risks and important issues. This decreases the testing debt by having a focused, hard-working team of testers doing valuable testing, as they themselves see it.


[1] Turning the tide of bad testing

[2] A Tutorial in Exploratory Testing

[3] Growing test teams: Progress

[4] Broken Windows theory

Flipability Heuristic Rikard Edgren

Credit cards are taking over from notes and coins.
This has benefits, but you cannot toss a coin with a credit card.

Bob van de Burgt coined (!) the term flipability during the coin exercise in Michael Bolton’s tutorial at EuroSTAR 2010.
It is a lovely word, and can be used more generally to describe how products can be valuable in ways other than their intended purpose; it is part of a product’s versatility.

If you ask your customers, I bet you will be surprised by a couple of the ways they benefit from your software. It might be exploitations of bugs that it might be a bad idea to fix.

As you’re testing software, you can look for other usage that might be valuable. It is probably not your first test idea, but it could be the start of next great feature, or the beginning of a cool story; hence the Flipability Heuristic.

Competitor Charisma Comparison Rikard Edgren

In many cases it is worthwhile to take a look at how your competitors do similar things. Among competitors I include products you’re trying to beat, in-house solutions (e.g. handmade Excel sheets) and analogue solutions that solve the problem without computer products.

Charisma is difficult to test, but competitor comparison is one way to go. You can ask others, or look for yourself: where is, and where could be, the charisma of these solutions?
For your typical customer, your competitors might tell you which aspects of software charisma are relevant.
Try using the U.S. SPACEHEADS mnemonic:

Charisma. Does the product have “it”?
- Uniqueness: the product is distinguishable and has something no one else has.
- Sex appeal: you just can’t stop looking at or using the product.
- Satisfaction: how does it feel after using the product?
- Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
- Attractiveness: are all types of aspects of the product “good-looking”?
- Curiosity: will users get interested and try out what they can do with the product?
- Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
- Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
- Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
- Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
- Directness: are (first) impressions impressive?
- Story: are there compelling stories about the product’s inception, construction or usage?

Be aware that there is nothing more un-charismatic than a bleached copy of the original.
Rather find which characteristics are needed, talk about them, and see what your product team can do.
And put focus on finding your product’s own, original charisma.

Trilogy of a Skilled Eye Rikard Edgren

I have completed a trilogy on the theme The Eye of a Skilled Software Tester.

edition 1: Lightning Talk, Danish Alliance, EuroSTAR 2010

edition 2: Article, The Testing Planet, March 2011 – Issue 4

edition 3: Presentation, Scandinavian Developer Conference, April 2011

Some things have changed over time; in the first two I didn’t focus on the most important part, “look at many places”: besides specifications we need to know about business usage, technology, environments, taxonomies, problem history, standards, test analysis heuristics, quality characteristics, and more…

I also made the necessary switch from errors/bugs to problems, because it is broader and better pinpoints the psychological paradox: “want to see problems”, things that push Done further away.

While at it, I uploaded a presentation from SAST Öresund yesterday.
77 Test Idea Triggers, a presentation I’d be happy to give again!

Testers’ greatest nemesis Martin Jansson


When I first got in contact with software testers, I worked as PM and developer for a language tool. Our CEO had said that hiring two testers was easy, since you can just pick them from any street corner. Sadly they had no clue what to do and did not find any bugs; they just found out how the OS worked, or things that were built in. After some time we got a new group of testers, and now things really changed. Some of them were aspiring developers who settled for being testers for a short time. At that time they had no knowledge of how testing should be done according to the so-called rules, but they did a good job and found bugs in our software.

Some years later I began at a product development company. During my years there, the test department changed managers almost every year. Each one brought their own perspective on testers. Most of them accepted any personnel from any department when there was a lack of testers. During that time we got to experience a lot of different backgrounds, skills and interests among the extra personnel. We also experienced many employees who were moved, or even demoted, “down” to the test department. Many stayed in testing, where they excelled and eventually liked it. During all those years management saw us as the complaining guys from the test department; perhaps an all too common view? What we really did was express risks, bugs and any information we thought endangered the company or the products under test. I am sure their perception of us was misplaced, but naturally we were somewhat to blame for how we communicated and how we acted when communicating.

Some years later I joined a smaller company with mostly researchers and scientists. Most of them were used to working alone in development projects, so they did everything themselves. They did not see the need for testing as a discipline of its own. Eventually, when we (the testers) got something to test, we showed them that there was a big difference between what we found and what they found.

The one thing that is constant is the confused perception of what a tester is and what we should do.

How are testers perceived?

If you look at testers from a salary perspective, we very often have lower salaries than developers and project managers, but higher than documentation specialists and support personnel (at least in Sweden). For many, salary also drives career choices, so you naturally want to get out of the testing department. In Sweden, consultancies can charge more for test leads than for testers at many major customers. This does not motivate consultancies to grow great testers.

If you look at testers from a career perspective, you often see that tester is a pit stop on the way to becoming a developer. Or, perhaps more rarely, you see people who have been demoted from other positions. Someone needs to take the role of tester, so let’s take the person we need the least for other tasks. I also see personnel who are promoted from support to testing (as they express it). If you become a test lead you might be on your way to becoming a project manager. Managers know that many with higher ambitions will just pass through the test department, while others, less motivated, will stay behind. Still, there will always be a group of testers who love testing and want to excel at it, but some companies do not have them yet.

In the scripted test approach you most often want a domain expert to write test cases and let someone else (or sometimes the same person) execute the tests. In this situation the tester can be “anybody”; he or she just needs to execute the tests. When a manager seeking new resources will accept anybody as a tester, you have the potential of getting anyone, even demoted personnel, from other parts of the organisation. This is the most common view of testers, as I see it.


During my whole career I have not heard many talk about the need or requirement for certification, at places where I worked or at clients. In one case a tester approached me when he was about to enter my test group. He said he was ISTQB certified and that his employer required all testers to be certified. I told him I was not, but that I had more than 10 years of test experience and close to 20 years of product development experience. Was that OK? I asked him about his testing skills and what he could contribute to my team. He got scared and did not want to join the team. I regret that I scared him off like that. Someone must have introduced the idea that to be a good tester you need to be certified. Or was it perhaps set up as a minimum requirement when allocating personnel to teams? Perhaps the original intention was “certified tester, or enough experience to cover for it”? There is seldom context behind decisions like that. My belief is that some consultancy got them to buy into the idea, and then sold them lots of courses and certification packages.

After reading Dorothy Graham’s blog posts ([1], [2] and [3]) about the intention of certification, I wonder why no one spoke up about where things were heading. The intent might have been to improve the perception of testers, but I think it has instead hurt our craft. At each conference, and at most meetings, there is often someone who speaks up with lots of arguments against certification. I rarely see anyone take up the discussion to meet their arguments; or perhaps I do not listen? James Bach has made a lot of good arguments [4].

There are many so-called test experts out there who say that a certification such as ISEB or ISTQB is needed to be a tester. Some companies even require it of their testers, and therefore recruiters require people seeking jobs to have it. I think it is all a charade. What is needed are testers who take courses in testing, who read books, blogs and articles, who want to learn and who want to excel as testers. Passionate testers who want to become great! If they are certified, that is OK; perhaps they got some ideas from it, and they might have had a great teacher who stimulated them into becoming passionate themselves.

ISTQB uses multiple-choice questions on their exams, but these are quite limited. Cem Kaner has written an excellent post about Writing Multiple Choice Test Questions [5], where he makes some strong arguments. If ISTQB were altered along those lines it would be harder to pass, and naturally harder to create, but it would still not solve the main issue: that the content is out of date and totally wrong in many areas, as I see it. Jonathan Rees brings up other strong arguments about multiple-choice questions in his article “Frederick Taylor In The Classroom: Standardized Testing And Scientific Management” [6].


Just because we have to work upstream does not mean we can keep on having a lousy attitude. I have often seen us picture ourselves as victims because of our situation: lack of personnel, lack of time, etc. If we are too few to test, or have too little time, we can only offer to do our best. We can also explain what we could do if we were more, and had more time. Combine this with the fact that we often speak in anger when we talk about quality, and it only fuels the perception that we are a bunch of idiots, angry ones.

When we get deliverables from developers we are sometimes angry because of the bad quality or the lousy state of a certain build. Do we consider why it is like that, what shortcuts they needed to take, or whether someone forced the delivery of a new build? Do we really need to focus our blame on the developers? Consider their ever-increasing technical debt, which they might not be given the priority to pay down.

In most areas of expertise you have lots of education, at various levels of the school system, to back you up. With testing this has only just started; at least it is no longer merely a chapter in a book that you skip. There are lots of books, articles, blogs and other sources of information to gain other people's experience of testing. Why is it ok to think you do not need to learn more about your craft? Why do so many testers with many years in the craft still state that they have not studied anything to get better at testing? That attitude damages the perception of testers by keeping you ignorant of what you claim to be expert at. With the increasing use of agile teams, where a tester has a natural part, you are expected to know at least something about your craft.

What do we do to affect that perception?

If we continuously provide valuable information to our stakeholders, the perception will change. This means that you need to know what they find valuable and what could threaten that value. You also need to consider how you communicate: in what form, whether or not you use metrics, how much subjectivity or objectivity is appropriate, and how you act when communicating. Less drama queen and more professionalism.

We are working upstream here, so anything bad that you do will have a great impact on the perception of testers. Wherever you go, you bring your attitude and ambition. When interacting with non-testers, consider what you are saying and how it might appear to them. Consider whether you are in the right crowd to voice your disapproval, whether you need to go somewhere else, or whether you should simply go to your manager.

We need to communicate to managers that it is demeaning and de-motivating to be seen as idiots, or as just anybody. We need to show that skilled, passionate and motivated testers give far better results. What else can you do to motivate yourselves to gain those attributes? For those who have been demoted or are de-motivated, show them how creative and exciting the testing profession can be. Bring in external passionate testers to give them new ideas. If none of this works, perhaps they need to find what they really want to do and go there.

Before accepting new testers to the team, we need to make sure they are right for the job. Do not accept demoted personnel without explaining the consequences. When you as test lead discuss having extra personnel join your team, clarify that you want to test them before accepting them into the group, and that some in the team need to be able to veto the acceptance.

We need to tell developers that we understand they must take shortcuts, thereby increasing the technical debt, but that we can help [7]. Work closer with the developers. Stop building walls between you. The more the developers trust and respect you, the more information you will have before you commence your work as a tester, which leads to a better job done. Remember, a good bug is a fixed bug.

Consider how the test organisation is built, how it markets itself and what you communicate to management. See Scott Barber's excellent blog post “What being a Context-Driven Tester means to me” [8], which can serve as a starting point for you and your test organisation. Also consider where you are going with testing [9], to understand where you come from, what your next goal is and perhaps what is pushing you in a certain direction. Are you going in the right direction?


I think the perception of testers is our greatest nemesis, and we have to fight it every day. Certification in testing does not help us, as I see it, but it is not our main concern, just one of the bullies. There are many things that give us a bad reputation. Start changing your own ways, and influence those around you to become great, passionate testers who deliver valuable information effectively.


[1] Certification is evil?

[2] A bit of history about ISTQB certification

[3] Certification does not assess tester skill

[4] Search for ISTQB at James Bach's blog

[5] Writing Multiple Choice Test Questions

[6] Frederick Taylor In The Classroom: Standardized Testing And Scientific Management

[7] Developers, let the testers assist with the technical debt

[8] What being a Context-Driven Tester means to me

[9] Where are you going with testing

