The Notepad and Visualize Heuristics Rikard Edgren
I was at Nordic Testing Days and had a great time meeting new and old friends. During my presentation about serendipity I showed two heuristics I wanted to share here as well. They both concern observing from different angles, to learn something new and increase the chances of serendipity.
The Notepad Heuristic
Years ago I worked with translations, and once we translated software that wasn’t fully internationalized. Some parts needed to be changed in binary files, meaning we had to take great care that the string lengths stayed exactly the same.
Maybe this is why I now and then open all kinds of files in a text editor. At the start of image files you can see which format the file actually is, you can search for password strings, and above all, you might learn something new that will be useful.
Works wonders for text files as well, and it’s as fast as you can expect from a quicktest:
The Notepad Heuristic – open any file in a text editor, to see and learn more.
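The heuristic can even be sketched in code. Here is a minimal Python sketch of the idea: peek at the first bytes of any file to see what it really is. The signatures below are well-known file "magic numbers"; the helper name and dictionary are my own invention, not from the original post.

```python
# Known "magic number" prefixes for a few common formats.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"GIF87a": "GIF image",
    b"GIF89a": "GIF image",
    b"%PDF": "PDF document",
    b"PK\x03\x04": "ZIP archive (also docx/xlsx/jar)",
}

def sniff(path):
    """Guess a file's real format from its first bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    for signature, name in MAGIC.items():
        if head.startswith(signature):
            return name
    return "unknown - open it in a text editor and look!"
```

A renamed file keeps its magic bytes, so a ".jpg" that is really a PNG is exactly the kind of mismatch this heuristic reveals.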
I worked several years for a company producing software for interactive data analysis. We used our software internally, e.g. by looking at the bug system or log files visually. This is not only fun; it can also show you things and suggest areas to learn even more about.
Not that long ago, we tested an application that calculated driving times and risk coverage for fires in Sweden. It was a huge database of numbers, so we had a look at the data visually: a map of Sweden with colored dots. We could see that there were holes in the data for a couple of municipalities, which was something we knew we were looking for. But when filtering for bigger fire risks, we found something we didn’t know we were looking for: a square pattern that couldn’t possibly be correct (the underlying data had to be rebuilt).
This is a good example of serendipity: you look for one thing, but find something else that is valuable.
Visualize Heuristic – look at data visually, to see patterns, trends and outliers.
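You don't even need a fancy analysis tool to apply this. A hedged sketch: dump numeric data onto a coarse character grid so holes and suspicious patterns jump out. The function and the sample data here are invented for illustration, not from the Sweden project.

```python
def ascii_plot(points, width=20, height=10):
    """Render (x, y) points as a character grid; '.' marks empty cells."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    min_x, max_x = min(xs), max(xs)
    min_y, max_y = min(ys), max(ys)
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x, y in points:
        # Scale each coordinate into the grid; 'or 1' guards constant data.
        col = int((x - min_x) / ((max_x - min_x) or 1) * (width - 1))
        row = int((y - min_y) / ((max_y - min_y) or 1) * (height - 1))
        grid[row][col] = "#"
    return "\n".join("".join(row) for row in grid)

# Even this crude plot can reveal a suspiciously regular pattern
# in data that looks fine as a table of numbers.
print(ascii_plot([(i, i % 7) for i in range(50)]))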
Charisma Testing Rikard Edgren
Why do you prefer a product even if it has equal functionality to a competitor?
What is it that makes one product stand out among the others?
Maybe you have had thoughts about what makes a product feel special?
We all know that this happens in some way, but how do you test for the quality characteristic we like to call Charisma?
Charisma. Does the product have “it”?
Uniqueness: the product is distinguishable and has something no one else has.
Satisfaction: how do you feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all aspects of the product appealing to the eyes and other senses?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, get into a flow, and become fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas/trends?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?
It should be noted that charisma is different for every product; only some of the above characteristics will be relevant for you. And most of them will need more details to provide value in your situation.
For specific solutions for specific customers, it might not be worth spending time on charisma testing at all.
Whether a product has charisma or not is primarily a subjective judgment. This is most probably why it hasn’t been a focus within software testing. But end users are subjective, so why shouldn’t testers be?
So forgive us if you disagree, but we think the iPhone has Charisma, primarily the sub-characteristics Uniqueness, Attractiveness, Satisfaction, Entrancement and Hype. It doesn’t have any unique feature, but when it was introduced on the market, it had a unique combination of features. The way you slide your finger to unlock the phone has so much attractiveness that it has been mimicked by competitors, perhaps without success since their touch sensitivity isn’t as good.
Satisfaction is common with an iPhone, until it breaks (but this article isn’t about Reliability…) The entrancement is seen in people who pick up the phone to do something with it even when the available time can be counted in seconds. The hype was created by the very same product. The other sub-characteristics are not irrelevant, but not as dominant, in our opinion.
So how can you test for these things when developing a product?
If you are aware of charisma while performing manual testing, you can watch out for violations of the characteristics. There are many heuristics for this; here are some examples:
Applications shouldn’t distract me with information I’m not interested in (Entrancement)
After a demo, I should remember some of the features (Directness)
User Interface feels right for this kind of task and user (Professionalism)
I shall not be upset with a product after using it (Satisfaction)
I don’t react negatively to any language (Attitude)
There should be something to write home about (Hype)
You can also try to notice overall charisma violations, when the product is bland.
Unorthodox Test Methods
If Charisma is important in your project, you might need to do some unorthodox test activities.
deep interviews – ask users why they are hooked, what they love (also talk to those that didn’t start using the product!)
diverse focus groups – investigate reactions and non-reactions.
observations by proxies – manage bias by letting others observe and interpret the product.
uncontrolled users/environments – let people try the software in whatever way they want; analyze results.
trying many versions – look at and feel dozens of alternative designs.
competitor comparison – helps you find market-specific charisma drivers.
You can find inspiration by searching for “software desirability”, but for software testers there is not much written; a few remarks at http://thetesteye.com/blog/tag/charisma/ and a welcome addition in James Bach’s Heuristic Test Strategy Model.
Subjective bug reports
There are (as far as we know) no appropriate measurements or ways to specify charisma in detail. So when reporting areas with room for improvement, you will have to make a case, or just hope that other people agree. If several people think the same, you can claim inter-subjectivity, and have a better chance.
There won’t be any Charisma standards to adhere to, but subjective comments like “if we change this, it feels better” can make the product more pleasant to use, and thereby better.
In our experience, a conversation with the designer is the best way to communicate good and bad things, and by using our list of characteristics, it might be easier to communicate compelling reasons to fix problems.
Who should test for Charisma?
Many don’t think charisma is something testers should bother with. This is understandable if you see testing as a technical verification that should result in an objective Pass or Fail. But many see testing as an investigation for important quality-related information, so how come charisma isn’t considered more often?
Maybe it needs the same journey as usability; 20 years ago it wasn’t a major concern for testers, today many testers provide a lot of value in this area.
It is not evident that testers are the primary interpreters of Charisma. But at the same time, manual testers might be the ones who have the best experience and knowledge of the product as a whole, which might be key for some Charisma aspects. For some sub-characteristics, you might need to use people without product knowledge, especially when first impressions or surprise factors are important.
At the very least, you want the daily testers, who are among the few who know the diverse details, to be aware of your product’s unique charisma.
Some might say that it would be superficial to test for things like this; it is what the product can accomplish that provides true value. We agree with this in theory, but in practice, this is the way the world is right now…
Most important is to be aware of this product attribute, to know how important it is for the success of products.
We are confident that most of you agree that many of these characteristics are important aspects for the users’ perception of your product.
Then how come you don’t test charisma??
[co-authored with Henrik Emilsson]
Acting On Answers Rikard Edgren
Asking questions is rightly regarded as important for testers. But I seldom hear anything about what we should do with the answers. Not that I believe anyone would ask a question and then not listen to the result, but I think we take for granted that we will understand the answers, and that we can use them straight away for our testing purposes. In my experience, this is often not the case. Of course it can happen that you ask “What about performance?” and get a “Sure, we have these performance objectives, didn’t you get them?”
But more often you have to do quite a bit of interpretation:
One stakeholder said “fast in, fast out”, which when tied together with other understanding gave three sub-strategies:
- User testing to see whether typical users will find the information they are looking for.
- Heuristic usability evaluation with focus on operability (default focus, few clicks, fast reading.)
- Evaluate perceived performance when system has normal high load.
And sometimes guessing:
A developer said “In last release many users helped with testing, so I can’t think of anything specifically that could be tested.”
It seems like the team believes they have a perfect program that doesn’t need testing. And they assume the testing is good enough if you involve users. Their underlying test strategy probably focuses on platforms, new/common functionality and charisma (that users like the new version.) The product’s slogan includes “easy to use”, which can mean many things:
- easy for first timers
- fast for power users
- accessibility for functional disabilities
- good Help
So we could focus our testing on usability for the shy or non-technical users. We could look at resource utilization and complex situations; and that’s a good start at least. We’ll figure out more when we get to know the software.
So how do we learn to interpret and act on answers? Most people already know it, it is part of being human, but I would assume that your skill in this increases with experience. So the best way of teaching this that I know is to tell stories, and to put learners in situations where they have to do it themselves.
As my grandmother says: “If you don’t ask you won’t get any answers.”
And the answer is only the beginning.
Using Quality Characteristics Rikard Edgren
More than 3 years have passed since we published the first version of our Software Quality Characteristics. It is quite popular, and it is now translated to 8 languages by testing enthusiasts. But it’s about time to talk a bit more about how to use the list, where there are at least three typical scenarios:
Test Idea Triggers
This is the most common usage: read relevant parts of the list, and use it as inspiration for things to test in your product or feature. Suitable when you have run out of ideas and need new inspiration. Don’t try to test all of them, because all of them won’t be important. But don’t discard any top category too easily either; the right testability suggestion might be your biggest time-saver.
It can be difficult to transform a generic description to actual test execution, so have a look at Lightweight Characteristics Testing for fast ways to get going. If you use the list many times, it is the right time to create your own customized list, with the appropriate characteristics for your situation; your own specific non-functional checklist.
(This is actually the origin of our poster; the print-out of Bach’s Quality Criteria became too cluttered with things we often wanted to test.)
Quality objectives/Non-functional requirements
The list can also be useful to understand what quality means for your software. This should probably involve other people than testers, so you get a good understanding from many perspectives. But the advice is to start without the list; define what quality means to you, and use your own words, because those will better describe what you want to accomplish. When you run out of ideas, then pick up the list of quality characteristics to see if you missed something relevant, or if you get ideas that can make your first ideas even better.
An important part of making the quality statements really useful is to make them specific to the situation. “Easy to use” is very generic, so better examples are:
- First-time users should have no problems creating their first mind map
- Power users should very quickly be able to create complex mind maps (keyboard + mouse navigation)
- Product should adhere to 508 accessibility guidelines
As a tester, I don’t need these to be objectively measurable; they can still guide my testing effort, and help me focus and observe more broadly. Many other people want these to be easy to measure, and my guess is that’s why people don’t take the time to communicate what we actually want to achieve.
Review of Test Strategy or Results
If you are in a situation where you should review a test strategy, or test results, the quality characteristics can be very handy to spot holes in the testing. Use the same thought process as above: “What does this mean to us, is it important?”
If you find that performance aspects are missing, ask if they just forgot to mention it, or if they should revise the strategy. Or even better, do this test on your own work, so you can improve immediately.
Lateral Tester Exercise V: FizzBuzz Rikard Edgren
I am always on the lookout for new testing exercises, since I teach a lot. Today I found a programming exercise at http://imranontech.com/2007/01/24/using-fizzbuzz-to-find-developers-who-grok-coding/ that I thought could be turned into something for testers. Since it doesn’t fit any of my current teaching schemes, I am blogging about it instead of putting it in the closet.
This program is an exercise for software testers. As input it takes an integer between 1 and 1000, and repeats it as output. But if the number is a multiple of three, it should print “Fizz” instead of the number, and for multiples of five, “Buzz”. For numbers which are multiples of both three and five it should give “FizzBuzz” as output.
Test as much as you want, and answer these questions:
1. What would be a good test strategy?
2. What is your test coverage?
3. What are the results from your testing?
4. If you would use this exercise as a teacher, what would you talk about in the debrief?
Ruby file: FizzBuzz.rb
Executable (Windows): FizzBuzz.exe
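For readers who can't run the files, here is a minimal Python sketch of the program as specified above (not the actual FizzBuzz.rb/FizzBuzz.exe from the post, which may of course behave differently; finding out is the exercise):

```python
def fizzbuzz(n):
    """Return the FizzBuzz output for an integer between 1 and 1000."""
    if not 1 <= n <= 1000:
        raise ValueError("input must be an integer between 1 and 1000")
    if n % 15 == 0:          # multiple of both three and five
        return "FizzBuzz"
    if n % 3 == 0:
        return "Fizz"
    if n % 5 == 0:
        return "Buzz"
    return str(n)
```

Even a spec this small hides test-worthy questions: what happens at 0, 1001, non-integers, or 15, the first multiple of both?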
Winning or losing in gamified test situations Martin Jansson
Games are not always about winning or losing. Each game can have different objectives. In my early youth I started with role-playing games and later on storytelling games. At the time, people who knew little about it sometimes asked us, “Who is winning?”. Being 7 years old, I didn’t really know how to explain, in a good way, to this grown-up that winning or losing is not always the objective. Instead, when we were role-playing we worked as a group to move the story along, to gain experience, to improve the character with skills and items and, probably above all, to have fun.
35 years later, I still hear the same question popping up in role-playing situations, but also in similar situations such as gamification. When we talk about gamification of testing, I find it really demotivating if we were to compete against each other within the team or organization on a regular basis. I have already seen and experienced instances in my past when we had leader boards counting bugs in various fashions. We had, as I see it, really destructive discussions about whether those who reported the fewest bugs really provided value to the group.
A recent article by Shrini Kulkarni approaches gamification of testing with the mindset of competition. I will look at a few of his arguments.
“This definition is provisional one – I might not be considering types of games that do not falling in this category. I wonder if there is any game where there is notion of victory or defeat.”
Yes, there are many different types of games. Believing that there are only games about winning and losing is too narrow. Some games are for introducing people to each other, others for passing time; others, such as role-playing and storytelling games, can be about solving puzzles or mysteries as a group, where you act as a different persona than your own. The list is infinite.
“How about goals or objectives of a player or team playing games in the first place? Winning of course!”
Richard Bartle investigated the objectives behind different play styles in Massively Multiplayer Online games (MMOs) and Multi-User Dungeons (MUDs) and discovered that only a fraction of the players had the objective of winning. Instead, there were other aspects, such as socializing, that were more interesting and in focus.
“How many times you heard the statement “it is not important to win or lose, participation and competing with ones best ability is important”. So if you lose do not feel bad – there is always another chance.”
In a testing context, working against others in competition would, as I see it, harm the organisation and the teams. As I stated in my previous article, when considering gamification you need to consider the regular traps of testing in order to work on areas that provide value. But you also need to consider how to create a good working environment; a competitive environment might not be the best solution. I do not see it as fruitful to compete on many of our test activities, such as information gathering. How do you weigh one type of information over another? Would you, in order to win, choose not to share valuable information with others in the team?
“A good test strategy in testing is same as winning strategy in games. But then – what is the meaning of winning the game of testing? Against who?”
We really do not have an opponent in our test strategy, as I see it. Still, we can use many aspects of game and war strategies in our reasoning. I often look for inspiration in the writings of Sun Tzu, Carl von Clausewitz and other strategists. Still, when considering strategy for testing I see it rather as meeting different objectives or goals for retrieving information, instead of winning.
Jonathan Kohl has taken elements from MMO-style gaming and considered quests and adventuring. I believe this is an excellent area to look at. In most role-playing situations, the group cooperates, working towards a set of goals or objectives, just like you would as a tester in a team.
So, for those of you who are starting to dig into the world of gamification in testing: do look beyond the usual winning/losing concept. There are more aspects at play here. Instead, see gamification itself as a complex system, which you in turn apply to other systems in order to enhance cooperation, motivation, feedback and learning, among many other things.
ISO 29119 – a benevolent start Rikard Edgren
When I test software I try to start friendly: to see the good things about the system, and not focus too much on problems that might not be important for the whole. So I did this for the new ISO 29119 testing standard, having read the three parts that were published in September 2013.
Part 1 – Concepts and Definitions
The terminology definitions are much better than the ISTQB glossary. Fewer words, and not as rigid.
Also good and humble to omit multi-faceted keywords like quality, stakeholder, usability.
Happy to see my favorite testing heuristic included: Diverse Half-Measures (Lessons Learned in Software Testing, #283) – use many testing strategies in order to get fewer holes in test coverage.
Quality Characteristics look pretty good, and even include some Charisma in the form of Satisfaction (part of Usability.)
Part 2 – Test processes
I love that ISO 29119 puts a lot more focus on test strategy. It comes before the test plan, which means that key strategic decisions will be communicated early. This not only makes sure focus is on important things, it also makes future reporting easier, since testing objectives are anchored.
I hope this puts even more energy to the current wave of test strategy awakening.
A key benefit of using the standard is that clients will know which documents to review, and where to look carefully. Especially useful since adherence to the standard can be tailored; it supports both Full and Partial Compliance.
Nice to see that the results from test execution aren’t totally binary: Pass, Fail, and unexpected.
Part 3 – Test documentation
The documentation standards also have examples from both traditional and agile projects (the standard duly notes that the example content shouldn’t be used as a template to follow.)
An example for dealing with agile projects: it is perfectly fine to not report status in written format.
I think the test plan will get most followers, and it is indeed a hotter version than IEEE 829:
How about ensuring that testers think about their stakeholders and how to communicate with them!
Also a plus for suggesting visual representation in the usually text-savvy test plans.
A much awaited separation of product and project risks, since we deal with them differently.
Some of these important things might be forgotten otherwise…
So is it useful?
Well, too early to tell, this was just the benevolent start…
Project managers have a great impact on our daily work and can often affect how we work, and thus what is meaningful or not.
Several years ago I was involved in an organization which was split across several sites in different countries. Project members were not co-located; instead they were split up across different sites. Communication was often a bit strained, and confusion because of cultural differences was not uncommon. All test teams reported to project management, who operated from one site and managed testers both at their own site and at other sites.
The project managers wanted some way to know that testing was making good progress. The traditional test process used test cases as a way to report progress. Traditionalists in testing had recommended that the project managers ask the test teams how many test cases had been executed each day and what the pass/fail ratio was. I believe this is a common, yet dangerous, recommendation.
One of these test teams did not execute as many test cases as the project management wanted. It seemed to them that the test team did not work as much as they should. So, the project management put some pressure on the test team to run more test cases.
The same test team had gotten a new manager of testing. At this time, the new manager contacted me, worried about how his team of testers were conducting their testing. He explained that, in order to show progress to the project management, the test team reran the same test cases several times on the same version of the same system. When the test team reported to project management, they in turn were happy with the new, promising result: a high count of test cases and a good pass/fail ratio. The project management was not aware that the test team was rerunning the same test cases several times. They only looked at the presented metrics and ratios, not the details behind them.
There were many problems with this situation. One of them was “you get what you ask for“. Asking for a high count of test cases and promoting it will result in a high count of test cases, by whatever means possible. Another problem in this case is that they talked about progress of testing in terms of test cases executed. The daily activities of a test team do not consist only of running test cases; there is so much more being done. Yet another problem is the idea that rerunning the same test cases on the same version of a system would yield a different result.
How do you turn this situation around? The manager of testing who contacted me tried to coach the testers in doing things differently, going beyond test cases and the execution of them. I discussed the current situation with project management, shedding light on what results they got based on the questions they were asking.
A test team has a lot of different activities. Testing or executing test cases is a mere fraction of what we do as testers. Still, our main activity should be to test, in most contexts. Talking about progress by just listing what testing we do does not give the full picture. Instead, ask about progress on test activities, whether anything is blocking us, or perhaps whether we need help with anything.
Health of the system
A separate question should be about the system, sub-system or whatever it is that the test team is testing. What is the current health of the system? What issues or bugs does the test team see? What major risks have appeared that we did not know about before? What areas are now known, but about which we still lack information?
Information about the system is continuously changing. As new versions of the system are produced, earlier information degrades or might no longer be valid.
If progress is asked about in terms of test coverage, remember that coverage is linked to models of the system. As testers we employ many, many models, and each has its own coverage. We can show progress in the sense that we can show what we know or think we know. We can also show areas that we want to know more about, but that we currently know too little about to show anything coverage-related. We have questions about the system, the project or a situation that could still be unanswered; we have no idea what they lead to or what is hiding behind them. Talking about coverage in terms of percent becomes a bit absurd in that sense.
After a long discussion with the project management, it was time to see what changes would happen. The main project manager immediately started to use a different language in his questioning, focusing instead on progress, the health of the system and coverage. As a bonus, he wanted the testers to explore the system beyond what was currently known.
The result was astounding: a visible change in motivation, test results and the amount of information produced from testing. As I see it, the new way of working followed methods proposed by many testers with knowledge of exploratory testing, thus mostly non-traditionalists.
Using gamification to explain and model testing Martin Jansson
In early 2013 I held a 7-week course on setting up a testing organisation that works well in an agile context. My intent was to explain my own approach and model of how testing is conducted; I wanted the students to see that they needed to create their own. For each part of the course there were exercises for me to evaluate the students’ knowledge and skill; they also needed to explain testing to me, in their own words and therefore using their own models.
There was one exercise that I wanted the students to do, but in the end I did not let them, which I regret. I wanted them to gamify testing. My idea was that by gamifying it they would need to explain all the intricate details of testing. They would need to identify activities that were considered valuable and motivating, while at the same time identifying activities that would be wasteful or meaningless. The students would need to model what testing is for themselves, and explain that using gamification.
Jonathan Kohl has written many great articles on the gamification of testing and has in a specific piece elaborated on the concept that Software Testing is a Game. He identifies several aspects that need to be considered when applying gamification to testing. Part of my reasoning is inspired by my brother Ola Janson, who has been in the gamification domain for a long time.
This is roughly how I would suggest to go forward with this as an exercise.
Initially, you would need to identify what tasks and activities we do in the testing domain. This part is a great opportunity to visualize and clarify what you believe you do when testing, how its parts are connected and what you find valuable. You will, or at least should, identify things that you probably do not find meaningful to do. An important part is also to categorize and group the tasks and activities in order to enable interesting modelling. Examples of such models are the Heuristic Test Strategy Model and the one Jonathan Kohl presents in Demystifying Exploratory Testing.
The next step is to create a framework with basic rules, challenges and rewards that would lead to meaningful, motivating choices.
Finding challenges is easy, but finding resolutions for them is hard. It is difficult to look at the full scope of testing; instead it might be better to start with one part. At a recent meetup focused on gamification in testing, we discussed which part to select and agreed that regression testing could be a valid place to start. Everyone had ideas on how to make it more motivating. In many cases the discussion was about how the participants did not want to do regression testing in a meaningless way. The group identified several things that could be done to avoid doing tasks meaninglessly, and we considered how to add gamification mechanics to make them more motivating. One reflection we had was that a lot of what we thought we could gamify had to do with feedback to or from others, information sharing, and basically improving communication. My own reflection from the meetup was that it was possible to use gamification to communicate the value of testing with people who might have different ideas of what testing is.
When working on rewards, they would need to consider the dangers of metrics, automation snake oil, automation politics, and biases and fallacies, among many other things. By taking all these traps into consideration when applying gamification, I believe they would better explain their own model of testing. Things such as bug top lists (ladders), counting tests or something similar would, for instance, be demotivators instead.
My thesis is therefore that anyone who gamifies testing needs great knowledge and skill in order to create a plausible model of testing. I have my own approach and model of testing that I am now gamifying in order to hone and sharpen my ideas even further.
How would you gamify testing?
Are you able to visualize your own model of testing?
Kahneman and Test Strategy Bias Rikard Edgren
Our minds are fantastic, but they can be biased and make dubious decisions. As I read Kahneman’s Thinking, Fast and Slow, I thought about test strategy and found some interesting stuff, based on mistakes I have seen, and made.
Answering An Easier Question
“How could we possibly test everything important?” is a very difficult question. It is natural, but not necessarily good, to exchange it for a question that is easier to answer:
- How should we test the new functionality?
- How should we do the same testing as last time, but faster?
- What can we automate?
- How long does the test plan need to be?
When we answer the simpler question, we subconsciously think we have answered the difficult question…
This can also happen when we are asked to estimate how long something will take to test, before we have considered which strategies that are appropriate. Later on, we will probably choose the strategy that will give the result from the estimation. Another version is to split up the problem in smaller pieces, e.g. test phases, where responsibility is distributed, resulting in loss of the whole picture, and important areas uncovered.
Cure: Seek the difficult questions. For each part of your test strategy, consider if the answer (or the question!) is too narrow. Especially look for opportunities where a specific strategy can serve additional testing missions.
Example: When performing manual testing, change platforms and configurations as you go, so you won’t need a special compatibility phase. When appropriate, do spot tests for additional platforms.
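One way to rotate platforms without a dedicated compatibility phase is to sample a configuration per test charter. A minimal sketch in Python, where all platform, browser, and charter names are illustrative assumptions, not from the original post:

```python
import random

# Hypothetical platform/browser matrix (illustrative names only).
platforms = ["Windows 11", "macOS 14", "Ubuntu 24.04"]
browsers = ["Chrome", "Firefox", "Edge"]

def spot_test_schedule(charters, seed=42):
    """Assign each test charter a random (platform, browser) pair,
    so compatibility coverage accumulates during normal testing."""
    rng = random.Random(seed)  # seeded, so the schedule is reproducible
    return [(charter, rng.choice(platforms), rng.choice(browsers))
            for charter in charters]

schedule = spot_test_schedule(["login", "checkout", "search"])
for charter, platform, browser in schedule:
    print(f"{charter}: {platform} / {browser}")
```

Over several sessions the pairs spread out across the matrix, giving the spot tests the post describes without a separate phase.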
What You See Is All There Is (WYSIATI)
It happens that a strategy only considers the testers’ (and developers’) testing, when other strategies might be more powerful. The opposite also happens: that customers are doing acceptance testing doesn’t automatically mean that no one else should consider whether the product is “good”. A common mistake is to only “see” the requirements document, and believe those are the only things that are important to test.
Cure: Discuss what else to test, not only what is inside the project plan.
The Halo Effect
If something is good or bad, we tend to transfer those attributes to other, unknown, areas. As an example, a long and handsome test plan can make us believe that the strategy is reasonable, and a sloppy bug report makes us doubt the content. A negative effect occurs when the behavior of the product in one area affects our expectations of other things. When something looks good, we might base our strategy on the belief that the rest also is good; if we see a crash at once, we might think that further testing is pointless.
Cure: Don’t build important decisions on single observations; look more, and don’t have prejudices about good or bad product quality (let the observations judge.)
Counter-example: Distribute your strategy in plain notepad format, so the content is in focus.
Illusion of validity
You run the same strategy as last time, because you found important bugs. That equally many were missed, and that it was very time consuming, is disregarded. One example can seem to make a strategy valid: 1 important issue after investigating 100 backward compatibility files makes it feel worth the effort (which might be true, but other approaches might be better.)
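The trade-off can be made explicit with a back-of-the-envelope cost-per-issue comparison. A sketch with made-up numbers: the 100 files and 1 issue come from the example above, but the hours and the alternative strategy are pure assumptions for illustration:

```python
# Back-of-the-envelope comparison: testing effort per important issue.
# All numbers are illustrative assumptions, not measurements.

def cost_per_issue(hours_spent, issues_found):
    """Hours of testing effort per important issue found."""
    if issues_found == 0:
        return float("inf")  # strategy found nothing: cost is unbounded
    return hours_spent / issues_found

# Investigating 100 backward-compatibility files: say 50 hours, 1 issue.
backcompat = cost_per_issue(hours_spent=50, issues_found=1)

# A hypothetical alternative, sampling 10 risky files: say 8 hours, 1 issue.
sampling = cost_per_issue(hours_spent=8, issues_found=1)

print(backcompat)  # 50.0 hours per issue
print(sampling)    # 8.0 hours per issue
```

The point is not the numbers themselves, but that putting any numbers down forces the "was it worth it?" question to be asked explicitly instead of answered by one memorable bug.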
Cure: Even experts can’t predict the future, so make sure to have diversity in your test strategy.
Example: Since we have run free exploratory testing for our final regression tests for the last releases, we will this time do it function by function.
Planning Fallacy
Our plans tend to be optimistic; we think all days will be good, downhill with the sun and wind supporting us. This also happens to our testing strategies: they aim higher than we can reach, especially since the unexpected always happens. That’s why it is important to communicate which parts of the strategy will receive the most time, and which parts will be tested lightly (it is seldom wise to skip relevant parts entirely.) Your strategy should be realistic, and account for excessive optimism and unknown unknowns.
Cure: Make sure you communicate your priorities, so the less important parts can be skipped, or tested shallowly.
The Focusing Illusion
“No separate part of the testing strategy is as important as you imagine when you think about it.” I still get trapped by this one, especially when I try to convince someone about a certain way to test. The truth is that test coverage of the most important stuff overlaps a lot: a severe installation problem is caught by almost any test method, an alert manual tester can easily spot usability, performance and security problems, and the specialists in each area can also look broadly.
Cure: De-focus. Look at the whole, and discuss with people not obsessed with your current area.
Regret and responsibility
An important explanation for dubious test strategies could be fear of regret and blame. If you run the same strategy as last time, there is less risk of complaints than if you actively changed the way you tested. If you can reference a best practice, people might think that reasonable choices were made. But as with other biases, one should rather choose what one thinks is best than just cover one’s back.
Cure: Diversity in the test strategy, and courage to change when you see the results.
Example: An automation strategy hasn’t produced much in one year. Add more resources or start all over?
Bias is a natural people-thing that often helps us. It can’t be avoided, but it can be managed (by being aware of it.)
In his book, Kahneman often mentions luck, which is important for testing as well. Good testers are often lucky, and a good test strategy creates opportunities for good fortune. In a sampling business, serendipity is nothing to be ashamed of!