Lightweight Compatibility Testing – Rikard Edgren

In testing textbooks you can read that compatibility testing should be performed after functionality testing and bug fixing are completed. I guess the reason is that you don’t want to mix these categories, but hey, what a waste of resources.
My suggestion is to perform the compatibility testing at the same time as you are doing your other testing; when problems arise, trust that you will deal with them.
In my classification, compatibility testing involves hardware, operating system, application, configuration, backward/forward compatibility, sustainability and standards conformance.
Here follow some lightweight methods to tackle these areas.

Basic Configuration Matrix

A Basic Configuration Matrix is a short list of platform configurations that will spot most of the platform bugs that could exist in your currently supported configuration matrix.
The simplest example is to use one configuration with the oldest supported operating system, oldest browser etc., and one configuration with the newest of all related software. A more advanced example could use several configurations with different languages, application servers and authentication methods.
Often it will take quite some time to run most tests on the BCM, so alternate between the configurations while testing your product. Do variations on configurations when appropriate.
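As a sketch, a BCM and the alternation between its configurations could be captured in a few lines of Python; the configurations and charter names below are purely illustrative placeholders, not real product data:

```python
from itertools import cycle

# Illustrative Basic Configuration Matrix: one all-oldest and one all-newest
# supported configuration, plus a variation. All names are made up.
BCM = [
    {"os": "oldest supported OS", "browser": "oldest browser", "language": "en-US"},
    {"os": "newest OS", "browser": "newest browser", "language": "de-DE"},
    {"os": "mid-range OS", "browser": "another browser", "language": "ja-JP"},
]

def assign_configurations(charters, matrix):
    """Alternate between the BCM configurations while running test charters,
    so platform coverage comes almost for free."""
    rotation = cycle(matrix)
    return [(charter, next(rotation)) for charter in charters]

plan = assign_configurations(["install", "import", "report", "export"], BCM)
for charter, config in plan:
    print(f"{charter}: {config['os']} / {config['browser']} / {config['language']}")
```

The point is not the tooling; it is simply to make sure every charter lands on some BCM configuration, so no configuration is forgotten while you test new features.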

Error-prone Machine

Another trick is to set up machines so they have a high chance of stumbling on compatibility issues. You can vary this on your BCM, your personal machine or whatever is suitable. The idea is to get some compatibility testing almost for free.
Examples on Windows include:

* run as a Restricted User
* use a Large DPI setting
* use German Regional Settings
* install support for all of the world’s characters
* use a non-English language in Internet browsers
* move the system and user Temp folders
* activate IE’s ‘display a notification about every script error’
* move the Task Bar to the left side of the screen
* use the Windows Classic Theme & Color Scheme
* use User Account Control
* use an HTTP proxy
* use Data Execution Prevention
* install new versions as they come, e.g. the latest hotfixes, MDAC etc.
* never install software in the default location
* run with 2 screens
* run on a 64-bit system
* use different browsers
* turn off virtual memory swapping
* install the Google/Yahoo toolbar
* run an Energy Save Program
* pull out the network cord every time you leave the computer, and put it back in when you return
* turn on the sound!
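Some of these settings can even be scripted. As one hedged example, covering only the Temp-folder item from the list above (the folder name is invented), a small Python helper could move the Temp folder to a non-default path containing spaces, which tends to shake out hardcoded-path bugs:

```python
import os
import tempfile

def with_moved_temp(base_dir):
    """Point the Temp folder at a non-default location containing spaces,
    the way an error-prone test machine would be configured."""
    moved = os.path.join(base_dir, "moved temp with spaces")
    os.makedirs(moved, exist_ok=True)
    os.environ["TMPDIR"] = moved   # honoured on POSIX
    os.environ["TMP"] = moved      # honoured on Windows
    os.environ["TEMP"] = moved     # honoured on Windows
    tempfile.tempdir = None        # force tempfile to re-read the environment
    return tempfile.gettempdir()

# Anything under test that uses tempfile will now see the moved folder.
new_temp = with_moved_temp(tempfile.mkdtemp())
```

The same idea applies to the other items: a setup script that flips a handful of these settings turns an ordinary machine into an error-prone one at no extra testing cost.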

Technology Knowledge

If you know a lot about the environment the software operates in, you know which things will happen in reality, which settings are usually altered, and how it is commonly operated.
The lightweight method is to use this knowledge and make sure you test the most important things.

Letting Others Do the Testing

Many compatibility problems happen during basic usage, which means you can let others do a big part of the compatibility testing: developers can use different machines and graphics cards, and Beta testing can be done in customers’ production-like environments. If the product is free of charge, you might even get away with addressing problems after your users encounter them (but make sure you have an easy and compelling way for them to do this reporting).
Crowd-testing could be a way, but so far the payment models are, from the testers’ perspective, not ethically defensible to me.

Reference Environments

To quickly investigate whether you are experiencing a compatibility issue, it is handy to have reference environments available. It could be someone else’s machine, a virtual machine, a quickly cloned image, your own machine, etc.
Personally I prefer having a physical machine running similar things, but on a different OS, in a different language and/or an earlier version. In recent years I have had three machines and three monitors, and by switching between them I get a lot of compatibility testing done while testing new features. When I check things on an older version, I can save documents and use them for the next tests.

Backward/Forward Compatibility

Backward compatibility is easiest to test if you can use real customers’ most complicated files/data/documents. Use these as you test any functionality (Background Complexity Heuristic).
Occasionally communicate between different versions.
Forward compatibility should be designed into the architecture; as a tester you can point this out.
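As a hedged illustration of a backward-compatibility check (the format and field names below are invented for the example): a version-2 reader that opens version-1 documents by defaulting the fields that did not exist yet, plus a round-trip through the new writer:

```python
import json

def read_v2(raw):
    """Version 2 reader: accepts v1 documents by defaulting fields added in v2."""
    doc = json.loads(raw)
    doc.setdefault("schema", 1)  # v1 files carry no schema marker
    doc.setdefault("tags", [])   # field introduced in v2
    return doc

def write_v2(doc):
    """Version 2 writer: always stamps the current schema version."""
    return json.dumps({**doc, "schema": 2})

# An "old customer document" as written by v1 (no 'tags', no 'schema').
legacy = json.dumps({"title": "complicated customer file", "body": "..."})

upgraded = read_v2(legacy)               # backward compatibility: v2 opens v1 data
roundtrip = read_v2(write_v2(upgraded))  # nothing is lost when saving and re-opening
```

Feeding such a reader your customers’ most complicated real documents, rather than synthetic ones like this, is what makes the check meaningful.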


Sustainability

Have conversations around the question: Is the product compatible with the environment? Have we considered energy efficiency, switch-offs, power-saving modes, support for working from home, and the like?


Standards Conformance

A lightweight method for standards conformance is to identify which standards are applicable, and ask the experts whether they understand them and have successfully managed to adapt them to the new context.
Let’s finish with a non-lightweight method: you can become the standards expert.

No Flourishes and New Connections Heuristics – Rikard Edgren

I used to be a bit skeptical towards the word “heuristic”. It seemed needlessly advanced, and the things it covers can be explained in other words.
But when I read Gigerenzer’s Gut Feelings about how to catch a flying ball, it all came together.
Software testing can never be complete, depends on many factors, and concerns a product that will be used with different needs and feelings; for this, techniques alone are not appropriate. It is about skill, and human skill is very well described by a variety of heuristics.

When blogging about some heuristics I think are unpublished and worth knowing about, I’ll try to do two at a time; more bang for the buck.
With English as a second language it is difficult to give them good names, so feel free to suggest better names (and content!)
And remember, heuristics are not rules; they are more like rules of thumb that might be useful in specific situations.

No Flourishes Heuristic

Many times you can design and execute straightforward tests, without garnish, fancy tools, and incomprehensible details.
Try this when you have the chance!
See if perceived performance, manually clocked, can give good information.
Use common options instead of combinatorial.
Look at the GUI instead of automating tests.
Try to do something valuable instead of covering all paths in last year’s use cases.
Write your basic test strategies in plain English, so everyone can review them.
Use what you have, and look for what’s important.

New Connections Heuristic

How do you “discover” what is important?
I think it often is about combining different knowledge in new ways.
So you need to know a lot about the product and its context, and look for connections between the wide diversity of knowledge you have.
When reading, talking, thinking, you sometimes instantly think “but this could have big impact if one is trying to do that!”
Or during test execution, when you suddenly get an impulse that you have to add some other things to the stew.
This might seem unstructured, and dependent on chance, but that is OK. If software testing is a sampling problem, we need different ways of discovering what is important.

Working with the testing debt – part 2 – Martin Jansson

This is a follow-up to Working with the testing debt – part 1 [1]. The reason for the clarification is that it is so easy to come up with a tip without a context or example.

Tip 2: Focus on what adds value to developers, business analysts and other stakeholders. If you do not know what they find valuable, perhaps it is time that you found out! Read Jonathan Kohl’s articles [2] on what he thinks adds value for testing.

In one project I worked with a group of experienced developers. I had not worked closely with them before, but they had received some of my bug reports and knew me by reputation (whatever that was) in the company. When I got their first delivery to test, I started right away. Immediately I got the feedback that I was not testing the right stuff, and there was a bit of a chill in their demeanor towards me. I investigated what had happened and found out that they were not really interested in the minor bugs at that moment, and that I should focus on the major issues. I explained to them that I report everything I find; I was not expecting everything I found to be fixed, though. What was fixed was up to those who prioritized the bugs. Before I started testing I asked them what they wanted me to focus on first. After that they were a lot happier, both because they knew I worked on things valuable to them and because they understood that I reported everything I found.

During another project we were two weeks from the release of the product. We were in the middle of a transition from traditional scripted testing to a hybrid of scripted and exploratory testing. Rather, we had test scripts that we used as a guideline when we explored, but we reported Pass/Fail on them. At that time Project Management strictly wanted the number of test cases run as well as the Pass/Fail ratio. Earlier test leads had not communicated well why these figures held no value. When we had run all planned test cases, project management communicated to their managers that we were done. But we were not: we continued working on our planned charters and ran sessions. We interviewed the support organisation, business analysts, product management and experts in the test organisation. Eventually we got a long list of risks and areas that we should investigate. We also got a long list of rumours that we intended to confirm or kill. Basically, we were far from done, and we still had time before the release as we saw it. We had also received areas that people in the organisation found valuable to get information about. Still, we failed because we had not communicated enough to project management about what we were doing. We managed to go through most of the areas and identified lots of new issues, as well as killing many old rumours. We failed to bring value to some, but not all.

How does this affect the testing debt?

If you continue to work on things that have no value to you or any of your stakeholders, you must take a stand and change things. Do not accept the situation as it is. If you and everyone around you think you and your test team are not doing anything of value, it will just add to your testing debt.

As I state above, Jonathan Kohl gives a good set of questions [2] for you to ask yourself to get back on the path. Also consider what Cem Kaner writes about in The Ongoing Revolution in Software Testing [3], because it is still ongoing and it is not over.


[1] Working with the testing debt – part 1 –

[2] How do I create value with my Testing? –

[3] Ongoing Revolution of Software Testing –

The automotive industry is not the role model – Henrik Emilsson

This began as an answer to Rikard’s post where the discussion on “traditional testing” came up.

I often hear comparisons with our “industry” and the Automotive industry.
In that context, you could say that “traditional testing” corresponds to the methods and practices that are applied in line production of large car companies. And the unorthodox testing can be compared with those specialized and often smaller custom car builder companies out there.

The major issue with this kind of comparison is that large car companies make thousands of the exact same type of car, while every software project builds a custom car. This means that the “traditional testing” approach is an attempt to apply line-production methods when building custom cars. Applying “traditional testing” as if every project and product were the same is both wrong and dangerous.

And this comparison is not that far-fetched… It seems to me like “traditional testing” is promoted as a set of practices that suit many (if not all?) projects and should be followed in order to enable success. Well, good luck!

Further, many practices that we use today come from the automotive industry (at least in their latest form).
If we fail to see why the automotive industry implemented them, and just take them as good practices for being effective in line production, we are doing the opposite of what the automotive industry did. They investigated how they could improve their work. Some of that included seeing human beings as intelligent and social creatures, utilizing the diversity of a group of people, etc. And their productivity and efficiency could be measured by counting cars and measuring their quality. So by treating humans as humans they became more efficient (number of non-defective cars) and improved their work methods (analyzing quality of work).
When Lean development is implemented in psychiatric nursing or software development today, the focus is very often on the quantity-measuring part, which misses the whole point. Measuring patients or software as uniform units is very wrong and dangerous.
It’s not that Lean or Kanban is to blame, it’s the implementation; and perhaps mostly the implementors.
The role models for Lean implementations in many healthcare institutions in Sweden have been some successful nursing teams that increased the efficiency and quality of their work by using Lean development as a method. What those successful teams really did was take command of their own work and find a method that supported their initiative and commitment. (Anyone had a similar experience? Me!)
The problem is that when this is implemented by management at a whole hospital or nationwide, the focus is shifted from “Quality of work” to “Quantity of work”, because that is the obvious driver for management and really their only incentive to implement it. They only need to say that this is a “best practice” and then it’s OK…
I do need to emphasize that you don’t automatically get high-quality work by implementing “best practices”!

This is happening all the time; and the latest flavor of the month is Agile.
If you read the Agile Manifesto and then think that you must play planning poker or have standup meetings, you have obviously not understood the Agile Manifesto.
I’m with Matt Heusser on his interpretation in “…or the Manifesto elaborated”.


a word of caution – Rikard Edgren

If you are a faithful reader of this blog, you have probably read some challenges of established ways of testing.
I write stuff like “anyone can do non-functional testing”, “look at the whole picture”, “test coverage is messing things up”, “you can skip all testing techniques”, “requirements are covered elsewhere, so focus on what’s truly important”, “Pass/Fail-addiction” or “be open to serendipity”.

It might be a bad idea to start with these alternative activities; people might think you are crazy.
You might have to do thorough, planned, systematic, requirements-based, to-the-bare-bones testing in order to get respect from the rest of the organization. The respect you need for your work to be appreciated and used.
You might need to follow the existing practices, to show respect, and learn about the organization.

So even though a radically different test approach would be faster and better, you should consider “traditional testing” as an appetizer for the main course.

Binary Disease – Rikard Edgren

I have for a long time felt that something is wrong with the established software testing theories; in test design tutorials I recognize only a small part of the test design I live in.
So it felt like a revelation when I read Gerd Gigerenzer’s Adaptive Thinking, where he describes his tools-to-theories heuristic, which says that the theories we build are based on, and accepted according to, the tools we are using.
The implication is that many fundamental assumptions aren’t a good match for the challenges of testing; they are just a model of the way our tools look.
This doesn’t automatically mean that the theories are bad and useless, but my intuition says there is a great risk.

Software testing is suffering from a binary disease.
We make software for computers, and use computers for planning, execution and reporting.
Our theories reflect this much more, way more, than the fact that each piece of software is unique, made for humans, by humans.

Take a look at these examples:
* Pass/Fail hysteria; ranging from the need for expected results to the necessity of oracles.
* Coverage obsession; percentage-wise covered/not-covered reports without elaborations on what is important.
* Metrics tumor; quantitative numbers in a world of qualitative feelings.
* Sick test design techniques, all made to fit computers; algorithms and classifications that disregard what is important, common, risky, error-prone.

When someone challenges authorities, you should ask: “say you’re right, what can you do with this knowledge?”

I have no final solutions, but we should take advantage of what humans are good at: understanding what is important; judgment; dealing with the unknown; separating right from wrong.

We can attack the same problems in alternative ways:
* Testers can communicate noteworthy interpretations instead of Pass/Fail.
* If we can do good testing and deem that no more testing has to be done, there is no need for coverage numbers.
* If the context around a metric is so important, we can remove the metric, and keep the vital information.
* We can do our test design without flourishes; focusing on having a good chance of finding what is important.

Do you think it is a coincidence that test cases contain a set of instructions?
These theories cripple testers; and treat us like cripples.

Now you know why people are saying testing hasn’t developed in recent years: we are at a dead end.

And the worst part is that, if Gigerenzer is right, my statements have no chance of being accepted until we have a large battery of tools based on how people think and act…

Working with the testing debt – part 1 – Martin Jansson

Jerry Weinberg made an excellent comment on my previous article Turning the tide of bad testing [1], where he wanted more examples/experience behind the tips. It is sometimes a bit too easy to come up with a tip that lacks context, or that doesn’t explain how you used it in a situation where it worked for you. There are no best practices, just good ones that might work in certain contexts; still, you might get ideas from them and make something that works for you.

Tip 1: An exploratory test perspective instead of a script-based test perspective. This can be roughly summarized as more freedom for the testers, but at the same time more accountability for what they do (see A Tutorial in Exploratory Testing [2] by Cem Kaner for an excellent comparison). And most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects so many things. For instance, where you work … can any tester be replaced by anyone in the organisation? Does that mean your skill and experience as a tester are not really that important? If this is the case, you need to talk to those who support this view and show what you can do as a tester who makes his/her own decisions. Most organisations do not want robots or non-thinking testers affecting the release.

In one organisation I worked in, there was initially a typical scripted test approach. There were domain experts, test leads and testers, among other roles. The domain experts and test leads wrote test specifications and planned the tests in a test matrix to be executed. The test matrix was created early and was not changed over time. Then testers executed the planned tests. Management looked at progress by test cases: how many were executed since the last report.

At one time I was assigned to run 9 test cases. I tested in the vicinity of all of them, but I did not execute any of them. I found a lot of showstopper bugs and a few totally uncovered new areas that seemed risky. To the project and to the test lead this meant no progress [3], because no test cases were finished. The test matrix had been approved and we were not to change the scope of testing, so the new risky areas were left untouched. A tester did not seem to have any authority to speak up or address changes in scope. As I saw it, we were viewed as executors of test scripts. The real brain work had been done before us. During this time, management appointed some new temporary personnel to the team who had little domain knowledge and no testing expertise. They did not think any experience or skill was needed. Because the time was so short, the new personnel did not get up to speed with the testing tasks. Some of the testers said it just weighed them down with extra hands. When we executed test scripts we were usually assigned a few and then ran them over a few days; cooperation between testers was not at all clear.

After this assignment I was able to be a test lead myself and got all the previous testers allocated to me. I started to introduce the exploratory test approach by letting the testers have more freedom, but at the same time be responsible for what they did. We used test sessions as well as scripted testing during a transition period. We adapted the test matrix over time based on new risks. Temporary personnel were still allocated to us without us having a say in the matter. Still, we made the best of it by educating and mentoring them. Management’s view was still the same, but we tried to communicate testing status in new ways, such as using a low-tech dashboard.

A bit later management changed how they worked with the test leads. We were still allocated temporary personnel, but we were able to say no to those we knew were not fit for testing. The cooperation between test lead and testers was very different. Everyone in the team took part in making the plan and changing it over time. We found more bugs than before, and took pride in our testing, test reports and bug reports. Each artifact delivered needed to be in good shape, so that we could affect the ever-present perception that testers did not know what they were doing or did not care. In the test team we were all testers; some just had a bit more administration. There was no clear hierarchy.

How does this affect the testing debt?

Having a scripted test approach:

  • Does not fit well in an agile and fast-moving team or organisation. What are you contributing to the team? It is unclear if you need someone else to write tests for you. There is a big chance that you adopt the attitude of waiting for others before you can do your own job. This means that you are a bottleneck and a weight on the organisation. Most of the time you just cost money and are therefore a burden.
  • Being viewed as just an executor is demeaning. Not having any authority to affect your working situation will eventually mean you give up or stop caring. When you stop caring you take shortcuts and ignore areas. A group of testers who stop caring will affect each other and do an even worse job. This makes it a catalyst for Broken Windows [4] and a tipping point towards the negative.
  • When you get new temporary personnel who just weigh you down, it only seems like you have enough people to do the job. In reality you are working even slower, and with fewer actual testers.
  • When progress is counted as test cases run from one period to another, you are missing the point of what is valuable. If the test team is ignorant of this fact, no harm might be done; but if they are aware of it and dislike the situation, it will cause friction and distrust.

The situations above are extreme, but they are not uncommon, as I see it.

Having an exploratory test approach:

  • You are used to having unfinished plans and uncharted terrain in front of you. You live in a chaotic world where you are able to adapt and be flexible towards your team and the organisation. If there is a build available, you test and make the plan as you go. You do not wait. You will rarely be seen as the bottleneck, unless you have too few personnel to do your job. The view of being quick and agile will affect the view of your test team and therefore make it easier when you start new projects, thus decreasing the testing debt in the areas of team composition and flexibility.
  • Progress is viewed as what you spend time on. You then need to justify why you tested an area, but if you do that, you make progress. You know that you can never do all testing, but you might be able to do a lot of what is most important and most valuable to the project. By doing it this way, you and the team will gain momentum in your work. You will, if possible, fix Broken Windows, or at least not create new ones in this area.
  • When you run a test session you know that it is OK to test somewhere else if you think that is valuable. If you find new risks you add them to the list of charters or missions. Your input as a tester is important; you contribute by identifying new risks.
  • In an exploratory test team every tester is viewed as an intelligent individual, bringing key skills and knowledge. You have no one telling you exactly what to do, but you will have coaches and mentors who you debrief to. There will be built-in training and a natural way of getting feedback. You will be able to identify testers who do not want to test or do not want to be testers. The team will grow and become better and better. The debriefing will also assist in identifying new risks, keeping the group well aware of current risks and important issues. This will decrease the testing debt by having a focused, hard-working team of testers doing valuable testing, as they themselves see it.


[1] Turning the tide of bad testing –

[2] A Tutorial in Exploratory Testing –

[3] Growing test teams: Progress –

[4] Broken Windows theory –

Flipability Heuristic – Rikard Edgren

Credit cards are taking over from notes and coins.
This has benefits, but it is not possible to toss a coin with a credit card.

Bob van de Burgt coined (!) the term flipability during the coin exercise in Michael Bolton’s tutorial at EuroSTAR 2010.
It is a lovely word, and can be used more generally to describe how products can be valuable in ways other than their intended purpose; it’s part of a product’s versatility.

If you ask your customers, I bet you will be surprised by a couple of ways they benefit from your software. They might be exploitations of bugs that it could be a bad idea to fix.

As you’re testing software, you can look for other usage that might be valuable. It is probably not your first test idea, but it could be the start of next great feature, or the beginning of a cool story; hence the Flipability Heuristic.

Competitor Charisma Comparison – Rikard Edgren

In many cases, it is worthwhile to take a look at how your competitors do similar things. Among competitors I include products you’re trying to beat, in-house solutions (e.g. handmade Excel sheets) and analogue solutions, solving the problem without computer products.

Charisma is difficult to test, but competitor comparison is one way to go. You can ask others, or look for yourself; where is, and can be, the charisma of these solutions?
For your typical customer, your competitors might tell you which aspects of software charisma are relevant.
Try using the U.S. SPACEHEADS mnemonic:

Charisma. Does the product have “it”?
Uniqueness: the product is distinguishable and has something no one else has.
Sex appeal: you just can’t stop looking at or using the product.
Satisfaction: how does it feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product “good-looking”?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?

Be aware that there is nothing more un-charismatic than a bleached copy of the original.
Rather, find which characteristics are needed, talk about them, and see what your product team can do.
And put focus on finding your product’s own, original charisma.

Trilogy of a Skilled Eye – Rikard Edgren

I have completed a trilogy on the theme The Eye of a Skilled Software Tester.

edition 1: Lightning Talk, Danish Alliance, EuroSTAR 2010

edition 2: Article, The Testing Planet, March 2011 – Issue 4

edition 3: Presentation, Scandinavian Developer Conference, April 2011

Some things have changed over time; in the first two I didn’t focus on the most important part, “look at many places”: besides specifications we need to know about business usage, technology, environments, taxonomies, problem history, standards, test analysis heuristics, quality characteristics, and more…

I also made the necessary switch from errors/bugs to problems, because it is broader and better pinpoints the psychological paradox: “wanting to see problems”, things that move Done further away.

While at it, I uploaded a presentation from SAST Öresund yesterday.
77 Test Idea Triggers, a presentation I’d be happy to give again!