Stories from EuroSTAR TestLab 2010 – the test eye

Monday

Henrik started his journey by car from Karlstad and drove down to Gothenburg to meet up with Martin. Both of us were going to take the train down to Copenhagen. Not surprisingly there were delays and the train was cancelled… Instead we headed back to Martin’s house, loaded Henrik’s car and took off. After several hours’ drive we arrived in Copenhagen, where we went directly to Bella Center and the TestLab at EuroSTAR.

This was the first time we met James Lyndsay and Bart Knaack in person; we had only talked with them over Skype/phone. So we said hello, but there was really no time to sit down and have a cup of coffee – we needed to get to work straight away.

One of the burning issues to resolve the week before the conference was to bring client machines. Henrik was able to bring four laptops (thank you Compare Testlab and Know IT for sponsoring this!) and Martin one. We worked in the TestLab for a few hours setting up and testing the two networks (James had one and Bart had another) with the servers connected to them.

Each network consisted of a bug tracker (Mantis), a server with OpenEMR, a server with a WebShop based on OsCommerce as well as a link to download a version of FreeMind.

When the conference center was nearing closing time we wrapped up and identified the tasks that needed to be done for the coming days. We were assigned to set up and fill both OpenEMR and the WebShop with new test data. We decided to first create a data model in FreeMind, both for review before inserting the data and so that we could print out the model and use it as a means to discuss with participants in the TestLab. With the model at hand, we thought the participants would also be able to identify test ideas of their own that they could try out.

We packed up everything and took Henrik’s car together with James and Bart. Half an hour later we were down in the hotel bar for dinner. We first met up with Steve Öberg, Rikard Edgren, Björn Karlsson, and Stefan Thoresson; later on Markus Gärtner, Neil Thompson, Michael Bolton and some more joined our table, and we discussed testing long into the night.

Tuesday

At 8.45 we met up with James and Bart at the hotel and took a cab to the conference center, where we set up the TestLab again. We started on the data models while James and Bart worked on setting up the details of the two networks. Meanwhile the sponsors were installing their applications on the client machines. Late in the afternoon the new data for both systems was starting to come together. During the afternoon several people visited us and some of them started testing the applications. In fact, the first bug entered in the bug system was the one that won the prize “Best bug” – Markus Gärtner found it pretty fast…
Around 18:00 we were able to move into the room where the TestLab would be. We moved in all servers, routers and clients and set up the power for it all. The funny thing is that we found only one electrical socket, so all machines were hooked up to this single one and we decided to load test the electricity overnight. 🙂
During the evening we attended the Rebel Alliance (or Danish Alliance) for 11 interesting lightning talks. The discussions went on long into the night after that. But in our defense we must say that we got to bed in time.

Wednesday

Despite getting to bed fairly early, we both overslept and missed the grand opening of the TestLab… Such a good start!
But no time to cry over that; we dug in and helped welcome and introduce people who entered the TestLab, told them about it and how to use it, and guided them to start testing. Bugs were reported and the sound of the chicken echoed all over. Nice!

During the day a lot of people came in and really sat down to test. There was also a lot of interesting collaboration going on between people who hadn’t met before. Some of the people who spent more time than others were Shmuel Gershon, Markus Gärtner, Isabel Evans, Michael Bolton, Ajay Balamurugadas, Zeger Van Hese, and Rob Sabourin (we might have forgotten some of the names, but you know who you are).

During the day there was a lot of unarranged testing going on. But we also had two really appreciated sessions.
Firstly, there was a pair-testing session led by James Lyndsay that close to thirty people attended. There was a good mixture of experienced and less experienced testers. Even though the session was only about 30 minutes, divided into three 10-minute mini-sessions, it seemed to give the participants some very nice hands-on experience to bring home. And a lot of good collaboration was happening!

Secondly, during the lunch hour there was an improvised weekend-testing session; Ajay Balamurugadas had invited as many as possible during his talk on Weekend Testing right before it. Ajay and Markus Gärtner led the session with the participants who showed up. Since there were so many, we teamed up in pairs or smaller groups. The collaboration in the TestLab was yet again really great! After a session of close to 30 minutes we had found a bunch of interesting bugs, and at the same time many had had the opportunity to team up with these fabulous testers.

Despite having to cancel some planned sessions, we all thought that the first day was a real success. And we measured that by realizing that we hadn’t had time to take breaks or eat a proper lunch, because the TestLab wasn’t empty for even a minute; there were always at least 5 people there (except during the keynote by Stuart Reid).

In the evening there was the Gala Awards dinner at Copenhagen City Hall, and we ended up at a pub called Bryggeriet with the Rebel Alliance discussing testing in all forms. We went home pretty early, met up with some people in the hotel and continued discussing testing.

Thursday

The day kicked off at 08:00 in the TestLab and we used the first 30 minutes preparing for yet another interesting day!

For the AllSTAR testers session, a few renowned testers were able to join in. We split the TestLab into three groups, each focused on a specific area that they knew well. Each group attacked the systems in various ways, teaching the participants new ideas on how to test the systems. Ari Takanen showed how he did security testing (and/or fuzz testing); Rob Lambert showed how he did accessibility testing; and James Lyndsay held an exploratory testing session discussing a lot of test ideas with the participants.

At the round table on managing ET, James Lyndsay moderated the group consisting of Rob Sabourin, Michael Bolton, Carsten Feilberg, Shmuel Gershon, Zeger Van Hese, Henrik Andersson, Henrik Emilsson, Martin Jansson, Markus Gärtner, John Stevenson, Neil Thompson and some more (we are sorry if we failed to mention someone, but we try our best to remember all the people who were there out of those 200 TestLab attendees). We talked about several aspects of managing ET; the discussion began with how we had managed our own time in the TestLab. James Lyndsay moderated the group in an excellent way. One of the subjects was how often you change/update your charters. Another interesting topic was how much information you should provide different testers with.

During the lunch session, Markus Gärtner ran a Testing Dojo that was much appreciated, and Michael Bolton transcribed the session. In the testing dojo, one tester tested the application with a projector showing the computer screen. The tester talked through the testing and explained what happened. Everyone around the tester could ask questions. The tester was to be switched every 5 minutes, letting several people show off their skills.
While this was the intention, the testing dojo turned out slightly different. But it was still just as appreciated by those involved.

Thursday evening we, Markus Gärtner, and Michael Bolton headed into Copenhagen for dinner with Ajay Balamurugadas, Teemu Vesala, Kalle Huttunen and Mika Hänninen. It took us two hours to get to the restaurant because we were forced to explore the Copenhagen Metro and train system…

Friday

We headed home to Sweden in the morning and arrived in Gothenburg at 15:00 in the afternoon. Then Henrik continued to Karlstad and arrived at 18:30. Seeing the problems other people had with getting out of Denmark by plane or train, we were lucky that car traffic wasn’t too affected by the blizzard…

Some numbers

  • 180 people came into the TestLab and got introduced to what we do (most of them sat down and tested)
  • 100 hours of testing
  • 100 reported bugs (plus several bugs not reported in the bug system)

Reflections

Considering how the TestLab was run, it was like a mix of having an open house and being a test lead handling several projects at the same time. We served different stakeholders such as the participants in the lab, the speakers, the conference personnel, and ourselves as apprentices as well as the masters (James and Bart). The plan is ever-changing. Each stakeholder values different things. Just like in the real world outside the TestLab.

There were also 5 sponsor presentations from the tool vendors that had their tools on the client machines. We really appreciated those sponsors who had put in some time to show how their tool could be used on the same applications as we tested in the TestLab. Very useful, and it meant there was more value for those interested in the tools to try things out for themselves on the lab machines.

During the TestLab we got a lot of ideas on what we want to do next year. A lot of the ideas we had beforehand were only good in theory, though; seeing the real obstacles and problems in practice was a good lesson.

And finally, we really appreciate the great effort that James and Bart have put into creating the EuroSTAR TestLab and making it such a remarkable success. We will have the benefit of building upon a very solid and well thought-out platform & concept!

We had a lot of fun and hope that all others had a great time!
See you all in Manchester 2011!

Cheers,
Henrik and Martin

Notes from EuroSTAR 2010 – Rikard Edgren

EuroSTAR 2010 took place during the coldest Copenhagen November in 140 years.
Being a program committee member made it a bit different, but not a lot; I met a lot of interesting people, spent too little time in the test lab, and listened to thought-worthy presentations (+ 11 Danish Alliance Lightning Talks).

The theme was “Sharing the Passion”, and during the presentations there was also a lot of mentioning of collaboration, trust, that testing is difficult, and that people are most important.
I was at an excellent full-day tutorial with the energetic Rob Sabourin, who is so right in his notions of multiple information sources and diverse test ideas that are generated all the time, but specified in more detail ALAP – as late as possible.
I saw parts of Michael Bolton’s Test Framing tutorial, which had a lot of audience involvement and resulted in a lot of notes put on the walls – beautiful.

I will not do any summary of sessions, but I have some quotes you might like:
“when you make a mistake, think: that was exactly what I needed to know” (Bolton)
“I’m teaching them to write the requirements in Latin, to remove all ambiguity” (Sabourin)
“I wouldn’t put my child in the respirator unless it has been tested exploratory” (Rydberg)
How do you follow up “Right from Me”? -“We don’t. We trust people. We need leaders that trust people” (Nordström)
“MTBS – Managing Testing Based-on Sessions” (Feilberg, opposite-method in order to make it easy to sell SBTM to managers)
Weekend Testing: “freedom for the testers!” (Ajay)
“don’t assume your view is the only view” (Thomas)
How can testers help developers go faster? (inferred from Harty)
“I have nothing against CMM. It just doesn’t work.” (Gerrard)
“Use examples, metaphors and visualizations to improve effectiveness and efficiency in testing” (Zimmerer)
“the industry is broken” (Patti)
“always have two 30-seconds ‘commercials’ in your back pocket” (Galen)
“choose a job you love, and you will never have to work a day in your life” (Confucius)

Synthesizing Test Ideas – Rikard Edgren

It is very difficult to describe the process of synthesizing test ideas. It involves a multitude of information sources, a sense of what’s important, and a dose of creativity to come up with ingenious test ideas, and effective ways to execute them.

The easiest way is to take the requirements document, re-phrase each item, and optionally add some details using equivalence partitioning.
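
As a minimal illustration of that easiest way, here is a hedged sketch in Python; the requirement, field name and partitions below are invented for the example:

```python
# Hypothetical requirement: "The age field accepts values between 18 and 120."
# Equivalence partitioning splits the input space into classes we expect the
# product to treat the same way, and picks one representative per class.
age_partitions = {
    "below lower bound (invalid)": 17,
    "lower boundary (valid)": 18,
    "typical valid value": 35,
    "upper boundary (valid)": 120,
    "above upper bound (invalid)": 121,
    "non-numeric (invalid)": "abc",
    "empty (invalid)": "",
}

for description, value in age_partitions.items():
    # Each entry becomes one re-phrased test idea for the requirement.
    print(f"Verify handling of age = {value!r} ({description})")
```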

The best way is to use a variety of information sources and generate testworthy test ideas that have a good chance of being effective.
An impossible way is to combine all important information in all possible ways.

Rather you should use each element that is important, and each combination you think is important.

Reviews are helpful, the actual product invites some tests, and the speed of the tests will also guide you to the best ideas in your situation.

It is recommended to write down the test ideas, at least at a high-level granularity. If reviewing isn’t done, or the software is already available, it is faster to write them afterwards, together with the result.

Don’t try to cover everything, because you can’t. Rather make sure there is a breadth, and count on serendipity to find the information needed for a successful product. Some of the test ideas you will generate on-the-fly, during your test execution when you see what you actually can do with the software.

And don’t stop just because you reach an adequate test idea. Think some more, and you might find better ways, or new solutions to old problems.

I find no better way to elaborate further than to discuss different categories of test ideas:

Ongoing Test Ideas

Ongoing test ideas can’t be completed at once; they keep going as long as more relevant information is revealed. A good example is quality characteristics like stability: the more you test, the more you know about the stability of the product. You probably shouldn’t do only ongoing tests (in the background) for important characteristics, but as a complement they are very powerful, and very resource efficient.

Another example is ongoing usability testing for free, where the tester keeps the relevant attributes in the back of their head during any testing, and whenever a violation occurs, it is noticed, and communicated.
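
As a sketch of how such a background check could ride along with every other test – assuming pytest and a hypothetical application log file (app.log) written by the system under test:

```python
# An "ongoing" stability check that runs after every test: scan the
# application log for new errors, so stability information accumulates
# for free while other tests are executed.
import pytest

LOG_FILE = "app.log"   # hypothetical log written by the system under test
_seen_lines = 0        # how much of the log earlier tests already covered

@pytest.fixture(autouse=True)
def ongoing_stability_check():
    global _seen_lines
    yield  # run the actual test first
    try:
        with open(LOG_FILE, encoding="utf-8") as f:
            lines = f.readlines()
    except FileNotFoundError:
        return  # no log yet; nothing to check
    new_lines = lines[_seen_lines:]
    _seen_lines = len(lines)
    errors = [line for line in new_lines if "ERROR" in line]
    assert not errors, f"New errors appeared in the log: {errors!r}"
```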

Classic Test Ideas

Classic test ideas deal with something specific, and can be written in “verify that…” format. They can be a mirror of requirements, or other things the testers know about.

For review and reporting’s sake, beware of granularity; rather use one test, “verify appropriate handling of invalid input (empty, string, special ASCII, Unicode, long)”, than a dozen tests. In some situations you can even use “Verify explicit requirements are met”, and put focus on the more interesting test ideas.
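
A minimal sketch of that coarse-grained classic test idea, assuming pytest and a hypothetical validate_input function in the product under test:

```python
# One classic test idea, "verify appropriate handling of invalid input",
# expressed as a single parameterized test instead of a dozen separate ones.
import pytest

from myproduct import validate_input  # hypothetical function under test

INVALID_INPUTS = [
    "",                # empty
    "plain string",    # string where something else is expected
    "!@#$%^&*()",      # special ASCII
    "åäö 中文",         # Unicode
    "x" * 10_000,      # long input
]

@pytest.mark.parametrize("value", INVALID_INPUTS)
def test_invalid_input_is_rejected(value):
    # The expected behaviour is an assumption for this sketch:
    # here we assume invalid input should raise ValueError.
    with pytest.raises(ValueError):
        validate_input(value)
```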

Also take the opportunity to recognize which tests are better suited to automation, which to tool-aided testing, and which to manual testing.

Combinatorial Test Ideas

Many bugs don’t happen in isolation (that’s why unit tests are very good, but far from a complete solution); therefore testing wants to use features, settings and data together, in different fashions and sequences.

Scenario testing is one way to go, pairwise is said to be good, and the tester with product knowledge can tell you at once which interoperability can’t be neglected.

Combinatorial test ideas use many test elements together, because we think they can have an effect on each other, or because we don’t know whether they do.
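
As a small sketch of the idea (the parameters and values below are invented for the example), you can compare the full combination space with the much smaller set of parameter-value pairs that pairwise testing aims to cover:

```python
# Compare the size of the full cartesian product with the number of
# distinct parameter-value pairs that pairwise (2-wise) testing must cover.
from itertools import combinations, product

parameters = {
    "browser": ["Firefox", "Chrome", "IE8"],
    "os": ["Windows 7", "Ubuntu", "OS X"],
    "language": ["en", "sv", "de"],
    "account": ["admin", "guest"],
}

all_combinations = list(product(*parameters.values()))
print(f"Full cartesian product: {len(all_combinations)} configurations")

# Every pair of values from two different parameters must appear in at
# least one chosen configuration for pairwise coverage.
pairs_to_cover = [
    ((p1, v1), (p2, v2))
    for (p1, vals1), (p2, vals2) in combinations(parameters.items(), 2)
    for v1 in vals1
    for v2 in vals2
]
print(f"Parameter-value pairs to cover: {len(pairs_to_cover)}")
# A pairwise tool (e.g. PICT or allpairspy) would then pick a small set of
# configurations covering all these pairs -- far fewer than the full product.
```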

Unorthodox Test Ideas

Requirements are often written so they can be “tested” (but what is usually meant is “verified”). This easily results in requirements being quantified and often in a contractual style, instead of describing the real needs and desires.

The risk of missing what is really important can be mitigated with testing that isn’t necessarily aimed towards falsifying hypotheses. “You are one of few who will examine the full product in detail before it is shipped.” [Testing Computer Software]

Unorthodox test ideas are open, based on something that is interesting, but without a way to assign a Pass/Fail status; they are more aimed towards investigation and exploration for useful information.

With creativity you can come up with original ways of solving a testing problem.
Usability testing can be addressed by asking any heavy user what they think.
Charisma might be best evaluated by having a quick look at a dozen alternative designs.
If there are a hundred options, you might want to take a chance and only look at five random ones.

These are especially vital to get feedback on, since one unorthodox test idea can generate two others that are better, or be dismissed as irrelevant.

Visual Test Ideas

Some things can’t be expressed with words. And many things are much more effective to communicate with images. Try to use visual representations of your tests, also because they can generate new thoughts and understandings.

Examples: state models, architectural images, technology representations, test element relations, inspiring images of any kind.

Visual test ideas, like all the others, can be transformed into other types, morphed into many, or into whatever is best suited.

What and How?

Most test ideas will probably be focused on what to test. Some will also state how to test it. This can have many benefits, e.g. if you write that you’re going to use Xenu’s Link Checker, someone might point out that there’s another tool that will fit the purpose a lot better.

These categories are artificial, and only used to better explain the processes.
When you know them, forget about them, and re-build your own model of test design that suits your, your colleagues’ and your company’s thinking. Make sure that you change and add to your test ideas as you learn more.
Your questions, hypotheses and answers will emerge.

Testworthy – Rikard Edgren

I have had some problems with the notion of Risk-Based Testing.
I mean, isn’t all testing based on risk in some sense, making the term redundant?

When using risk techniques, you come up with a list of areas to investigate first or most.
But what about those items that are extremely rare, but with very high impact?
What about not so risky things that would be extremely cheap to fix if we knew about the problems?
Imagine that Security is down-prioritized; that probably wouldn’t mean that passwords can be displayed in clear text, or be publicly available to anyone?
This is handled in many risk-based strategies by using light testing on items with lower risk.
But is the list of items in the risk assessment broad enough, and does it contain all relevant areas?
And how do you go from the general risks to the details involved in the actual testing?

There is also the notion of serendipity to consider; just a quick look at something might render very valuable information you didn’t know you were looking for.
So what about the tests that just take a few seconds to execute?
And what if the interpretation of the risk assessment contains mistakes?

I hadn’t found a good way to express another way to look at it, until I stumbled upon the word testworthy.

A test is testworthy if we think the information we can get is worth the time spent on it, regardless of risks, requirements, test approaches, etc.

The benefit of risk assessment is rather that stakeholders get involved with a language they understand.
Not sure if testworthy can help there…

Factoring/Fractionation – Rikard Edgren

It is a natural instinct for a tester to break down product information into elements that can be used in testing. It can be elaborations on a requirement, insights from talking to a customer, feature slogans from a web site, etc.

Michael Bolton (and James Bach) calls this factoring – “Factoring is the process of analyzing an object, event, or model to determine the elements that comprise it.”

Edward deBono has a general lateral thinking technique fractionation, which explicitly includes a creative aspect:
“One is not trying to find the true component parts of a situation, one is trying to create parts.”
[Lateral Thinking – Creativity Step by Step, p.135 in 1990 Perennial edition]

I’m not sure which name to use for this important (and often invisible) testing activity, but I’m certain that it contains more than mathematical factorization, which analyzes and finds the exact components that make up the whole.

In testing we are rather looking for elements that are useful, that contain something we want to test in order to find important information about the product.
We might create unjustified artificial divisions, because they help us.
If a use case includes Firefox 3.6, we will add all other browsers and platforms we think are relevant and worth testing; we will automatically think about settings in browsers.
We will add things we know or think we know, leveraging our understanding of what is important, to get elements to synthesize into good test ideas.

This area is remarkably un-elaborated in testing, and my guess is that it is because “traditional” testing doesn’t need this; what should be tested is already “completely” described in the requirements document.
With the emerging insight that testing requires a multitude of information sources, factoring/fractionation will inevitably be thoroughly investigated.

But what should the name be: factoring, fractionation, or something even better?

Lateral Tester Exercise I – Status Report Virus – Rikard Edgren

I’m re-reading deBono’s excellent Lateral Thinking.
Here is a Generate Alternatives exercise for software testers; try to think of as many different alternatives as possible.
There is no right answer, the focus is to train yourself in re-structuring information.
And at the same time come up with many different ideas that might generate fruitful thoughts.
And no, there is no more information, do the best with what you have.

The release meeting starts in one hour.
You’re just about to put the final touch on the status report.
The status report template has been virus-infected, and all your data is lost.
What do you do?

Do this for yourself, or put a comment if you want to contribute to thoughts around status reporting.

Turning the tide of bad testing – Martin Jansson

Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in run-down ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one unrepaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)

This is a quote from an article published in 1982 in The Atlantic Monthly [1], written by James Q. Wilson and George L. Kelling. This metaphor can be applied to many situations in our daily lives. It is often used in sociology and psychology.

At Artima Developer [2], Andy Hunt and Dave Thomas, authors of The Pragmatic Programmer: From Journeyman to Master (Addison-Wesley, 1999), discuss and elaborate on an important part of being a pragmatic developer, namely fixing broken windows. The book mentions many ways of steering clear of broken windows in order to avoid increasing technical debt.

One broken window – a badly designed piece of code, a poor management decision that the team must live with for the duration of the project – is all it takes to start the decline. If you find yourself working on a project with quite a few broken windows, it’s all too easy to slip into the mindset of ’All the rest of this code is crap, I’ll just follow suit’.

I’ve seen developers focus on getting the number of warnings down. I’ve seen project managers address the flow of bugs, measuring the fix rate, find rate, resolve rate and close rate to keep the project from becoming unmanageable. Another project manager stated that developers must fix the “shitty bugs”, or smaller bugs, before the build reaches the official testers. We see different treatments of this in development.

How do we see Broken Window Theory affect testing?

  • When you have stopped caring about how you test.
  • When you have stopped caring about how you report bugs and status.
  • When you have stopped caring about how you cooperate with others.
  • When you have lost focus on what adds value.
  • When you do not cooperate with developers and have stopped talking to them.
  • When you complain about the requirements and have stopped talking to the business analysts.
  • When you avoid testing areas because you know that bugs won’t get fixed there anyway.
  • When you avoid reporting bugs because it doesn’t matter.
  • When you report status the way you always have, without any real content.
  • … and so on …

All of this creates broken windows and, as I see it, results in what can be summarized as a Testing Debt, a term inspired by Ward Cunningham’s definition of Technical Debt [3]. Jonathan Kohl has made one definition of Testing Debt [4] and Johanna Rothman another [5].

How do you identify things that increase this Testing Debt?

You can find a long list [6] of things that increase the testing debt. Don’t be discouraged; it is possible to fix the broken windows and decrease the testing debt.

What do we do to contribute? What do we do to provide value? Where do you start?

What can you do to decrease the Testing Debt?

There are lots of things that you and your team can work on and excel in. There are also some areas which you can start with directly without depending on anyone but yourself and those you interact with.

Tip 1: Take an exploratory test perspective instead of a script-based test perspective. This can be roughly summarized as more freedom for the testers, but at the same time more accountability for what they do (see A tutorial in exploratory testing [7] by Cem Kaner for an excellent comparison). And most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects so many things. For instance, where you work … can any tester be replaced by anyone in the organisation? Does that mean that your skill and experience as a tester is not really that important? If this is the case, you need to talk to those who support this view and show what you can do as a tester who makes his/her own decisions. Most organisations do not want robots or non-thinking testers affecting the release.

Tip 2: Focus on what adds value to developers, business analysts and other stakeholders. If you do not know what they find valuable, perhaps it is time that you found out! Read Jonathan Kohl’s articles [8] on what he thinks adds value for testing.

Tip 3: Grow into a jelled team (read Peopleware by Timothy Lister and Tom deMarco for inspiration). Peopleware identifies, among other things, what you should NOT do if you want a group to grow into a jelled team. As a team it is so much easier to gain momentum in testing, especially if you are jelled. Do you need to reconsider how you work as a team?

Tip 4: Work on your cooperation. Improve your cooperation with the developers. Assist them in what they think is hard or not their job. Polish the areas where the developers think you are lacking. Show them that you take their ideas seriously. Good cooperation is one of the keys to success. Improve your cooperation with the business analysts. The ideal situation is when you are able to give them feedback early, during and after their work on requirements. What can you do to get to that situation? What do they want?

Before you or your group start testing an area, invite the business analysts and let them explain their thoughts on the feature. Invite the developers so that they can explain the design, risks etc. Invite other parts of the organisation that you think can contribute ideas. When you have the different stakeholders with you, show them how you work and what your thought patterns are. Explain how you conduct testing. Pair-testing (tester + tester, tester + other stakeholder) is an excellent tool for getting to know the strengths and weaknesses of your test group, but also for education and for showing others how you work. If the stakeholders do not trust your work, this might be a way to show them your skill. It is common that testers are left on their own. You must take the initiative! Invite them to you.

Tip 5: A good status report can (and should) affect the release decision, but it might also affect how testing is perceived. Still, keep to the truth as you see it. Dare to include your gut feeling, but express clearly that it is just that. If you include metrics, be sure to add context and explain how you as a tester interpret them.

Tip 6: The bug report is one of the most important artefacts that come from the tester. A bad bug report can have a lot of negative impact, while a good one can have the opposite. If possible, review the bugs before you send them. By doing this you will get new test ideas as well as raise the quality of the bug reports. Train your skill in reporting bugs. Notify project management, developers, etc. that NO bug report of bad quality is to be accepted from your team and that you want feedback on how to improve. You really want to make life easier for the bug classification team and the developers trying to pinpoint and eventually fix the bugs.

In a project, a co-worker and I had worked on a bug for some time. It was a late Friday afternoon and almost everyone was heading home. It was only a few days before the release. Each reported bug engaged lots of people, no matter how small it was. The bug we had found, though, was a blocker. We considered whether we were going to hand in the bug as it was or get to the root of it. A week or two earlier the developers had worked the whole weekend trying to fix a set of bugs whose reports were very badly written and hard to reproduce. So we decided that we were going to collect as much information as we possibly could, so that the bug would get a good classification and possibly get fixed. Our goal was to make a good bug report [9]. We started keeping track of the frequency and noticed that it occurred 10 out of 50 times. We collected logs and reports from all parts of the system. When we knew that the repro steps were correct, as we saw it, and that the content was ready, the bug was released into the bug system. After an analysis, the bug classification team determined that the bug was both hardware and software related, so several teams got involved. When hardware thought they were ready, they moved the bug to software. It was possible to track the communication and collaboration between the different teams. Not once was there any hesitation that any information was missing or incorrect. In total there were close to 30 people working on the bug. Eventually it all got fixed and was sent back for verification. So, we spent a few extra hours making a good bug report, and saved/minimized time as well as lessened frustration.

Most of these tips are easy to start with, but they are also important to work with continuously.

Summary:

  • Raise your ambition level
  • Care about your work and those that you work with
  • Prioritize testing before administration
  • Cooperate and collaborate
  • Report world-class bugs
  • Create status reports that add value

DO NOT LIVE WITH BROKEN WINDOWS!

———-

[1] Broken Windows – http://www.theatlantic.com/magazine/archive/1982/03/broken-windows/4465/

[2] Don’t Live with Broken Windows – http://www.artima.com/intv/fixit.html

[3] Explanation of Technical debt – http://en.wikipedia.org/wiki/Technical_debt

[4] Testing debt – http://www.kohl.ca/blog/archives/000148.html

[5] An Incremental Technique to Pay Off Testing Technical Debt – http://www.stickyminds.com/sitewide.asp?ObjectId=11011&Function=edetail&ObjectType=COL

[6] Behind the scenes of the test group – http://thetesteye.com/blog/2010/07/behind-the-scenes-of-the-test-group/

[7] A tutorial in exploratory testing – http://www.kaner.com/pdfs/QAIExploring.pdf

[8] How do I Create Value with my Testing? – http://www.kohl.ca/blog/archives/000217.html

[9] The Impact of a good or bad bug report – http://thetesteye.com/blog/2009/07/the-impact-of-a-good-or-bad-bug-report/

Software Quality Characteristics 1.0 – the test eye

With all due respect, this is the announcement of the perhaps most powerful public two-page document in the history of software testing.
It is an extended re-write of James Bach’s Quality Criteria Categories, and has been developed into 12 categories (CRUCSPIC STMP) and 93 sub-categories of software quality characteristics/attributes/factors/dimensions/properties/criteria/aspects.
This list is not objectively true, and not easy to use for measuring (compare with ISO 9126-1), but you can adapt it to your context and be inspired by it when understanding/creating/reviewing software quality related stuff.

The PDF is available here, and goes under the Creative Commons Attribution-NoDerivatives license, as do all our Publications.
Use the list to generate test ideas, or to discuss what is important; get inspired by the concept of Charisma (SPACE HEADS mnemonic)
Suggestions for improvements to the list are very welcome!

We will continuously blog about some of the categories (and sub-categories) in order to provide more information and examples of usage.

Rikard, Henrik, Martin
the test eye

Windows Focus – Rikard Edgren

That applications have focus in the right place is essential to a good user experience.
You have to trust that pressing Del on the keyboard will have the intended effect.
Problems with this are very common, at least on Windows, and especially in applications with dialogs and panels and stuff of different types.
Addressing symptoms aggressively easily ends up with flickering dialogs, and sometimes a worse user experience.
Less is less, but it has to be the right code…

Focus can also cause miscellaneous functional problems;
this is my latest favorite example:

Client: Windows Vista or Windows 7, .NET 3.5, Office 2007

1. Launch Word 2007
2. Type any text
3. Add a Footer, and add any text
4. Put focus on “main text”
5. Press Ctrl+F and search for something you wrote in Footer
6. When the text in the Footer is found, click on the Footer title bar (this is the crucial step!)
7. Click “Find Next” again
8. Click OK in dialog “Word has finished searching the document”

Result: Word crashes.

Expected: That I am able to continue writing my document.

Discussion around the content of a test proposal – Martin Jansson

This is a follow-up on a previous post about Rapid Test Preparation. Some of the commenters asked for an example; I’ve tried to go at least half-way. I have also added some new sections based on good feedback from Henrik Emilsson.


Test Proposal – <area>

See the test proposal as a work-in-progress document to enhance communication and the sharing of your ideas. I’ve used test proposals for both small and large areas. You stop using it when you start testing, transferring the content of the test proposal into something else. At this stage you have probably prepared yourself as much as you could, and will hone the details in other kinds of documentation. If you go back and look at the test proposal, it is probably for reference or to see what thoughts guided you.


1. Test Mission

Create a list of roughly 1-5 rows of information objectives: a test mission that guides you on what is extra important and that would meet the stakeholders’ quality goals.

2. Related information

When you start looking for information, what is important to you and perhaps your team? What else should be added that you think would add value?

2.1 Background

Is there any background information that needs to be brought up for everyone to get a better understanding?

2.2 Documentation

When you dig deeper into what a feature really is, you will stumble upon and browse through a lot of documentation. Add the documents that you think give value to yourself and other readers of this test proposal.

2.3 Contact persons

Consider which people would care about this, and whether they care enough to want to be kept in the loop.

* Business analysts

* Documentation responsible

* Developers

* etc

3. Requirements

Are there any requirements that guide you? You can list those here and perhaps discuss around them.

4. Test Model

Are you able to create a model on how you perceive the area or feature? Is it easier to talk around the subject based on this? Did you perhaps misinterpret something which has now been cleared up?

5. Test ideas

Use different heuristics to guide and categorize your ideas for testing. Some of the ideas might become charters; others might serve as base material for a set of test cases or test scenarios. A test idea, as I see it, is the essence of a test in the minimal amount of words, bringing out the most important aspects of it. It can be good to add a note on who came up with the test idea, in case it needs clarification. For further reading on test ideas, see Rikard’s papers and earlier blog posts.

6. Risks, Open Issues and questions

When you start working on the feature you will find issues that are not covered or questions that are not answered. Since you hopefully use this as a document for collaboration and communication, it becomes clear which issues are open and what has been answered.

7. Coverage

Is there any coverage-related information that you need to communicate? Is there any hardware coverage or anything related that needs to be discussed? Is there anything that you intend not to cover? These discussions help to increase or decrease your estimates. I’ve sometimes experienced complete misunderstandings about the scope of coverage, where I had included too much and thus thought that I needed to test a lot more.

8. Work Packages

In order to give a rough estimate I usually try to include what we intend to spend time on, including things not directly related to the actual testing. I use http://thetesteye.com/blog/2010/05/utopic-estimations-in-testing/ to guide my work packages and estimates. Test estimation is hard and usually inaccurate, but listing the things you think you might need to do can at least put you on the map (hopefully).