Notes from EuroSTAR 2009 Rikard Edgren

It was Stockholm again this year. Good not to have to travel far, but since you are travelling anyway I wouldn't object to somewhere more exotic, and warmer. Next year it is Copenhagen, again.
I had a fully packed program with 4 days of tutorials, workshops, tracks, short talks, test-labbing, and conversations, so in total quite an amount of ideas, since new things appear when you combine what you hear with your own reality and thoughts on testing. I am a bit exhausted after all this, which is a very good thing!

Monday offered the Exploratory Testing Masterclass with Michael Bolton. Even though I would have expected a bit more advanced level, it was very good; here are some highlights:
it’s the scripted guys that sit and play with the computer; most of what we look for is implicit, it is tacit; develop a suspicious nature, and a wild imagination; checks are change detectors; some things are so important that you don’t have to write them down; episode 28 is important (I listened to it, but didn’t understand what was so important); managers fear that Exploratory Testing depends on skill, is unstructured, unmanageable, unaccountable; we need to build a “management case” (or should it be middle-management case?)
Michael showed an improved Boundary Value Analysis for a more complex example, where there are many boundaries; whatever focus for coverage you have, you will get other coverage for free; visualizing Test Coverage with sticky notes on a model is a good way of creating charters for Session-Based Test Management.
Go beyond use cases, create rich scenarios; emphasize doing, relax planning; Test Coverage Outline and Risk List to guide future sessions; don’t try to find bugs in the beginning, it takes time away from building a model; HICCUPPS + F (Familiar Problems); you learn best when you are in control of the learning process (and have fun); who said something valuable should be that easy?
Reports to make number people happy; SBTM debriefs are important for keeping quality of the testing and the report (and good for coaching and mentoring); the principal interrupter of testing work is bugs; Weinberg: “everything is information”; Dr. Phil: “How’s that working for you?”
He also had good exercises, and a nice movie, a Detective Story.
I haven’t been to Michael’s tutorials before, so it was about time.

Tuesday started with Tim Koomen's tutorial "From Strategy to Techniques"; there's a gap between the test strategy and the actual tests.
He is very knowledgeable, and walked through the basic testing techniques that every tester should have in their toolbox: Equivalence Partitioning, Boundary Value Analysis, Classification Tree Method, Pairwise, Path Coverage, Condition/Decision Coverage, Input/Output Validation, CRUD, Operational profiles, Load profiles, Right/Fault paths, Checklist.
The examples are focused on functionality, and a magazine discount example is shallow; it doesn't consider whether the person is just about to turn 20 or 65 years old, what happens if you don't know the age, or if an incorrect age is later corrected. And now we haven't even considered everything else that interacts with this small piece of functionality.
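To make the boundary thinking concrete, here is a minimal sketch, assuming a hypothetical discount rule (not Tim's actual example) where readers under 20 or aged 65 and over get the discount; boundary value analysis probes just below, on, and just above each boundary:

```python
def magazine_discount(age):
    """Hypothetical rule (illustration only): discount for readers
    younger than 20 or aged 65 and over."""
    if age is None or age < 0:
        raise ValueError("age must be a non-negative number")
    return age < 20 or age >= 65

# Boundary value analysis: probe each boundary from both sides.
cases = {
    19: True,   # just below the youth boundary: still discounted
    20: False,  # on the boundary: discount ends
    21: False,  # just above
    64: False,  # just below the senior boundary
    65: True,   # on the boundary: discount starts
    66: True,   # just above
}
for age, expected in cases.items():
    assert magazine_discount(age) == expected, age
```

Even this tiny sketch shows what the shallow example misses: the interesting tests cluster at 19/20 and 64/65, and the invalid inputs (unknown or negative age) need a decision of their own.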
Every time I see this list, I think that they don’t sum up to the testing techniques I actually use when I design tests.
So my highlight was this feeling combined with my shallow knowledge of Grounded Theory; maybe we could have a super-advanced error-guessing test technique that describes the really, really good test design that happens all over the world, where we look at a lot more things than the requirements (more to come on this…)
Tim showed the PICT tool (consider 1-wise!), and the audience mentioned that Mercedes-Benz also has a free tool (see for a long list of tools)
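The point of pairwise tools like PICT is that every pair of parameter values appears in at least one test, which usually takes far fewer tests than the full combinatorial product. A minimal sketch (the parameters are made up for illustration, not from the tutorial) that verifies such a set:

```python
from itertools import combinations, product

# Hypothetical parameters for a print dialog (illustration only).
params = {
    "format": ["A4", "Letter"],
    "color": ["mono", "color"],
    "duplex": ["on", "off"],
}

# Full combinatorial coverage would need 2 * 2 * 2 = 8 tests;
# this hand-picked set covers every pair of values in just 4.
tests = [
    ("A4", "mono", "on"),
    ("A4", "color", "off"),
    ("Letter", "mono", "off"),
    ("Letter", "color", "on"),
]

def covers_all_pairs(tests, params):
    """Check that every value pair of every two parameters occurs."""
    names = list(params)
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        needed = set(product(params[a], params[b]))
        seen = {(t[i], t[j]) for t in tests}
        if needed - seen:
            return False
    return True

print(covers_all_pairs(tests, params))  # True
```

The savings grow quickly: ten parameters with four values each would need over a million tests for full coverage, while a pairwise set stays in the dozens, which is presumably why the tutorial also suggested considering 1-wise (each value at least once) when even pairwise is too expensive.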
I learned a new thing: modified condition coverage, where you omit tests that aren't likely to catch errors.
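If I understood the technique correctly (it sounds like modified condition/decision coverage), the idea is to keep only the tests where a single condition independently flips the decision, giving n+1 tests for n conditions instead of 2^n. A sketch for a made-up decision `a and (b or c)`:

```python
def decision(a, b, c):
    # Made-up decision with three conditions (illustration only).
    return a and (b or c)

# Exhaustive coverage of three booleans needs 2**3 = 8 tests.
# MC/DC keeps n + 1 = 4: for each condition there is a pair of
# tests differing only in that condition, with different outcomes.
mcdc_tests = [
    (True, True, False),   # -> True
    (False, True, False),  # flip a -> False (shows a matters)
    (True, False, False),  # flip b -> False (shows b matters)
    (True, False, True),   # flip c vs previous -> True (shows c matters)
]

outcomes = {t: decision(*t) for t in mcdc_tests}
# Each pair differs in exactly one condition and in outcome:
assert outcomes[(True, True, False)] != outcomes[(False, True, False)]  # a
assert outcomes[(True, True, False)] != outcomes[(True, False, False)]  # b
assert outcomes[(True, False, True)] != outcomes[(True, False, False)]  # c
```

The four omitted tests (for example all three conditions false) are exactly the ones unlikely to catch errors: no single-condition fault in the decision can hide from the kept set.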
Sometimes I wonder how many of the tests from the classic test techniques would preferably be automated as unit tests.

The actual conference started with Lee Copeland talking about nine innovations you should know about: Context-Driven School (the search for best practices is a waste of time); Test-Driven Development (helps you write clean code); Really Good Books (too few testers read the books!); Open Source Tools; Testing Workshops (specialized focus, participatory style); Freedom of the press (he is no fan of Twitter, but likes blogs); Virtualization (rapid setup, state capture, reduced cost); Testing in the Cloud (rent an awful lot of machines, very cheap); Crowdsourced Testing (Lee did not mention the ethical payment dilemma)
“sincerity is the key – once you learn to fake it…”
Keys for future innovation: creative, talented, fearless, visionary, empowered, passionate, multiple disciplines. Do we have all of these???

Johan Jonasson explained Exploratory Testing and Session-Based Test Management, but since this was a short track, there wasn’t so much time left for the real juice. “ET has specific, trainable skills” (Bolton)
Julian Harty, Google (where the testers seem to have huge responsibility areas) explained the concept of Trinity Testing, 30-90 minutes walkthrough(s) by Developer, Tester, Domain Expert. Not radically new, but it felt very fresh and effective. Julian was the only one I saw that brought a hand-out, one paper with the essentials.
Geoff Thompson talked about reporting, that “it’s the job of the communicator to communicate.” 1/10 of men (1/50 for women) are color-blind, and maybe you want everyone to understand the report? (I saw two other presentations, where red-green was used to highlight important differences.) “Know your recipients, what information do they want?”, “honesty is always the best option.”
Michael Bolton had a short session on "Burning Issues of the Day" that is available here. Very funny, very thought-worthy, very good.
Jonathan Kohl talked about Agile having lost a lot of its original value; it is re-branded old stuff, and has become a business. Process focus can distract from skill development; the point is: focus your work on creating value.
I asked Jonathan afterwards about Session Tester (where not much has happened lately), and he said that the programmers are too busy, but things will happen pretty soon.

Wednesday’s first keynote was Naomi Karten about change; change that represents the loss of control, change that we often respond to in an emotional and visceral way.
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
Regularly communicate the status of the change, also when you don’t have any news, or when you’re not allowed to tell the news (say that you can’t say anything!)
Listening and empathy are the most important change management tools.
The biggest mistake is to forget the chaos; and in chaos: don’t make any irreversible decisions.
This was my favorite keynote, and as I’m writing this I understand there was some really important information in the presentation.
Mike Ennis talked about software metrics, that help you manage the end game of a software project.
The end game term is taken from chess, where the outcome is almost decided, it is just a matter of technique, primarily about not making mistakes. Mike used the analogy that if you can anticipate what will happen, you know what to do next.
He defined example release criteria, which often aren’t met, but business decisions can overrule the criteria.
40% of the code is about positive requirements, “not a huge fan of exploratory testing, do it if you have time, after the standard tests have been run”.
He used a Spider Chart (aka Radar Plot) to visualize The Big Six Metrics (Test Completion Rate, Test Success Rate, Total Open Defects, Defects Found this week, Code Turmoil, Code Coverage).
A question was raised that there is a risk of over-simplifying things, and the answer was: “Yes, but these are indicators only.”
Erik Boelen talked about risk-based test strategy; if you do it with different roles it is like Läkerol: it makes people talk.
He likes games; the we-versus-them game with developers is good. At his place, developers with many bugs buy drinks for the testers, and testers aren't allowed to say one word for a week; last week that word was testing…
A very interesting and nice thing about the presentation was that he explained their (very good, but for some, I assume, very provocative) test method in a natural and obvious way:
They take the entry paths from the Risks and perform Exploratory Testing. For High and Medium risk they document test cases as they explore, and for Low risks they just report the results.
“Eventually testing will rule the world.”
Shrini Kulkarni talked about dangerous metrics, and that software development must consider where measurements are actually suitable. (Shrini hates SMART, by the way, so I like him.)
A root cause is that metrics/measurements represent rich multi-dimensional data, there is inevitable information loss.
People might say “we can’t improve without metrics”, but you could use metrics as clues to solve and uncover deeper issues.
We can report with stories attached to the numbers, but still, we are losing information.
Susan Windsor had a double session on communication styles where time flew. In the audience, everyone said No to "Exploratory Testing adds no value"
Art of Storytelling involves: Random, Intuitive, Holistic, Subjective, Looks at wholes (two of my favorite adjectives!)
Research shows that interviewing is the most ineffective method when hiring.
She noted that a high proportion of testers also do creative things like music, poetry (which seems natural, it is good to have trained a lot at being creative.)
We looked at four different Personal Communication Styles (why is it always 4 different types of persons??): Strategist, Mediator, Presenter, Director.
Gitte Ottosen had the ending keynote of the day with a presentation about combining Agile and Maturity Models (“CMM = Consultant Money Maker”)
“Metrics, I know they are dangerous, but also necessary.”
Manual Testing involves using the story to do exploratory testing ("continuous learning as we implement the feature.")

On Thursday I was wise enough to skip 2 sessions in order to have a late breakfast and practice my presentation.
So the first presentation of the day was Zeger van Hese (he won the best paper award for the second time this year), who shared his experiences of introducing Agile, but only doing parts of the full-blown, capital-A stuff (resulting in a real-world, semi-Agile process.)
They used this strange mix of Waterfall and Agile that many, many companies have, and got a better and better situation as more members of the team sat in the same room.
But in the end they fell back to old behavior, there were many late changes, many Release Candidates, and a one month delay. But; excellent quality and stability.
3 Agile goals: better feedback, faster delivery, less waste.
They did a big Agile no-no by using manual testing, which seems like a wise deviation to me.
A quote attributed to Einstein, and several others: “In theory, there is no difference between theory and practice; In practice, there is.”
Next presentation was my favorite of the whole conference: Fiona Charles, Modeling Scenarios with a Framework Based on Data.
They built a conceptual framework at 2 levels: an overall model of the system (testing), and the tests to encapsulate in that model.
They did a structured analysis of all attributes for each framework element, and then used these attributes to build simple, and then more complex, scenarios. This is difficult to do for many testers, so careful review of this work is a way to make sure the results are good.
I think this is an example of the test design technique I thought about on Tuesday, a very advanced, structured way of designing tests that can’t be captured by the classic test design techniques (error-guessing is closest, but there’s a lot more to it.) I like to call this Grounded Test Design (more to come on this…)
"scenario testing is a nice thing to add to your repertoire", "combine two or more models", "don't ever fall in love with your model"; they found 478 bugs, and all except 20 were essential to fix for the customer.
What you need to do something like this: testers with domain experience, business input and scenario review (and maybe an industry book), a model, structured analysis.
After lunch, I had a second session in the Test Lab, so I could report some of the bugs Zeger and I found the day before. It was great to test on real stuff, but I didn't have the time that I would have liked in order to understand the product and its failures. There wasn't time (at least for me) to discuss the findings in depth with other testers, which is something I hope to be able to do next year (I'm hoping the Test Lab will continue.)
At the last presentation slot, I did my thing on “More and Better Test Ideas”. People were tired, but looked interested, so I’m happy with the presentation. I won’t recapitulate the session, but I did talk about the potato, but had to skip the new Find Five Faults analogy (unexpected time pressure, I’m still in doubt that I got the 38 minutes I was supposed to.)
The paper is available here, the presentation here, and it will be given as a EuroSTAR webinar on December 15th.
Good questions, and also examples of how similar approaches are used by others. A bit more than 10% of (almost 100?) attendants use test ideas/conditions.
The next day I got a mail stating that ideas from my presentation could be used at once; the best feedback to hope for.

The Test Lab organizers (James Lyndsay and Bart Knaack) seemed happy when presenting the results, and it’s good to know that the efforts might make open-source medical product OpenEMR a bit better (there is certainly room for improvements…)
At the final panel debate half of the audience voted that certification is important, Tobias Fors shared the insightful “as a developer I was scared about code review, but then I realized it really was about my low self-esteem.”
Regarding teaching testing in school, it was said that critical thinking should be taught early.
“How do we breach the barriers and invite the developers to our world?”
Dorothy Graham (who reviewed every presentation!) ended the conference and announced next year's programme chair, John Fodeh.

Overall it was a very nice conference, at the expo Robert from ps_testware was nice and let me win a chess game this year also.
Recurring themes were Agile/Exploratory Testing (why are they grouped together?) and now and then the importance of a Story was emphasized.
Unknown source: “The higher and more complex quality objectives you have, the more manual testing is needed.”
Attending a conference isn’t about learning truths from the experts, it’s more about getting input to be able to create your own ideas that apply to your job, and to meet people, hear stories, interact with people that share your passion: software testing.
See you next year!


Justin Hunter December 9th, 2009


Thank you for posting such a thorough summary of your experiences at the conference. It made for good reading and makes me want to see if I can go next year.

– Justin

Markus December 9th, 2009

Hi Rikard,
thanks for the write-up. I wished I could have made it to Stockholm after visiting the con last year.

I’ll keep an eye out for more summaries, during the next days 🙂


Torbjörn Ryber December 10th, 2009

OK, reading your report from EuroSTAR proves that there really is a lot of focus on Agile and Exploratory Testing at conferences. The fact that 50% of the delegates thought certification was important may be because a very large number of attendees are fairly new to the craft, and for many of them it is their first conference. It gives some basic facts to hold on to, regardless of whether the facts are good or not.

The answer to why there are always four personalities I believe lies in the following two facts (no joke!):
1) It is really hard to remember more than five things,
2) a square can easily be divided into four parts of equal size and shape.

The same danger comes with test modelling and measurements. What you measure or model is controlled to a large extent by how easy it is to display graphically. That is partly why ET is so efficient – it is not controlled by a simplified graphical model or detailed scripts.

Parimala Shankaraiah December 10th, 2009

Hi Rikard,

Thank you very much for the detailed updates on EuroSTAR 2009. For some reason, I am unable to open the presentation for viewing. I consistently get a 'No text converter installed' error even though I installed the converter.

Can you please email your presentation at

Parimala Shankaraiah

Rikard Edgren December 10th, 2009

Nice that the long post was worth reading.
Torbjörn; you should know that the things I went to might not be representative; for instance, I did not attend the session on ISO 29119.
“Easy to remember” is a common and dangerous way of deciding how to present. The only solution might be to always know that everything is simplified, there is more to learn!
Parimala; since the presentation I sent you worked better, I have replaced the old one on this site, including some spelling corrections.