Serendipity Questions Rikard Edgren

This Tuesday I held a EuroSTAR webinar: Good Testers are Often Lucky – using serendipity in software testing (about how to increase the chances of finding valuable things we weren’t looking for)
Slide notes and recording are available.
I got many good questions, and wanted to answer a few of them here:

How can we advocate for serendipity when managers want to cut costs?

Well, the “small-scale serendipity” actually doesn’t cost anything. It just requires a tester to be ready for unexpected findings, and sometimes to spend 20 seconds looking in a second place. The cost appears when investigating important problems, but in that case, I would guess it is worth it (never seeing any problems, or doing no testing at all, would be the lowest cost…)
I also know that many testing efforts involve running the same types of tests over and over again. When you know these tests won’t find new information, maybe it is time to skip them sometimes and do something rather different?

Do you have issues finding the root of the problem considering you are doing many variations?

If it is a product I know well, I don’t have problems reproducing and isolating. With a rather new product it can be more difficult, but I would rather see these problems and communicate what I know than not see them at all!
Taking more detailed notes than usual, or using a tool like Problem Steps Recorder (psr on Windows), can help if you expect this to happen.

Is there any common field for automated testing and serendipity?

Yes!
It is easy to think that automation is a computeresque thing without a lot of manual involvement and tinkering with the product. But in my experience, you interact a lot with the product while learning and creating your tests. And the mistakes I make can uncover problems with the product’s error handling.
I know this combination of coding and exploratory testing happens a lot, but it is not elaborated much in the literature (though the recent automation paper by Bach/Bolton has good examples of this.)

Another example of automation and serendipity is combining human observation with the tests while they are running. A person can notice patterns or anomalies, or maybe see what the user’s perception is when the software is occupied with a lot of other things.
Computers are marvellous, but they suck at serendipity.

New, new perspectives (EuroSTAR 2015 Lightning Talk) Rikard Edgren

I believe one of the most important traits of testers is that we bring new perspectives, new ideas about what is relevant.
I probably believe this because of my experiences in the first development team I joined, so I will tell you about the future by telling an old story.

This was in Gothenburg, 15 years ago, and we developed a pretty cool product for interactive data analysis. Data visualization, data filtering and calculations, and we could even use the product on our own bug system. The team consisted of quite young men and women who had all gone to Chalmers, the technical university in Gothenburg.
They had taken the same lectures, they had done the same exercises.
They collaborated well, using the modelling tools and the thinking patterns they had learnt in school.
They weren’t exactly the same of course, they had different haircuts, personalities, specialities, but all in all, they had roughly the same ideas about how to design and develop products.

I was the first tester in their team, and I had not gone to Chalmers. At university I read philosophy, musicology, history of ideas, practical Swedish; and rather stumbled into testing because I wanted to be a programmer. I did not think at all like the rest of the team, and that was the good part!
I saw perspectives they didn’t see, my set of mental models contained other elements than theirs.
So when they agreed a solution was perfect, I asked “but what if this box isn’t there?” or “can we really know that the data is this clean?” or “what if the user tries this?” or “isn’t this too different from this other part of the product I looked at yesterday?” or “how on earth should I test this?” or “how useful is this really?”
They were a great team, and they used my perspectives to make the product better.
I felt valuable, and maybe that’s when I started loving testing (well, maybe earlier, I have always enjoyed finding big bugs, and I will always love it. But that’s more a kind of arousal, I am talking about a deeper love, when you feel that you provide value others can’t.)

So when we get to 2030, a lot of things will be the same, and a lot of things will be different. There will definitely be a need for people carefully examining software, bringing new perspectives and new questions. A richer set of mental models is needed, regardless of whether we are called testers or something else.
But it will be new, new perspectives, and you should look out for these, and use them.
You should learn stuff, you should test software appropriately, you should embrace new situations and perspectives, and you will be ready in 2030.
I hope I will too.

Test Strategy Checklist Rikard Edgren

In Den Lilla Svarta om Teststrategi, I included a checklist for test strategy content. It can be used to get inspiration about what might be included in a test strategy (regardless of whether it is in your head, in a document, or said in a conversation.)

I use it in two ways:

  • To double-check that I have thought about the important things in my test strategy
  • To draw out more information when I review/discuss others’ test strategies

It might be useful to you in other ways as well, and the English translation can be downloaded here.

Special thanks to Per Klingnäs, who told me to expand on this, and Henrik Emilsson, who continuously gives me important thoughts and feedback.

Lessons learned from growing test communities Martin Jansson

Background

In 2011-2012 a friend, Steve Öberg, started a local community with a few test friends. We talked about testing, sharing experiences and discussing/arguing about various test topics. It was a great initiative, but I wanted something more, and bigger. I had a discussion with Emily Bache, who runs a local meetup on programming. She inspired me to take the step into meetups; it seemed like a great way to organize meetings. I created Passion for Testing and started to invite friends interested in testing.

Creation of the meetup Passion for Testing

I had a few ideas on what I wanted to achieve and things I did not want to see.

I created a recruitment message about the meetup as a way to find people with a similar mindset. Here is roughly how the meetup description characterized the people who should probably join, and what they are probably interested in:

  • a passion for testing
  • a will to learn new things
  • an interest in learning more about exploratory testing
  • an interest in trying new techniques and methods in testing, to see what might work in your context
  • an interest in practising testing as well as theorizing about it
  • an interest in experimenting with collaborative testing
  • an interest in exploring how to improve cooperation between developers, testers and other roles in a team or project
  • an interest in helping the open source community by providing them with valuable quality-related information about their products or services
  • an interest in helping research in testing by saving artifacts produced by our testing and experiments
  • an interest in networking with other people who are interested in testing in Gothenburg

For instance, I did not want a meeting where one person did the talk and the rest just listened and then walked home, without any discussion or conferring. At many conferences the idea of having a discussion after the talk has become popular, and I find it strange not to have one.

I also wanted the meetup to be inclusive, welcoming all who wanted to learn more about testing. I wanted people who were inquisitive and interested in testing to join. It was ok to just come and listen, knowing that there were some in the audience who would debate and question what was said.

I wanted us to challenge existing practices and ideas by experimenting with them, trying the theory in practice. I welcomed new ideas on how to do things. If someone had a half-finished course, talk or idea, our network was an excellent place to try it out.

I wanted the participants to ignore who their current competitors were and instead share their experience and ideas for free, sharing material, links or other artefacts to help each other in our daily work.

I wanted to help the local startups and open source communities with testing, knowledge about testing and perhaps a handful of bug reports. Helping startups, who quite often lacked testers, felt like a good thing. If that enabled them to reach a level of quality that made them succeed, then that was great.

I wanted the artefacts produced at these meetups to be public, so that they could be used by anyone and possibly in research. Still, this was hard to do in practice. The intent was there.

I did not want too large a group, perhaps a maximum of 15-20 people attending. With more than that, you have a hard time discussing.

I welcomed any non-tester to be part of our meetups, to see their perspectives added to the discussions, but also perhaps to give them some new ideas on what to expect from testers.

I wanted each meetup to have different sponsors, and avoided sponsoring the events with my own companies so as not to make them biased. I wanted the sponsor to be part of what we did and what we talked about, not just use it as a way to get cheap exposure.

I wanted inexperienced speakers to speak up in front of a friendly, small audience, to practise their speaking skills. For me personally, this proved to be a great way to become better at speaking in front of an audience. As a separate but similar effort, I especially liked the initiative by Fiona Charles and Anne-Marie Charrett called Speak Easy, which helps speakers practise.

I welcomed topics that were difficult or that weren’t well polished; I wanted the speakers to go out on thin ice. The heated discussions were excellent. I have had several of these and learned a lot. Failing is a great experience.

I also wanted to get to know more testers locally, to see if they walked the walk and talked the talk. I wanted to see them solve puzzles and handle discussions on test strategy, in both difficult and new areas of testing. I wanted a group of passionate testers whom I would probably want to test with in future projects or companies, people I would someday recruit or help others recruit. This has proved to be true and extremely effective. Whether any of those I thought were great had a certificate or not was not important at all.

I wanted students in testing, junior testers or unemployed people to get free lectures, workshops and courses. I also wanted them to be able to show how good they were, to enable them to meet possible employers or recruiters. This has also proved successful.

Personally, I use this meetup to become better at testing, talking about testing and finding new areas in testing that need exploring. It is my own playground.

Evolving Passion for Testing

After organizing a lot of the meetups myself, I realised that it is not possible to do this as a one-person show. I wanted help from co-organizers to evolve what we were doing. Steve Öberg and Fredrik Almén helped me by organising Test Clinics, inspired by Michael Bolton’s one-day test clinic at the end of his RST courses. The Test Clinic was held at a local pub where we as a group helped each other resolve our daily problems and obstacles in testing.

In the coming months I am trying a light version of a LAWST-style conference by letting the local meetup have an evening together. We have one subject presented by one speaker. We anticipate several hours of talk and discussion. We invite both experienced and inexperienced attendees. Everyone is expected to speak up and will be able to, using the moderated discussion format from LAWST. It will be interesting to see if we are able to make this a great, humble learning experience for all those participating.

I want more people to join, but also more people to help organize events in testing. If you are interested in helping out, feel free to contact me. All help is welcome!

This is my shared experience and ideas on setting up a meetup on testing. If you want to use my ideas, feel free to do so. If you wish to discuss the setup you can reach me on Skype.

If you wish to join the meetup, you can do this here:
http://www.meetup.com/Passion-for-testing-Goteborg/

Not on Twitter Rikard Edgren

I don’t have a Twitter account.

I read Twitter now and then, and it contains useful information, but I don’t have the time to do it properly. For me, doing it properly would mean often writing thoughtworthy things within 140 characters.

I only have one of those, so I’d better publish it in a blog post:

What about doing manual regression testing to free up time to make valuable automation?

So, that was one Christmas tweet, and it did the opposite of decreasing my blogging frequency (which, in my opinion, is the general drawback of tweeting about testing.)

Lots of test strategy Rikard Edgren

I have been doing a lot of work on test strategy over the last year: some talks, a EuroSTAR tutorial, blog entries, a small book in Swedish, teaching at higher vocational studies, and of course test strategies in real projects.

The definition I use is more concrete than many others. I want a strategy for a project, not for a department or the future. And I must say it works. It becomes clearer to people what we are up to, and discussions get better. The conversations about it might be most important, but I also like to write a test strategy document; it clears up my thinking and gives other reviewers a document to go back to.

Yesterday in the EuroSTAR TestLab I created a test strategy together with other attendees, using James Bach’s Heuristic Test Strategy Model as a mental tool. The documented strategy can be downloaded here, and the content might be less interesting than the format. In this case, I used explicit chapters for Testing Missions, Information Sources and Quality Objectives because I felt those would be easiest for the product manager and developer to comment on.

I have also put my EuroSTAR Test Strategy slides on the Publications page.

Happy strategizing!

Testing Examples Rikard Edgren

I believe we need a lot more examples of software testing. They will be key in transferring tacit knowledge (they will not be all that is required, but an important part.) They work best when done live, so you can discuss, but that doesn’t scale very well.

So I have created a few examples in video or text format:

Exploratory Testing Session – Spotify Offline

Bug Finding – Spotify Space, Image Galumphing

Scenario Testing – LibreOffice Compatibility

Test Analysis – Screen Pluck

I am very interested in knowing if they are useful to you, and how.

Which other public testing examples are your favorites?

Den Lilla Svarta om Teststrategi Rikard Edgren

I am quite proud to announce a new free e-book about test strategy.

It contains ideas Henrik Emilsson and I have discussed for years.

It is not a textbook, but it contains many examples and material that hopefully will inspire your own test strategies (the careful reader will recognize stuff from this blog and inspiration from James Bach’s Heuristic Test Strategy Model.)

Reader requirement: Understand Swedish.

Download Den Lilla Svarta om Teststrategi


On ISO 29119 Content Rikard Edgren

Background

The first three parts of ISO 29119 were released in 2013. I was very skeptical, but also interested, so I grabbed an opportunity to teach the basics of the standard, which would cover the cost of buying it.

I read it properly, and although I am biased against the standard I made a benevolent start, and blogged about it a year ago: http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

I have not used the standard for real; I think that would be irresponsible, and the reasons should be apparent from the following critique. But I have done exercises using the standard, had discussions about the content, and used most of what is included at one time or another.

Here are some scattered thoughts on the content.


World view

I don’t believe the content of the standard matches software testing in reality. It suffers from the same main problem as the ISTQB syllabus: it seems to view testing as a manufacturing discipline, without any focus on the skills and judgments involved in figuring out what is important, observing carefully in diverse ways, and reporting results appropriately. It puts focus on planning, monitoring and control, and not on what is being tested and how the provided information brings value. It gives the impression that testing follows a straight line, but the reality I have been in is much more complicated and messy.

Examples: The test strategy and test plan are so chopped up that it is difficult to do something good with them. Using the document templates will probably give the same tendency as following IEEE 829 documentation: you get a document with many sections that looks good to non-testers, but doesn’t say anything about the most important things (what you are trying to test, and how.)

For such an important area as “test basis” – the information sources you use – they only include specifications and “undocumented understanding”, where they could have mentioned things like capabilities, failure modes, models, data, surroundings, white box, product history, rumors, actual software, technologies, competitors, purpose, business objectives, product image, business knowledge, legal aspects, creative ideas, internal collections, you, project background, information objectives, project risks, test artifacts, debt, conversations, context analysis, many deliverables, tools, quality characteristics, product fears, usage scenarios, field information, users, public collections, standards, references, searching.


Waste

The standard includes many documentation requirements and rules that are reasonable in some situations, but often will just be a waste of time. Good, useful documentation is good and useful, but following the standard will lead to documentation for its own sake.

Examples: If you realize you want to change your test strategy or plan, you need to go back in the process chain and redo all steps, including approvals (I hope most testers adjust often to reality, and only communicate major changes in conversation.)

Test Design Specifications and Test Cases are not enough; they have also added a Test Procedure step, where you write down in advance in which order you will run the test cases. I wonder which organizations really want to read and approve all of these… (They do allow exploratory testing, but beware that the charter should be documented and approved first.)


Good testing?

A purpose of the standard is that testing should get better. I can’t really say whether this is the case or not, but with all the paperwork there is a lot of opportunity cost: time that could have been spent on testing. On the other hand, this might be somewhat accounted for by approvals from stakeholders.

At the same time, I could imagine a more flexible standard that would have much better chances of encouraging better testing. A standard that could ask questions like “Have you really not changed your test strategy as the project evolved?” A standard that would encourage the skills and judgment involved in testing.

The biggest risk with the standard is that it will lead to less testing, because you don’t want to go through all the steps required.


Agile

It is apparent that they really tried to bend Agile into the standard. The sequentiality of the standard makes this very unrealistic.

But they do allow bug reports not being documented, which probably is covered by allowing partial compliance with ISO 29119 (this is unclear though; together with students I could not be certain what was actually needed in order to follow the standard with regard to incident reporting.)

The whole aura of the standard doesn’t fit the agile mindset.


Finale

There is momentum right now against the standard, including a petition to stop it, http://www.ipetitions.com/petition/stop29119, which I have signed.

I think you should make up your own mind and consider signing it; it might help if the standard starts being used.


References

Stuart Reid, ISO/IEC/IEEE 29119 The New International Software Testing Standards, http://www.bcs.org/upload/pdf/sreid-120913.pdf

Rikard Edgren, ISO 29119 – a benevolent start, http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/

ISO 29119 web site, http://www.softwaretestingstandard.org/

The Notepad and Visualize Heuristics Rikard Edgren

I was at Nordic Testing Days and had a great time meeting new and old friends. During my presentation about serendipity I showed two heuristics I wanted to share here as well. They both concern observing from different angles, to learn something new and increase the chances of serendipity.

The Notepad Heuristic

Years ago I worked with translations, and once we translated software that wasn’t fully internationalized. Some parts needed to be changed in binary files, meaning we had to take great care that the string lengths stayed exactly the same.

Maybe this is why I now and then open many kinds of files in a text editor. At the start of image files you can see which format they actually are, you can search for password strings, and above all, you might learn something new that will be useful.

.bmp has png content…

Works wonders for text files as well, and it’s as fast as you can expect from a quicktest:

The Notepad Heuristic – open any file in a text editor, to see and learn more.
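If you want to script a quick version of this check, here is a minimal sketch (my own illustration, not from the talk) that peeks at the first bytes of a file and compares them against a few well-known format signatures; the list is deliberately incomplete and the command-line usage is just an assumption:

```python
# Minimal sketch of the Notepad Heuristic in code form: look at the first bytes
# of a file to see what format it really is, regardless of its extension.
import sys

SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"BM": "BMP",
    b"\xff\xd8\xff": "JPEG",
    b"GIF87a": "GIF",
    b"GIF89a": "GIF",
    b"%PDF": "PDF",
    b"PK\x03\x04": "ZIP (also docx/xlsx/odt)",
}

def sniff(path):
    """Return a guess of the real file format based on its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(16)
    for magic, name in SIGNATURES.items():
        if head.startswith(magic):
            return name
    return "unknown - open it in a text editor and have a look"

if __name__ == "__main__":
    for path in sys.argv[1:]:
        print(path, "->", sniff(path))
```

Running it on the .bmp file above would report PNG, which is exactly the kind of mismatch that deserves a second look.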

Visualize Heuristic

I worked several years for a company producing software for interactive data analysis. We used our software internally, e.g. by looking at the bug system or log files visually. This is not only fun, it can also show you things and suggest areas to learn even more about.

Not that long ago, we tested an application that calculated driving times and risk coverage for fires in Sweden. It was a huge database with numbers, so we had a look at the data visually, which showed Sweden with colored dots. We could see that there were holes in the data for a couple of municipalities, which was something we knew we were looking for. But when filtering for bigger fire risks, we found something we didn’t know we were looking for: a squared pattern that could in no way be correct (the underlying data had to be rebuilt.)

fire risk data is skewed…

This is a good serendipity example: you look for something, but you find something else that is valuable.

Visualize Heuristic – look at data visually, to see patterns, trends and outliers.
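Even a throwaway script can apply this heuristic. The sketch below is my own hedged example, not the tool we actually used; the file name and the x, y and risk column names are hypothetical placeholders. It plots a table of numbers as colored dots and highlights the biggest risks, which is the kind of view where holes and grid-like patterns jump out:

```python
# A quick-and-dirty Visualize Heuristic: plot raw tabular data as colored dots
# to spot holes, clusters and suspiciously regular patterns.
import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("fire_risk.csv")  # hypothetical export; columns x, y, risk assumed

# All data points, colored by risk value.
plt.scatter(data["x"], data["y"], c=data["risk"], cmap="YlOrRd", s=4)
plt.colorbar(label="risk")

# Circle the top 10% risks; gaps or squared patterns among these deserve a closer look.
high = data[data["risk"] > data["risk"].quantile(0.9)]
plt.scatter(high["x"], high["y"], facecolors="none", edgecolors="black", s=25,
            label="top 10% risk")

plt.legend()
plt.title("Visualize Heuristic: where is the data missing or too regular?")
plt.show()
```

The exact tool matters less than the habit: get the numbers out of the table and in front of your eyes.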

