Test Strategy Checklist Rikard Edgren
In Den Lilla Svarta om Teststrategi, I included a checklist for test strategy content. It can be used for inspiration about what might be included in a test strategy (regardless of whether it is in your head, in a document, or said in a conversation.)
I use it in two ways:
- To double-check that I have thought about the important things in my test strategy
- To draw out more information when I review/discuss others’ test strategies
It might be useful to you in other ways as well, and the English translation can be downloaded here.
Special thanks to Per Klingnäs, who told me to expand on this, and Henrik Emilsson, who continuously gives me important thoughts and feedback.
Lessons learned from growing test communities Martin Jansson
In 2011-2012 a friend, Steve Öberg, started a local community with a few test friends. We talked about testing, shared experiences and discussed/argued about various test topics. It was a great initiative, but I wanted something more and bigger. I had a discussion with Emily Bache, who ran a local meetup on programming. She inspired me to take the step into meetups, which seemed like a great way to organize meetings. I created Passion for Testing and started to invite friends interested in testing.
Creation of the meetup Passion for Testing
I had a few ideas on what I wanted to achieve and things I did not want to see.
I created a recruitment message about the meetup as a way to find people with a similar mindset. Here is roughly what I included in the meetup description of who should probably join, and what they probably have an interest in:
- a passion for testing
- a will to learn new things
- an interest to learn more about exploratory testing
- an interest to try new techniques and methods in testing, to see what might work in your context
- an interest to practise testing as well as theorize about it
- an interest in experimenting with collaborative testing
- an interest to explore how to improve cooperation between developers, testers and other roles in a team or project
- an interest to help the open source community by providing them with valuable quality-related information about their products or services
- an interest to help research in testing by saving artifacts produced by our testing and experiments
- an interest to network with other people who are interested in testing in Gothenburg
For instance, I did not want a meeting where one person did the talk and the rest listened and then walked home, without any discussion or conferring. At many conferences having a discussion after the talk has become popular, and I find it strange not to have one.
I also wanted the meetup to be inclusive, welcoming all who wanted to learn more about testing. I wanted people who were inquisitive and interested in testing to join. It was ok to just come and listen, knowing that there were some in the audience who would debate and question what was said.
I wanted us to challenge existing practices and ideas by experimenting with them, trying the theory in practice. I welcomed new ideas on how to do things. If someone had a half-finished course, talk or idea, our network was an excellent place to try it out.
I wanted participants to ignore who their current competitors were and instead share their experience and ideas for free: material, links or other artefacts that help each other in our daily work.
I wanted to help the local startups and open source communities by helping them with testing, knowledge about testing and perhaps a handful of bug reports. Helping startups, which quite often lacked testers, felt like a good thing. If that enabled them to reach a level of quality that made them succeed, then that was great.
I wanted the artefacts produced from these meetups to be public, so that they could be used by anyone and possibly in research. Still, this was hard to do in practice. The intent was there.
I did not want too large a group, perhaps 15-20 people attending at most. With more than that, you have a hard time discussing.
I welcomed any non-tester to be part of our meetups, to see their perceptions added to the discussions, but also perhaps to give them some new ideas on what to expect from testers.
I wanted each meetup to have different sponsors, and avoided sponsoring the events through my own companies so as not to make them biased. I wanted the sponsor to be part of what we did and what we talked about, not just use us as cheap exposure.
I wanted inexperienced speakers to speak up in front of a friendly, small audience, to practise their speaking skills. For me personally, this proved to be a great way to become better at speaking in front of an audience. As a separate but similar effort, I especially liked the initiative by Fiona Charles and Anne-Marie Charrett called Speak Easy to help speakers practise.
I welcomed topics that were difficult or that weren’t well polished; I wanted the speakers to go out on thin ice. The heated discussions were excellent. I have had several of these and learned a lot. Failing is a great experience.
I also wanted to get to know more testers locally, to see if they walked the walk and talked the talk. I wanted to see them solve puzzles and handle discussions on test strategy, in both difficult and new areas of testing. I wanted a group of passionate testers whom I would probably want to test with in future projects or companies; people I would someday recruit or help others recruit. This has proved true and extremely effective. Whether any of those I thought were great had a certificate or not was not important at all.
I wanted students in testing, junior testers and unemployed people to get free lectures, workshops and courses. I also wanted them to be able to show how good they were, to enable them to meet possible employers or recruiters. This has also proved successful.
Personally, I use this meetup to become better at testing, talking about testing and finding new areas in testing that need exploring. It is my own playground.
Evolving Passion for Testing
After organizing a lot of the meetups myself, I realised that it is not possible to do this as a one-person show. I wanted help from co-organizers to evolve what we were doing. Steve Öberg and Fredrik Almén helped me by organising Test Clinics, inspired by Michael Bolton’s one-day test clinic at the end of his RST courses. The Test Clinic was held at a local pub where we as a group helped each other resolve our daily problems/obstacles in testing.
In the coming months I am trying a light version of the LAWST-style conference by letting the local meetup have an evening together. We have one subject presented by one speaker, and we anticipate several hours of talk and discussion. We invite both experienced and inexperienced attendees. Everyone is expected to speak up, and will be able to, using the moderation format from LAWST. It will be interesting to see if we are able to make this a great, humble learning experience for all those participating.
I want more people to join, but also more people to help organize events in testing. If you are interested in helping out, feel free to contact me. All help is welcome!
This is my shared experience and ideas on setting up a meetup on testing. If you want to use my ideas on this, feel free to do so. If you wish to discuss the setup you can reach me on Skype.
If you wish to join the meetup, you can do this here:
Not on Twitter Rikard Edgren
I don’t have a Twitter account.
I read Twitter now and then, and it contains useful information, but I don’t have the time to do it properly. For me, doing it properly would mean often writing thought-worthy things within 140 characters.
I only have one of those, so better to publish it in a blog post:
What about doing manual regression testing to free up time to make valuable automation?
So, that was one Christmas tweet, and it did the opposite of decreasing my blogging frequency (which in my opinion is the general drawback of testers tweeting.)
Lots of test strategy Rikard Edgren
I have done a lot on test strategy in the last year. Some talks, a EuroSTAR tutorial, blog entries, a small book in Swedish, teaching at higher vocational studies, and of course test strategies in real projects.
The definition I use is more concrete than many others: I want a strategy for a project, not for a department or for the future. And I must say it works. It becomes clearer to people what we are up to, and discussions get better. The conversations about it might be most important, but I also like to write a test strategy document; it clears up my thinking and gives reviewers a document to go back to.
Yesterday in the EuroSTAR TestLab I created a test strategy together with other attendees, using James Bach’s Heuristic Test Strategy Model as a mental tool. The documented strategy can be downloaded here, and the content might be less interesting than the format. In this case, I used explicit chapters for Testing Missions, Information Sources and Quality Objectives because I felt those would be easiest for the product manager and developer to comment on.
I have also put my EuroSTAR Test Strategy slides on the Publications page.
Testing Examples Rikard Edgren
I believe we need a lot more examples of software testing. They will be key in transferring tacit knowledge (they will not be all that is required, but an important part.) They work best when done live, so you can discuss, but that doesn’t scale very well.
So I have created a few examples in video or text format:
Exploratory Testing Session – Spotify Offline
Scenario Testing – LibreOffice Compatibility
Test Analysis – Screen Pluck
I am very interested in knowing if they are useful to you, and how.
Which other public testing examples are your favorites?
Den Lilla Svarta om Teststrategi Rikard Edgren
I am quite proud to announce a new free e-book about test strategy.
It contains ideas Henrik Emilsson and I have discussed for years.
It is not a textbook, but it contains many examples and material that hopefully will inspire your own test strategies (the careful reader will recognize stuff from this blog and inspiration from James Bach’s Heuristic Test Strategy Model.)
Reader requirement: Understand Swedish.
Download Den Lilla Svarta om Teststrategi
On ISO 29119 Content Rikard Edgren
The first three parts of ISO 29119 were released in 2013. I was very skeptical, but also interested, so I grabbed an opportunity to teach the basics of the standard, which would cover the cost of buying it.
I read it properly, and although I am biased against the standard I made a benevolent start, and blogged about it a year ago: http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/
I have not used the standard for real; I think that would be irresponsible, and the reasons should be apparent from the following critique. But I have done exercises using the standard, had discussions about its content, and used most of what it includes at one time or another.
Here are some scattered thoughts on the content.
I don’t believe the content of the standard matches software testing in reality. It suffers from the same main problem as the ISTQB syllabus: it seems to view testing as a manufacturing discipline, without any focus on the skills and judgments involved in figuring out what is important, observing carefully in diverse ways, and reporting results appropriately. It puts focus on planning, monitoring and control, and not on what is being tested or how the provided information brings value. It gives the impression that testing follows a straight line, but the reality I have been in is much more complicated and messy.
Examples: Test strategy and test plan are so chopped up that it is difficult to do something good with them. Using the document templates will probably give the same tendency as following IEEE 829 documentation: you have a document with many sections that looks good to non-testers, but doesn’t say anything about the most important things (what are you trying to test, and how.)
For such an important area as “test basis” – the information sources you use – they only include specifications and “undocumented understanding”, where they could have mentioned things like capabilities, failure modes, models, data, surroundings, white box, product history, rumors, actual software, technologies, competitors, purpose, business objectives, product image, business knowledge, legal aspects, creative ideas, internal collections, you, project background, information objectives, project risks, test artifacts, debt, conversations, context analysis, many deliverables, tools, quality characteristics, product fears, usage scenarios, field information, users, public collections, standards, references, searching.
The standard includes many documentation requirements and rules that are reasonable in some situations, but will often be just a waste of time. Good, useful documentation is good and useful, but following the standard will lead to documentation for its own sake.
Examples: If you realize you want to change your test strategy or plan, you need to go back in the process chain and redo all steps, including approvals (I hope most testers often adjust to reality, and only communicate major changes in conversation.)
Test Design Specifications and Test Cases are not enough; they have also added a Test Procedure step, where you write down in advance the order in which you will run the test cases. I wonder which organizations really want to read and approve all of these… (They do allow exploratory testing, but beware that the charter should be documented and approved first.)
A purpose of the standard is that testing should become better. I can’t really say whether this is the case or not, but with all the paperwork there is a lot of opportunity cost: time that could have been spent on testing. On the other hand, this might be somewhat accounted for by approvals from stakeholders.
At the same time, I could imagine a more flexible standard that would have much better chances of encouraging better testing. A standard that could ask questions like “Have you really not changed your test strategy as the project evolved?” A standard that would encourage the skills and judgment involved in testing.
The biggest risk with the standard is that it will lead to less testing, because you don’t want to go through all steps required.
It is apparent that they really tried to bend Agile into the standard, but the standard’s sequentiality makes this very unrealistic.
But they do allow bug reports not being documented, which probably is covered by allowing partial compliance with ISO 29119 (this is unclear though; together with students I could not be certain what actually was needed in order to follow the standard with regard to incident reporting.)
The whole aura of the standard doesn’t fit the agile mindset.
There is a momentum right now against the standard, including a petition to stop it http://www.ipetitions.com/petition/stop29119 which I have signed.
I think you should make up your own mind and consider signing it; it might help if the standard starts being used.
Stuart Reid, ISO/IEC/IEEE 29119 The New International Software Testing Standards, http://www.bcs.org/upload/pdf/sreid-120913.pdf
Rikard Edgren, ISO 29119 – a benevolent start, http://thetesteye.com/blog/2013/11/iso-29119-a-benevolent-start/
ISO 29119 web site, http://www.softwaretestingstandard.org/
The Notepad and Visualize Heuristics Rikard Edgren
I was at Nordic Testing Days and had a great time meeting new and old friends. During my presentation about serendipity I showed two heuristics I wanted to share here as well. They both concern observing from different angles, to learn something new, and increase chances of serendipity.
The Notepad Heuristic
Years ago I worked with translations, and once we translated software that wasn’t fully internationalized. Some parts needed to be changed in binary files; meaning we should take great care that the string length was exactly the same.
Maybe this is why I now and then open many kinds of files in a text editor. At the start of image files you can see which format it actually is, you can search for password strings, and above all, you might learn something new that will be useful.
Works wonders for text files as well, and it’s as fast as you can expect from a quicktest:
The Notepad Heuristic – open any file in a text editor, to see and learn more.
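The format-sniffing part of the heuristic can be sketched in a few lines of Python. This is a minimal illustration, not a tool from the post; the signature table covers only a handful of common formats, so treat it as a starting point:

```python
# Sketch of the Notepad Heuristic: read the first bytes of any file
# to learn what format it actually is, regardless of its extension.
MAGIC = {
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"GIF8": "GIF image",
    b"PK\x03\x04": "ZIP archive (also docx/xlsx/jar)",
    b"%PDF": "PDF document",
}

def sniff(path):
    """Return a guess at the real file format from its leading bytes."""
    with open(path, "rb") as f:
        head = f.read(8)
    for signature, name in MAGIC.items():
        if head.startswith(signature):
            return name
    return "unknown (open it in a text editor and look around)"
```

A file named report.pdf whose bytes start with the PNG signature would be reported as a PNG image, which is exactly the kind of surprise the heuristic is after.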
The Visualize Heuristic
I worked several years for a company producing software for interactive data analysis. We used our software internally, e.g. by looking at the bug system or log files visually. This is not only fun, it can also show you things and suggest areas to learn even more about.
Not that long ago, we tested an application that calculated driving times and risk coverage for fires in Sweden. It was a huge database with numbers, so we had a look at the data visually, showing Sweden with colored dots. We could see that there were holes in the data for a couple of municipalities, which was something we knew we were looking for. But when filtering for bigger fire risks, we found something we didn’t know we were looking for: a squared pattern that could in no way be correct (the underlying data had to be rebuilt.)
This is a good serendipity example: you look for something, but you find something else that is valuable.
The Visualize Heuristic – look at data visually, to see patterns, trends and outliers.
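As a minimal sketch of the idea (not the tool from the story, and with made-up coordinates), even a coarse text grid can make a hole in geographic data jump out at you:

```python
# Sketch of the Visualize Heuristic: bucket (x, y) data points onto a
# coarse character grid, so that holes and odd patterns become visible.
def density_grid(points, width=10, height=5):
    """Render points with coordinates in [0, 1) x [0, 1) as a text grid."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x, y in points:
        grid[int(y * height)][int(x * width)] = "#"
    return "\n".join("".join(row) for row in grid)

# Uniform data except for one missing region, like the municipalities
# with missing data: the hole stands out at a glance.
points = [(x / 10 + 0.01, y / 5 + 0.01)
          for x in range(10) for y in range(5)
          if not (3 <= x <= 5 and 1 <= y <= 3)]
print(density_grid(points))
```

The printed grid is solid `#` except for a rectangle of dots where the data is missing; no number-by-number inspection needed.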
Charisma Testing Rikard Edgren
Why do you prefer a product even if it has equal functionality to a competitor?
What is it that makes one product stand out among the others?
Maybe you have had thoughts about what makes a product feel special?
We all know that this happens in some way, but how do you test for the quality characteristic we like to call Charisma?
Charisma. Does the product have “it”?
Uniqueness: the product is distinguishable and has something no one else has.
Satisfaction: how do you feel after using the product?
Professionalism: does the product have the appropriate flair of professionalism and feel fit for purpose?
Attractiveness: are all types of aspects of the product appealing to eyes and other senses?
Curiosity: will users get interested and try out what they can do with the product?
Entrancement: do users get hooked, have fun, in a flow, and fully engaged when using the product?
Hype: does the product use too much or too little of the latest and greatest technologies/ideas/trends?
Expectancy: the product exceeds expectations and meets the needs you didn’t know you had.
Attitude: do the product and its information have the right attitude and speak to you with the right language and style?
Directness: are (first) impressions impressive?
Story: are there compelling stories about the product’s inception, construction or usage?
It should be noted that charisma is different for every product; only some of the above characteristics will be relevant for you. And most of them will need more details to provide value in your situation.
For specific solutions for specific customers, it might not be worth spending time on charisma testing at all.
Whether a product has charisma or not is primarily a subjective judgment. This is most probably the reason why it hasn’t been a focus within software testing. But end users are subjective, so why shouldn’t testers be?
So forgive us if you disagree, but we think the iPhone has Charisma, primarily the sub-characteristics Uniqueness, Attractiveness, Satisfaction, Entrancement and Hype. It doesn’t have any unique feature, but when introduced on the market it had a unique combination of features. The way you slide your finger when unlocking the phone has so much attractiveness that it has been mimicked by competitors, perhaps without success since their touch-sensitivity isn’t equally good.
Satisfaction is common with an iPhone, until it breaks (but this article isn’t about Reliability…) The entrancement is seen in people who pick up the phone to do something with it when the amount of available time can be counted in seconds. The hype was created by the very same product. The other sub-characteristics are not irrelevant, but not as dominant, in our opinion.
So how can you test for these things when developing a product?
If you are aware of charisma while performing manual testing, you can watch out for violations against the characteristics. You have a lot of heuristics for this, here are some examples:
Applications shouldn’t distract me with information I’m not interested in (Entrancement)
After a demo, I should remember some of the features (Directness)
User Interface feels right for this kind of task and user (Professionalism)
I shall not be upset with a product after using it (Satisfaction)
I don’t react negatively to any language (Attitude)
There should be something to write home about (Hype)
You can also try to notice overall charisma violations, when the product is bland.
Unorthodox Test Methods
If Charisma is important in your project, you might need to do some unorthodox test activities.
deep interviews – ask users why they are hooked, what they love (also talk to those that didn’t start using the product!)
diverse focus groups – investigate reactions and non-reactions.
observations by proxies – manage bias by letting others observe and interpret the product.
uncontrolled users/environments – let people try the software in whatever way they want; analyze results.
trying many versions – look at and feel dozens of alternative designs.
competitor comparison – helps you find market-specific charisma drivers.
You can find inspiration by searching for “software desirability”, but for software testers there is not much written; a few remarks at http://thetesteye.com/blog/tag/charisma/ and a welcome addition in James Bach’s Heuristic Test Strategy Model.
Subjective bug reports
There are (as far as we know) no appropriate measurements or ways to specify charisma in detail. So when reporting areas with room for improvements, you will have to make a case, or just hope that other people agree. If several people think the same, you can claim inter-subjectivity, and have a better chance.
There won’t be any Charisma standards to adhere to, but subjective comments like “if we change this, it feels better” can make the product more pleasant to use, and thereby better.
In our experience, a conversation with the designer is the best way to communicate good and bad things, and by using our list of characteristics, it might be easier to communicate compelling reasons to fix problems.
Who should test for Charisma?
Many don’t think charisma is something testers should bother with. This is understandable if you see testing as a technical verification that should result in an objective Pass or Fail. But many see testing as an investigation for important quality-related information, so how come charisma isn’t used more often?
Maybe it needs to make the same journey as usability: 20 years ago it wasn’t a major concern for testers; today many testers provide a lot of value in this area.
It is not evident that testers are the primary interpreters of Charisma. But at the same time, manual testers might be the ones who have the best experience and knowledge of the product as a whole, which might be key for some Charisma aspects. For some sub-characteristics, you might need to use people without product knowledge, especially when first impressions or surprise factors are important.
At the very least, you want the daily testers, who are one of few that know diverse details, to be aware of your product’s unique charisma.
Some might say that it would be superficial to test for things like this; it is what the product can accomplish that provides true value. We agree with this in theory, but in practice, this is the way the world is right now…
Most important is to be aware of this product attribute, to know how important it is for the success of products.
We are confident that most of you agree that many of these characteristics are important aspects for the users’ perception of your product.
Then how come you don’t test charisma??
[co-authored with Henrik Emilsson]
Acting On Answers Rikard Edgren
Asking questions is rightly regarded as important for testers. But I seldom hear anything about what we should do with the answers. Not that I believe anyone would ask a question and then not listen to the answer, but I think we take for granted that we will understand the answers, and that we can use them straight away for our testing purposes. In my experience, this is often not the case. Of course it can happen that you ask “What about performance?” and get a “Sure, we have these performance objectives, didn’t you get them?”
But more often you have to do quite a bit of interpretation:
One stakeholder said “fast in, fast out”, which when tied together with other understanding gave three sub-strategies:
- User testing to see that typical users will find the information they are looking for.
- Heuristic usability evaluation with focus on operability (default focus, few clicks, fast reading.)
- Evaluate perceived performance when system has normal high load.
And sometimes guessing:
A developer said “In last release many users helped with testing, so I can’t think of anything specifically that could be tested.”
It seems like the team believes they have a perfect program that doesn’t need testing. And they assume the testing is good enough if you involve users. Their underlying test strategy probably focuses on platforms, new/common functionality and charisma (that they like the new version.) The product’s slogan includes “easy to use”, which can mean many things:
- easy for first timers
- fast for power users
- accessibility for functional disabilities
- good Help
So we could focus on testing towards usability for the shy or non-technical users. We could look at resource utilization, complex situations; and that’s a good start at least. We’ll figure out more when we get to know the software.
So how do we learn to interpret and act on answers? Most people already know how; it is part of being human, but I would assume that your skill in this increases with experience. So the best way of teaching it that I know is to tell stories, and to put learners in situations where they have to do it themselves.
As my grandmother says: “If you don’t ask you won’t get any answers.”
And the answer is only the beginning.