SWET1 fragments the test eye

The delegates of the first Swedish Workshop on Exploratory Testing were:
Torbjörn Ryber, Simon Morley, Christin Wiedemann, Petter Mattsson, Anders Claesson, Oscar Cosmo, Johan Hoberg, Rikard Edgren, Henrik Emilsson, Martin Jansson, Ann Flismark, Henrik Andersson, Michael Albrecht, Johan Jonasson, James Bach.
This write-up surely contains mistakes and important omissions, and it might be too heavy to read, but we bet you can jump in anywhere, read for a minute, and get something interesting out of it.

“15 people spending a weekend of their spare time to talk about Exploratory Testing!” (Andersson)
“This is the 1% of the 1%” (Bach)

Henrik Andersson – add Learning and Adaptation to SBTM debriefing

He has seen testers stop doing debriefings after some time, which leads to lower session note quality, or to the debriefings being skipped altogether.
If it is a time issue, several debriefs can be done at once, testers can debrief each other, or all debriefs can be held together (30 minutes) at the end of the day.
The PROOF (Past, Results, Obstacles, Outlook, Feelings) mnemonic doesn’t offer much support for coaching and learning, so Henrik advocates adding Learning and Adaptation (PROOF L.A.), which makes people discuss, learn, and use debriefs better (and stop skipping them).

During debriefings you come up with new test ideas (Edgren)
Master the skill of note-taking
“the debrief formally accepts the session report” (Bach)
“The PROOF is (meta-)data regarding the session; LA is personal data (gathered from session)” (Emilsson)
“LA is not written down as a part of the debrief; it is only discussed orally” (Andersson)

The discussion went to many places, including status reports and the need to train stakeholders if necessary.
“In my status reports, feelings are very important!” (Andersson)
“you almost need a handshake” (Morley)
Claesson shared a story from an East Asian company where the man-in-charge went to the most experienced troubleshooter and asked “Do you think the product is ready?”
“Troubleshooter as a career path in testing?” (Jansson)

debriefs can differ a lot depending on tester, area, type of work, experience etc.
a project manager should care about the team members’ long-term development
“testing is a learning experience in itself” (Claesson)
“don’t add things to models that already are good” (Mattsson)
“I love lists” (Wiedemann)
“do debriefs with a developer” (Flismark)
“break patterns to see what happens” (Andersson)
“are you satisfied with testers that don’t want/know how to learn?”

“it’s not a tool to fill the database” (Jansson)
As a single tester in a Scrum team, you can have testers from different teams debrief each other.
“personal debriefing” is for the experienced
for inexperienced testers you don’t want to miss the coaching opportunity of debriefings
if you take away debriefs you might get too much freedom?
This area can be further analyzed and discussed, especially Group Debriefs.

Petter Mattsson – From 10,000 test cases to SBTM

At UIQ, a producer of mobile phone software, they ran 10,000 manual test scripts and did not find many bugs. The bugs were instead found by customers…
Petter went to the Rapid Software Testing course and his conception of the world changed.
They decided to start using Exploratory Testing with Session-Based Test Management.
They got the opportunity to do a pilot that resulted in more, and more interesting, bugs.
A key to getting the go decision was a 2-hour workshop where all managers discussed questions like: What is Quality? What is Testing?
They also did an exercise that showed the difference between scripted and exploratory testing.
The workshop was attended by all; the invitation was sent by a manager.

Michael Bolton gave the RST course for all testers, who got inspired and motivated.
“RST with bonus day is great”
“we start with people that were hired two weeks ago, without testing experience”
They started with pair testing only, and then mixed in testing alone.
Mixed different personality types, testers/developers, experienced/inexperienced.
“in the beginning you need to push them, but not after a while”
They developed a web-based solution for session report management (based on a Perl tool)
They kept 500 test cases for verifying the basic functionality, and threw away the rest.
“we always need to do that mix” (of ET and ST)
They had really good results with the new approach, but a big phone company bought UIQ’s customer (who was also an owner), and the company was shut down.

Common transition problem: “ET is nice, but we have this problem, so we need to get back to the test cases.”
At another company, Petter has seen resistance to the transition from testers, and also from managers who don’t allow time to be spent on it.
“Just sit with them and test, let it grow organically”
“Why wait so many years for improving the way you work? Why wade in the mud when you can just step out?” (Jansson)
“They are not interested in testing; testers are used as a blame-shield” (Bach)
“do more of the robot dance” (Bach)
“If no one admits that there is a problem, no one owns the problem, and there is nothing to resolve” (Emilsson)
“ET helps you find important problems quickly”
“I’d like to have the testers off-site for a month, and just talk to them” (Mattsson)
“run the same test title, but without the steps”
heavy pain gives motivation
“They need to say: I am an alcoholic. My testing strategy really sucks” (Claesson)
“all organizations are perfectly designed to get what they get”
“small-scale change with one word: serendipity; scripted testing with deviations is a good start” (Edgren)
“I just removed the steps from the scripts, nobody noticed” (Cosmo)
Is RST the silver bullet? No, a money machine 😉
“I’ll talk to one, that talk to others, and it spreads all around” (Mattsson)
It takes time to grow an exploratory testing team.
“If you say it will be hard, it will be hard, it’s a self-fulfilling prophecy” (Claesson)
“Improvisational Exploratory Scripted Testing” (Bach)
“What’s good in these test cases live on in your heart. You shouldn’t tell people to throw away something that is useful.” (Bach)
The scripted tests become ceremonial testing.

“it is the tester’s responsibility to improve” (Many)
“we should have a class: how to evangelize testing” (Bach)
You evangelize by giving real examples, from your own organization.
“the important thing is to talk about good testing” (Morley)
“It is not necessary to create a revolution, sometimes an evolution is better” (Jansson, Emilsson)
“motivation is the most important factor, a motivated team can achieve whatever they want” (Emilsson)
if nothing helps, you should use hard facts – bug stories, from real customers
take your most painful bugs, and talk about them.
FDA recalls often say “under certain circumstances”; that’s when you need Exploratory Testing (Bach)
http://www.fda.gov/MedicalDevices/Safety/RecallsCorrectionsRemovals/ListofRecalls/default.htm

Christin Wiedemann – starting with SBTM

They are an autonomous test group of two people that works with external customers.
Customer A – many requirements, a couple of hundred test cases, 3 weeks to execute.
Customer B,C,D – no requirements.

Customer A found showstopping bugs after production, and they realized:
“what we do doesn’t work; we have to do better testing”
In March/April they stopped writing test cases.
Inspired by SBTM, they started writing test execution notes in Word, sometimes working in pairs.
“Now, we’re having fun when we are testing”, and they find the defects before production.
They are cooperating with developers and customers, and have even done ET workshops with the enemy (customers)
Customers are doing ET on the delivery, they have even taken the RST course!
“Management still doesn’t care what we do.”
We have a better understanding of the product and current high-risk areas.
Went from “trying to reach coverage” to “trying to find defects before production”
shared responsibility for quality

So what is left to do:
* more structured environment
* more documentation
* better time-boxing
* automation

“unfortunately, developers are too creative, and new features appear”
“I don’t need requirements. I love it.”
They use diagrams that describe the functionality (yEd)
they report bugs to the “ether”
“I have stopped being personally attached to bugs. Stopped arguing for bugs, instead telling developers ‘I saw something strange…’” (Ryber)
you can also say “this one is probably too difficult…”
Reality Steam Roller Method – let them go into suffering, but help them (Bach)
HICCUPPS – “I invented the name. I noticed people doing this.” (Bach)
Regarding more structured environment: “You are an explorer, explore a project and try to find out what might be important during a project’s lifecycle; and use this as a checklist for deciding on what to do and when.” (Emilsson)

granularity of session notes may vary; richer reports have more information, but are harder to read.
“Sometimes I write down the humidity in the room” (Wiedemann)
Why do you want structure? Rather, be more aware of the structure.
“be the author, not the victim”
“unserendipity – to get through the test case without finding bugs” (Edgren)
“focus on what you want to achieve” (Albrecht)
“most important traits are awareness and willingness to learn” (Claesson)
trigger yourself for new test ideas
Anders Claesson shared a story that started with a tester learning about a customer’s usage; it had ripple effects and evolved into a very good customer relationship.

Ann Flismark – SBTM and KPI??

Went to STARWEST and got inspired.
Started with SBTM in December 2009, had problems integrating it into the existing system, and are working on their own tool.
“what I prepared doesn’t make sense anymore”, so instead the focus was on requests for KPIs; maybe they can help us as testers as well?

“Just say no” (All)
“Do testers feel productive?” (Edgren)
“How good do you think the product is?”
“tell the manager you will make something up” (Emilsson)
“you need to learn how to do status reporting” (Bach)
The true problem: how to deal with manager requests you don’t think are meaningful.

James Bach – Experience report involving Thread-Based Test Management

On a project, it was impossible to stick to completing the sessions.
There were many, continuous interruptions, and tasks could not be completed for various reasons: “a bunch of pots boiling”.
The simple idea for this is: Thread-Based Test Management
“work backwards from the status report you want to be able to give”
“monitors status of test activities”
Why name it? So you can say “we switched from a session-based approach to thread-based.”
“This is such a simple idea. Too simple for a book, maybe a pamphlet.”
compare activity-based, artefact-based, and metrics-focused management

Is this the same as Kanban, without limiting work in progress?
No, key idea is “you acknowledge the fact that test activities rarely are finished” (Edgren, Emilsson)
In testing, you can’t always check off items on your to-do list (which, together with throughput, is the point of Kanban)
“the simple (and hard!) thing: think of things I’m working on” (Emilsson)
get out of the “Are you done yet?” trap
Waterfall and V-model might have started as jokes?
“project management tool focused on activities that rarely are finished”
Test Storytelling Tool (test management tools only handle test cases, not stories)
Are there any other professions where you want to transform activity results to status reports?
“don’t want to over-identify this until I have talked to you” (Bach)
which are areas of importance that need to be developed?

Cosmo uses a web forum for TBTM (thetesteye have used Word)
Mind Manager could be a tool; you can use icons to filter with. Tags or table columns work for other tools (see the sketch after these notes).
“Do you feel you are trapped by SBTM or that SBTM is your silver bullet?” (Jansson)
James said he could get biased by SBTM, but “there are no silver bullets” (Fred Brooks).
Common mistake: transform the map (model) to a list that should be Done (this doesn’t fit ongoing activities) (Emilsson)
There are many situations where it doesn’t matter if you can measure if you’re done.
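
To make the tags idea concrete, here is a minimal sketch of a thread list where tags filter ongoing activities into a status report. The thread names, tags, and dates are made-up examples, not tools or data from the workshop:

```python
from datetime import date

# Hypothetical thread list; names, tags and dates are made-up examples.
threads = [
    {"name": "installer exploration",  "tags": {"setup", "risk"}, "last": date(2010, 10, 4)},
    {"name": "performance monitoring", "tags": {"ongoing"},       "last": date(2010, 10, 6)},
    {"name": "API charter backlog",    "tags": {"blocked"},       "last": date(2010, 9, 28)},
]

def status_report(tag=None):
    """Print threads (optionally filtered by tag), most recently touched first."""
    selected = [t for t in threads if tag is None or tag in t["tags"]]
    for t in sorted(selected, key=lambda t: t["last"], reverse=True):
        print(f"{t['last']}  {t['name']}  [{', '.join(sorted(t['tags']))}]")

status_report()            # the full picture: everything that is boiling
status_report("blocked")   # work backwards from the report you want to give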

“Testing doesn’t deliver anything except information about the product. Information that continuously emerges. Threads describe activities that lead to information that is part of our report” (Edgren)
There are (at least) two stories; one about the testing, and one about the product.
Story-based test management – threads are weaving into a report that is the story about the product
Test framing – thread design
Testing transforms a naive story about the product to a sophisticated product story. (Bach)
naive rumors -> testing -> empirically grounded information
different threads can be executed with one test activity
“twist together into a chord”
go out of the abstract world, and visualize testing
“everybody is already doing TBTM”
developer to a tester struck by the process: “don’t worry, we’re not gonna do anything they say anyway”
“you can all say ‘James Bach says I am a world expert in thread-based test management.'” (Bach)

Everybody was very happy with the peer conference, and let’s end by quoting Andersson’s check-out: “I’m pretty damn sure we have a bright future.”

/Rikard, Henrik, Martin

Scripted Testing: Filling out templates Henrik Emilsson

I saw an interesting interview with Rob Sabourin today: http://www.youtube.com/watch?v=HZRXdaN7gkY (Thanks for the tip, Jon Bach!)
One thing he says in this video is: “… There are a lot of template junkies out there. [Testers are] filling out templates and not actually testing. That frustrates testers…”

Hey, isn’t this the same thing that happens in strictly scripted testing environments?
When testers follow detailed test scripts, they actually end up filling out templates, with the same frustration as when filling out document templates, where you also don’t have to bother using your brain.
The result is the same for scripted testing as for a filled-out template: little mind work has been involved, so the information is very narrow and/or shallow…

It is sometimes said that templates are good because they prevent someone from making mistakes and/or make sure that the format is the same all the time. But isn’t a template to a document what a script is to a test? A prescribed recipe for how to complete the task, i.e. it is about management control.

Isn’t it easy to be hit by inattentional blindness as you fill out the template, worrying only about not screwing up the format? As with scripted testing, the focus lies on the format rather than on the content. Both in scripted testing and in filling out templates, you worry about making sure everything is filled out as expected, not about the actual testing or the actual writing of informative text.

And, as with scripted tests, if we could fill out the templates automatically, we wouldn’t need to do them manually.

The Crashing Paper Airplane Heuristic Henrik Emilsson

I thought of this the other day when rethinking a situation described in the experience report from Petter Mattsson at the “Swedish Workshop on Exploratory Testing” (SWET1).

Let’s say that you have 100 Paper Airplane builders in your team. They all follow scripted instructions on how to fold a paper in order to create a specific paper airplane; and the goal is to build airplanes that can fly.
But you notice a problem. None of the airplanes can fly! No matter how much strength you use to throw them, they instantly fall to the ground and crash.

What you could do in this situation is to keep the scripted instructions and hope that Mother Nature changes so that the current airplanes can fly.
Or, you could throw away the scripted instructions and let people use their minds, creativity, and exploratory skills to find out how a paper can be folded to get a functional paper airplane that can fly.

If what you are currently doing isn’t working, i.e. not fulfilling your goal (in this case getting an airplane to fly), a change cannot be more unsuccessful. It is instead an opportunity to get at least one airplane up and flying, even if it doesn’t look like one of the intended airplanes. Plunge in; each small bit of progress is something you will learn from.
I think it is enough if just one paper airplane builder is successful while the other 99 sit down and have a cup of coffee. That is at least more successful than 100 paper airplane builders not creating any value at all.

The Crashing Paper Airplane Heuristic: If you follow scripted instructions on how to build paper airplanes and you don’t create anything that can fly, throwing away those instructions and starting to build paper airplanes from scratch cannot be more unsuccessful.

Rapid Test Preparation Martin Jansson

At many bigger companies, where several teams work within the same test scope, there is often a need to communicate how you intend to test something and what areas you intend to cover. It is common to have a test plan and perhaps a test strategy, but they usually do not go into detail on each feature or area you intend to test. Instead you create a test specification containing test cases with repro steps and lots of information, so that anyone can execute the test cases. The test specification needs to be reviewed before you commence testing. Many times you assign one person to be responsible for creating such a spec, and it is not uncommon to have someone writing on this test specification for several weeks. If you have never seen this situation, good for you!

The rest of you who also see this as a fairly common situation… here is a different way of working with test specifications (or rather not working with them).

Let us assume that you are a member of a test team on one test site out of many in a project. The head project manager asks you, as a test lead, for an estimate of how much time you need to system test feature X. At the same time, the head project manager asks another test lead about testing feature X at a lower test level, such as integration or function. Instead of giving an estimate directly, the test leads ask to prepare and plan a bit and then come back shortly.

Instead of doing it the regular way, with both test leads giving an estimate to the project manager without communicating or writing down their thoughts, one of the test leads starts writing a test document, namely a Test Proposal (TP). All ideas, risks, work packages, unresolved issues, to-dos, estimations, contact persons, etc. are listed in this TP. The test lead can show the rest of the team the thoughts that have guided him so far, and the team can assist with the rough estimation. Shortly thereafter they can return to the project manager with an estimate and explain roughly the plan outlined in the TP. It is not uncommon that some of these features cannot be added to the overall project, so the test lead does not spend any more time at this stage. If possible, the two test leads have cooperated and written in the same TP, but worked on different perspectives and areas; a lot of the information would have been the same for both of them.

A few weeks later, feature X appears on the radar to be included in the release after all. The test leads, or those responsible for the feature, can now continue preparing. A group of two or three testers, depending on the size of the feature, forms to prepare the feature further. If the feature is available to be looked at in the release, the group starts with a minor test in order to understand how it works and to get a feeling for what to expect. Then they sit down and brainstorm further on the feature. More content is added to the TP, and eventually it is ready to be shown to the business analysts and developers responsible for the feature, as well as to the other test teams working on the same feature. The TP is quite short because of the one-liner test ideas, and thus effective to communicate and review. After a few rounds between the different parties, the list of test ideas, open issues, work packages, estimations, etc. begins to feel ready (as far as ready goes in testing). The TP is then approved, and the test teams are considered prepared to start testing. The total time spent is probably a few days at most.

Compared with how test specifications and estimations are usually done, this method does not lose as much information, since you collect your thoughts in the TP. It is otherwise common that the person doing the estimation gives no information on what was included.

Reviewing and communicating a test specification containing all your test scripts with all your repro steps is incredibly hard. It is not effective to give feedback on huge documentation; no one usually has that much time to spare. Instead, the TP contains only the guiding thoughts and is easy to review. A test idea is usually a one-liner with the essence of the intended test, which is perfect for communication.

The key to this is collaboration and cooperation within the team as well as with other stakeholders. You focus on learning a new feature and the domain around it, while at the same time honing your test design. The power of the TP lies in its focus on the essentials of the planned testing: you do not go into too much detail, but keep rough sketches, which makes it easy to communicate. This method works well in combination with SBTM and other styles of exploratory testing.

Exploratory Testing Best Practices* Rikard Edgren

When testing software, the very best inspiration and reality check comes from the software itself. It helps you test beyond requirements, and investigate what the software really is capable, and incapable, of.
These are my best practices for exploratory testing.

1. understand what’s important – we can’t test everything, and we can’t find all bugs. But it can be feasible to find all important bugs. This is very difficult, and involves not a great deal of luck, but a good understanding of multiple and diverse information sources. An Exploratory Testing trap is to spend a lot of time finding bugs that are interesting to the tester, but not to the project.
A key to important matters is to be subjective. I guarantee you, if your software is made for people, they will be subjective when they are using the product. Use your humanity and your feelings; they are worth more than the explicit requirements. Truth is subjectivity (Kierkegaard).

2. be open to serendipity – the most important things are often found when looking for something else. If we knew exactly where all bugs were located, automation or detailed test scripts would suffice (or maybe we wouldn’t have to bother with tests at all?) So make sure that you look at the whole screen, and many other places as well. Odd behavior might be irrelevant, or a clue to very important information. Testing always involves sampling, and serendipity can be your friend and rescue.

3. work hard, but don’t – manual, exploratory testing can give enormous coverage in a short time when the pieces are in the right place. At the same time, you should pause your fast runs and look from different perspectives, both to see more things and to avoid getting too tired. Sometimes you need to investigate details for a long time without progress, sometimes you need to do something completely different to get fresh eyes. Stay focused, lose focus. Exploratory Testing Dynamics is a profound document with lots on this.
Structure and discipline are key parts of any successful testing effort, but so is the ability to combine disparate information sources (often called creativity).

4. work closely with developers – in physical location, and in time (tight feedback loop: code-test-fix-verify). Talk to them instead of reporting a vague bug; earn their trust, and share information.
But testers should think differently; if everybody looked at the software in the same way, we wouldn’t get the broad coverage it (hopefully) will be exposed to after release.
It is good to work with other roles as well, but developer collaboration is the most important.

5. one-liner test ideas – if you lightly document the test ideas you intend to execute, you can give ideas to developers even before they have written the code (with the inevitable bugs.) You can also get feedback on importance and improvements from fellow testers and other interested stakeholders. With appropriate granularity you can go below 100 test ideas, and make it possible for others to understand the testing that will be performed, and comment on what is missing.

The list could be made longer (broad learning, work in other roles, CRUSCPIC STMPLA, see the big picture, SBTM, creativity…), but I think it’s better to focus on a few important ones. Who knows, I might be totally wrong, and I don’t want to spend too much of your precious time.

* There is no such thing as something absolutely, objectively best. This goes for all situations.
An Olympic champion was the best at that moment according to the rules that were used in that competition.
Newton’s physics were the best for a while, and are still good for approximations in real life.
“The Beatles are the best band in history” is a subjective statement, shared by many.
So in this article, feel free to replace “Best Practice” with “good examples for me that I hope are inspiring”.
Ask Wittgenstein, language is a game. You decide what to do with the words.

Misunderstood Soap Opera Testing Rikard Edgren

Some years ago I read about Soap Opera testing too hastily, and started using it at work, convinced that it meant the following:

A soap opera test involves normal operations, but a large amount of them, for a long time. As in the TV shows, they go on, and on, and on, and on, and on.

We use it regularly by having a document that is edited, saved, and sent around the team of testers. It is unscripted, on-the-fly, and contains operations that, on their own, should be stable.
It is a collaborative effort that often finds issues we might, or might not, have found otherwise.

Now I know that Buwalda’s “real” Soap Opera testing rather focuses on condensed histories with exaggerations and the strange things that happen in soap operas, e.g. ‘suddenly the unknown son appeared, but now with the name Lucy’.
Anyway, either method (or the combination) is useful in software testing, to capture issues that happen in scenarios, but not in isolation.

Note: If you are only looking for functionality/stability issues, you might benefit from a test monkey: automated random execution in long sequences.
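
A minimal sketch of such a monkey, assuming a made-up App class as a stand-in for whatever you would really drive (a UI, an API):

```python
import random

class App:
    """Made-up stand-in for the application under test."""
    def __init__(self):
        self.doc = []
    def type_text(self):
        self.doc.append("x" * random.randint(1, 80))
    def delete_all(self):
        self.doc.clear()
    def save(self):
        # A simple oracle: the monkey only detects crashes and broken invariants.
        assert len(self.doc) < 10_000, "document grew unexpectedly large"

def monkey(steps=100_000, seed=42):
    random.seed(seed)  # a fixed seed makes a crashing sequence reproducible
    app = App()
    actions = [app.type_text, app.delete_all, app.save]
    for step in range(steps):
        action = random.choice(actions)
        try:
            action()
        except Exception:
            print(f"failed at step {step}: {action.__name__}")  # for reproduction
            raise
    print(f"survived {steps} random steps")

monkey()
```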

Exploratory Testing – the learning part Henrik Emilsson

Let me begin with a quote from T.S. Eliot:

We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time.

One of the most important things about Exploratory Testing is that it allows you to learn something during the exploration. It urges you to think for yourself; draw conclusions; misunderstand; learn from mistakes; see the full picture; notice details.
Learning about the product and the context is a key element in discovering problems that might bug users and stakeholders.

I have a problem with remembering and learning stuff if I (or someone else) have written something down and I use the text as the answer each time, e.g. a set of instructions for how to perform a special action, a login password, etc. But if I have done the thing myself, I have no difficulty performing the special action. And perhaps most importantly, I understand every needed step and why it is needed.
When I am asked to review something, it is important that I explore the area in order to learn something from it. To me, this is (almost) the only way to see what is missing. I can do a good job finding problems in the things that are already there, but in order to see what is missing I have to learn about the area and what it really means.

It is the same thing with Scripted Testing vs. Exploratory Testing. If tests are scripted, there is no need to reflect upon what you do. The script is there so you don’t have to think. An exploration, on the other hand, requires thought work. It requires that your brain is in an alert state and that you learn something from what you perceive. ***

I also think that the learning part of exploratory testing is what matters when comparing two testers with similar resumes. Ask the candidates if they have had an exploratory approach in the reference projects, and ask them what they have learned.
I bet that you will spot those who have learned something and thereby improved their testing skills, including how to think about the context and its influence. I also believe that they have improved their problem-solving skills and boosted their creativity along the way.

Do you have any learning experiences that you can share with us?
Have you experienced when you didn’t learn something? What did that do to you?

*** Note: Of course it depends on where you are on the Scripted vs. Exploratory continuum.

The Complete List of Testing Inspiration Rikard Edgren

It is often said, rightly, that you need to consider a lot more than the explicit requirements in order to test a product well.
Often a few examples are included, e.g. something about the customer needs, or the necessity of reading the code, or knowing the technology the software operates in, or an understanding of quality characteristics in the specific context.
But I have not seen a long, yet brief, list of things that might be important as inspiration, so I thought we together could come up with a “checklist when building your own checklist”.
This is what I have so far:

MODELS

Requirements: explicit, combinatorial, implicit, incorrect, changing, vague, different, impossible, aiding
Specifications: conceptual, technical, functional, design, test specifications
Code: old, new, shaky, read, reviewed?
Help: online, pdf, web sites
Actual software: prototypes, in progress, released, competitors
Other people’s models

HISTORY

Previous versions: what did the old version do?
Bugs/Error catalogs: what bugs occurred for similar functionality?
Test Ideas/Cases/Strategies/Results: what can you learn from previous test efforts?
Claims: which features are used in marketing material?
Reviews: what has been said by others about your product?

USAGE

Support department: what experiences are channeled to support?
User scenarios: how many ways of actual usage of the software do you know of?
Customer stories: what problems are the customers trying to solve?
Dog food: can you use the product internally “for real”?
Competitors: software products, in-house tools, analog systems
Training material: learn about how customers learn your software
Business: needs, logic, information, knowledge, standards, laws

TECHNOLOGY

Environment: Hardware, OS, Applications, 3rd Party Components
Tools: development tools, (static) test tools, monitors, editors, brain
Systems: what does the big system look like?

PROJECT

Functionality: recently “improved”, core, problematic, high interoperability, complex, popular
Risks: important, omitted, forgotten, changing, unknown
Project plan: when, what, how?
Process: Agile/Waterfall-mix
Infrastructure: configuration management, test environment
Test execution context: what, when, where, why, who, how?
Quality objectives: what is always important?
Deliverables: executables, interfaces, all sorts of documentation, Release Notes, readmes, metadata

PEOPLE

Team: developers, interaction designers, leaders, managers, technical writers, testers, experts
Stakeholders: have you talked to (product) managers lately?
YOU: your knowledge, experience and subjectivity
Users: needs, knowledge, feelings, impairments, personas, have you seen the real users in action?

SKILLS

Analytical thinking: model details and the big picture, follow paths
Critical thinking: see problems, risks and possibilities
Creativity: broaden test spectra, generate new ideas, lateral thinking
Factoring: break down to testable elements
Investigation: explore and learn
The Test Eye: wants to see errors, sees many types, looks at many places, looks often

SOFTWARE TESTING

Quality Characteristics: Capability, Reliability, Usability, Charisma, Security, Performance, IT-bility, Compatibility, Supportability, Testability, Maintainability, Portability, Localizability, Auditability
Generic test ideas: quicktests, tours, mnemonics, heuristics
Tricks: error-prone machine, Basic Configuration Matrix, attacks, Cheat Sheets
Information: books, courses, blogs, forums, web sites, conferences, conversations
Testing theory: with many different techniques/approaches you have a higher chance of finding important information

Suggestions for improvements are very welcome!!

Everything is not always relevant, but I recommend spending a few seconds on each sub-category. Think about what is important, and what would be worthwhile to investigate more.
By using multiple information sources, including the actual product and customer needs, your tests will get a better connection with reality, they will be grounded.

Status of Software Testing Professionals Rikard Edgren

Many testers feel underrated; they don’t think they get the respect they deserve.
There are more reasons for this than suggested solutions.
One proposed solution is to define the profession more thoroughly, to get standards and certifications that can guarantee more than the bare minimum of test quality.
I am confident this isn’t the “good” path.

The main reason is that the role is so dynamic; it depends so much on the environment that standard processes or certifications often won’t be good practices. This is true for many professions, but some things are special for testing:

Expectations – there’s a huge difference between testing of a pacemaker, and testing of a personal blog. For some software the importance lies in reliability and security, and for others attractiveness is all that matters. Sometimes testing has traceability requirements, sometimes there just ain’t no time for planning.

Responsibility – even if the goals are clear, you have to figure out what is covered by other roles. If developers have good unit tests, you might not have to bother so much with regression testing.
There might, or might not, be usability, security or performance experts whose work you don’t want to overlap too much. Customer testing might cover requirement holes, and limiting contracts or “physical” access might set the scope. Regardless of your title you
might also deal with customer support, quality assurance or configuration management.

The solution for our status is small-scale: do a darn good job by doing the testing that the product needs and that isn’t covered by others.
This will be valuable, and you will get respect, and higher status eventually.

This might mean creating automated regression tests, or very thorough testing of some details, or manual, lightweight testing beyond the requirements, or all of these and a lot outside and in between.
Your test team needs to figure out where and how you provide the most value; other people can only guess (and help with expectations, responsibilities, goals).

Long-term, we should promote testing so we get more talents, more diversity, more ideas; more, merrier and better.
University degrees might be nice, but our reputation will come with the products we help deliver.

Inside the Capability Characteristic Rikard Edgren

I think quality criteria/factors/attributes/characteristics are extremely powerful.
They help you think in different ways, and make it easy to get broader coverage of your test ideas.
See Software Quality Models and Philosophies for McCall, Boehm, FURPS, Dromey, ISO 9126 models, or CRUSSPIC STMPL for a version without focus on measurability.
The granularity of this (and all other) categorization can be discussed, so here are suggested sub-categories for Capability, with some thoughts for inspiration:

Capability – The set of bigger and smaller things you can accomplish with the software. This is usually covered by requirements or similar, and with the addition of some help from developers telling you about small or hidden features, you can cover this with thorough and hard work. I suspect some test efforts stop here.
The most lightweight testing approach is to ignore this totally; it has been considered by others, and you will cover parts of it in more interesting ways when performing other testing.

Completeness – Is the functionality really enough? Or are there small things in between, and on the edges, that are needed to create a killer app? As a tester you won’t decide the scope for a release, but you can identify small things that can be implemented almost for free, and since you have knowledge of the system you can identify bigger things that someone else can put up on the list.
Lightweight method: think about what is missing while performing system testing.

Correctness – This could be seen as an obvious part of capability, but if you think about precision for real/double, or other corner cases, there is a lot to investigate, and probably a lot to ignore as well…
I don’t know about any lightweight technique for this, except asking detailed questions about what’s important.
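
As a taste of what such corner cases look like, here is a minimal sketch; round_price is a made-up function under test, not anything from a real project:

```python
import math

def round_price(x: float) -> float:
    """Made-up function under test: rounds a price to two decimals."""
    return round(x, 2)

# Corner cases where binary floating point bites:
cases = [
    (2.675, 2.68),     # 2.675 is stored as 2.67499999..., so round() gives 2.67
    (0.1 + 0.2, 0.3),  # 0.1 + 0.2 is 0.30000000000000004, not exactly 0.3
]
for value, expected in cases:
    actual = round_price(value)
    verdict = "ok" if math.isclose(actual, expected, abs_tol=0.0001) else "INVESTIGATE"
    print(f"round_price({value!r}) = {actual!r}, expected ~{expected}: {verdict}")
```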

Efficiency – Does the product do what it is supposed to do in an effective manner, without doing what it isn’t supposed to do?
Lightweight: keep your eyes open and look at more places than the apparent ones.

Interoperability – In a requirements document you will find a lot of things the software should be able to do, but you will not get a list of all the important combinations and interactions that surely exist. Pairwise testing is a theoretical solution to this dilemma (and I guess it is effective in some situations), but with knowledge about the product you will see that some combinations are more error-prone than others.
A lightweight testing solution consists of two test ideas: 1) turn on everything 2) turn off everything
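
To illustrate the pairwise idea, here is a minimal greedy sketch; the parameters and values are made-up examples:

```python
from itertools import combinations, product

# Made-up example parameters; in a real project these would come from
# your environment/configuration analysis.
params = {
    "os":      ["Windows", "macOS", "Linux"],
    "browser": ["Firefox", "Chrome"],
    "locale":  ["en", "sv", "ja"],
}
names = list(params)

def pairs_of(config):
    """All (parameter, value) pairs exercised by one configuration."""
    return set(combinations(zip(names, config), 2))

# Every value pair should appear in at least one chosen configuration.
uncovered = set().union(*(pairs_of(c) for c in product(*params.values())))

tests = []
while uncovered:
    # Greedy: pick the configuration covering the most still-uncovered pairs.
    best = max(product(*params.values()),
               key=lambda c: len(pairs_of(c) & uncovered))
    tests.append(dict(zip(names, best)))
    uncovered -= pairs_of(best)

total = len(list(product(*params.values())))
print(f"{len(tests)} configurations cover all pairs; exhaustive would need {total}")
for t in tests:
    print(t)
```

Real tools (e.g. James Bach’s ALLPAIRS or Microsoft’s PICT) do this better, but even a naive greedy pass shows how few configurations all-pairs coverage needs compared to an exhaustive matrix.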

Concurrency – Can the software or functions therein operate simultaneously? How many concurrent actions? What if they are dependent on each other?
Lightweight testing: start more operations now and then while system testing.
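
A minimal sketch of probing for lost updates; the Counter is a made-up feature under test with a deliberately non-atomic update:

```python
import random
import threading
import time

class Counter:
    """Made-up feature under test; the read-then-write update is not atomic."""
    def __init__(self):
        self.value = 0
    def increment(self):
        v = self.value
        time.sleep(random.uniform(0, 0.0001))  # widen the race window
        self.value = v + 1

counter = Counter()
workers = [threading.Thread(target=lambda: [counter.increment() for _ in range(100)])
           for _ in range(10)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Lost updates show up as a too-small total: a classic concurrency bug.
print(f"expected 1000, got {counter.value}")
```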

Extendability – not all features wanted by customers can be implemented, so an API that allows extensions of various types can be nifty.
Lightweight testing: get hold of an API implementation, use it and change it.

What is wrong, what is missing?