Open Letter to EuroSTAR organizers – testing introduction Rikard Edgren


Thanks for your request for a high-level summary of software testing. You would get a different answer from each tester; here’s what I think you should know.

1. Purpose of software

Software is made to help people with something. If people have no use for it, the product doesn’t work. This is complex, because people have different needs, and there are many ways that software can fail, in big or small ways.

2. Why we are testing

For some products, problems aren’t big problems. When they are encountered they can be fixed at that time, and the loss of money or image is not high enough to require more testing. But if it is important that the software is really good, the producer wants to test, and fix, before releasing to end users. A generic purpose of testing is to provide information about things that are important. Specific missions are found by asking questions of the people involved; the mission can be to get quantitative information about requirements fulfilment, and/or subjective assessments of what could be better, and/or evaluation of standards adherence, etc.

3. Context is king

Every product is unique (otherwise we wouldn’t build it), so what is important differs from situation to situation. Good testing provides reasonable coverage of what matters. The strategies for accomplishing this can be difficult to find, but I know that I don’t want to put all my effort into only one or two methods. If you want to engage in conversations, start with “What’s important to test at your place?” and select from the following for follow-up questions: “What about core/complex/error-prone/popular functionality?”, “What about reliability, usability, charisma, security, performance, IT-bility, compatibility?”

4. How to test

Testing can be done in many ways; a generic description is “things are done, observations are made”. Sometimes you do simple tests, sometimes complex; execution ranges from automated unit tests in developers’ code, to manual end-to-end system integration tests done by testers/product owners/(Beta) customers. There are hundreds of heuristics and techniques, but you don’t need to know them; rather, practice by seeing examples and discussing how something could be tested to find important problems.
Key skills are careful observation, enabling serendipity, and varying behavior in “good” ways.
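To give a concrete feel for the automated-unit-test end of that range, here is a minimal sketch in Python. The `discount_price` function and its rules are hypothetical, invented purely for illustration; the point is the shape of a unit test: small, fast checks a developer can rerun on every change.

```python
# Hypothetical function under test: apply a percentage discount to a price.
def discount_price(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


# Unit tests: each checks one observable behavior of the function.
def test_typical_discount():
    assert discount_price(200.0, 25) == 150.0


def test_zero_discount_leaves_price_unchanged():
    assert discount_price(99.99, 0) == 99.99


def test_invalid_percent_is_rejected():
    try:
        discount_price(100.0, 150)
    except ValueError:
        pass  # expected: out-of-range input is refused
    else:
        raise AssertionError("expected a ValueError")
```

Tests like these typically run in a framework such as pytest; the manual, exploratory end of the range relies instead on the observation and variation skills mentioned above.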

5. Test reporting

Testing is no better than the communication of its results. Testing doesn’t build anything; the output is “only” information that can be used to make better decisions. While the testing can be very technical, the reporting is done to people, and this is one of many fascinating dynamics within testing. The reporting ties back to the purpose of the software and the testing (but it also includes other noteworthy observations made).

And with that we have completed a little loop of my testing basics. Any questions?


Peter Hamilton March 14th, 2013

Thanks for the help Rikard, much appreciated. If you could give me more of an idea of the types of tools that testers use, it would help me with identifying companies that might be interested in exhibiting at the conference. I am learning about automated testing tools and platforms but would like to know more about what they test, e.g. performance, across platforms, desktop, mobile, API etc.

It would also be very useful to know your opinion on which tools people in the EuroSTAR Community want to know about, new and innovative solutions for example.

Rikard Edgren March 14th, 2013

Hi Peter

Tool needs are very different for different testers; sometimes a commercial or open-source tool works best, and sometimes one needs to be crafted for a specific purpose.
Some are not testing-specific, e.g. Excel, WinMerge, browser developer tools etc.
The tool that I and most testers need most is one that helps us think faster and better, but such tools are not available, yet.

A list of tools in different areas is available at Ministry of Testing: