Reflection from Let’s Testlab 2013 Martin Jansson

Planning a test lab takes a lot more time than you think. You prepare a lot of things and try to make the event great. Here are a few things that I considered …

What applications/systems to test?

  • how many would be enough?
  • what type of system?
  • how big or complex?
  • are they fun to test?
  • do they have enough faults in them?
  • are they open source?
  • is the project/company interested in feedback?
  • has the system been used elsewhere in test labs recently?
  • can we gather enough test data or other material for it to be testable?
  • would these trigger interesting discussions?

How do you share information about the test lab?

  • wiki or web page or something else?
  • does anyone read it anyway?
  • explaining how to report a bug is probably a good idea?
  • do testers with different skill levels require different information?

What is your schedule and how are events set up in the test lab?

  • have you gotten hold of any of the speakers who want to be part of the test lab and extend their talk?
  • will the speakers actually show up? (you do get tired after a talk)
  • depending on the conference, the participants will be receptive to different things
  • depending on the conference, the main schedule will be different
  • if the test lab is done during the evening, it should probably end before people drop from exhaustion
  • what other sessions or events are held during the test lab that you might want to sync with?

What venue assistance do you get for the test lab?

  • can you get printouts during the test lab in any format, size and color?
  • do you gain access to white boards, flipcharts, pens, scissors, tape and papers?
  • do you have power and cords everywhere?
  • can you present in two places in the test lab?
  • how many people can be in the room without violating security or safety restrictions?

How do you handle sponsors helping with the test lab?

  • do you have a sponsor that handles all the client machines?
  • do you have a sponsor that handles the servers and wifi?
  • do you have sponsors that want to install tools?
  • do the main conference sponsors have specific requirements for the test lab?
  • have all sponsors installed their tools on the client machines?
  • will the sponsors participate in the test lab and help out other testers during the events?
  • are the sponsors aware of the expectations in the test lab?
  • is it possible to have sponsors in the test lab?
  • what shallow agreements do you have with the sponsors?

How do you intend to handle bug reports for the various systems in the test lab?

  • will you set up categories for the bugs so that you can steer where testers report them?
  • will you leave the categories open so that the testers have more freedom?
  • will you use a separate bug system such as Bugzilla or Mantis?
  • will you use a solution such as Redmine or Trac to handle wiki and bug system in one?
  • will you have a few bugs reported beforehand to help guide participants?
  • will you review bug reports and help participants when reporting?
  • will you, when finished, report all bugs to the project owners of the systems?
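If you land on a combined wiki-and-tracker solution like Redmine, the pre-reported guide bugs can even be seeded with a small script. Below is a minimal sketch against Redmine’s REST API; the server URL, the API key, the “testlab” project identifier and the two seeded bugs are all invented for the example.

```python
# Minimal sketch: seed a couple of example bugs via Redmine's REST API.
# REDMINE_URL, API_KEY and the "testlab" project are hypothetical.
import requests

REDMINE_URL = "http://redmine.example.org"
API_KEY = "your-api-key-here"

example_bugs = [
    {"subject": "XBMC: crash when resizing window during video playback",
     "description": "Steps: 1) play any video 2) resize the window repeatedly"},
    {"subject": "Mumble: garbled audio after reconnect",
     "description": "Steps: 1) connect 2) drop the network 3) reconnect"},
]

for bug in example_bugs:
    response = requests.post(
        f"{REDMINE_URL}/issues.json",
        json={"issue": {"project_id": "testlab", **bug}},  # id or identifier
        headers={"X-Redmine-API-Key": API_KEY},
    )
    response.raise_for_status()
    print("Created issue", response.json()["issue"]["id"])
```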

How do you handle information from the projects and owners of systems under test?

  • do you ask for what they would think is valuable?
  • do you ask for a mission for the testers?
  • do you ask for their fears, risks or rumors that they wish investigated?
  • are you at all interested in what they think?

How do you handle builds and versions?

  • do you set up oracles such as earlier versions?
  • do you have nightly builds?
  • do you have a recommended version on a USB?
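As a sketch of using an earlier version as an oracle: run the same input through the previous build and the nightly build and compare the output. The binary names and the input file are hypothetical placeholders; a difference is not automatically a bug, just something to investigate.

```python
# Minimal consistency-oracle sketch: the earlier version answers
# "what did the product use to do?" for the same input.
import subprocess

def run(binary: str, input_file: str) -> str:
    """Run a build on an input file and capture what it prints."""
    result = subprocess.run([binary, input_file], capture_output=True, text=True)
    return result.stdout

old = run("./app-v1.2", "sample_input.txt")     # hypothetical earlier version
new = run("./app-nightly", "sample_input.txt")  # hypothetical nightly build

if old != new:
    print("Outputs differ - investigate: bug, or intended change?")
```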

How do you handle existing information about the testing of the systems under test?

  • do you gather session notes, mind maps, test matrices or some other kind of artifact?
  • do you gather models of the system and coverage models?

How do you handle test data for the systems under test?

  • have you set up test data based on a domain analysis?
  • have you structured the test data for ease of use?
  • have you documented the test data so that testers of different expertise can understand and use it?
  • do you have test data for load tests or performance tests?
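To illustrate test data grounded in a domain analysis, here is a minimal sketch that generates classic boundary values for each numeric field; the fields and their ranges are invented for the example.

```python
# Derive structured test data from a simple domain analysis: for each
# numeric field, generate values just outside, on, and just inside
# each boundary. Field names and ranges are hypothetical.
def boundary_values(low: int, high: int) -> list[int]:
    return [low - 1, low, low + 1, high - 1, high, high + 1]

domain = {
    "age": (0, 120),
    "quantity": (1, 999),
}

test_data = {field: boundary_values(lo, hi) for field, (lo, hi) in domain.items()}

for field, values in test_data.items():
    print(f"{field}: {values}")
```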

How do you organize testers in the test lab?

  • do you let them just sit down and test randomly?
  • do you try to gather them based on specific missions or skills?
  • do you let teams form on the spot?
  • do you have predefined teams that have booked spots?
  • do some speakers have booked spots for others to join up around?

How do you handle pins, prizes, awards or rewards?

  • do you have your regular set of test lab pins to hand out?
  • do you have specific prizes for specific parts of events?
  • what do you promote in the test lab that you would give an award to?
  • are you inspired by the Spirit of the Game award from Ultimate Frisbee?
  • do you have any fake certificates to hand out to promote a special ability or skill?
  • do you have T-shirts or any free giveaways that can be handed out?

How do you handle bells and whistles?

  • do you promote participants making sounds when they find bugs?
  • do you promote focus and silence instead of interrupting sounds?

How do you work with your partner[s] in the test lab?

  • do you split things between you?
  • do you cooperate on everything?
  • do you have a day each?
  • do you have a partner?

How do you handle artifacts generated from the test lab?

  • do you follow Open Source Testing principles by storing the artifacts in a public repository?
  • do you save them for the next conference?

OK, now you see a few of the things we consider before the conference. So, what happened at the conference?

James Bach talked, among many things, about something called “Shallow Agreement”. This is the first thing I experienced regarding lab setup. One of the sponsors had sent me 15 laptops that we could use in the test lab. Perfect! What was shallow about our agreement was what a tester does with a laptop. I will, hopefully, not make the same mistake again. What I should have clarified was that I expected the laptops to come with full administrative access. Testers want to install, uninstall, reinstall and monitor applications, install their favorite tools and share them among fellow testers. We probably want to use our favorite editors and some probing tools. We probably want to be able to start and stop services, kill any process and change the behavior of the system. Basically, we want to be admins on the environment we are testing. Without that we are blocked. So, the next morning I sent all the laptops back to the sponsor, since I was not able to gain admin access because of sysadmin policies. It was a stupid mistake of mine not to make clear to the sponsor what I expected and how we would operate the laptops. Since the sponsor was a test tool vendor I just presumed that they knew. But no matter what company you work with, there will be shallow agreements that you need to identify and eventually avoid.

Now the first day of the conference started. James and I planned out the details of the schedule for the first night and imagined that, with lots of tweets, people would bring their own laptops. We were just going to finish up the setup real quick in the test lab so that we could join the sessions and talks. Seven hours later we ate dinner. Then we were almost ready. Compare Testlab, the sponsor who set up the server and ran the wifi, helped us in an excellent way. Torbjörn Wiger from Compare managed to keep his calm and helped us throughout the conference to the last day. We were so happy to have that sponsor aboard. This way, we could focus on the events and activities in the test lab and minimize the work around the equipment.

4-5 teams had registered for the test lab, and we were ready for them! The clock struck 8 and the test lab was open. The test lab was empty; apparently many got stuck in the bar. So, our nemesis for the evening was the bar. As people started to arrive they saw that there were no laptops, and some arguments arose about the lack of them. Yes, I know… it would have been excellent to have laptops, but only if they had admin access. Teams started to arrive and were trying to set up their laptops, install test data and understand what they were doing there. Apparently many had missed our tweets about bringing laptops. Oh well, so much for information overload.

I didn’t realize how angry I was about the laptops until I started to introduce everyone to the test lab. It did not help that I argued about the laptops, or the lack of them, with one of the testers. I started blabbing and blundering something, then gave the word to James Lyndsay, who took over without a sweat. James facilitated the test lab the first evening, while I went around trying to help the best I could. Note to self: shallow agreements on sponsored items are bad.

Our initial schedule was broken and meaningless. Just like reality in any regular project. We changed the plan and managed to add some kind of debriefing. A few teams had gotten very far while others had just started. We realized that the time needed to get started was long. Something to consider for coming test labs. The teams debriefed and expressed what they had found. It was not a perfect first day of the test lab.

The next day, directly after lunch, some of the conference participants joined us in the test lab to run a few sessions, exploring XBMC. It was nice to test together, sharing techniques and looking at issues found. At 20.00 we started the test lab for the second night. This time people were a bit more prepared, more were on time and the focus was great. James Lyndsay had the great idea of creating certificates, somewhat fake ones, that we posted on a wall to be handed out on both nights. Our idea was that these should hint to the participants what was valued in the test lab, thus emphasizing diversity, creativity, persistence and so on. Participants were to hand the certificates out to others who they thought were worthy of them. James elegantly created most of them; I believe I only created the one called Best dressed tester.

The participants in the test lab are used to being a bit stressed, with tight deadlines and short scenarios/events. But this time we went around to the teams a bit after 8.00 and told them we were holding a debrief at 9.40, letting them dig deep and focus. The room was full of energy and focus. Michael Bolton worked on a mind map and exploration of Mumble. Pradeep did the same, but investigated the black boxes created by Altom, based on Flash programs from James. I was surprised that so many wanted to focus on the Mumble application, but I guess I was a bit biased with my focus on XBMC.

Close to 10.00 we started to debrief and each team presented what they had found. Many of the participants were not used to working collaboratively with planning, testing and reporting. Some said that they had learned a lot by watching other testers present, as well as by trying out new tools in their collaboration. When everyone was done, it seemed like a great success. One of the reasons for the success was the participants sharing techniques, tools and ideas, and showing how they tested. Those who participated could at the end of the conference say, “Yes, I was at Let’s Test and I tested!”, which I think is great.

Summing it up, I would like to thank Compare Testlab again for helping out in the lab, James Lyndsay for being such a great partner, and all who participated in the test lab during the conference. I am also thankful for getting lots of experience with shallow agreements, even if it brought me lots of trouble.

Book reflection: Tacit and Explicit Knowledge Rikard Edgren

Harry Collins’ Tacit and Explicit Knowledge is a book about scripted and exploratory testing. Explicit knowledge is what can be told, and can be transferred by reading or listening. Tacit knowledge is what can’t be told (yet); it is transferred in other ways than reading/listening. There is nothing strange or mystical about this: “experience goes along with having tacit knowledge”.

The notion of explicit knowledge is pretty new; it is the tacit that is normal life in society. So when traditional test techniques came around, they put our industry’s focus on the explicit knowledge and ignored the tacit. Scripted test cases may pass on some of the explicit knowledge, but never the tacit.

For a while, exploratory testing was seen as something odd, when it is in fact the test cases that are unnatural. Exploratory testing handles tacit knowledge, but while we can give hints and explain testing, we can’t say how it really happens. It is a collective tacit knowledge – learning to adapt to context – and to get good at it, the best way could be socializing, or “hanging around”, with those who have learnt how to do it. The transfer of this tacit knowledge involves direct contact, which matches my experiences of individual feedback being crucial when teaching testing.

Collins also explains polymorphic (complex actions in context) and mimeomorphic (specific things) actions, which could be used to expand on manual and programmed (automatic) testing.
The concept of Mismatched Saliences feels important for testing missions and test strategy; different parties don’t know that the other ones aren’t aware of a crucial fact.

This is my take on the three kinds of tacit knowledge for software testing:
Relational Tacit Knowledge – unspoken things handled by conversations
Somatic Tacit Knowledge – test execution
Collective Tacit Knowledge – test strategy

Relational can be made explicit if we put the effort into it; somatic deals with the limits of our bodies and minds; collective is the strongest, and cannot be explicated. Blindly following requirements is an example of explicit knowledge getting the upper hand. But by working with requirements – asking questions to people – we can get to the collective tacit knowledge, to understand what’s really important. When you do this, you’ll see a lot more, whatever you are testing (serendipity!)

It’s not a one-bite book, but for me it was perfect, probably because I like philosophy, teach a lot, and currently research test strategy. Surprised though that there were more than a dozen typos.

A proof of a really good book is that it sticks, that it pops up now and then and helps you. This has happened to me several times in the last months: I realized weaknesses in a knowledge transfer document I wrote (no tacit there!), I have tweaked teaching exercises so they have even more interaction, and my (and Emilsson’s) ongoing quest – how to become an excellent tester – has gotten more fuel. I think it is an important book, and I find it more than cool that professor Collins is coming to Göteborg for the EuroSTAR testing conference.

I suspect it is tacit knowledge to apply this book to testing, so if you’re interested, you need to read it for yourself and discuss with others what it means to you.

(Lateral) Tester Exercise IV – Quality Characteristics Rikard Edgren

* Take any product, or a part of it.

* Choose one category of the quality characteristics.

* Go through each sub-category and consider if it is relevant.

* For the relevant ones, write a quality objective anchored in the product that is useful to many roles.

* Design and execute tests that challenge these objectives.

* Summarize your findings to a quality assessment for this area.

 

Note: This can be a quite time-consuming exercise, but it will hopefully accelerate your learning and help you find important information (done well, it could be valuable work time.)

Open Letter to EuroSTAR organizers – testing introduction Rikard Edgren

Hi

Thanks for your request for a high-level summary of software testing. You would get different answers from each tester; here’s what I think you should know.

1. Purpose of software

Software is made to help people with something. If people don’t have use of it, the product doesn’t work. This is complex, because people have different needs, and there are many ways that software can fail, in big or small ways.

2. Why we are testing

For some products, problems aren’t big problems. When they are encountered they can be fixed at that time, and the loss of money or image is not high enough to require more testing. But if it is important that the software is really good, the producer wants to test, and fix, before releasing to end users. A generic purpose of testing is to provide information about things that are important. Specific missions are found by asking questions to the people involved; the mission can be to get quantitative information about requirements fulfilment, and/or subjective assessments of what could be better, and/or evaluation of standards adherence, etc.

3. Context is king

Every product is unique (otherwise we wouldn’t build it), so what is important differs from situation to situation. Good testing provides reasonable coverage of what matters. The strategies for accomplishing this can be difficult to find, but I know that I don’t want to put all my effort into only one or two methods. If you want to engage in conversations, start with “What’s important to test at your place?” and select from the following follow-up questions: “What about core/complex/error-prone/popular functionality?”, “What about reliability, usability, charisma, security, performance, IT-bility, compatibility?”

4. How to test

Testing can be done in many ways; a generic description is “things are done, observations are made”. Sometimes you do simple tests, sometimes complex; execution ranges from automated unit tests in developers’ code, to manual end-to-end system integration tests done by testers/product owners/(Beta) customers. There are hundreds of heuristics and techniques, but you don’t need to know them; rather practice by seeing examples and discussing how something could be tested to find important problems.
Key skills are careful observation, enabling serendipity, and varying behavior in “good” ways.
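To make “things are done, observations are made” concrete, here is a minimal sketch of an automated unit test; round_price is a hypothetical piece of production code invented for the example.

```python
# A tiny automated check: do something, observe the result, compare
# against an expectation. round_price is hypothetical production code.
import unittest

def round_price(value: float) -> float:
    """Round a price to two decimals."""
    return round(value, 2)

class TestRoundPrice(unittest.TestCase):
    def test_typical_value(self):
        self.assertEqual(round_price(3.14159), 3.14)

    def test_negative_value(self):
        # varying behavior in a "good" way: an input someone might not expect
        self.assertEqual(round_price(-3.14159), -3.14)

if __name__ == "__main__":
    unittest.main()
```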

5. Test reporting

Testing is not better than the communication of the results. Testing doesn’t build anything; the output is “only” information that can be used to make better decisions. While the testing can be very technical, the reporting is done to people, and this is one of many fascinating dynamics within testing. The reporting ties back to the purpose of the software and the testing (but also includes other noteworthy observations made.)

And with that we have completed a little loop of my testing basics. Any questions?

Regards,
Rikard

Black Box vs. White Box Henrik Emilsson

I have heard and seen the difference between Black Box Software Testing and White Box Software Testing being described as “whether or not you have knowledge about the code”; and that Gray Box Software Testing is a mix of the two.

But is it really about how much of the code you see?

I rather think in another way; and this is my take on an explanation:

Black Box Software Testing – When you worry about what happens outside the box.
White Box Software Testing – When you worry about what’s going on inside the box.
– Sometimes you know about the inside when doing Black Box Software Testing.
– Sometimes you know about the outside when doing White Box Software Testing.
What matters is that there might be a difference in scope, question formulation, and information objectives.

Another take which would mean the same thing:

Black Box Software Testing – When you don’t need to worry about what’s going on inside the box.
White Box Software Testing – When you don’t need to worry about what’s happening outside the box.
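To make the difference in worry concrete, here is a minimal sketch for a hypothetical memoized function: the black-box question only concerns what comes out of the box, while the white-box question concerns what goes on inside it.

```python
# Hypothetical code under test: an expensive computation, memoized.
import functools

@functools.lru_cache(maxsize=None)
def slow_square(n: int) -> int:
    return n * n

# Black box: does the right answer come out of the box?
assert slow_square(12) == 144

# White box: is the inside doing what we intended (a repeated call
# should be served from the cache)?
slow_square(12)
assert slow_square.cache_info().hits >= 1
```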

================================================

Disclaimer: As a tester I cannot say that I often categorize my testing into White/Black/Gray Box Testing. However, it can sometimes be helpful to think about the transition to another perspective; so I use Black and White Box as a test idea trigger (or heuristic).

How I Write Conference Abstracts Rikard Edgren

I guess some of you are writing, or thinking about writing, abstracts for EuroSTAR 2013; the deadline is 13 February.
You should do this, not just because Alan said so.
You should do it because you want to tell stories, enhance your own understanding of something that is important to you.

This is my process for writing session abstracts:

1. Think

I consider the theme, to see if it inspires me, but I don’t feel limited by it.
I know that a great abstract can get accepted, regardless of any link to the theme.
I follow my energy, and usually there are some topics I would like to talk about.
Sometimes I re-write an abstract from last year (twice this has given me conference spots!)

2. Research

Has this been addressed by other people?
What did they say?
What’s unique about my abstract?
What should I read or do to understand more?

3. Do a full outline

I think the thing through, all the way, because I want to write an abstract, not a trailer.
I want a lot of material, so I have the luxury of discarding the less useful/appealing stuff.
I try to include the most important things in the abstract; I can’t afford secrets, and it should be clear why this is a good session.
I often forget Weinberg’s Rule of Three: if you can’t think of three things that can make this a bad idea, you haven’t thought it through.

4. Let it rest

If an idea still is promising after one week, it is probably a good idea.
My subconscious does some work, for free, and I usually make some twists and turns in order to learn what would be a good session for me, and for the attendees.

5. Polish

Proofreading is important; one spelling error hurts the confidence of many who will review the abstract.
I also let another tester review the abstract; it is so easy to take things for granted, and if the abstract isn’t understood, it isn’t good.
The title is very important, and with a flow in the reading, the abstract will feel polished, and readers will believe it will also be a good talk.

This process has worked well for me (it has been implicit up until now); it won’t work for you, but I hope it can help in some way.

Double testing – converging or diverging models in testing Martin Jansson

I have experienced that many test leads, managers and project managers are worried about something called double testing. In short, it is the idea that one tester is testing the same thing as another tester. The term double testing might be a local term, but you may know it by another name with the same properties and confusion.

I think the idea of double testing is about what models we use in testing. More precisely, our mental models of our test approach/perspective on testing, system boundaries, system parts, levels of testing, terminology in testing, ideas of test coverage, usage of test techniques, test idea sources, test planning techniques and so on.

I will elaborate on some of the models that we use in testing and their relevance to the idea of double testing.

Black Box Testing vs. White Box Testing

The box metaphor is one way to visualize how we perceive the system while testing. By talking about either black box or white box (and in some cases grey box) we can indicate whether we are able to see inside the solution or not, but also whether we are able to take advantage of any of the artifacts that the system produces for us to understand the health of the system.

I have seen testers choose a black box approach even when they had access to valuable information about the system. They had the possibility to do grey box testing that would have been a lot richer; still, they selected the black box approach because that is what the customer will see.

Let us assume that we have a strategy of splitting between black box, white box and grey box approaches to testing the system as a way to ensure that no double testing is done. The box is a model of the system that shows different levels of transparency. It tells us something about the approach to testing, not the system itself. It might tell us how we ask questions of the system and how we monitor the system while we try to get answers. If we choose to ignore information available from the system, then I conclude that we limit what kinds of questions we ask of the system. With more information available, we probably ask more questions. If we ask fewer questions and base them on less information, then we indirectly increase the chance of double testing.

Unit testing vs. Integration testing vs. System testing

If we instead break the system into different parts, where we also show the integration between these units, we use the model of unit, integration and system to visualize what we test. This is by nature a simplified model of the system. The boundaries of the system and its sub-parts are unknown, or at least vague; therefore this model exists in theory only.

If you ask questions that have to do with a unit, such as a class or function in the code, you still might want to repeat the question when you have extended or expanded the context. You ask the same question, but the environment around it is altered; therefore it is not the same test, thus not a double test. The same goes for when you wish to ask questions higher up in the system. A unit test is also limited by factors such as performance and speed. This makes the unit test limit its focus on what it will test and what it can guarantee, in theory.

A unit test, an integration test and a system test have different objectives. This should mean that if each of those types of tests follows its objectives, then they would not be subject to double testing, because they give different types of information, in some cases interpreted differently by different stakeholders.

But I question whether it is effective to split the work between different teams along the lines of unit, integration and system testing as a general solution to avoid double testing.

Testing vs. Checking

A check asks a binary question, while testing asks an open-ended question.

If one team limits the test effort to performing only checks, while another team most probably does a bit of both, they might avoid double testing but instead have some double checking. Given that we are asking different types of questions, would it really be possible that we performed double testing?
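As a small illustration of the two kinds of questions, here is a sketch for a hypothetical login function: the check is a binary question that a machine can settle on its own, while the testing probes are open-ended and leave the evaluation to a human.

```python
# login is a hypothetical system under test, invented for the example.
def login(user: str, password: str) -> bool:
    return user == "admin" and password == "secret"

# A check: one binary question, answered pass/fail with no judgment needed.
assert login("admin", "secret") is True

# Testing: open-ended probing at the edges; the observations are raised
# as questions for a human, not verdicts.
for probe in ["", " admin", "admin\x00", "ADMIN", "a" * 10_000]:
    result = login(probe, "secret")
    print(f"login({probe!r}) -> {result}  ...is this what a user would expect?")
```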

Charters vs. Test cases/Test scripts

Compare two setups. One team uses test specifications with detailed test scripts, with the testing planned beforehand, and executes the planned tests no matter what is actually seen. Another team uses charters and missions with a general outline of where to go testing, but it is up to the tester to explore and document what is actually experienced. This testing adapts to what is seen, following the fire or smoke that leads to possibly valuable information. We could say that the first team would be able to review each other’s test cases and scripts to avoid double testing, but on the other hand they would perhaps miss out on important information. The second team, using charters, would have a hard time guaranteeing that no double testing occurs, but there is a bigger chance that they would find important, valuable information. Still, if several teams are testing the same thing using charters, they would probably follow the same fire or smoke, thus an increased chance of performing double testing. But then again, they could avoid that by letting the team members do collaborative testing and share the charter.

Scripted test approach vs. Exploratory test approach

Teams following the scripted approach to testing have a tendency to be more hierarchical, where decision paths take a longer time and more time is spent on things that team members think are meaningless. Since changing decisions takes longer, there is also a bigger chance that you continue with a planned task even if it might result in double testing. In a more empowered approach such as the exploratory one, the team members have more freedom but also more responsibility to do meaningful tasks. Double testing would in that case be less likely. Still, this is how I perceive and have experienced the impact of the two approaches.

Regression testing

There are many different strategies for conducting a regression test. The traditional one is to rerun the test specification for areas that worked previously. If several teams are using similar test specifications and use the system in the same way, then there is an increased chance of performing double testing. If you instead use a check-driven test strategy for regression testing, then you have an idea of areas/features you will check to see that they work to some extent, and you continue to test around those areas. The chance of double testing is smaller, because you point out where to go rather than how to test. Depending on how you set this up, the chance of double testing will vary.

Smoke tests

A smoke test should be focused on what is important for the sub-system or for the solution as a whole. Each test in the smoke test probably focuses on critical areas that must work for the sub-system or solution to be considered worthwhile to test further. The most obvious functional areas or features are probably just checked, and the more obscure areas are probably tested. A smoke test is usually quite shallow and should go quickly. If several teams run the same set of smoke tests they will probably perform double checking, but less likely double testing. Still, a smoke test is cheap as far as time spent. The information gathered can still be worthwhile even if double testing is done.

Definition of the system, sub-system or solution

Each team probably sees the solution or system differently from the other teams. They probably see the depth and complexity of their own sub-system more than that of other teams. They might be aware of all connections to other sub-systems, or they might not. When creating a model of a solution that consists of systems of systems of systems, there is a big chance that no two testers see the solution and its sub-systems in the same manner. If we do not see the same system before us, how could any of us perform double testing? We base our testing on our mental models, and if they differ it is less likely that we perform double testing.

Ideas of test coverage

If a team focuses on test coverage by only covering the explicit requirements, then their test coverage will be a model based on explicit requirements only. They will miss out on a plethora of other sources for test ideas, each of which could have a coverage model of its own. Teams that look in other areas will have different coverage models. If the idea of test coverage differs from team to team, then the likelihood of double testing is not great either.

Configuration of the system

If each team uses the exact same system or solution simultaneously, then there is a chance of double testing, or of seeing the same things. But if teams have configured the system or sub-system differently, then they will probably not perform double testing, since they use a different system or a different setup of the solution as a whole. If the system can be configured and set up in many different ways, then it can be tested in many different ways, which means less likelihood of double testing.

Platform of the system

If each team uses the same kind of platform for the system or sub-system, then they might be double testing. If teams use different platforms, then the chance of double testing is smaller. If the platform itself consists of systems and sub-systems, then the reasoning about configuration and definition of the system applies when determining the chance of double testing.

Static or dynamic system

If the system or sub-system is static, so that nothing changes over time, then it is more likely that teams are performing double testing. If the system is dynamic, such that log files are created from usage or data storage grows over time, then the chance of double testing is smaller. Tests performed early in the week might be different from the ones performed later in the week. Does it matter whether you run tests during the night or day? If it does, then it is less likely that you perform double testing.

System usage

When test teams use a system or sub-system, do they use it as a certain persona or role that is applicable to their sub-system or to the solution as a whole? If you use a persona that is applicable to your sub-system alone, then it is less likely that you perform double testing. If the roles or personas are applicable to the solution as a whole, you might still have an entry point or focus most related to your own sub-system; then you will probably not perform double testing, because the information related to your team will be different from the information related to another team.

Transparency of the system

If test teams are testing the solution or sub-systems without being interested in collecting information from the system itself on its health, then they are more likely to perform double testing.

Obvious bugs

If all the teams start testing at the same time, they are prone to see the most obvious bugs that fall on them as they start. The obvious bugs barely need a conscious thought from the testers to recognize that something is wrong. I would say that these are bugs found before real testing is performed. Independent of what approach to testing the teams have, the obvious bugs will be found, unless you have testers who do not see bugs at all. If all teams report the same obvious bugs you have some other problem than double testing.

Re-testing of bugs

Some use re-testing of bugs as part of a regression test strategy. They might select a certain severity or priority of bugs to use as a guide when performing regression testing on a new build. When test teams are using the same bugs to retest, then there is an increased chance that they will perform double testing, but only if they follow the bugs’ repro steps in a strict way. If they instead use them as a guide, changing the test data or the order of things, then they are probably not performing double testing.

Isolation of testing

There is an old, smelly idea that testers should not be affected by others, that testers need to work in isolation. This is related to the different schools of testing: how the role of testing is perceived and what testers should do. If you hold firmly to the idea that your test team needs to be isolated from other teams, then there is an increased chance that you will perform double testing, because you are not able to communicate about your test ideas.

Reflections

I think the likelihood that double testing is performed is very small. If you are a decision maker who worries about double testing, you can stop, unless your test teams are context-oblivious. Instead, worry more about whether your teams are good and effective at sharing information. Information sharing in planning, testing and reporting is important to avoid covering the same areas with tests. Still, you might look at the same information differently, have different objectives, and probably act differently on the information gathered.

When information is shared through 400-page test specifications, it is harder to see whether there are areas subject to double testing. So find new ways of planning and preparing for testing, such as using models, mind maps or test proposals.

Wolf Pack – a collaborative test compilation Martin Jansson

You are part of a pack of wolves.

You are hungry and have not found food for several weeks.

When you move, you run covering lots of ground quickly.

You are out hunting, cooperating and collaborating with the rest of your pack.

You are seeking the big game, not a flea, nor a rabbit or rat.

An elk is ok, a mammoth is great, but you look for a stranded whale or a leviathan.

You might take note of the smaller game, but as a pack it is not your focus.

When you find tracks or clues of the bigger game, you howl and notify the rest of the pack.

As a pack, you circle and take down the prey.

The lone wolf is no hero in this context.

 

Can this be an effective compilation when you get a new build, as a form of smoke test, acceptance test or regression test? It could train you in collaboration: not crying wolf for small prey, but instead wanting the bigger, juicier game.

Pass to Fail, Fail to Pass Heuristic Rikard Edgren

When teaching scripted testing (yes, I actually do this!) I found the Pass to Fail, Fail to Pass heuristic (used by many, but now with a catchy name.)

The essence is that when a not-overly-simple test has resulted in a Pass, think about it some more, and try to make it Fail instead.
When a not-overly-simple test Fails, think about it some more, and try to make it Pass.
This will stop you from jumping to conclusions; you will find out if the essence of the test case was Pass or Fail; and you might have isolated a bug or two in the process.

Example: The test case is about resizing 200 images at once. The essence of the test (many images) actually works, but some testers might report a Fail because it didn’t work (when the reason was that (at least) one of the images wasn’t handled properly.) And when a Pass is reported, you might have missed a chance to run a complex test that could find important problems.
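A sketch of how that batch test could separate the essence from the per-image failures, instead of collapsing everything into one verdict; resize_image stands in for a hypothetical call into the product.

```python
# Report the essence of the test (batch resizing works) separately from
# per-image failures. resize_image is a hypothetical function of the
# system under test, assumed to raise an exception on failure.
def resize_image(path: str) -> None:
    """Hypothetical call into the system under test."""
    ...

def test_resize_batch(paths: list[str]) -> None:
    failures = []
    for path in paths:
        try:
            resize_image(path)
        except Exception as error:
            failures.append((path, error))
    # The essence: did batch resizing as such work?
    if len(failures) < len(paths):
        print(f"Essence PASSED: {len(paths) - len(failures)} images resized")
    # But each individual failure may still be a bug worth isolating.
    for path, error in failures:
        print(f"Worth a closer look: {path}: {error}")
```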

This is a specific instance of the generic Do Variations heuristic (Creep and Leap is another instance). As with all heuristics, it is to be used with judgment.

1000 Comments on TheTestEye the test eye

Very soon the 1000th comment will be published on thetesteye.com.
Comments are our main reason for writing blog posts, because they take our thinking further.
Our ideas are challenged, taken in other directions, opening new possibilities (and closing some…)
Thank you!!

To celebrate this, we will reward the author of the 1000th valid comment with a prize.
It is not a big prize, so please don’t spam with “great job”.
We prefer comments that add value to us, and the readers.

This quantitative milestone requires a qualitative statement:
Comments sharpen thoughts.