Turning the tide of bad testing
Martin Jansson
Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in run-down ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one unrepaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)
This quote comes from an article written by James Q. Wilson and George L. Kelling, published in 1982 in The Atlantic Monthly. The metaphor can be applied to many situations in our daily lives and is often used in sociology and psychology.
At Artima Developer, Andy Hunt and Dave Thomas, authors of The Pragmatic Programmer: From Journeyman to Master (Addison-Wesley, 1999), discuss and elaborate on an important part of being a Pragmatic Programmer, namely fixing broken windows. The book mentions many ways of steering clear of broken windows in order to avoid an increasing technical debt.
One broken window – a badly designed piece of code, a poor management decision that the team must live with for the duration of the project – is all it takes to start the decline. If you find yourself working on a project with quite a few broken windows, it’s all too easy to slip into the mindset of ’All the rest of this code is crap, I’ll just follow suit’.
I’ve seen developers focus on getting the number of warnings down. I’ve seen project managers address the flow of bugs, measuring the fix rate, find rate, resolve rate and close rate to keep the project from becoming unmanageable. Another project manager stated that developers must fix the “shitty bugs”, the smaller ones, before the build reaches the official testers. We see different treatments of this in development.
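The bug-flow rates mentioned above take very little machinery to track. A minimal sketch, assuming a simple record per bug with found/fixed/closed dates (the field names and the weekly period are invented for illustration):

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Bug:
    found: date              # when the bug was reported
    fixed: Optional[date]    # when a fix was delivered, if any
    closed: Optional[date]   # when verification closed it, if any

def weekly_rates(bugs, period_start, period_end):
    """Count bug-flow events that fall inside one reporting period."""
    def in_period(d):
        return d is not None and period_start <= d <= period_end
    return {
        "find rate": sum(in_period(b.found) for b in bugs),
        "fix rate": sum(in_period(b.fixed) for b in bugs),
        "close rate": sum(in_period(b.closed) for b in bugs),
    }

bugs = [
    Bug(date(2010, 8, 2), date(2010, 8, 4), date(2010, 8, 5)),
    Bug(date(2010, 8, 3), None, None),
    Bug(date(2010, 7, 28), date(2010, 8, 3), None),
]
print(weekly_rates(bugs, date(2010, 8, 2), date(2010, 8, 8)))
# → {'find rate': 2, 'fix rate': 2, 'close rate': 1}
```

If the find rate outpaces the close rate week after week, the backlog is growing, and with it the debt.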
How do we see Broken Window Theory affect testing?
- When you have stopped caring about how you test.
- When you have stopped caring about how you report bugs and status.
- When you have stopped caring about how you cooperate with others.
- When you have lost focus on what adds value.
- When you do not cooperate with developers and have stopped talking to them.
- When you complain about the requirements and have stopped talking to the business analysts.
- When you avoid testing areas because you know that bugs won’t get fixed there anyway.
- When you avoid reporting bugs because it doesn’t matter.
- When you report status the way you always have, without any real content.
- … and so on …
All of this creates broken windows and, as I see it, results in and is summarized by a Testing Debt, a term inspired by Ward Cunningham’s definition of Technical Debt. Jonathan Kohl has made one definition of Testing Debt and Johanna Rothman another.
How do you identify things that increase this Testing Debt?
You can find a long list of things that increase the testing debt. Don’t be discouraged: it is possible to fix the broken windows and decrease the testing debt.
What do we do to contribute? What do we do to provide value? Where do you start?
What can you do to decrease the Testing Debt?
There are lots of things that you and your team can work on and excel in. There are also some areas where you can start directly, depending on no one but yourself and those you interact with.
Tip 1: Take an exploratory test perspective instead of a script-based test perspective. This can be roughly summarized as giving more freedom to the testers while at the same time adding accountability for what they do (see A Tutorial in Exploratory Testing by Cem Kaner for an excellent comparison). Most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects so many things. For instance, where you work, can any tester be replaced by anyone else in the organisation? Does that mean your skill and experience as a tester are not really that important? If this is the case, you need to talk to those who hold this view and show what you can do as a tester who makes his or her own decisions. Most organisations do not want robots or non-thinking testers affecting the release.
Tip 2: Focus on what adds value to developers, business analysts and other stakeholders. If you do not know what they find valuable, perhaps it is time you found out! Read Jonathan Kohl’s articles on what he thinks adds value in testing.
Tip 3: Grow into a jelled team (read Peopleware by Timothy Lister and Tom DeMarco for inspiration). Peopleware identifies, among other things, what you should NOT do if you want a group to grow into a jelled team. As a team it is much easier to gain momentum in testing, especially if you are jelled. Do you need to reconsider how you work as a team?
Tip 4: Work on your cooperation. Improve your cooperation with the developers. Assist them with what they find hard or consider not their job. Work on the areas where the developers think you are lacking. Show them that you take their ideas seriously. Good cooperation is one of the keys to success. Improve your cooperation with the business analysts as well. The ideal situation is when you are able to give them feedback early, during and after their work on the requirements. What can you do to get to that situation? What do they want?
Before you or your group start testing an area, invite the business analysts and let them explain their thoughts on the feature. Invite the developers so that they can explain the design, risks and so on. Invite other parts of the organisation that you think can contribute ideas. When you have the different stakeholders with you, show them how you work and what your thought patterns are. Explain how you conduct testing. Pair-testing (tester + tester, or tester + other stakeholder) is an excellent tool for getting to know the strengths and weaknesses of your test group, but also for education and for showing others how you work. If the stakeholders do not trust your work, this can be a way to show them your skill. It is common that testers are left on their own. You must take the initiative! Invite them to you.
Tip 5: A good status report can (and should) affect the release decision, but it may also affect how testing is perceived. Still, keep to the truth as you see it. Dare to include your gut feeling, but express clearly that it is just that. If you include metrics, be sure to add context and explain how you as a tester interpret them.
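One way to keep metrics honest in a status report is to refuse to emit a number without the tester’s interpretation next to it. A minimal sketch of that idea (the metric names and wording are invented for illustration):

```python
def status_report(metrics):
    """Render a status report where every metric carries its context.

    `metrics` maps a metric name to a (value, interpretation) pair;
    a bare number with no interpretation is rejected on purpose.
    """
    lines = []
    for name, (value, interpretation) in metrics.items():
        if not interpretation:
            raise ValueError(f"metric '{name}' lacks an interpretation; add context")
        lines.append(f"{name}: {value} -- {interpretation}")
    return "\n".join(lines)

report = status_report({
    "open blockers": (2, "both in the payment flow; release risk until verified"),
    "pass rate": ("87%", "misleading on its own; untested areas are not counted"),
})
print(report)
```

The point is not the formatting but the constraint: the report cannot be generated at all until each number has been given a meaning.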
Tip 6: The bug report is one of the most important artefacts that come from the tester. A bad bug report can have great negative impact, while a good one can have the opposite. If possible, review bugs before you send them; by doing this you will get new test ideas as well as raise the quality of the bug reports. Train your skill in reporting bugs. Notify project management, developers and others that NO low-quality bug report is to be accepted from your team and that you want feedback on how to improve. You really want to make life easier for the bug classification team and for the developers trying to pinpoint and eventually fix the bugs.
In one project a co-worker and I had worked on a bug for some time. It was a late Friday afternoon and almost everyone was heading home, only a few days before the release. Each reported bug engaged lots of people, no matter how small it was. The bug we had found, though, was a blocker. We considered whether to hand in the bug as it was or to get to the root of it. A week or two earlier the developers had worked a whole weekend trying to fix a set of bugs that were very badly written up and hard to reproduce. So we decided to collect as much information as we possibly could, so that the bug would get a good classification and possibly get fixed. Our goal was to make a good bug report. We started keeping track of the frequency and noticed that it reproduced 10 times out of 50. We collected logs and reports from all parts of the system. When we knew that the repro steps were correct, as we saw it, and that the content was ready, the bug was released into the bug system. After an analysis, the bug classification team determined that the bug was both hardware and software related, so several teams got involved. When the hardware team thought they were done, they moved the bug to software. It was possible to track the communication and collaboration between the different teams. Not once was there any hesitation that information was missing or incorrect. In total, close to 30 people worked on the bug. Eventually it got fixed and was sent back for verification. So, we spent a few extra hours to make a good bug report, and in return we saved time and lessened frustration.
Most of these tips are easy to start with, but they are also important to keep working on continuously.
- Raise your ambition level
- Care about your work and those that you work with
- Prioritize testing before administration
- Cooperate and collaborate
- Report world-class bugs
- Create status reports that add value
DO NOT LIVE WITH BROKEN WINDOWS!
- Broken Windows - http://www.theatlantic.com/magazine/archive/1982/03/broken-windows/4465/
- Don’t Live with Broken Windows - http://www.artima.com/intv/fixit.html
- Explanation of Technical Debt - http://en.wikipedia.org/wiki/Technical_debt
- Testing Debt - http://www.kohl.ca/blog/archives/000148.html
- An Incremental Technique to Pay Off Testing Technical Debt - http://www.stickyminds.com/sitewide.asp?ObjectId=11011&Function=edetail&ObjectType=COL
- Behind the Scenes of the Test Group - http://thetesteye.com/blog/2010/07/behind-the-scenes-of-the-test-group/
- A Tutorial in Exploratory Testing - http://www.kaner.com/pdfs/QAIExploring.pdf
- How Do I Create Value with my Testing? - http://www.kohl.ca/blog/archives/000217.html
- The Impact of a Good or Bad Bug Report - http://thetesteye.com/blog/2009/07/the-impact-of-a-good-or-bad-bug-report/