Turning the tide of bad testing

Martin Jansson

Social psychologists and police officers tend to agree that if a window in a building is broken and is left unrepaired, all the rest of the windows will soon be broken. This is as true in nice neighborhoods as in run-down ones. Window-breaking does not necessarily occur on a large scale because some areas are inhabited by determined window-breakers whereas others are populated by window-lovers; rather, one unrepaired broken window is a signal that no one cares, and so breaking more windows costs nothing. (It has always been fun.)

The quote above is from an article published in 1982 in The Atlantic Monthly [1], written by James Q. Wilson and George L. Kelling. The metaphor can be applied to many situations in our daily lives and is often used in sociology and psychology.

At Artima Developer [2], Andy Hunt and Dave Thomas, authors of The Pragmatic Programmer: From Journeyman to Master (Addison-Wesley, 1999), discuss and elaborate on an important part of being a Pragmatic Developer: fixing Broken Windows. The book mentions many ways of steering clear of broken windows in order to avoid increasing technical debt.

One broken window – a badly designed piece of code, a poor management decision that the team must live with for the duration of the project – is all it takes to start the decline. If you find yourself working on a project with quite a few broken windows, it’s all too easy to slip into the mindset of ’All the rest of this code is crap, I’ll just follow suit’.

I’ve seen developers focus on getting the number of compiler warnings down. I’ve seen project managers address the flow of bugs, measuring the fix rate, find rate, resolve rate and close rate to keep the project from becoming unmanageable. Another project manager insisted that developers must fix the “shitty bugs”, the smaller ones, before a build reaches the official testers. We see different treatments of this in development.

How do we see the Broken Window Theory affect testing?

  • When you have stopped caring about how you test.
  • When you have stopped caring about how you report bugs and status.
  • When you have stopped caring about how you cooperate with others.
  • When you have lost focus on what adds value.
  • When you do not cooperate with developers and have stopped talking to them.
  • When you complain about the requirements and have stopped talking to the business analysts.
  • When you avoid testing areas because you know that bugs won’t get fixed there anyway.
  • When you avoid reporting bugs because it doesn’t matter.
  • When you report status the way you always have, without any real content.
  • … and so on …

All of this creates Broken Windows and, as I see it, adds up to a Testing Debt, a term inspired by Ward Cunningham’s definition of Technical Debt [3]. Jonathan Kohl has made a definition of Testing Debt [4] and Johanna Rothman another [5].

How do you identify things that increase this Testing Debt?

You can find a long list [6] of things that increase the testing debt. Don’t be discouraged; it is possible to fix the broken windows and decrease the testing debt.

What do we do to contribute? What do we do to provide value? Where do you start?

What can you do to decrease the Testing Debt?

There are lots of things that you and your team can work on and excel in. There are also some areas which you can start with directly without depending on anyone but yourself and those you interact with.

Tip 1: Take an exploratory test perspective instead of a script-based test perspective. This can be roughly summarized as giving more freedom to the testers while at the same time holding them accountable for what they do (see A tutorial in exploratory testing [7] by Cem Kaner for an excellent comparison). Most importantly, the intelligence is NOT in the test script, but in the tester. The view of the tester affects so many things. For instance, where you work, can any tester be replaced by anyone in the organisation? If so, does that mean your skill and experience as a tester are not really that important? If this is the case, you need to talk to those who hold this view and show what a tester who makes his or her own decisions can do. Most organisations do not want robots or non-thinking testers affecting the release.

Tip 2: Focus on what adds value to developers, business analysts and other stakeholders. If you do not know what they find valuable, perhaps it is time that you found out! Read Jonathan Kohl’s articles [8] on what he thinks adds value for testing.

Tip 3: Grow into a jelled team (read Peopleware by Tom DeMarco and Timothy Lister for inspiration). Peopleware identifies, among other things, things that you should NOT do if a group is to grow into a jelled team. As a team it is so much easier to gain momentum in testing, especially if you are jelled. Do you need to reconsider how you work as a team?

Tip 4: Work on your cooperation. Improve your cooperation with the developers. Assist them with what they find hard or consider not their job. Polish the areas where the developers think you are lacking. Show them that you take their ideas seriously. Good cooperation is one of the keys to success. Improve your cooperation with the business analysts. The ideal situation is being able to give them feedback early, during, and after their work on the requirements. What can you do to get to that situation? What do they want?

Before you or your group start testing an area, invite the business analysts and let them explain their thoughts on the feature. Invite the developers so that they can explain the design, risks and so on. Invite other parts of the organisation that you think can contribute ideas. When you have the different stakeholders with you, show them how you work and what your thought patterns are. Explain how you conduct testing. Pair testing (tester + tester, or tester + other stakeholder) is an excellent tool for getting to know the strengths and weaknesses of your test group, but also for education and for showing others how you work. If the stakeholders do not trust your work, this can be a way to show them your skill. Testers are often left on their own. You must take the initiative! Invite them to you.

Tip 5: A good status report can (and should) affect the release decision, but it might also affect how testing is perceived. Still, keep to the truth as you see it. Dare to include your gut feeling, but state clearly that it is just that. If you include metrics, be sure to add context and explain how you as a tester interpret them.

Tip 6: The bug report is one of the most important artefacts that come from the tester. A bad bug report can have a great deal of negative impact, while a good one can have the opposite. If possible, review bugs before you send them; by doing this you will get new test ideas as well as raise the quality of the bug reports. Train your skill in reporting bugs. Notify project management, developers and others that NO bug report of poor quality is to be accepted from your team, and that you want feedback on how to improve. You really want to make life easier for the bug classification team and for the developers trying to pinpoint and eventually fix the bugs.

In one project, a co-worker and I had been working on a bug for some time. It was a late Friday afternoon, only a few days before the release, and almost everyone was heading home. Every bug reported engaged lots of people, no matter how small it was. The bug we had found, though, was a blocker. We debated whether to hand the bug in as it was or to get to the root of it. A week or two earlier, the developers had spent a whole weekend trying to fix a set of bugs that were badly written up and hard to reproduce. So we decided to collect as much information as we possibly could, so the bug would get a good classification and hopefully get fixed. Our goal was to make a good bug report [9]. We started tracking the frequency and noticed that it reproduced 10 times out of 50. We collected logs and reports from all parts of the system. Once we were confident that the repro steps were correct and that the content was ready, the bug was released into the bug system. After an analysis, the bug classification team determined that the bug was both hardware and software related, so several teams got involved. When the hardware team thought they were done, they moved the bug to software. It was possible to track the communication and collaboration between the different teams. Not once was there any hesitation that information was missing or incorrect. In total, close to 30 people worked on the bug. Eventually it was fixed and sent back for verification. So, by spending a few extra hours on a good bug report, we saved time and lessened frustration.

Most of these tips are easy to start with, but they are also important to work with continuously.


  • Raise your ambition level
  • Care about your work and those that you work with
  • Prioritize testing before administration
  • Cooperate and collaborate
  • Report world-class bugs
  • Create status reports that add value



[1] Broken Windows – http://www.theatlantic.com/magazine/archive/1982/03/broken-windows/4465/

[2] Don’t Live with Broken Windows – http://www.artima.com/intv/fixit.html

[3] Explanation of Technical debt – http://en.wikipedia.org/wiki/Technical_debt

[4] Testing debt – http://www.kohl.ca/blog/archives/000148.html

[5] An Incremental Technique to Pay Off Testing Technical Debt – http://www.stickyminds.com/sitewide.asp?ObjectId=11011&Function=edetail&ObjectType=COL

[6] Behind the scenes of the test group – http://thetesteye.com/blog/2010/07/behind-the-scenes-of-the-test-group/

[7] A tutorial in exploratory testing – http://www.kaner.com/pdfs/QAIExploring.pdf

[8] How do I Create Value with my Testing? – http://www.kohl.ca/blog/archives/000217.html

[9] The Impact of a good or bad bug report – http://thetesteye.com/blog/2009/07/the-impact-of-a-good-or-bad-bug-report/

Darren McMillan November 13th, 2010


Firstly, what an excellent write-up. Fantastic, I enjoyed every minute of it.

I think there is a fine line when investing time in gathering information to file a good bug report and get it fixed. It sounds like some people really have to work hard to get their bugs fixed; to me that’s a crying shame & a real waste of valuable time the tester could have spent testing. Luckily I’ve yet to experience such an attitude towards having my defects fixed, though I’ve no doubt that I will at some point.

I’ll be sharing this with my team, thanks for writing it. There’s been some really excellent material coming from you guys, keep it up 🙂



Marlena Compton November 14th, 2010

Hi Martin,

Your post resonates with me because I’ve been thinking lately about the role conscience plays in software testing. When you write about caring, I interpret this as using our conscience to help improve the software we test. I sometimes wonder if, as testers, we are, in a heightened state, the conscience of our team.

I hope to see more writing from you and others about this aspect of testing.

Zeger Van Hese November 15th, 2010

Thank you for this very thoughtful & interesting post. I like the analogy.
Caring about our work, cooperating, making sure we do the best job possible…
What you describe here, in my eyes, is at the heart of “Testing Craftsmanship”.

@Marlena: I do think we are the conscience of our team (it’s not a responsibility of testers alone, but when everyone else is failing, we should step up).


Michael Bolton November 15th, 2010


The Broken Window theory is persuasive yet controversial (http://en.wikipedia.org/wiki/Broken_windows_theory). In the extreme, it’s hard to believe that fixing broken windows in a neighbourhood will result in an immediate drop in all kinds of social problems; on the other hand, enforcing fare-jumping laws on the subway does seem to have some probability of reducing fare-jumping, at least.

That’s consistent with something that I think is even closer to your excellent argument: the normalization of deviance (http://en.wikipedia.org/wiki/Space_Shuttle_Columbia_disaster).

To Tip 6 I might add a suggestion to enroll in the AST Bug Advocacy course: http://www.associationforsoftwaretesting.org/training/courses/bug-advocacy/

—Michael B.

Martin Jansson November 15th, 2010

Thanks for your feedback.

@Darren, in smaller organisations and teams the feedback loop is so much shorter, making some of the issues described above obsolete. Still, you should always strive to make things better, no matter the size of the organisation.

@Marlena, I have not seen it as we are the conscience. I need to reflect on that. I’ve sometimes expressed it as the testers being the oil in the machinery or that we are the sweepers in the curling team. I agree that it would be interesting to delve deeper into this area.

@Zeger, yes, this is the heart of the craftsmanship. Still, if you look at the essence of the Broken Window theory, you will notice that you might be a good craftsman, yet you will still lower your ambition and attitude if everyone around you is doing so. The opposite can be achieved by raising the ambition level in all our tasks and the quality of our deliveries, such as bug reports.

@Michael, I could probably use some other example. I’ve used different examples each time I present this; I guess you could conjure up whatever works best with your audience. My wife told me that one example I used would not fit well in Sweden, since it was of a more American style. Regarding Bug Advocacy, I agree totally! Still, those who work with me are constantly reminded of the importance of excellent bug reports. It would be interesting to attend the course myself. Next time I present this, I will recommend the AST course as one way of becoming better.

Rikard Edgren November 16th, 2010

These ideas have been lingering on my mind since you presented at SAST 15.
It’s not an all pleasant message, but it is necessary.
One thing I’m missing is the relationship to the developer’s technical debt.
It seems not so common that you only have testing debt; e.g. “When you avoid testing areas because you know that bugs won’t get fixed there anyway” is a situation where technical and testing debt spiral together.
What do you do then? Should other actions be taken, or is it just to fight on, and hope your improved testing will inspire other departments, eventually?

Martin Jansson November 17th, 2010

Rikard, I think we need to think more on the connection between the testing debt and technical debt.

What can developers do to decrease the testing debt and what can testers do to decrease the technical debt?

We often discuss how we improve in the test domain, but seldom how we improve the cooperation between testers and other stakeholders.