Systems outside the testing radar
Martin Jansson
When is a system small, non-complex or unprioritized enough not to be tested? If a test organisation is working on the bigger system that will be released to the customer, what happens to the other, smaller systems? Are they almost always left untested? I usually identify these as applications created by one person; they may be used in critical parts of the product development, but they are not viewed as a product, sub-system or application that needs to be tested.
During the mid '90s I worked at a small localization company. I was involved in a project where we were preparing a major translation of Swedish material into 20 languages. To succeed with such a translation it was important that we first translated the material into English and then went from there to the other languages. I wrote a little script to prepare the glossaries for the various languages; it was just a small script to get the material ready for translation. At that time we had a few testers, but they were either testing commercial applications or testing for our customers, so there were no resources left for my work to be tested. Naturally there were loads of bugs in my code. I accidentally introduced a row offset in the script, so each translated language used my glossaries with the translations shifted by a row. The proofreaders noticed the error eventually, but by then it had cost the company a few million.
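Purely to illustrate how small such a mistake can be, here is a minimal Perl sketch of that kind of row offset. The file names and layout are assumptions for the example, not the original script.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hypothetical reconstruction: source terms and their translations
    # are expected to sit on matching line numbers in two files.
    open my $src, '<', 'glossary_en.txt' or die $!;
    open my $tgt, '<', 'glossary_de.txt' or die $!;
    chomp(my @terms        = <$src>);
    chomp(my @translations = <$tgt>);
    close $src;
    close $tgt;

    for my $i (0 .. $#terms) {
        # The off-by-one: $i + 1 pairs every term with the NEXT row's
        # translation; it should of course be $translations[$i].
        my $translation = $translations[ $i + 1 ] // '';
        print "$terms[$i]\t$translation\n";
    }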
During another project we had a problem working with our customer's files. The customer required us to use a specific program whose single working folder contained close to 100,000 RTF documents. Each time a translator browsed the folder to find a specific document, it took a few hours just to see the file. When we wanted to make many changes to many files at the same time, we calculated that it would take three years to implement them that way. So we wrote a little Perl script to make the changes by searching and replacing directly in the RTF format (a very dangerous approach). First we made just one small change, and when we saw that it worked well, we added more. Eventually we had quite a long list of changes being applied to these 100,000 files. We probably saved a few hundred years of time and budget, but eventually there were bugs in the script. One of them: I thought I had fixed issues with runs of spaces by replacing them with a single space, but RTF sometimes needs two or more spaces. The result was that whole sections of text stopped being shown at all. This was just a small tool.
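Again only as a sketch under assumptions (the file name and the exact substitution are invented), this is roughly what such a search-and-replace looks like, and why the space rule in RTF makes it risky:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read one RTF file as plain text and edit it in place.
    my $file = 'document.rtf';
    open my $in, '<', $file or die $!;
    my $rtf = do { local $/; <$in> };   # slurp the whole file
    close $in;

    # The dangerous "fix": collapse every run of spaces into one.
    # In RTF the space directly after a control word (e.g. "\par ") is a
    # delimiter that belongs to the control word, so any further spaces
    # are real document content; collapsing them blindly can therefore
    # change the document itself, not just cosmetic whitespace.
    $rtf =~ s/ {2,}/ /g;

    open my $out, '>', $file or die $!;
    print {$out} $rtf;
    close $out;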
At most companies there are many such small systems floating around the organisation. Should the test organisation take a step toward testing them? If they are so small, might testing them also take little effort? How about test automation or production tests? Are they prioritized for testing, or do they fall into the same unimportant category?
If we see testing as a service, I think it is important that the test organisation involves itself in all development, even if only for a few minutes. Developers also need to involve testers, to give them a chance to do their job. Naturally it will be impossible to do everything, but in some cases it takes such a short time to find a few bugs that it is worth it.
Good thoughts and great stories!
I guess that if you took a risk-based approach to testing in such projects, you would not prioritize amongst products/applications. Instead, the test areas would be selected according to the risk that the applications or functions carry, which could then be exposed during testing, regardless of application type and size. But it is easy to forget this!
It is important to look at testing projects from different angles, not only from a line-organization point of view, which is so often the case.
Interesting question. We could probably benefit from considering this to a much larger extent than we currently do. But just because we are able to find a bug somewhere doesn't automatically mean we benefit from finding it.
If the resource allocator focused on the overall costs, I think things would look different. The test team could still make it clear that they are able to identify these kinds of errors across all development and thereby lower the cost (that is, if the bugs are then fixed by development).