Background Complexity and Do One More Thing Heuristics Rikard Edgren

I spend a lot of time testing new features for the next release.
I actively try not to test the features in isolation, and not to use the easiest data and environment.
One example of this is that I often use “documents” that are more complex than necessary, that include elements and strange things not obviously related to the functionality in focus.
For instance, I might test text coloring in Word on a fully-fledged book with different types of footnotes, a lot of formatting, images, et al.
This doesn’t cost much time, but now and then it exposes weaknesses, either in the new functionality or in the old product.
I guess many do this, and I recommend it to anyone who has the freedom or guts to control their test environment.
Maybe the name Background Complexity Heuristic can help you remember it.

When a test is “completed”, I like to do something more with the product, preferably something error-prone, popular or the next thing a user might do, not necessarily related to the new feature.
I don’t think too much; rather, I just press F1 if I can’t think of anything better, since I don’t want this extra testing to take too much time.
I call this the Do One More Thing Heuristic.
It helps you learn, and find problems.

For both of these tricks, it might take some time to pinpoint the problems, but the alternative might be not knowing about them at all.

5 Comments
Justin Hunter March 14th, 2011

Rikard,

I like this idea a lot and use it myself frequently.

If “import a Microsoft Word document” is included as a step in multiple end-to-end tests, I like to ensure variability between the different Word documents that are imported.

One method of doing that would be to feed a series of test inputs into a pairwise and/or combinatorial test case generating tool in the following way:

Version of Word: latest version / older version
Includes footnotes: footnotes / no footnotes
Margin widths: narrow / wide / normal
Font type: normal / unusual / mix
Pictures included: .png / .jpeg / none
Justification of pictures (if included): center justify pictures / left justify pictures / right justify pictures
Font size for part of the text: 24 / 48 / 72
Justification of first text: center text / left justify text / right justify text
Font size for other text: 12 / 8 / 212
Color of font: Black / Other
Include non-ASCII characters? Yes / No
Highlight sections of text? Highlight some text / don’t highlight any text
Use red-lining? Use red-lining / don’t use red-lining
Columns per page: one column / two columns / three columns
Include bullets? No bullets / Yes, include bullets / Yes, include numbers / Yes, include letters

What you would wind up with is a set of Word documents, very different from one another, that would not only exercise each of those features at least once but would test each feature in combination with every other feature. It would maximize the variation between tests and minimize the repetition.
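To make the idea concrete, here is a minimal sketch of pairwise selection over a few of the parameters above. The parameter names are an assumed, shortened subset of the list, and the simple greedy cover is only an illustration of the principle; a real pairwise tool uses more sophisticated algorithms and typically produces smaller suites.

```python
from itertools import combinations, product

# Assumed subset of the Word-document parameters listed above.
parameters = {
    "footnotes": ["footnotes", "no footnotes"],
    "margins": ["narrow", "wide", "normal"],
    "pictures": [".png", ".jpeg", "none"],
    "columns": ["one column", "two columns", "three columns"],
}

def pairwise_suite(params):
    """Greedily pick test cases until every pair of values
    (across every pair of parameters) appears in some case."""
    names = list(params)
    # All (parameter, value) pairs that must be covered together.
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va in params[a]:
            for vb in params[b]:
                uncovered.add(((i, va), (j, vb)))
    suite = []
    # Walk the full cartesian product; keep a candidate only if it
    # covers at least one not-yet-covered pair.
    for candidate in product(*params.values()):
        covered = {((i, candidate[i]), (j, candidate[j]))
                   for i, j in combinations(range(len(names)), 2)}
        gain = covered & uncovered
        if gain:
            suite.append(dict(zip(names, candidate)))
            uncovered -= gain
        if not uncovered:
            break
    return suite

suite = pairwise_suite(parameters)
# Far fewer documents than the 2*3*3*3 = 54 full combinations,
# yet every value pair occurs in at least one document.
print(len(suite))
```

Each entry in `suite` describes one Word document to prepare, and the suite as a whole guarantees that every pair of feature values gets exercised together at least once.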

– Justin Hunter

Shmuel Gershon March 14th, 2011

Rikard, we should not underestimate an additional press of F1. Good point.
In 2005, a tester in my team filed a bug about F1 not working any more after one specific operation. I still have this bug saved with the ones I like. It was found exactly by trying… just one more thing.

Rikard Edgren March 15th, 2011

Justin, yes, that’s another way to ensure variability. It will, however, not catch things you didn’t know you were looking for, which is the essence of my heuristic.

Shmuel, my best F1 story (we all have one, right?) is a case where years of persuasion were needed to make F1 launch the Help instead of a settings dialog.

Do One More Thing can of course be used before or inside a test as well (but then maybe it’s just a part of Exploratory Testing?)
I think both these heuristics can be used together with any test type (but for a purely scripted test, you would be breaking the rules…)
For those of you who collect testing heuristics, I recommend putting these in the category Test Execution, Serendipity-enablers.

Justin Hunter March 15th, 2011

Rikard,

I’m not sure I understand your point. If you had objected to my approach by saying, “You’re not talking about adding one more thing, Justin. You’re talking about adding many new things. That strategy can often be overkill, in my opinion,” I’d say, “Fair enough; I agree that finding the right balance for your testing context is important.” However, if you’re suggesting that attempting to identify a series of ‘one more thing’ testing ideas ahead of time* will limit what you can find to only what you know you’re looking for when you put the list together, I can say that in my experience that is just not the case at all.

In fact, I would go so far as to say that one of the largest benefits of structured variation methods (like pairwise test design and combinatorial test design and orthogonal array test design) is precisely that they help you find things that you are not even aware that you’re looking for. The variation of test inputs, from test to test, is maximized. This helps you to find defects because you’re taking new paths through the system that you otherwise wouldn’t (which is one of the goals we both think is valuable with your “One More Thing” heuristic). To paraphrase James Bach, if you want to survive your passage through a land-mine field, following in the footprints of someone who has successfully made the journey is a wise choice; if you’re trying to find software bugs, it is an awful strategy. Variation is good. Maximized variation is even better.

I had a great dinner with James Bach and solved his tricky “calendar time entry puzzle” using an approach that simply focused on achieving as much variation from test to test as possible, using whatever categorizations I could think of to maximize variation. The categorizations I chose (early morning / late morning / early afternoon / evening) had nothing to do with the underlying bug, but the variation of combinations of test inputs that these arbitrary categorizations produced resulted in finding the fault very efficiently. I wasn’t looking for the particular type of bug that James “planted” in that example. In fact, at that point, I wasn’t even aware that that kind of bug even /existed/ in the real world. Nevertheless, the structured variation approach quickly triggered the bug in a very small number of tests.

A counter-example to the above “success story” would be an example where, say, for a web app, a JavaScript race condition bug only showed up in the IE7 browser. If you didn’t think to list IE7 as one of the browser types, then, yes; in that case – as you say, “It will however not catch things you didn’t know you were looking for, which is the essence of my heuristic.”

I hope my explanation makes sense. I’m confident we’re both largely in agreement in principle. I think your One More Thing heuristic is great and I’ll remind myself to apply it moving forward. I’d be interested in whether you have experiences that are consistent (or inconsistent) with my assertion that structured variation approaches often have the potential to uncover things you’re /not/ looking for.

– Justin

* PS

I was aware as I was writing “attempting to identify a series of ‘one more thing’ testing options ahead of time” that you would be puzzled and think: ‘That’s crazy talk. It is an oxymoron. The “One More Thing” idea only comes into the picture after the test is being executed by the tester; by definition, a tester won’t be able to think of “One More Thing” when he’s designing the test cases in advance of test execution.’ Even so, that’s what I’m /attempting/ to do (and recommending to others that they do) when identifying appropriate test inputs for their pairwise and/or more thorough combinatorial testing plans.

And then, you might ask, what happens during test execution? I would be trying to execute tests that have the combinatorial testing / structured variation One More Thing test ideas (that were thought up ahead of time) as part of each test case…. /and/ … in keeping with your insightful post, when I’m at the keyboard, I often try to dream up new “One More Thing” ideas to add on the fly. If some of the spontaneous “One More Thing” ideas bear fruit, they are likely to turn into “Ahead of Time One More Thing” ideas that get fed into future combinatorial testing plans (so they’re built in from the start).

Examples of One More Thing ideas that have transitioned from spontaneous additions to One More Thing ideas that are often built into my plans from the beginning include resizing application windows, enabling and disabling JavaScript, using non-ASCII characters, typing symbols frequently used in programming languages into data fields, including several of the data format inputs on Elisabeth Hendrickson’s excellent cheat sheet (which is available at testobsessed.com), etc.

Rikard Edgren March 16th, 2011

Hi Justin

The intention of my comment was not to object to your thoughts; I just wanted to say that it was different (to me).
I guess I did not want to steer the thread towards a pairwise discussion (I will post my thoughts on that later; I haven’t thought it through yet.)
I was also certain that your suggestion was about the Background Complexity Heuristic, but I guess it could be both.

Anyway, I am very glad you took the time to elaborate further.
Your switch from “One More Thing” to “Ahead of Time One More Thing” is a very good example of using what you learn and adjusting your test methods; it’s Exploratory Testing Dynamics.
And the mix of humans and tools supporting each other is something our industry can do more of, and better.

Using many different approaches is the key to good software testing.