Chess & Testing Rikard Edgren

Analogies are helpful, not because they come with truths, but because they can help you highlight and think in different ways about the phenomena you are comparing.
I think you can pick any subject you know a lot about, and after some thinking, interesting things will emerge.

The important moments
If two chess players are at about the same strength, the winner is often the player who realizes at which stage it is necessary to re-think the strategy very carefully. Some moves are more important than others.
For software projects that don't go perfectly, and when unknown things happen, the best activities might be straightforward, or might require a lot of consideration and creativity.

Technique
There are many typical methods that every good chess player must know about (forks, rook endings etc.) in order to see opportunities and apply them in the right situations.
Software projects differ much, much more than chess games, but there are many available quicktests, tricks and test design techniques that you can make good use of, if you know them well.

Theory
In chess there has been an enormous amount of analysis of different opening moves, and a player has a great advantage if she knows a lot about how to start games, and the typical positions and strategies that are likely.
Since each project has a new starting position, we can't have this in testing.

Understand what is important
In chess there are some key elements the player needs to think about (material, development, centre, king’s safety, pawn structure etc.), but in each game these different aspects have different importance; it doesn’t matter if you have a great advantage on the Queen’s side if your opponent is mating in three.
It is the same in testing: we might know beforehand which areas and attributes matter, but since testing can't be complete, and unknown things always happen, we need to adjust and focus on the things that are most important.

Time trouble
If you spend too much time on your moves, you will end up in time trouble, and once there, there is a much bigger risk of making big mistakes.
We can get into the same situation in testing (e.g. a fixed release date, even though everything else has been pushed), with the main difference that the test team has little chance of avoiding it by its own means.

"a bad plan is better than no plan at all"
When you learn chess, you are often told that you must have a plan; that having no plan is the bigger mistake. Later I realized that a bad plan is probably worse, but by always creating a plan, you get better at it.
It is the same in testing: it is essential to have a plan, and from a learning perspective a bad plan is better than no plan.

Practicing
There are many ways to learn chess, but a key element is to play a lot; and I think it is the same for testing.

Analysis of games
After a competition game where you have played for a couple of hours, it is common that the two opponents sit together and go through the game, move by move. They talk about their thinking, and examine what would have happened if better moves had been chosen.
The exact same thing is difficult to do for testing, but detailed retrospectives of important decisions or bugs can be a great learning exercise, and we should do more of this. Maybe directly after a pair session?

History
When learning chess you study the history, look at classic games that you learn from and get inspired by.
We don’t have this in software testing, and it is a pity.

Diversity
Chess players come in all different types, with all kinds of backgrounds, just like testers. Maybe the concentration of peculiar people is higher, in both chess and testing; this is a good thing.

Fun
You learn more when enjoying yourself, both in chess and in software testing.

When making comparisons it can be fair to list the most important differences:
* Chess is a game, testing isn’t.
* Teamwork and collaboration are very different.
* Chess has a defined play area, and a specific set of pieces; and this is certainly not the case for any two software projects.

6 Comments
Markus February 3rd, 2010

If "Chess is a game, testing isn't", well, then we should probably make testing a game, which some people have already realized and acted on by starting weekendtesting.com.
I think this is currently the closest thing we have to a "playground" in testing.

Zeger Van Hese February 3rd, 2010

Hi Rikard,
I like the chess analogy. I think there's even more of an analogy with testing than appears at first. You say that you don't really see the analogy with regard to "theory" and "history". But I think there is, in both cases:
– Theory.
You said "a player has a great advantage if he/she knows a lot about how to start games, and the typical positions and strategies that are likely. Since each project has a new starting position, we can't have this in testing."
Although I agree that the context at the start of every test project is different, I think that it does help if you have the theoretical and practical knowledge/experience of other projects. That way, you easily recognize similarities and differences between them. You quickly know and remember what worked in which context, what the possible approaches are. This can help you in choosing the right one for your current situation, or at least the one that will be most effective.
– History
You said “When learning chess you study the history, look at classic games that you learn from and get inspired by. We don’t have this in software testing, and it is a pity”.
I think that when learning software testing, we should study and look at "the classics": classic, noteworthy books by thought leaders in the field, which inspire you and sometimes encourage you to develop your own theories. We read about how they solved important problems, read experience reports, and learn from those. I'm practically sure you did all of that, no? 😉
–Zeger

Rikard Edgren February 3rd, 2010

Markus; nice opposite thinking! I guess Weekend Testing is playing, and having fun, in order to learn. I like weekday testing as well.

Zeger; you're right, and wrong. For Theory, it is good to have experience and knowledge of other projects and models, but they are vague and give hints; they are useful tools, at best. In chess, the theory says "if I do this, she does that, I do this; and I have a slightly better pawn structure." To try to do this for testing is flawed; too many elements are unknown.

History – yes, there are some really good books, and there are a lot of papers and presentations about (un)successful projects. But when you get down to the details, how tests are designed and executed, there aren't so many examples, and those that exist are mostly about failures. And we definitely don't have textbook examples that describe really good testing, examples that at least some people would consider must-reads for a prospective tester. Of course, the best material for this is probably confidential; maybe we will be in a better situation in 50 years.

Here’s a blog entry on History of Software Testing: http://xndev.blogspot.com/2009/03/history-of-ideas-in-software-testing.html

And if you want more on chess & testing:
Blitz Testing at quicktestingtips.com – http://www.quicktestingtips.com/tips/2009/07/blitz-testing/
Jon Bach’s If you want to practice testing skill, play chess – http://www.quardev.com/blog/2008-03-24-1151885966

Martin Jansson February 3rd, 2010

I also disagree with "We don't have this in software testing, and it is a pity." We have quality/test patterns, old risks, or whatever you want to call them. We have areas we know developers always forget, e.g. large-font handling, file handling etc. I consider this a historical aspect.

I think we have a lot of theory in testing. Your analogy with opening moves can be seen as our various test strategies. For instance, we open with a build acceptance test, then move on to a more general smoke test; we then dig deeper into the top-most risk areas. We also have the theory of always changing strategy in order to find as many faults as possible. Perhaps not comparable to chess theory, but still.

Saam February 5th, 2010

Nice blog post. I'd like to add some input to the discussion where you, Rikard, stated the following: "In chess the theory says 'if I do this, she does that, I do this; and I have a slightly better pawn structure.' To try to do this for testing is flawed, too many elements are unknown." Maybe I am slightly off topic here, but I think in both cases, testing & chess, it is a matter of how well you know your "fellow player(s)". "She does that" is based on your expectation that "she" has also read chess theory, and that "she" decides to apply it. The better you know your "fellow player(s)", the more confident you will be in relying on your expectations, and I think this is true for both chess & testing.

Rikard Edgren February 6th, 2010

Regarding analogies, there are no rights or wrongs; so all additions are welcome and good!
But to clarify the subjects I chose, I should have written “Opening Theory” instead of Theory.
In chess it is very important to study openings, and to know exactly which moves you will make, commonly the first 15–20 moves of most games.
If you don’t study this, you won’t reach really good results in chess.
This emphasis on memorizing a lot of details is not applicable to software testing.

Instead of History, I should have written "Classic Games". I would love it if we could have more written details about successful testing projects.
So if you are testing a certain product, the History of Software Testing Expert could tell you: "You should read the test details of projects X and Y; their solutions will probably inspire you."