The Boundaries of System Testing
Henrik Emilsson

Over the years I have noticed that System Testing has had a special meaning at every place I have worked; it has even meant different things to people at the same place. That is, System Testing depends on the context; and it is fuzzy because we are dealing with arbitrary and/or general systems.

“System testing of software or hardware is testing conducted on a complete, integrated system…” but the boundaries of what the complete, integrated system includes may vary a lot, whether you look at it from an outside perspective or from within.

Some examples: When developing a cell phone, where does the “system” begin and where does it end? Does it begin with the cell phone OS or with the phone (hardware + OS + software)? Do we need to include any network? How large a network? What if you are a third party developing the phone OS (which could be run standalone in a simulator)? What if you develop an application for the phone (or even for different kinds of phones)?

So when is a system “complete and integrated”?

I guess that this is very hard to define because the boundaries of system testing are elastic; and they are elastic because the boundaries of an arbitrary system are hard to define. Still, we use the term “system testing” daily.

So what is my point really?

I have often found interesting things just by thinking about what system I am working with. I have found that some things are really outside the boundaries that I first believed they were inside; and, perhaps more usefully, I have found things that really should be included in my system and in the system testing. And the boundaries might stretch the further you get in a project: what was a complete and fully integrated system during the first half of the project might be just a sub-part at the end of it.

So my advice is that you take a few minutes now and then to seriously ponder the boundaries of Your system. It will affect the type of testing You choose to focus on; and it will be a good aid in getting proper and valid test coverage.

Rikard Edgren March 20th, 2010

Yes, this is an important observation.
The most important issues might occur outside your system, yet be due to things within your system.
Of course, testing the “whole system” (a small part of society?) is too time-consuming, but we should at least test “things” from our system that others interact with. Maybe with a scenario that starts and ends far from the software being developed.
It has been wisely said something like “Never make your box the biggest one. It isn’t.”
For system testing we sometimes should think in the opposite way:
“make the box of possible testing scope as large as possible”

Martin Jansson March 20th, 2010

Excellent thoughts!

Another interesting aspect of this is when you have a larger organisation with different levels of testing. On one system you might have up to 50 levels. So, the question as you phrase it, “What is the boundary of my system?”, is very interesting. I also think it is interesting to know: what did they test before me? What boundary did they see? I am sure that if you start to compare these boundaries between the different levels, you will see lots of overlap but also a lot of spots where no one tests.

If we see the universe as a quite complex system, how many levels of testing would that be?

Henrik Emilsson March 22nd, 2010

@Rikard: That is a good idea! Scenarios are a very useful technique for visualizing the system and analyzing the boundaries. They give you an opportunity to step in and out of the system, which of course stretches beyond the software.

@Martin: Yes, and I guess that there should always(?) be an overlap, for at least two reasons: coverage and use cases. Coverage isn’t hard to understand, but the use cases are interesting because the “current system level” might have a whole different set of use cases than its siblings, children or parent. Often these use cases step in and out of the “current system level” but can cover completely different things.