Automate configuration checks while testing Martin Jansson Comments Off

I assume you are familiar with the discussion around checks vs testing brought to you by Michael Bolton, which I agree with.

By configuration I mean the settings on a unit under test. These can be configuration-heavy devices such as switches or routers managed over SNMP, applications using the registry, or applications that store their settings in databases. Those of you who are familiar with these settings are probably also familiar with setting them through a CLI.

So what do I mean by… automated configuration checks while testing?

In a configuration-heavy environment there are lots of settings that you know about the system, stored in its configuration. While you work with the system, the configuration sometimes changes slightly. A test might be to perform a certain task and, once finished, check whether certain things have changed. You are testing in one area, but you are continuously interested in whether something changes in specific parts of the configuration.

For instance, in an Ethernet switch you may want to test around changing the MDIX and speed settings of a LAN port while generating traffic through the system. While doing this you wish to monitor that no other settings change, so you might check the configuration for alarms. There might be thousands of these settings whose state you know and that should not change, or at least you know which states they could enter.

My idea is that you create automated configuration checks that poll the system for information, either continually or when triggered. The checks are context dependent, but it is fairly easy to know which context is valid in each situation when it comes to these settings. If it is too complex, perhaps you should not automate it.
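A minimal sketch of such a poll-and-compare check in Python; the setting names, the allowed states, and the poll() stub are hypothetical, and in a real setup poll() would query the device over the CLI or SNMP:

```python
import re

# Hypothetical expected states: setting name -> regex of allowed values.
EXPECTED = {
    "port1/speed": r"speed (auto|100)",
    "alarms": r"active alarms: 0",
}

def poll(setting):
    # Stub standing in for a real query over the CLI or SNMP.
    current = {"port1/speed": "speed auto", "alarms": "active alarms: 0"}
    return current[setting]

def check_once():
    """Return the settings whose current value falls outside the allowed states."""
    return [setting for setting, pattern in EXPECTED.items()
            if not re.fullmatch(pattern, poll(setting))]
```

Run check_once() on a timer for the continual variant, or call it from a test step for the triggered one; a non-empty result means something changed that should not have.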

As I see it, such a check would best be suited as a unit test (or unit check, as Michael Bolton calls it). I am fond of using Python in combination with a unit test framework and Pexpect. Pexpect lets you create wrappers around whatever you are trying to do with the system, whether through the CLI or SNMP, and lets you check the result using regular expressions, thus enabling you to build in some context-dependent checks.
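As a sketch of the unit-check idea, here is a Python unittest suite built on that pattern; device_cli(), the command strings, and their outputs are hypothetical stand-ins, and in real use a Pexpect-spawned CLI session would take their place:

```python
import unittest

def device_cli(command):
    # Hypothetical stub; with Pexpect you would spawn the device CLI,
    # send the command, and capture the output it produces.
    outputs = {
        "show port 1 speed": "speed auto",
        "show alarms": "active alarms: 0",
    }
    return outputs[command]

class ConfigurationChecks(unittest.TestCase):
    """Checks for settings that should stay within known states while testing."""

    def test_port_speed_in_allowed_state(self):
        # The port speed may be 'auto' or '100', but nothing else.
        self.assertRegex(device_cli("show port 1 speed"), r"^speed (auto|100)$")

    def test_no_active_alarms(self):
        self.assertRegex(device_cli("show alarms"), r"alarms: 0$")
```

You could run this with python -m unittest while the rest of the testing goes on, or trigger it from your test harness after each operation.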

This would enable you to create tools for yourself while doing the real testing.

Growing test teams: Progress Martin Jansson 2 Comments

A lot of these ideas come from Peopleware by Tom DeMarco and Timothy Lister. As I see it, they realised it is easier to show what stops a team's growth than to list things that will actually create the team. Jelled teams are created when many of the factors that stop us from growing have been eliminated.

What stops the growth of a test team? I identify new things almost every day that in one way or another disrupt the team or stop it from growing.

The hunt for test progress!

When we talk about progress it is directly linked to a goal: progress towards a certain goal. If the goal is unclear or has been lost, the progress estimation can sometimes shift toward things that were never intended.

How do you determine progress, then? When are we done testing? If our plan is fixed from the start, we might have a defined set of tests that must be run in order to say we are complete. That is, complete with what we thought from the beginning. But the plan changes, no? If that is the case, the progress report is ever changing, up or down. Is it perhaps not that interesting to focus our time on bulletproof progress estimations? Do we stop testing when time runs out, or when someone says stop?

I think the test team has a harder time growing when …

  • the ability to show test progress becomes more important than the actual testing done or the information produced from it.
  • we think it is more important to get green than red in the pie chart or bar chart.
  • we avoid testing areas that might result in bugs because that might disrupt the expected weekly progress.
  • it is more important to run a test that shows progress than a test that might actually find bugs.
  • we avoid helping developers fix the bugs found because we need to show test progress.

Too much focus on progress will generate bad energy in the test group and therefore slow us down, as I see it.

YouTube Premiere! Rikard Edgren 2 Comments

At EuroStar 2008 I presented Testing is an Island – A Software Testing Dystopia.
Fritz shot the pictures, Henrik wrote the music, and I uploaded it to YouTube:

The accompanying paper can be found at http://www.thetesteye.com/papers/redgren_testingisanisland.doc

Exploratory test plans? Martin Jansson 4 Comments

How would a test plan for exploratory testing be constructed? I assume it would differ from a traditional test plan.

Would we use concepts such as entry/exit criteria for test? I would never say no to a build to test, so skip the entry/exit criteria. I guess it also has to do with the role of testers: do we act as police, or are we a service?

Resources needed? Do we ever know how many testers we need? We can give a vague number of how many we would want to be fully comfortable, but can this ever be accurate? If we aim to test as much as we can in the defined amount of time, we will do so with the resources we have been allocated. I guess it also has to do with how you are organized as a team and what your mission is. If the team is running several projects and tracks at the same time, it is even harder to determine how many resources are needed. Do you really want to allocate testers to a certain percentage in different projects?

What is to be tested and how? Well, do we ever know that in advance? We should be able to list tons of test ideas, but isn’t that just our initial idea of what is to be done, which will change as soon as we sink our teeth into the first build?

Do we get the test plan approved and then use it in the project? A plan is just temporary; it will change many times for sure. Planning is better than the actual plan. No matter what project I work in, I am able to plan incrementally, using scrum or whatever tool is available.

I vote for treating the traditional test plan as an unnecessary artifact. Perhaps one would not want a generic plan around exploratory testing either?

How do you plan your exploratory testing? What do you focus on? What resistance do you meet from management when presenting your exploratory plans?

Alternative usage of Test Process Improvement Rikard Edgren 1 Comment

Last week I attended a SAST VÄST seminar (the Gothenburg section of the Swedish Association for Software Testing) with two interesting presentations.
One of them was about experiences of TPI, Test Process Improvement, and a sneak peek of the improved TPI Next.

I am not fond of TPI, or TMM, or CMM, or anything else that tries to objectively measure how “good” something is.
I think you then miss what is most important, and just because you are at level 5 doesn’t mean you will find all important defects.

But there are interesting things and good suggestions in Test Process Improvement, so an alternative usage is to make a TPI analysis with your colleagues, throw away the book, and focus on what you think is most important.

Or have I missed something about TPI?
Maybe it is the only way to sell process improvement to your managers?
And getting an external test expert to look at your work should always be fruitful?

Multidimensional Subjectivity in Software Testing Henrik Emilsson 8 Comments

I use Jerry Weinberg’s definition of quality: “Quality is value to some person”; and I use Cem Kaner’s extension to the definition so that it becomes “Quality is value to some person (that matters)”…

I.e. quality is inherently subjective. And there are a lot of persons that are affected by software that we produce… With this in mind it becomes hard for a tester to stay focused when there are so many persons with opinions that could matter; but if we can find out “who matters” we decrease the number of possible values to care about. Still, this will leave us with several important values that need to be taken into account when testing the product.

So how can we testers deal with that?

You could do a role play when testing and put on someone’s hat during the test session; or you could let real users test the product and let them have a say about what they find.
But for a skilled tester it is more about being multidimensionally subjective and thinking as several persons at the same time.

This means taking into account a lot of values, beliefs and preferences that might matter. Not as an average, but as several independent quality dimensions that have (more or less) importance. The hard thing is to know when a value is threatened, which (type of) person is affected, and whether this matters at all.
I.e., it is a matter of constantly questioning “is there a problem here?” and trying to pair a potentially threatened value with its corresponding person. And if this problem threatens a value for some person that matters, we have found a bug. This corresponds to the definition of bug from Cem Kaner: “A bug is something that threatens the value of a product”.

Much of this happens automatically for many of you skilled testers out there; when I thought about it recently I realized that this is something I do more and more, and hopefully I am improving this skill each day. It is a great skill to have when testing software!

Anyone having any thoughts on this?
Have you experienced this yourself?
If not, does it sound like an interesting thing to examine? Would this be helpful to you?

Cheers,
Henrik

Update 2009-09-14: According to a comment from Michael Bolton, see below, the quotes that I said belonged to Cem Kaner are both quotes from James Bach. I apologize for referencing the wrong person.

What’s so special about software testing? Rikard Edgren 6 Comments

There are some things about software testing that are special, but not unique:
* you are never done, and there is always something to do
* you have to be creative very often
* you are dependent on new, different and conflicting technologies, users, objectives
It’s not easy to be a tester, thank God for that!

And there are some things that are unique:
* it is often good to do things in the wrong way
* you try to destroy something you love

You might say that none of the above is true for an executor of detailed test cases, but that’s not software testing, it is software checking!

Michael Bolton on Testing vs. Checking Henrik Emilsson 2 Comments

I just want to promote a really good blog post written by Michael Bolton where he describes the difference between Testing and Checking:
http://www.developsense.com/2009/08/testing-vs-checking.html

I wish that many managers, testers and developers read this post…

Cheers,
Henrik

The Inquisitive Tester – Part I: Question the tests the test eye 4 Comments

In order to become a successful inquisitive tester, there are a couple of things you can do to improve your skills beyond the more common quest to “question a product”. One important thing is to question the tests themselves.

——————–

Have you ever run tests and wondered if they were really necessary, perhaps knowing that the tests are useless?
Have you ever run tests that were too old and not updated with latest terminology or functionality?
Do your tests contain too much configuration information? Should this be put elsewhere?
Have tests become redundant because those failures no longer happen, ever?
Has the intent of the test been lost, and has the test therefore been rendered useless?
Have the tests already been run, on the same build? Twice?
Have you found any bugs or any important information at all with the tests?

Are your tests the most powerful?
Are the tests credible?
Can the tests be faster to execute?
Can you run the most important tests first?
Are the tests too narrow and/or too general?
Do you really understand the test?
Has the original test idea been lost in translation?
Are the tests too much a projection of the test designer’s point of view?

Are your tests interesting or boring to execute?
Are the tests in line with your test strategies?
How often do you change your test approaches?
Can the tests instead be better used as input to developers’ unit tests?
Can the essence of the tests be used elsewhere?
Have your tests been reviewed by your colleagues, including technical writers?
Have your scenario tests been reviewed by business people?
Have you captured how the users will use the software in the tests?

Are you satisfied with your tests?
Are your (hidden) stakeholders satisfied with your tests?

——————–

Can you come up with more questions?
Regards,
the test eye (Henrik, Martin & Rikard)

Broken window theory and quality Martin Jansson 6 Comments

Consider a building with a few broken windows. If the windows are not repaired, the tendency is for vandals to break a few more windows. Eventually, they may even break into the building, and if it’s unoccupied, perhaps become squatters or light fires inside.

Or consider a sidewalk. Some litter accumulates. Soon, more litter accumulates. Eventually, people even start leaving bags of trash from take-out restaurants there or breaking into cars.

The quote above is based on an article titled “Broken Windows” by James Q. Wilson and George L. Kelling, published in March 1982. You can find a bit more about the background here.

It is my belief that the same thing happens with products and product development. When bugs start to accumulate in an area and do not get fixed, you will, as a tester, lower your standard for what counts as a bug by comparing newly found ones to the vast amount that already exists. Eventually you might feel you should not even bother reporting bugs, because you know they won’t get fixed anyhow.

I think this is mainly a managerial problem. Examples of these kinds of problems can be when…

  • focus is on implementing new features, just getting them in there
  • covering an area with tests is more important than actually finding bugs and getting them fixed
  • getting a bug fixed in the late stages of a project requires earthquakes or miracles

Many other testers talk about this phenomenon as products going rotten, or something similar.

How serious is this problem for testers? What do you think?