Open Letter to EuroSTAR organizers – testing introduction Rikard Edgren 2 Comments

Hi

Thanks for your request for a high-level summary of software testing. You would get a different answer from each tester; here's what I think you should know.

1. Purpose of software

Software is made to help people with something. If people can't make use of it, the product doesn't work. This is complex, because people have different needs, and there are many ways software can fail, in big or small ways.

2. Why we are testing

For some products, problems aren't big problems. When they are encountered they can be fixed at that time, and the loss of money or image is not high enough to require more testing. But if it is important that the software is really good, the producer wants to test, and fix, before releasing to end users. A generic purpose of testing is to provide information about things that are important. Specific missions are found by asking questions of the people involved; the mission can be to get quantitative information about requirements fulfilment, and/or subjective assessments of what could be better, and/or evaluation of standards adherence, etc.

3. Context is king

Every product is unique (otherwise we wouldn't build it), so what is important differs from situation to situation. Good testing provides reasonable coverage of what matters. The strategies for accomplishing this can be difficult to find, but I know that I don't want to put all my effort into only one or two methods. If you want to engage in conversations, start with "What's important to test at your place?" and select from the following follow-up questions: "What about core/complex/error-prone/popular functionality?", "What about reliability, usability, charisma, security, performance, IT-bility, compatibility?"

4. How to test

Testing can be done in many ways; a generic description is "things are done, observations are made". Sometimes you do simple tests, sometimes complex; execution ranges from automated unit tests in developers' code, to manual end-to-end system integration tests done by testers/product owners/(Beta) customers. There are hundreds of heuristics and techniques, but you don't need to know them; rather, practice by seeing examples and discussing how something could be tested to find important problems.
Key skills are careful observation, enabling serendipity, and varying behavior in "good" ways.

5. Test reporting

Testing is no better than the communication of its results. Testing doesn't build anything; the output is "only" information that can be used to make better decisions. While the testing can be very technical, the reporting is done to people, and this is one of many fascinating dynamics within testing. The reporting ties back to the purpose of the software and the testing (but also includes other noteworthy observations made).

And with that we have completed a little loop of my testing basics. Any questions?

Regards,
Rikard

Black Box vs. White Box Henrik Emilsson 7 Comments

I have heard and seen the difference between Black Box Software Testing and White Box Software Testing being described as “whether or not you have knowledge about the code”; and that Gray Box Software Testing is a mix of the two.

But is it really about how much of the code you see?

I rather think of it in another way, and this is my take on an explanation:

Black Box Software Testing – When you worry about what happens outside the box.
White Box Software Testing – When you worry about what’s going on inside the box.
- Sometimes you know about the inside when doing Black Box Software Testing.
- Sometimes you know about the outside when doing White Box Software Testing.
What matters is that there might be a difference in scope, question formulation, and information objectives.

Another take which would mean the same thing:

Black Box Software Testing – When you don’t need to worry about what’s going on inside the box.
White Box Software Testing – When you don’t need to worry about what’s happening outside the box.

================================================

Disclaimer: As a tester I cannot say that I often categorize my testing into White/Black/Gray Box Testing. However, it can sometimes be helpful to think about the transition to another perspective; so I use Black and White Box as a test idea trigger (or heuristic).

How I Write Conference Abstracts Rikard Edgren No Comments

I guess some of you are writing, or thinking about writing, abstracts for EuroSTAR 2013; the deadline is 13 February.
You should do this, not just because Alan said so.
You should do it because you want to tell stories and enhance your own understanding of something that is important to you.

This is my process for writing session abstracts:

1. Think

I consider the theme, to see if it inspires me, but I don’t feel limited by it.
I know that a great abstract can get accepted, regardless of any link to the theme.
I follow my energy, and usually there are some topics I would like to talk about.
Sometimes I re-write an abstract from last year (twice this has given me conference spots!)

2. Research

Has this been addressed by other people?
What did they say?
What’s unique about my abstract?
What should I read or do to understand more?

3. Do a full outline

I think the thing through, all the way, because I want to write an abstract, not a trailer.
I want a lot of material, so I have the luxury of discarding the less useful/appealing stuff.
I try to include the most important things in the abstract; I can’t afford secrets, and it should be clear why this is a good session.
I often forget Weinberg’s Rule of Three: if you can’t think of three things that can make this a bad idea, you haven’t thought it through.

4. Let it rest

If an idea is still promising after one week, it is probably a good idea.
My subconscious does some work, for free, and I usually make some twists and turns in order to learn what would be a good session for me, and for attendees.

5. Polish

Proof-reading is important; one spelling error hurts the confidence of many who will review the abstract.
I also let another tester review the abstract; it is so easy to take things for granted, and if the abstract isn't understood, it isn't good.
The title is very important, and if the reading flows, the abstract will feel polished and readers will believe it will also be a good talk.

This process has worked well for me (it has been implicit up until now); it won't work for you, but I hope it can help in some way.

Double testing – converging or diverging models in testing Martin Jansson 6 Comments

I have experienced that many test leads, managers and project managers are worried about something called double testing. In short, it is the idea that one tester is testing the same thing as another tester. The term double testing might be a local term, but then you know it by another name with the same properties and confusion.

I think the idea of double testing is about what models we use in testing. More precisely, our mental models of our test approach/perspective on testing, system boundaries, system parts, levels of testing, terminology in testing, ideas of test coverage, usage of test techniques, test idea sources, test planning techniques and so on.

I will elaborate on some of the models that we use in testing and their relevance to the idea of double testing.

Black Box Testing vs. White Box Testing

The box metaphor is used as one way to visualize how we perceive the system while testing. By talking about either black box or white box (and in some cases grey box) we can determine whether we are able to see inside the solution or not, but also whether we are able to take advantage of any of the artifacts that the system produces for us to understand the health of the system.

I have seen testers choose a black box approach even when they had access to valuable information about the system. They had the possibility to do grey box testing that would have been a lot richer; still they selected the black box approach because that is what the customer will see.

Let us assume that we have a strategy of splitting the work between a black box, white box or grey box approach to testing the system, as a way to ensure that no double testing is done. The box is a model of the system that shows different levels of transparency of the system. It tells something about the approach to testing, not the system itself. It might tell us how we ask questions of the system and how we monitor the system while we try to get answers. If we choose to ignore information available from the system, then I conclude that we will limit what kind of questions we ask of the system. With more information available, we probably ask more questions. If we ask fewer questions and base them on less information, then we indirectly increase the chance of double testing.

Unit testing vs. Integration testing vs. System testing

If we instead break the system into different parts and also show the integration between these units, we use the model of unit, integration and system to visualize what we test. This is by nature a simplified model of the system. The boundaries of the system and its sub-parts are unknown, or at least vague, so the representation in this model exists in theory only.

If we ask questions that have to do with a unit, such as a class or function in the code, we still might want to repeat the question when we have extended or expanded the context. We ask the same question, but the environment around it is altered; therefore it is not the same test, and thus not a double test. The same goes for when we wish to ask questions higher up in the system. A unit test is also limited by factors such as performance and speed. This makes the unit test limit its focus on what it will test and what it can guarantee, in theory.
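
To make this concrete, here is a minimal sketch of the same question asked at two levels. The names (apply_discount, OrderService) are hypothetical and only for illustration; the point is that the integration-level test repeats the unit-level question inside a larger environment, so it is not the same test.

```python
def apply_discount(price: float, percent: float) -> float:
    """The unit: a pure calculation with no environment around it."""
    return round(price * (1 - percent / 100), 2)

class OrderService:
    """Stand-in for a larger context: rounding rules, persistence,
    currency handling and so on would live around the same calculation."""
    def order_total(self, items, discount_percent):
        subtotal = sum(price for _, price in items)
        return apply_discount(subtotal, discount_percent)

def test_discount_unit():
    # Unit-level question: does the calculation itself work?
    assert apply_discount(100.0, 10) == 90.0

def test_discount_through_service():
    # Same question, extended context: a failure here may come from the
    # environment around the calculation rather than the calculation itself.
    service = OrderService()
    assert service.order_total([("book", 60.0), ("pen", 40.0)], 10) == 90.0
```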

A unit test, an integration test and a system test have different objectives. This should mean that if each of those types of tests follows its objectives, they would not be subject to double testing, because they give different types of information, which in some cases is interpreted differently by different stakeholders.

But I question whether it is effective to split the work between different teams along the lines of unit, integration and system testing as a general solution to avoid double testing.

Testing vs. Checking

A check asks a binary question, while testing asks an open-ended question.

If one team limits the test effort to performing only checks, while another team most probably does a bit of both, they might avoid double testing, but instead have some double checking. Given that we are asking different types of questions, would it really be possible that we performed double testing?
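
A minimal sketch of the distinction, with a hypothetical parse_date function invented purely for illustration: the check is a binary question with a predefined answer, while testing asks open-ended questions that no single assertion captures.

```python
from datetime import date

def parse_date(text: str) -> date:
    """Hypothetical function under test."""
    year, month, day = (int(part) for part in text.split("-"))
    return date(year, month, day)

def test_parse_date_check():
    # A check: one binary question with a predefined expected answer.
    assert parse_date("2013-02-13") == date(2013, 2, 13)

# Testing, by contrast, asks open-ended questions: what happens with leap
# days, dates before 1900, locale-specific formats, or garbage input --
# and does any of it matter to anyone who matters?
```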

Charters vs. Test cases/Test scripts

Compare two setups: one team uses test specifications with detailed test scripts, planned beforehand, and executes the planned tests no matter what is actually seen; another team uses charters and missions with a general outline of where to go testing, leaving it up to the tester to explore and document what is actually experienced. This testing adapts to what is seen, following the fire or smoke that may lead to valuable information. We could say that the first team would be able to review each other's test cases and scripts to avoid double testing, but on the other hand they would perhaps miss out on important information. The second team, using charters, would have a hard time guaranteeing that no double testing happens, but there is a bigger chance that they would find important, valuable information. Still, if several teams are testing the same thing using charters, they would probably follow the same fire or smoke, and thus increase the chance of performing double testing. But then again, they could avoid that by letting the team members do collaborative testing and share the charter.

Scripted test approach vs. Exploratory test approach

Teams following the scripted approach to testing have a tendency to be more hierarchical, where decision paths take longer and more time is spent on things that team members think are meaningless. Since changing decisions takes longer, there is also a bigger chance that you continue with a planned task even if it might result in double testing. In a more empowered approach such as the exploratory one, the team members have more freedom but also more responsibility to do meaningful tasks. The chance of double testing would in that case be smaller. Still, this is how I perceive and have experienced the impact of the two approaches.

Regression testing

There are many different strategies for conducting a regression test. The traditional one is to rerun the test specification for areas that worked previously. If several teams are using similar test specifications and use the system in the same way, then there is an increased chance of double testing. If you instead use a check-driven test strategy for regression testing, you will have an idea of which areas/features to check to see that they work to some extent, and then continue to test around those areas. The chance of double testing is smaller, because the strategy points to where to go rather than how to test. Depending on how you set this up, the chance of double testing will vary.

Smoke tests

A smoke test should be focused on what is important for the sub-system or for the solution as a whole. Each test in the smoke test probably focuses on critical areas that must work for the sub-system or solution to be considered worthwhile to test further. The most obvious functional areas or features are probably just checked, and the more obscure areas are probably tested. A smoke test is usually quite shallow and should go quickly. If several teams run the same set of smoke tests they will probably perform double checking, but double testing is less likely. Still, a smoke test is cheap as far as time spent. The information gathered can still be worthwhile even if double testing is done.

Definition of the system, sub-system or solution

Each team probably sees the solution or system differently from the other teams. They probably see more of the depth and complexity in their own sub-system than in other teams' sub-systems. They might be aware of all connections to other sub-systems, or they might not. When creating a model of a solution that consists of systems of systems of systems, there is a big chance that no two testers see the solution and its sub-systems in the same manner. If we do not see the same system before us, how could any of us perform double testing? We base our testing on our mental models, and if they differ it is less likely that we perform double testing.

Ideas of test coverage

If a team focuses on test coverage by only covering the explicit requirements, then their test coverage will be a model based on explicit requirements only. They will miss out on a plethora of other sources for test ideas, each of which could have a coverage model of its own. Teams that look in other areas will have different coverage models. If the idea of test coverage differs from team to team, then the likelihood of double testing is not great either.

Configuration of the system

If each team uses the exact same system or solution simultaneously, then there is a chance of double testing, or of seeing the same things. But if teams have configured the system or sub-system differently, then they will probably not perform double testing, since they use a different system or a different setup of the solution as a whole. If the system can be configured and set up in many different ways, then it can be tested in many different ways, which means less likelihood of double testing.

Platform of the system

If each team uses the same kind of platform for the system or sub-system, then they might be double testing. If teams use different platforms, then double testing is less likely. If the platform itself consists of systems and sub-systems, then the reasoning about configuration and definition of the system applies when determining the chance of double testing.

Static or dynamic system

If the system or sub-system is static, meaning that nothing changes over time, then it is more likely that teams are performing double testing. If the system is dynamic, for example if log files are created from usage or data storage grows over time, then double testing is less likely. Tests performed early in the week might be different from the ones performed later in the week. Does it matter whether you run tests during the night or during the day? If it does, then it is less likely that you perform double testing.

System usage

When test teams use a system or sub-system, do they use it as a certain persona or role that is applicable to their sub-system or to the solution as a whole? If you use a persona that is applicable to your sub-system alone, then it is less likely that you perform double testing. If the roles or personas are applicable to the solution as a whole, you might still have an entry point or focus that is most related to your own sub-system; then you will probably not perform double testing, because the information that is relevant to your team will be different from the information that is relevant to another team.

Transparency of the system

If test teams are testing the solution or sub-systems without being interested in collecting information from the system itself about its health, then double testing is more likely.

Obvious bugs

If all the teams start testing at the same time, they are prone to see the most obvious bugs that fall on them as they start testing. The obvious bugs barely need a conscious thought from the testers to recognize that something is wrong. I would say that these are bugs that are found before real testing is performed. Independent of what approach to testing the teams have, the obvious bugs will be found, unless you have testers who do not see bugs at all. If all teams report the same obvious bugs, you have some other problem than double testing.

Re-testing of bugs

Some use retesting of bugs as part of a regression test strategy. They might select a certain severity or priority of bugs that they wish to use as a guide when performing regression testing on a new build. When test teams are using the same bugs to retest, there is an increased chance that they will perform double testing, but only if they follow the bug's repro steps in a strict way. If they instead use the bug as a guide and change the test data or the order of things, then they are probably not performing double testing.

Isolation of testing

There is an old, smelly idea that testers should not be affected by others, that testers need to work in isolation. This is related to the different schools of testing, how the role of testing is perceived and what testers should do. If you are vigilant about the idea that your test team needs to be isolated from other teams, then there is an increased chance that you will perform double testing, because you are not able to communicate about your test ideas.

Reflections

I think the likelihood that double testing is performed is very small. If you are a decision maker who worries about double testing, you can stop, unless your test teams are context-oblivious. Instead, worry more about whether your teams are good and effective at sharing information. Information sharing from planning, testing and reporting is important to avoid covering the same areas with tests. Still, you might look at the same information differently, have different objectives, and probably act differently on the information gathered.

With information shared through 400-page test specifications, it is harder to see whether there are areas that are subject to double testing. So find new ways of planning and preparing for testing, such as using models, mind maps or test proposals.

Wolf Pack – a collaborative test compilation Martin Jansson 2 Comments

You are part of a pack of wolves.

You are hungry and have not found food for several weeks.

When you move, you run covering lots of ground quickly.

You are out hunting, cooperating and collaborating with the rest of your pack.

You are seeking the big game, not a flea, nor a rabbit or rat.

An elk is ok, a mammoth is great, but you look for a stranded whale or a leviathan.

You might take note of the smaller game, but as a pack it is not your focus.

When you find tracks or clues of the bigger game, you howl and notify the rest of the pack.

As a pack, you circle and take down the prey.

The lone wolf is no hero in this context.

 

Could this be an effective compilation when you get a new build, as a form of smoke test, acceptance test or regression test? It could train you in collaboration, not crying wolf over small prey, but instead going for the bigger game.

Pass to Fail, Fail to Pass Heuristic Rikard Edgren 2 Comments

When teaching scripted testing (yes, I actually do this!) I found the Pass to Fail, Fail to Pass heuristic (used by many, but now with a catchy name).

The essence is that when a not-overly-simple test has resulted in a Pass, think about it some more, and try to make it Fail instead.
When a not-overly-simple test Fails, think about it some more, and try to make it Pass.
This will stop you from jumping to conclusions; you will find out if the essence of the test case was Pass or Fail; and you might have isolated a bug or two in the process.

Example: the test case is about resizing 200 images at once. The essence of the test (many images) actually works, but some testers might report a Fail because it didn't work, when the reason was that (at least) one of the images wasn't handled properly. And when a Pass is reported too quickly, you might have missed the chance to run a complex test that could find important problems.
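
For illustration only (not part of the original example), here is a minimal Python sketch of reporting the batch result and the individual failures separately, using Pillow as a stand-in resize implementation, so that one bad image does not decide the verdict of the whole 200-image test.

```python
from PIL import Image

def run_batch_resize_test(image_paths, target_size=(800, 600)):
    failures = []
    for path in image_paths:
        try:
            with Image.open(path) as img:  # may fail for a single corrupt image
                img.resize(target_size)
        except Exception as exc:
            failures.append((path, exc))
    # Essence of the test: did resizing many images at once work at all?
    # (Rough proxy: the batch mechanism handled at least some images.)
    essence_passed = len(failures) < len(image_paths)
    # Individual bad images are listed separately, so they can be isolated
    # as their own bugs instead of masking the batch result.
    return essence_passed, failures
```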

This is a specific instance of the generic Do Variations heuristic (Creep and Leap is another instance). As with all heuristics, it is to be used with judgment.

1000 Comments on TheTestEye the test eye 6 Comments

Very soon the 1000th comment will be published on thetesteye.com.
Comments are our main reason for writing blog posts, because they take our thinking further.
Our ideas are challenged, taken in other directions, opening new possibilities (and closing some…)
Thank you!!

To celebrate this, we will reward the author of the 1000th valid comment with a prize.
It is not a big prize, so please don’t spam with “great job”.
We prefer comments that add value to us, and the readers.

This quantitative milestone requires a qualitative statement:
Comments sharpen thoughts.

Testing in isolation Martin Jansson 3 Comments

I often promote that testers should sit close to or with their cross-functional teams. Still, I am very fond of working in an isolated testlab environment where it is possible to shout, scream, play music and play out dramas that would otherwise disturb the regular office tasks.

Open office landscapes seem to be quite common. For agile teams, the environment can be set up so that everyone in the whole project works close to each other. This is excellent in many ways, but perhaps not all.

I have, on several occasions, worked in a testlab which was isolated from the rest of the project or line organisation. During these times we were working closely with the developers, business analysts and product owners; they were never far away. Each of us had a space of our own where we could focus on tasks that needed no interaction. The testlab was where we had joint efforts for collaboration on test activities or experimentation.

Sometimes we used different types of music to affect our mood and, indirectly, the way we were testing. When we played music such as Queens of the Stone Age and Ministry, it shifted our mindset to be a bit more speedy, aggressive and unforgiving. When we played music such as Nick Drake or anything bossa nova, we instead took another direction. Still, everyone in the team had their own preferences for music and was naturally affected if they did not like it.

The testlab is a place where there is action, noise, interaction and collaboration. If you need to work on your own, that is probably not the place to be at that moment. I believe that many projects need this kind of area, where it is ok to scream out in delight when you find a valuable bug, or where the bass from the loudspeakers beats a steady rhythm while testers do their best to slam the system.

I confess, I am delighted when I find really vicious bugs. Sometimes I scream in despair when the build is broken for the 50th time in a row. I like to express my feelings in the testlab, an area of free will and emotion. After this emotional outburst I can calm down and gather up the proper evidence so that I can present it to the stakeholders in a professional manner.

For those of you who have missed out on this, I urge you to create this area for creative test recreation.

Long live the isolated testlab!

Experience report EuroSTAR Testlab 2012 Martin Jansson 1 Comment

Setup

Bart Knaack had done a wonderful job in setting up and organizing the testlab, and he had done it by himself, so most of the credit goes to him for taking on the initial work and planning. Ru Cindrea, Kristoffer Ankarberg and I focused on keeping the lab up and running as well as taking care of the participants. We wanted to be able to challenge experienced testers while at the same time helping beginners get started.

In the lab, the systems to test were GeoGebra, osCommerce, OpenEMR, Freemind and LEGO Mindstorms, with Mantis as the bug reporting tool. Mindstorms got the most attention, probably because it was different and so hands-on. The downside was that fewer bugs were reported. GeoGebra was a new tool for the lab, introduced by James Lyndsay. I believe those who tested it had a really good time, because I did. It was not as buggy as OpenEMR, but it had an equal amount of functionality, which enabled lots of exploration.

Placement

As usual at the EuroSTAR conferences, the testlab runs throughout the day, with participants joining in between breaks and at lunch. This setup makes it hard to facilitate and does not enable longer test sessions or longer ongoing events. Still, I believe it gives the participants a constant flow of interesting events at EuroSTAR.

This time we were in the far corridor, away from the sessions. Participants needed to go through the expo to get to us. Someone called it a gauntlet for the testers to survive in order to reach us. I think having the testlab on the far side made it better for the expo, since the exhibitors got some exposure. Still, many of the sponsors showed tools that I do not believe in, but that is another story. The testlab was in an open area with pillars marking its boundaries. We had managed to get a printer from the EuroSTAR team, which enabled us to do so many more things for the participants!

Next to the testlab were the coffee and lunch bars, which enabled many of the participants to grab some snacks and enter the lab, or just watch while eating. I think this was a good move, since it gave the participants non-stop action. The downside was that the tempo was high and there was no rest for anyone, which might have made the mood stressful for some.

Participation

Some of the more experienced testlab participants came to us during breaks and dug in to try our setup. The more inexperienced ones were a bit more shy; they needed a bit more guidance to enter. Once inside and sitting down, everyone showed great engagement. You could see a sparkle in their eyes as they found bugs or found something puzzling. This year it seemed like there was a greater mix of people from different approaches to testing. In previous years it seemed like more of the visitors came from the context-driven or agile approaches to testing.

The EuroSTAR star test team, which won the competition to be part of EuroSTAR, were in the testlab most of all. They were extremely focused in their testing and found some really interesting bugs in the systems. The EuroSTAR team wanted the star team to journal their stay at the conference using video. They interviewed many of the speakers and seemed to have worked hard on their assignment. It was great having them in the testlab, both helping us and creating some activity just by being there.

Many people did an excellent job by bringing a colleague to the testlab and doing some collaborative testing, sharing techniques and ideas on how to test. I've talked about this before and cannot say it enough times: this is a great way of testing and a great way of getting to know a tester you have not worked with before.

Sponsors

During sessions, when participants were most often listening to speakers, the sponsors demoed their products or services. There was hardly anyone there, and it did not work the way they probably had wanted it to. Telerik used OpenEMR to show their product, which was great, even if I was the only audience. I would recommend that sponsors engage with the participants in a different way. The current way of demoing is not paying off.

In the EuroSTAR Testlab 2010, some of the sponsors helped us take care of the participants, and if one of the participants wondered about a specific sponsor tool, they were available to assist. The sponsors had installed their software on each client machine and had been working with our systems to show how their tool interacted and tested. I think that was a great way of handling sponsors.

Events

Simon Stewart held the keynote about Selenium. Prior to that he had prepared a few Selenium scripts that he intended to talk about in the testlab directly after his talk. In the testlab, he had roughly 30 people around him who were engaged for more than an hour. I think this was a great success. Many who were at the conference were interested in automation, and this was a key moment for them, I am sure.

One of the days we focused on performance testing, and one of the speakers held a session in the lab for several hours. He had a small group around him who were engaged in talking and sharing experiences.

Markus Gärtner held a session on Testing Dojos, where he moderated a group of testers doing test planning, testing and a debrief with reporting. I call this collaborative testing, test planning and reporting. I use this setup in my everyday testing as well as in previous test labs. I think it is a great way of working.

Bart, and later on Michael Bolton, showed a coin trick that is quite similar to the dice trick that James Bach and others use in their training. It is a way of training exploratory test design, reporting and testing. I was not able to participate myself, but it seemed like the participants had a great time and, judging by their faces, learned a thing or two.

As the very last event in the testlab we had a test competition that Ru and Kristoffer organised, mostly on their own. There were 8 teams participating, who did a good job under the circumstances. It is tough to do something good under time pressure, with limited space and, in some cases, having to communicate in a second language.
There were many more events in the testlab, but I was not able to be part of them all.

Reflections

I've already made a few reflections above about the testlab. There are some I would like to highlight more. First, I would like to see more of Open Source Testing, something that Julian Harty talked about at the Let's Test conference 2012. It means that we make the artifacts from testing public, open for scrutiny and research. We collect statistics and data to be analyzed. We gather as much information and material as we can, so that researchers and teachers can use it in their daily work to improve the testing community.

I would like to see tutorials and workshops have their base in the testlab instead of somewhere else. I believe the participants would have a great learning experience if we incorporated the theory from the speaker with practice by the participants in the testlab. This would mean that we would need a bigger testlab, or possibly a different setup of it: a merge of one of the more hands-on tracks with the testlab. That is, if the testlab runs during the day. For Let's Test we had the testlab as an evening activity, and then the setup is different.

Having a printer in the testlab is a must from now on; I won't live without it. We printed so much material and test data that the testlab would be crippled without it. I think a headset with a mic would be nice, to make it easier for speakers to guide things in the testlab. We also need more whiteboards, flipcharts, pens of different colours, scissors, lots of tape and papers of different materials, shapes and sizes. This is a workshop after all. By stocking the testlab with these items, we would enable more creativity and more fun for everyone involved.

If the testlab has sponsors, they need to be engaged in the testlab and with the participants. If the current setup continues, they will just lose credibility as companies. At Let's Test I had Compare Testlab as a sponsor; they helped me keep the server and wireless up to date and working, and I could then focus on events in the testlab. Working together like that is a great way of making the testlab even greater.

The testlab is very much like our everyday life as testers. We prepare and plan for many things, but when reality hits us our preparations might be in vain. Therefore it is important to be prepared for the unknown and the unexpected. Working with the testlab is a great way of practising that.

Many thanks to the EuroSTAR team for helping us and serving our needs. Thanks to the Programme Committee for being involved and participating in the testlab. Thanks to my colleagues Bart, Ru and Kristoffer for being pragmatic and doing everything in our power to make this a great event for the participants. Finally, thanks to all participants who tested and shared their experiences with us.
Ru and Kristoffer will hold the testlab for EuroSTAR 2013 in Gothenburg. I am sure they will give you all a great time!

Another certification, another scam? Martin Jansson 17 Comments

In a recent blog post [1] on the Informator blog, Magnus C Ohlson articulates the idea of pilots having the flight hours but not the actual flight certificate. He insinuates that the artifacts from requirements and testing would be better if people were certified, if I understand him correctly. Furthermore, he explains that testers need education, knowledge and experience, which I agree with fully. But he argues that the way to ensure that someone has the right competence is a certificate such as ISTQB or REQB. According to him, you would then know whether he or she is a skilled tester.

Here are a few personal experiences that relate to certification, as I see it:
When I did my military service I got the opportunity to train for driving a trailer. We did two days of theory, and then they let us out on the roads. After we had studied the textbooks and passed a little exam, they determined we were ready to start driving. I had no problem passing the exam, but I was a terrible driver. Only a few months prior to this I had gotten my license for driving a car. They let a large number of trucks, trailers and other vehicles out into the small town of Boden. I managed to get around town without getting killed, but all in all the lot of us managed to demolish several traffic lights, some road signs, some trucks and a few light poles. The cost of this adventure was huge. I guess we learned a lot, but the idea that we were ready based on reading a textbook was a bit strange. Months later, after a few hundred miles of driving, we started to get good at it, but by then the 2-3 days of theory in the beginning were almost meaningless. I see test and requirement certification as having the same symptom: we do not get much value out of something so intensive and theory-focused.

In the early 2000s a doctor told me I had a diaphragmatic hernia. At the same place, a surgeon went through with me the procedure for getting this fixed. She told me that I needed to change the way I lived my life, but that, according to her, it was almost pointless to try. If I were to go through with surgery, they would have needed to open me up, lift my chest and move around several organs. The whole procedure was very dangerous and the risk of death was high. As part of the surgeon's analysis, she performed a gastroscopy. The procedure took between 10 and 20 minutes; it was hard for me to tell, but after a while I understood what torture was all about. Some years later I met a doctor who hinted that there were other experts in the field who could help me out. I ended up with probably one of the best doctors in the field, who told me a whole different story. He recognized the name of the original surgeon who had given me the first statement; according to him, she had not been able to perform a gastroscopy well even back in school. This new doctor performed a gastroscopy as well, which took 1½ to 2 minutes, which is the time it should take, according to him. He then said that an operation was possible with minimal risk to my life. For the last 10-20 years they have been using keyhole surgery; the methods the previous surgeon talked about are not used any more. Even though the first surgeon had her license, she was apparently not interested in keeping herself up to date with the changes in her craft. Comparing this with testing and requirement certificates: being certified does not mean that you keep up with changes to the craft, or that what you learned is still applicable.

A friend of mine was troubled about being forced to become certified; he was basically forced to certify himself as a project manager. The curriculum of the certification was solely based on waterfall methods. My friend has been working with agile projects for a while and has left the world of waterfall behind him. He does not see the point in keeping old facts up to date, but would rather learn new things. The person who wrote the curriculum for the certification did not know anything about agile. Comparing this with testing and requirement certification: we will have a single point (or points) of failure in keeping up to speed with changes and improvements in the craft. If the creators of a syllabus are not top notch in the craft, then everyone who needs to certify themselves must lower themselves to the creators' level. Even if they are top notch, they cannot compete with the wisdom of crowds, where the crowd is the joint knowledge of the test community.

Another friend of mine took an intensive course to learn to drive a car. She travelled to a small town in northern Sweden to take a two-week course. She passed it easily, got the license and got back home to Gothenburg. But the driving in the small town was a bit too easy, with no highways, very few cars and an environment that did not match her home town. When entering the traffic in Gothenburg she became too scared and dared not drive anymore. If we compare this with testing and requirement certification: an intensive course that results in a certificate might only mean that you have paid money for something that is not valid in your actual context, in your project or hometown.

In a previous article called Testers Greatest Nemesis [2], I wrote about the intent that Dorothy Graham had when ISTQB was initiated. According to her blog posts, the original intent was lost over the years. In Sweden there is a movement that has started certification of requirements experts, called REQB. I see consultancies offering the training and certification for it. I do hope this is not just a scam to make money, where recruiters will start filtering out those with 10+ years of experience in requirements handling in favor of those with the name REQB in their CV.

Conclusion

I do not believe the complexity of organizations, projects and business can be captured in a generic multiple-choice questionnaire that someone with no knowledge could accidentally pass. I hope the movement behind REQB learns from the mistakes that were seen with the introduction of ISTQB. Dorothy and several who comment on her blog have identified a few things to consider.

A certificate tells one story about a person, but it is a very fragile one. I prefer talking to references; looking at someone's renown, public appearances, blogs, articles and papers would perhaps give a more vivid story.

References

[1] Varför certifiera sig inom krav och test – http://informatorutbildning.blogspot.se/2012/10/varfor-certifiera-sig-inom-krav-och-test_10.html
[2] Testers Greatest Nemesis – http://thetesteye.com/blog/2011/05/testers-greatest-nemesis/

 
