Notes from EuroSTAR 2009 (Rikard Edgren)

It was Stockholm again this year. It was good not to have to travel far, but since you are travelling anyway, I wouldn't object to something more exotic, and warmer. Next year it is Copenhagen, again.
I had a fully packed program with four days of tutorials, workshops, tracks, short talks, test-labbing and conversations, so in total it is quite an amount of ideas, since new things appear when you combine what you hear with your own reality and thoughts on testing. I am a bit exhausted after all this, which is a very good thing!

Monday gave the Exploratory Testing Masterclass with Michael Bolton. Even though I would have expected a slightly more advanced level, it was very good; here are some highlights:
it’s the scripted guys that sit and play with the computer; most of what we look for is implicit, it is tacit; develop a suspicious nature, and a wild imagination; checks are change detectors; some things are so important that you don’t have to write them down; codingqa.com episode 28 is important (I listened to it, but didn’t understand what was so important); managers fear that Exploratory Testing depends on skill, is unstructured, unmanageable, unaccountable; we need to build a “management case” (or should it be middle-management case?)
Michael showed an improved Boundary Value Analysis for a more complex example, where there are many boundaries; whatever focus for coverage you have, you will get other coverage for free; visualizing Test Coverage with sticky notes on a model is a good way of creating charters for Session-Based Test Management.
Go beyond use cases, create rich scenarios; emphasize doing, relax planning; Test Coverage Outline and Risk List to guide future sessions; don’t try to find bugs in the beginning, it takes time away from building a model; HICCUPPS + F (Familiar Problems); you learn best when you are in control of the learning process (and have fun); who said something valuable should be that easy?
Reports to make number people happy; SBTM debriefs are important for keeping quality of the testing and the report (and good for coaching and mentoring); the principal interrupter of testing work is bugs; Weinberg: “everything is information”; Dr. Phil: “How’s that working for you?”
He also had good exercises, and a nice movie, a Detective Story.
I haven’t been to Michael’s tutorials before, so it was about time.

Tuesday started with Tim Koomen's tutorial "From Strategy to Techniques"; there's a gap between the test strategy and the actual tests.
He is very knowledgeable, and walked through the basic testing techniques that every tester should have in his toolbox: Equivalence Partitioning, Boundary Value Analysis, Classification Tree Method, Pairwise, Path Coverage, Condition/Decision Coverage, Input/Output Validation, CRUD, Operational profiles, Load profiles, Right/Fault paths, Checklist.
The examples were focused on functionality, and the magazine discount example was shallow; it doesn't consider whether the person is just about to turn 20 or 65, whether the age is unknown, or whether an incorrect age gets corrected. And then we haven't even considered everything else that interacts with this small piece of functionality.
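As a side note from me (not from Tim's material): here is a minimal Python sketch of boundary value analysis for a hypothetical version of that discount rule; the rule itself, the age limits and the invalid-age handling are my own assumptions for illustration.

# Hypothetical discount rule, invented for illustration (not Tim's actual example):
# under 20 gives a youth discount, 65 and over a senior discount, otherwise none.
def magazine_discount(age):
    if age < 0 or age > 130:
        raise ValueError("implausible age")
    if age < 20:
        return 0.20
    if age >= 65:
        return 0.30
    return 0.0

# Boundary value analysis: probe on and around every edge of each partition,
# including the invalid partitions that a shallow example ignores.
boundary_ages = [-1, 0, 1, 19, 20, 21, 64, 65, 66, 129, 130, 131]
for age in boundary_ages:
    try:
        print(age, "->", magazine_discount(age))
    except ValueError as error:
        print(age, "->", error)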
Every time I see this list, I think that these techniques don't add up to the test design I actually do.
So my highlight was this feeling, combined with my shallow knowledge of Grounded Theory; maybe we could have a super-advanced error-guessing test technique that describes the really, really good test design that happens all over the world, where we look at a lot more things than the requirements (more to come on this…)
Tim showed the PICT tool (consider 1-wise!), and the audience mentioned that Mercedes-Benz also has a free tool (see pairwise.org for a long list of tools).
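If you haven't tried pairwise testing: tools like PICT do the real work, but the core idea fits in a small greedy Python sketch. The parameters and values below are invented for illustration, and a real tool will select tests better than this naive greedy loop.

from itertools import combinations, product

# Invented parameters for a hypothetical print dialog.
parameters = {
    "os": ["Windows", "macOS", "Linux"],
    "browser": ["Chrome", "Firefox", "Edge"],
    "paper": ["A4", "Letter"],
    "color": ["Color", "Grayscale"],
}
names = list(parameters)

def pairs_of(test):
    """All parameter-value pairs covered by one test (a dict name -> value)."""
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

# Every value pair that a 2-wise (pairwise) suite must cover.
uncovered = set()
for a, b in combinations(names, 2):
    for va, vb in product(parameters[a], parameters[b]):
        uncovered.add(((a, va), (b, vb)))

# Greedy selection: repeatedly pick the candidate test covering most uncovered pairs.
candidates = [dict(zip(names, values)) for values in product(*parameters.values())]
suite = []
while uncovered:
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"{len(suite)} tests cover all pairs (vs {len(candidates)} exhaustive):")
for test in suite:
    print(test)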
I learned a new thing: modified condition/decision coverage, where you omit the tests that aren't likely to catch errors.
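As I understand it, you keep only the combinations where a single condition can be shown to independently flip the decision, and omit the rest. Here is a rough Python sketch for an invented decision, a and (b or c); this is my reading of the technique, not Tim's exact material.

from itertools import product

def decision(a, b, c):
    # Invented decision used only to demonstrate the coverage idea.
    return a and (b or c)

conditions = ["a", "b", "c"]
rows = list(product([False, True], repeat=3))

# For each condition, find a pair of rows that differ only in that condition
# and where the decision flips -- these show the condition's independent effect.
selected = set()
for i, name in enumerate(conditions):
    for row in rows:
        flipped = list(row)
        flipped[i] = not flipped[i]
        flipped = tuple(flipped)
        if decision(*row) != decision(*flipped):
            selected.update([row, flipped])
            break  # one demonstrating pair per condition is enough

print(f"Keeping {len(selected)} of {len(rows)} rows for a and (b or c):")
for row in sorted(selected):
    print(dict(zip(conditions, row)), "->", decision(*row))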
Sometimes I wonder how many of the tests from the classic test techniques would preferably be automated as unit tests.

The actual conference started with Lee Copeland talking about nine innovations you should know about: Context-Driven School (the search for best practices is a waste of time); Test-Driven Development (helps you write clean code); Really Good Books (too few testers read the books!); Open Source Tools; Testing Workshops (specialized focus, participatory style); Freedom of the Press (he is no fan of Twitter, but likes blogs); Virtualization (rapid setup, state capture, reduced cost); Testing in the Cloud (rent an awful lot of machines, very cheaply); Crowdsourced Testing (Lee did not mention the ethical payment dilemma)
“sincerity is the key – once you learn to fake it…”
Keys for future innovation: creative, talented, fearless, visionary, empowered, passionate, multiple disciplines. Do we have all of these???

Johan Jonasson explained Exploratory Testing and Session-Based Test Management, but since this was a short track, there wasn’t so much time left for the real juice. “ET has specific, trainable skills” (Bolton)
Julian Harty of Google (where the testers seem to have huge areas of responsibility) explained the concept of Trinity Testing: 30-90 minute walkthroughs by Developer, Tester and Domain Expert. Not radically new, but it felt very fresh and effective. Julian was the only speaker I saw who brought a hand-out, one paper with the essentials.
Geoff Thompson talked about reporting: "it's the job of the communicator to communicate." 1/10 of men (1/50 of women) are color-blind, and maybe you want everyone to understand the report? (I saw two other presentations where red-green was used to highlight important differences.) "Know your recipients, what information do they want?", "honesty is always the best option."
Michael Bolton had a short session on "Burning Issues of the Day" that is available here. Very funny, very thought-worthy, very good.
Jonathan Kohl talked about Agile having lost a lot of its original value; it is re-branded old stuff and has become a business. Process focus can distract from skill development; the point is: focus your work on creating value.
I asked Jonathan afterwards about Session Tester (where not much has happened lately), and he said that the programmers are too busy, but things will happen pretty soon.

Wednesday's first keynote was Naomi Karten on change; change that represents loss of control, change that we often respond to in an emotional and visceral way.
Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
Regularly communicate the status of the change, also when you don’t have any news, or when you’re not allowed to tell the news (say that you can’t say anything!)
Listening and empathy are the most important change management tools.
The biggest mistake is to forget the chaos; and in chaos: don’t make any irreversible decisions.
This was my favorite keynote, and as I’m writing this I understand there was some really important information in the presentation.
Mike Ennis talked about software metrics that help you manage the end game of a software project.
The end-game term is taken from chess, where the outcome is almost decided; it is just a matter of technique, primarily about not making mistakes. Mike used the analogy that if you can anticipate what will happen, you know what to do next.
He defined example release criteria, which often aren’t met, but business decisions can overrule the criteria.
40% of the code is about positive requirements, “not a huge fan of exploratory testing, do it if you have time, after the standard tests have been run”.
He used a Spider Chart (aka Radar Plot) to visualize The Big Six Metrics (Test Completion Rate, Test Success Rate, Total Open Defects, Defects Found this week, Code Turmoil, Code Coverage.)
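A chart like that is easy to reproduce yourself; here is a small matplotlib sketch with invented numbers, where I assume each metric has been normalized to a 0-100 scale (the normalization is my assumption, not something from Mike's talk).

import math
import matplotlib.pyplot as plt

# Hypothetical end-game numbers, invented for illustration and assumed to be
# normalized to 0-100 (higher = better, so raw defect counts would be inverted first).
metrics = {
    "Test Completion Rate": 85,
    "Test Success Rate": 92,
    "Total Open Defects": 40,
    "Defects Found this Week": 70,
    "Code Turmoil": 60,
    "Code Coverage": 78,
}

labels = list(metrics)
values = list(metrics.values())
angles = [n / len(labels) * 2 * math.pi for n in range(len(labels))]

# Close the polygon by repeating the first point.
values += values[:1]
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=2)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, fontsize=8)
ax.set_ylim(0, 100)
ax.set_title("The Big Six (one week's snapshot)")
plt.show()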
A question was raised that there is a risk of over-simplifying things, and the answer was: “Yes, but these are indicators only.”
Erik Boelen talked about risk-based test strategy; if you do it with different roles it is like Läkerol, it makes people talk.
He likes games; the we-versus-them game with developers is good. At his place, developers with many bugs buy drinks for the testers, and testers aren't allowed to say one particular word for a week; last week that word was testing…
A very interesting and nice thing about the presentation was that he explained their (very good, but for some, I assume, very provocative) test method as if it were natural and obvious:
They take the entry paths from the Risks and perform Exploratory Testing. For High and Medium risk they document test cases as they explore, and for Low risks they just report the results.
“Eventually testing will rule the world.”
Shrini Kulkarni talked about dangerous metrics, and that software development must consider where it is suitable with measurements. (Shrini hates SMART by the way, so I like him.)
A root cause is that metrics/measurements represent rich, multi-dimensional data; there is inevitable information loss.
People might say "we can't improve without metrics", but you could instead use metrics as clues to uncover and solve deeper issues.
We can report with stories attached to the numbers, but still, we are losing information.
Susan Windsor had a double session on communication styles where time flew. In the audience, everyone said No to "Exploratory Testing adds no value"
Art of Storytelling involves: Random, Intuitive, Holistic, Subjective, Looks at wholes (two of my favorite adjectives!)
Research shows that interviewing is the most ineffective method when hiring.
She noted that a high proportion of testers also do creative things like music and poetry (which seems natural; it is good to have practiced being creative a lot.)
We looked at four different Personal Communication Styles (why is it always 4 different types of persons??): Strategist, Mediator, Presenter, Director.
Gitte Ottosen had the ending keynote of the day with a presentation about combining Agile and Maturity Models (“CMM = Consultant Money Maker”)
“Metrics, I know they are dangerous, but also necessary.”
Manual Testing involves using the story to do exploratory testing ("continuous learning as we implement the feature").

On Thursday I was wise enough to skip 2 sessions in order to have a late breakfast and practice my presentation.
So the first presentation of the day was Zeger van Hese (he won the best paper award for the second time this year), who shared his experiences of introducing Agile, but only doing parts of the full-blown, capital-A stuff (resulting in a real-world, semi-Agile process.)
They used this strange mix of Waterfall and Agile that many, many companies have, and the situation got better and better as more members of the team sat in the same room.
But in the end they fell back into old behavior; there were many late changes, many Release Candidates, and a one-month delay. But: excellent quality and stability.
3 Agile goals: better feedback, faster delivery, less waste.
They did a big Agile no-no by using manual testing, which seems like a wise deviation to me.
A quote attributed to Einstein, and several others: “In theory, there is no difference between theory and practice; In practice, there is.”
Next presentation was my favorite of the whole conference: Fiona Charles, Modeling Scenarios with a Framework Based on Data.
They built a conceptual framework at two levels: an overall model of the system under test, and the tests to encapsulate in that model.
They did a structured analysis of all attributes for each framework element, and then used these attributes to build simple, and then more complex, scenarios. This is difficult to do for many testers, so careful review of this work is a way to make sure the results are good.
I think this is an example of the test design technique I thought about on Tuesday, a very advanced, structured way of designing tests that can’t be captured by the classic test design techniques (error-guessing is closest, but there’s a lot more to it.) I like to call this Grounded Test Design (more to come on this…)
"scenario testing is a nice thing to add to your repertoire", "combine two or more models", "don't ever fall in love with your model"; they found 478 bugs, and all except 20 were essential to fix for the customer.
What you need to do something like this: testers with domain experience, business input and scenario review (and maybe an industry book), a model, structured analysis.
After lunch, I had a second session in the Test Lab, so I could report some of the bugs Zeger and I had found the day before. It was great to test on real stuff, but I didn't have the time I would have liked in order to understand the product and its failures. There wasn't time (at least for me) to discuss the findings in depth with other testers, which is something I hope to be able to do next year (I'm hoping the Test Lab will continue.)
At the last presentation slot, I did my thing on "More and Better Test Ideas". People were tired, but looked interested, so I'm happy with the presentation. I won't recapitulate the session, but I did talk about the potato; I had to skip the new Find Five Faults analogy (unexpected time pressure; I still doubt that I got the 38 minutes I was supposed to have.)
The paper is available here, the presentation here, and it will be given as a EuroSTAR webinar on December 15th.
Good questions, and also examples of how similar approaches are used by others. A bit more than 10% of the (almost 100?) attendees use test ideas/conditions.
The next day I got a mail stating that ideas from my presentation could be used at once; the best feedback one can hope for.

The Test Lab organizers (James Lyndsay and Bart Knaack) seemed happy when presenting the results, and it's good to know that the efforts might make the open-source medical product OpenEMR a bit better (there is certainly room for improvement…)
At the final panel debate half of the audience voted that certification is important; Tobias Fors shared the insightful "as a developer I was scared of code review, but then I realized it really was about my low self-esteem."
Regarding teaching testing in school, it was said that critical thinking should be taught early.
“How do we breach the barriers and invite the developers to our world?”
Dorothy Graham (who reviewed every presentation!) ended the conference and announced next year's programme chair, John Fodeh.

Overall it was a very nice conference, and at the expo Robert from ps_testware was nice and let me win a chess game this year as well.
Recurring themes were Agile/Exploratory Testing (why are they grouped together?), and now and then the importance of a Story was emphasized.
Unknown source: “The higher and more complex quality objectives you have, the more manual testing is needed.”
Attending a conference isn’t about learning truths from the experts, it’s more about getting input to be able to create your own ideas that apply to your job, and to meet people, hear stories, interact with people that share your passion: software testing.
See you next year!

/Rikard

Is our time estimation on testing valid? (Martin Jansson)

What do we actually base our time estimations on when delivering a plan to a project manager?

I know that we initially can have a vague idea of what to include and what must be done. I am sure that we can even make a rough estimation of how many resources we need in some cases. But if we are testing something new, where we do not know the developers and the full extent of what is delivered, how do we know how much time and how many resources we need for testing? I have seen many plans that give an estimate, but how accurate can they be?

Once I did a resource plan for a year-long test project, where I was allowed to change the plan incrementally. I estimated that we needed quite a lot of testers, since the deliverables had been delayed and the original plan with early incremental builds did not work out as intended. In the test team we had a few disagreements about how many we actually needed; some thought that we were enough and some (including me) thought we needed more. We got 10% of the resources I had wanted, but we managed somehow (to some extent). Decision makers seemed to be satisfied with the quality and the result of these tests. One of the reasons we succeeded was that we reported more bugs than the developers were able to fix. If we had been more testers we would surely have found many more bugs, but they would probably just have been postponed. The most critical ones were always fixed, but everything below that level naturally could not be.

So… when estimating resources and time, should we focus on getting enough of the right resources just to keep the spotlight of the resource discussion somewhere else? Let's say that bugs are fixed faster than we find new ones; does that mean that we are ready to deliver, or that we need more testers to find even more bugs?

I think resource planning and time estimation are very hard. Michael Bolton has expressed, in an excellent way, many of the thoughts that I have grappled with. You can find his reasoning here:
http://www.developsense.com/2009/11/why-is-testing-taking-so-long-part-1.html
http://www.developsense.com/2009/11/what-does-testing-take-so-long-part-2.html

Combine the idea of having estimated how much time testing needs with an answer on how far we have come, especially pinning it down as a percentage. Can we say anything truthful here? Is it worth the cost in planning and administration to give an accurate picture?

The Inquisitive Tester – Part II: Question the specs (the test eye)

Statements in specifications try to clarify, and are inevitably an interpretation of what the author thinks needs to be more specific. That is, they try to be a more specific model than what existed before the spec. And "Essentially, all models are wrong, but some are useful" (http://en.wikiquote.org/wiki/George_E._P._Box).

Every specification you encounter is someone's interpretation, and not necessarily true.

This means that you, as an inquisitive tester, have a lot to do in questioning the specifications. The questioning will help you form a model of the software that is better than if you had only read and accepted the spec as it was.

Specifications cannot be complete, especially regarding things that the program shouldn't do. It is probably not stated that the software shouldn't use too much memory or processor time for certain operations; it is not stated that the screen shouldn't flicker, or that all text should be easy to read with all different font settings. Other typical omissions are interactions with other systems: things you expect from all applications under that operating system, internet browser, connected software etc.

You cannot expect a specification to be complete; in most (all? many?) cases, the thing produced from the specification is more important than the document about it. The hardest challenge for the inquisitive tester is to question a lot, but only about the things that are important.

——————–

Who will use the specification? What will they use it for? Will it meet their requirements?

What is it all about? Really?

What areas are left out?

Who is the writer? Does he/she usually miss certain things?

Are there many writers? Does this make the whole less tangible?

Are there many reviewers? Are they using different perspectives?

Is the writer vague, insecure and confusing about certain areas?

Is the specification consistent?

Is the specification consistent with other related specifications?

Is the specification consistent with other different features and combinations of those?

Are all functional and non-functional requirements covered?

Are there dubious thoughts about the wished-for functionality?

Are there other sources of information that can be useful?

How is the style of the language affecting the specification?

What quality attributes are the most important, e.g. how is Security weighed against Performance and Usability?

Does it match the system requirements?

Does the specification focus on what is most important?

Does the specification reflect the model of what you think is described?

Is there any new terminology? Will this affect other documentation, such as help files?

Is the new terminology consistent with other specifications?

What does the Internet say about the newly chosen terminology? Will there be any misunderstandings?

—————–

If there was no specification, could it be described in a completely different way?

Introducing exploratory testing in a scripted test environment (Martin Jansson)

In many organisations it is hard to change how you are working. You might be bound to certain CM tools, to how things are expected to be planned, documentation systems, management expectations, project management expectations and so on. In many of these traditional environments you might also use the regular test plans, test matrices, test specifications, test cases with expected results, and test records.

If you are doing scripted testing it can sometimes be hard to handle the cases that fall a bit outside the scope and mess up the planned activities. Some might say that it is better to stick to the plan and leave the vague and hard-to-reproduce things for later.

Totally changing the way you work might affect many things, so you might want to start out with small changes. If you are in such an environment and are considering how to try out exploratory testing as an approach to testing, I have a few suggestions.

One way is to use your current test cases as a guide to where you intend to test, and then use exploratory testing on those areas. The test cases will in this case just map out areas that need to be looked at; what you actually cover and what you expect is up to you. This means you will certainly skip a lot of the content of the test cases. If this is accepted, that is perfect; if not, see how you can get away with it.

Another way is to execute the test cases and, each time you find something outside the test case or something fishy along the way, create a work page/task for the issue. You then assign someone, or a small group, to dig deeper into that area using the exploratory approach. All issues that fall outside the regular plan are handled as exploratory tests.

You will report progress and results as usual; no one will know that you tried a new method in secret.

Each time you use exploratory testing, do it as a defined session; you can google how this is done. There is a multitude of good articles out there on how to do it.

Exploratory Testing vs. Scripted Testing – rich terminology (Rikard Edgren)

Exploratory Testing in its purest form is an approach that focuses on learning, evolution and freedom.
Cem Kaner’s definition is to the point: “Exploratory software testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.”

ET in real life is a collection of ways of testing, sometimes used as example implementations of the approach, sometimes used for testing that doesn't follow a script, and sometimes as a synonym for ad hoc testing (because that term has become increasingly under-rated.)

Scripted testing in its purest form is an approach that focuses on precision and control. It has yet to be defined by its proponents, but a benevolent attempt could be:
“With a scripted testing approach the testing effort is controlled by designing and reviewing all test scripts in advance.
In this way the right tests are executed, they are well documented, progress towards 100% execution can be controlled, and it is easy to repeat tests when necessary.
The scripted approach is not dependent on many tester heroes, and can take advantage of many types of resources for test execution, since the intelligence in the test scripts is created by test design experts."

Scripted testing in real life mostly means designing test scripts early and executing them later; these scripts have quite detailed steps and a clear expected result.
The terminology is rich, complex and sometimes confusing, since the terms can mean at least approach, style, method, activity and technique; and these are in reality so connected and intertwined that the distinctions aren't necessary or helpful.

 

So are the distinctions important?
I think they can be, especially if the words are used without details, e.g. in statements like “Exploratory Testing is the opposite of Scripted Testing” or “combining exploratory and scripted testing”.
Both statements can be true, because the first talks about the approach, and the other about methods.
By understanding the different meanings of the words it is possible to get a more nuanced debate, and to see other combinations, e.g. test scripts with an exploratory approach, or scripted approaches with elements of ad hoc testing.

The intention behind the method used shows which approach you are using (inferred from Cem Kaner's Value of Checklists…, p. 94):
If test scripts are used to control the testing, it is a scripted testing approach.
If test scripts are used as a baseline for further testing, it is an exploratory testing approach.

 

It would be nice to have a solution to this semantic mess, but I don’t think it is feasible to always attach approach or method to Exploratory Testing and Scripted Testing (or to distinguish between upper case Exploratory and lower case exploratory.)
It is extremely difficult to give life to new words, but I do have some hope for the clarification offered by testing vs. checking, and less hope for a renaissance of ad hoc testing.
A start would be if more people were aware of the different meanings and were more precise when necessary; eventually the problem will dissolve, 25 years from now.

The Quality Status Reporting Fallacy (Henrik Emilsson)

A couple of weeks ago I had a discussion with someone who claimed that testers should (and could) report on quality. In particular, he promoted the GQM approach and how it could be designed to report the quality status. When I asked how he defined quality, he pointed to ISO 9000:2000, which defines quality as "Degree to which a set of inherent (existing) characteristics fulfils requirements".

But wait a minute!

If testers can report the current quality status based on the definition above, it means that test cases correspond to the requirements, and that bugs found are violations where the product characteristics do not satisfy the requirements. If so, then you must have requirements for which a couple of truths hold:

  • Each requirement exhibits the attributes: Correct, Feasible, Necessary, Prioritized, Unambiguous and Verifiable.
  • The set of requirements covers all aspects of people's needs.
  • The set of requirements captures all of people's expectations.
  • The set of requirements corresponds to the different values that people have.
  • The set of requirements contains all the different properties that people value.
  • The set of requirements is consistent.

(The word "people" above includes users, customers, persons, stakeholders, hidden stakeholders, etc.)
At the same time, we know that it is impossible to test everything; you cannot test exhaustively.

But assume, for the sake of argument, that all requirements were true according to the list above; and the testing was really, really extensive; and the test effort was prioritized so that all testing done was necessary and related to the values that the important stakeholders and customers cared about.
If this would be the case, then how can you compare one test case to another? How can you compare two bugs? Is it possible to compare two bugs even if you have 20 grades of severity?

We, as testers, should be subjective; we should do our best to try to put ourselves in other people’s situation; we should find out who the stakeholders are and what they value; we should try to find all problems that matter.
But we should also be careful when we try to report on these matters. It is not because we haven't got any clue about the quality of the product; we should be careful because many times we report on the things we do that can be quantified, and take these as strong indicators of the quality of the product. E.g., number of bugs found, number of test cases run, bugs found per test case, severe bugs found, severe bugs found per test case per week, etc. You know the drill…

If you are using quantitative measurements, you need to figure out what they really mean and how they connect to what really should (or could) be reported.

If you think that "non-technical" people are pleased by getting a couple of digits (hidden in a graph) presented to them, it is like saying: "Since you aren't a technical person we have translated the words: Done, Not quite done, Competent, Many, Problems, Requirements, Newly divorced, Few, Fixed, Careless, Test cases, Dyslexic, Needs, Workaholic, Lines of code, Overly complex code, Special configuration, Technical debt, Demands, etc., to some numbers and concealed it all in one graph that shows an aggregate value of the quality".

(Image: Quality is a number)

I think that it is a bit unfair to the so-called non-technical…

Instead, we should use Jerry Weinberg’s definition “Quality is value to some person” in order to realize that quality is not something easy to quantify. Quality is subjective. Quality is value. Quality relates to some person. Quality is something complex, yet it is intuitive in the eyes of the beholder.

When do you feel productive? (Rikard Edgren)

I believe that it is impossible to objectively capture important things about a software tester’s productivity.
On the other hand I don’t believe there is a big difference between feeling productive and being productive.

I feel productive when I

* test a feature that is good, but not perfect
* review specifications
* do pair testing
* am happy
* am motivated
* find interesting things in the product
* find very important defects
* report bugs
* help developers
* don’t think much

When do you feel productive?
How do you make sure you spend as much time as possible being/feeling max-productive?

Seven Categories of Requirements (Rikard Edgren)

I like to use categorizations to structure my understanding of a subject; and after the simplifications are made and I think I understand it well, the structures can be ripped apart, and you get a bit less confused by the complexity of reality.

There are many forms of requirements; these are some a tester should look out for:

Explicit Requirements
These are the requirements found in the requirement documents. You are probably using them in your testing, making sure that they match the functionality.
This is quite a small part of software testing, as I see it.

Implicit Requirements
These are requirements that can be found by combining different requirements that are intertwined.
They could originate from general statements like the program should never crash, or the program should be easy to use, which have implications for many other requirements.
They could also become very large, e.g. support all possible input, or support Python scripting.
They are an effect of vague requirements, and they are a natural part of software development; it would be insane to document everything in advance. Testers can deal with this and understand what is important.

Unspoken Requirements
These are things that many users expect from a program, but they are seldom listed in the requirements document.
Typical examples are behaving in the same way as other applications on the platform, not leaving any garbage files after running, or being appealing to most users.

Incorrect Requirements
The writers of the requirement documents don't know everything in the world; sometimes they are wrong.
There can be small errors, e.g. inconsistencies between requirements, and huge mistakes, because they didn’t understand the user’s true needs.

Changing Requirements
Sometimes requirements need to be changed, which is something testers shouldn't object to (too much). The requirements are most likely changed in order to make a better product, and that's what we are all working for. But when they are changed, or added at a late stage, it can be difficult to challenge them, and to test them really well, simply because you are under time pressure.
We can’t do more than our best, but that’s often enough.

Vague Requirements
I used to dislike vague requirements that were very difficult, almost impossible, to test, but now I think they are good to have.
Not that you should be vague on purpose, but quality attributes like usability, performance etc. can never be detailed and capture the important thing: that customers will be more than satisfied.
It gives you a challenge as a tester; you need to use your feelings and imagination to come up with test ideas, and with results that give a positive or negative indication. You can't hide behind numbers, and must stand for whether you think the requirement is met or not.

Hype Requirements
These can be difficult to handle. Often they come in the shape of specifying too much detail, e.g. save settings in an XML file, just because XML is hype (this was 10 years ago, so replace with SOA or the cloud or the hype your company believes in, right now.)
They might be out-of-place, put there in order to be allowed to start the project, but they can also be important, exploiting the hype, or just being a perfect match for this specific application.
As a tester, there's often not much more to do than accept the hyped requirements, especially if they are accepted (or initiated) by the developers.
But you probably need to learn more about the hype; often there are (at least some) good things inside it.

And regardless of how well all these categories of requirements are implemented and tested, will the application be really, really, super good?

The power of a sound (Martin Jansson)

In my local food store they have a system where you scan the price tags on the food you buy, and most often you are able to pay and exit smoothly without having to stand in any long queues.

(Image: Shop Express scanner)

A while back they must have changed the software in these scanners, because their behavior changed and the bugginess increased. The funniest bug, or feature as they themselves would most certainly call it, appears when you are finished. You scan a finish code, which sends a signal to the scanner, and then you are able to pay. When you perform this last scan you now hear a loud beep from the device; previously this beep was used when there was an error of some kind. So everyone (at least everyone I have seen do this) performs the last scan and, upon hearing the beep, leaves the self-checkout and goes to the cashier. The cashier then explains that it is supposed to sound like that and that it is perfectly normal. This happens at least once for us technocrats… but probably every time for those who are still a bit scared of technology. One of the main ideas with the device is to minimize the effort for the cashier by letting customers check out on their own. This sound defeats that feature.

Another funny bug that has appeared quite recently is that it takes a bit longer to scan items; I mean from less than a second to close to ten seconds per item. Picking up a box of tomato sauce where you must scan each of the 12 cans will now take about two minutes… you just stand there continuously pressing scan… waiting and building up that hysterical laughter. The idea of Shop Express has lost a bit of its flavour; still, when it works as expected it is indeed a lot better than using the normal queue.

Are we ashamed of software testing? (And who is willing to pay for it?) (Henrik Emilsson)

Imagine that you run a software consultancy where you take on projects for customers. The projects cover areas such as new software development, implementation of IT systems, and web site development.

Let's say that you are about to create an offer for a new project to a customer.

Do you dare to specify the proper number of hours dedicated to software testing? Or do you feel ashamed of having to test the software before letting the customer lay its hands on it?
Do you just add a couple of hours as a separate line item so that it doesn't look bad if someone asks about "any software testing planned"?
Do you include all the software testing hours needed in the total estimate? Or include them in the total per function?

I think that we should treat software testing like any other task that is needed in order to develop functionality, so that the hours specified per function/requirement/area cover all necessary actions and tasks needed to deliver finished functionality.
If you include such tasks as Design, Interaction Design, Specification, Requirement Analysis, Architecture, Coding, etc., you should also include Software Testing amongst these tasks. And you should be proud of doing Software Testing!

By including software testing in your time estimates, you give yourself a competitive advantage. When your customer selects between several offers and sees that you have included software testing and some of the competitors haven’t, it is a signal to the customer that makes them wonder why the others haven’t got any software testing (or why they haven’t specified any). Your offer might come out as a more expensive one, but since you have specified the difference it becomes obvious that they cannot just compare the price tag.

What are your thoughts on this?