The Helpful Model
Henrik Emilsson

Here is a neat story I told to Michael Bolton, Martin Jansson and Markus Gärtner while we were exploring the Metro in Copenhagen, during the coldest days the city had experienced since they started measuring temperatures. I promised to blog about it…

Let me begin with some background information.

A couple of years ago I worked as a tester on a project where we developed an application that automatically processed an electronic version of a paper form.

In many ways this was an interesting system – no GUI, no user input; no visible output (if everything went OK). And the paper form was something most people in Sweden would use at least once; so it was very important that the system did things right. In fact, I used it myself just days after my assignment was over.

In short, this is what the system did:
My organization sent people a paper form regarding a matter, with some information pre-printed on it (e.g., name and personal identity number), and the recipients were to fill out the rest of the form.
They would then send the paper form to a third-party company that used OCR to convert it into an electronic form (an XML file). The file was then sent to my organization and went into our system, where the processing of the form content took place.
First, a number of checks verified the accuracy and format of the manually entered text. If those passed, the system checked the matter against all the laws that had to be met for it to be processed correctly; if those passed too, a formal decision could be made, which included notifying several organizations as well as the people who had sent in the paper form in the first place.
If any format check or law was violated, the matter went to manual handling.
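
To make that flow a little more concrete, here is a minimal sketch in Python of how such a routing step could look. Everything here is hypothetical – the function and check names are mine, not the real system’s – and it only illustrates the idea that any failed format check or rule sends the matter to manual handling, while a clean pass ends in an automated decision.

```python
# Hypothetical sketch (my names, not the real system's) of the routing
# described above: format checks first, then legal-rule checks, then a
# formal decision. Any failure routes the matter to manual handling.

def route_form(form, format_checks, rule_checks):
    for check in format_checks:
        if not check(form):
            return "manual handling"   # e.g. two words in a one-word field
    for check in rule_checks:
        if not check(form):
            return "manual handling"   # some legal rule was violated
    return "automated decision"        # notify organizations and the sender

# Example usage with made-up checks:
format_checks = [lambda f: " " not in f.get("surname", "")]
rule_checks = [lambda f: f.get("age", 0) >= 18]
print(route_form({"surname": "Andersson", "age": 42}, format_checks, rule_checks))
# -> "automated decision"
```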

The test data consisted of non-real people, but they still had to be unique in the national database system (so if our database had contained all existing persons, our non-existing persons would still have counted as unique persons in that sense). We only got two test persons a week, because that was all that could be reserved for testing purposes. This meant that we had to be very careful with these persons and design them with utmost precision so that they would not violate any rule in an unintended way and thereby be caught by the system. And vice versa: those who should be caught by a certain rule needed to be caught by exactly that rule and nothing else, even when a check earlier in the flow was introduced later in the project.
So you can understand that we put a lot of effort into the test data and into making sure that it was valid.

Anyway, this story is about what happened at the end of the project.
After we had developed all the tests and they ran through the system successfully – meaning that many were caught by the system, as intended, and many ran all the way through without being caught – it was time for us to take the tests one step outside our system. This meant printing out all the paper forms, manually writing in the text that should be included, and then sending them to the third-party company, which would scan the paper forms and convert them into electronic forms that they then sent into our system. The intention was that the result would be the same as when we ran the electronic forms. Part of the reason we did this was that the third-party company had tuned their OCR machine to understand this new paper form, so we wanted to know how good a job they had done. All their reports said that the results were OK, and we had sent them a couple of forms and were satisfied with the result ourselves. But we wondered whether the OCR system could handle some tricky data (which, obviously, some of our tests were designed to be).
So we began filling out the forms according to our test cases. For example, one format check made sure that only one word was written in a field, so the test included a word with a space inside it – which we then wrote as obviously as possible so that the OCR machine would interpret it as two words and the form would be caught.
There were plenty of these format checks that we carefully tried to violate. Another example was to write in a handwriting style so hard to read that the machine would not be able to interpret it and would flag the field as “unreadable”. And so on.
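
As a rough illustration only (these are not the actual checks), a “one word only” format check of the kind we tried to trick might look something like this in Python:

```python
import re

def is_single_word(field_value: str) -> bool:
    # Hypothetical "one word only" check: the field fails if the OCR'd
    # text contains any internal whitespace, i.e. was read as two words.
    return re.fullmatch(r"\S+", field_value.strip()) is not None

assert is_single_word("Andersson")        # passes the format check
assert not is_single_word("two words")    # caught, goes to manual handling
```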
At the end of the week, we sent all these papers off with a parcel delivery firm to the third-party company and waited for the forms to drop into our system on Monday morning. We were all excited, because this really felt like a proper production test, with test data as close to the real thing as possible.

On Monday morning, the first forms started dropping in and being processed by our system. We monitored the process by following them through the database and all the states they ended up in. To our surprise, most of them passed all the format checks and went further into the system… We started investigating the XML files and the scanned TIF images. The TIF images showed our paper forms and they were correct; but the XML files didn’t get the flags that we would have expected. Hmmm, strange…
Our reaction was: “What an amazing OCR machine! How can it interpret so well!?”

We reported to the third-party company that we weren’t satisfied with the tests, and told them that we would send a new batch as soon as possible. We didn’t say why we were unsatisfied, because we thought that we had screwed up and not exercised the OCR machine to its limits.

So we began the tedious work of filling out the forms again; but now even more evil than before. 🙂
Now the OCR machine wouldn’t stand a chance against our cruel intentions.
As in the previous week, all the papers were shipped to them at the end of the week, and on Monday we were back, rubbing our hands while waiting for the forms to enter the system.

But the same thing happened the second time!

Now I took a TIF image and the corresponding XML file and went to one of the business analysts. “How the hell can the machine interpret this garbage text into something as useful as what it says in the XML file?” We looked at it and shook our heads. We couldn’t believe that this was happening.
I went back to the system and analyzed the data some more.

Suddenly I discovered a typo in a name and instantly got suspicious. I recognized the name, since I had come up with it when creating the test person. Something was very wrong here…
Then it hit me! The name had been pre-printed on the paper form, so it shouldn’t have caused any trouble for the OCR machine, given the results for the handwritten stuff. I thought to myself, “There’s a human behind this!”

I went to the business analyst again and told him about this. He said, “Damn, that’s it!”
He called the third-party company and asked to speak with the person responsible for the OCR machine. Then he asked, “Do you know if someone has interpreted some of our paper forms manually?”
The answer dropped like a bomb:
“Well, yes. All of them. We’ve had some trouble with the machine so we had to do it manually. And I want to say that it was really hard for us; you had written in such bad handwriting that it took us so much time to process them that we thought that this was torture. And just as we thought that it couldn’t get worse, the second batch came in that was way worse than the first one. We had to sit in pairs and process them carefully and with utmost respect. But we did a hell of a job, don’t you think?”

This story came back to me when I read Jerry Weinberg’s “The Secrets of Consulting” and came across The Helpful Model:

No matter how it looks, everyone is trying to be helpful.

3 Comments
Darren McMillan December 23rd, 2010

Hi Henrik,

What an interesting and funny story; you had me curious all the way through about what would happen in the end 🙂

Did they know the data you sent in those batches was test data?

I’m laughing because I spent a very short time at university in a data entry job and have seen all manner of poorly written forms that left me puzzled, trying to figure out what they say.

Thanks for sharing.

Cheers,

Darren.

Henrik Emilsson December 23rd, 2010

Thanks Darren,

It was indeed very funny when we realized how stupid we had been not to tell the third-party company what our purpose was… Why were we keeping that to ourselves? 🙂

I guess the biggest lesson I learnt (and I am still trying to get better at this) was that it is very important to be distinct and clear about what your expectations are. In other words, do not assume that other people know what your intentions are. Do not assume that other people have the same picture as you have. And finally, be clear about your “failures” – they might just be misunderstandings.

And since our expectations and intentions clouded our view, we weren’t looking at the problem from all angles.

Cheers,
Henrik

Darren McMillan December 23rd, 2010

Hi Henrik,

Reporting and communication are two areas most teams forget about at one time or another. I think in this case both parties were at fault: them for not communicating that they’d stopped using the OCR, and yourselves for not saying that it was test data. Although the latter I think was fine, as you might have wanted to do that without them knowing your intentions, since that might have skewed the results.

Thanks again 🙂

Cheers,

Darren.