Why Checklists?


If you believe the avant-garde testing community, test plans are hardly a good thing. Only exploratory testing, session-based testing, or context-driven testing can find many errors and improve quality effectively.

You may not know each of these styles, but in short they all criticize the classic test management approach, where a comprehensive test plan is created up-front and then worked through more or less mechanically by "test robots". In this view, the test plan stifles the tester's analytical and creative skills.

And indeed, with common tools it is very easy to design test plans that exhibit exactly these negative traits.

A typical test plan in TestLink looks like this. Even at first glance, it is quite intimidating:

For each test case, detailed test steps can be stored. This puts the tester in a tight corset: when in doubt, the tester sticks to the test plan instead of thinking out of the box and finding an important error.

While the test case definition is extremely elaborate, the execution view is dull: only a text note and a result can be recorded.

The UX of this GUI practically enforces the attitude: no marginal notes, no deviation, just check whether the test case passes.

TestLink is extremely poor in terms of usability, but all test management tools share the same basic problems. If a tool supports test steps, it is probably not lightweight.

Still, I think that test plans have a certain value. Just as unit tests do for programmers, they describe the system and document behavior and knowledge.
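To make the analogy concrete: a unit test, like a checklist item, records expected behavior in a way that doubles as documentation. A minimal sketch, using a hypothetical undo-stack function that is not part of Quality Spy:

```python
import unittest

def undo_stack_push(stack, action):
    """Hypothetical undo stack: the newest action goes on top."""
    return stack + [action]

class UndoStackTest(unittest.TestCase):
    def test_push_keeps_order(self):
        # Documents the behavior: the last pushed action is undone first.
        stack = undo_stack_push([], "type 'a'")
        stack = undo_stack_push(stack, "type 'b'")
        self.assertEqual(stack[-1], "type 'b'")

unittest.main(argv=["ignored"], exit=False)
```

Anyone reading this test learns the intended undo order without consulting separate documentation, which is exactly the value a good checklist item provides for manual testing.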

A test plan also enables extensive reporting. Seeing that many test cases were executed and passed somehow creates trust in the software.

The checklist is only a rough agenda; it does not contain precise instructions on how a test must be carried out, such as forming equivalence classes for a form input. Either a tester has this know-how, or he is a beginner and does not know what equivalence classes are in the first place.
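As a hypothetical illustration of equivalence classes (the validator and its range are invented for this example, not taken from Quality Spy): an age field accepting values from 18 to 120 splits into a below-range, an in-range, and an above-range class, and one representative value per class is enough:

```python
def validate_age(value: int) -> bool:
    """Hypothetical form validator: accepts ages 18..120 inclusive."""
    return 18 <= value <= 120

# One representative per equivalence class suffices; all other values
# in the same class are expected to behave identically.
equivalence_classes = {
    "below range": (17, False),
    "in range":    (50, True),
    "above range": (121, False),
}

for name, (representative, expected) in equivalence_classes.items():
    assert validate_age(representative) == expected, name
```

A tester who knows this technique derives such cases on the fly from a single checklist item like "age field validation"; the checklist does not need to spell them out.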

For example, the test plan, i.e. the checklist, for Quality Spy's undo/redo function looks quite spartan:

During execution, one can check off the points, so that the tester keeps a rough overview of which areas have already been covered.

By the way, I do not like the typical Passed/Failed scale for manual testing. This is a scale for test robots. People can say "passed, but with small problems". With such a scale, the summary quickly paints a meaningful picture, instead of the paradoxical situation "100% of the tests passed, but 50 errors found".
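A richer result scale can be sketched as a simple enumeration; the names and values below are illustrative assumptions, not Quality Spy's actual scale:

```python
from collections import Counter
from enum import Enum

class Result(Enum):
    PASSED = "passed"
    PASSED_WITH_ISSUES = "passed, but with small problems"
    FAILED = "failed"
    NOT_RUN = "not run"

# Hypothetical results from one manual test session
results = [
    Result.PASSED,
    Result.PASSED_WITH_ISSUES,
    Result.PASSED,
    Result.FAILED,
]

# The summary distinguishes clean passes from passes with caveats,
# instead of collapsing everything into passed/failed.
summary = Counter(r.value for r in results)
```

With this scale, the session summary would show two clean passes, one pass with small problems, and one failure, which is far more informative than "75% passed".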

From my point of view, a checklist never replaces the test protocol, because there problems and comments can easily be noted:

But back to the avant-garde testers, like the people in the Software Testing Club, who are probably far better testers than I am. They are quite right that test plans are not so good. But short, crisp checklists are a useful tool with a good cost-benefit ratio.

Quality Spy

Development started in 2013 with the credo of making software testing fun again. Over the years it evolved into a fully featured commercial test management solution and still fulfills that credo. Along the way, it makes your software testing better and more efficient.

Product Homepage

Author

I'm Andreas Kleffel, the person who drives the product. Let's get in touch about your testing at qualityspy@bluescreen.technology.

Whitepapers

Test strategies

Learn how a well-thought-out test strategy ensures project success.

Why checklists?

Read why checklists are the better test plans and why they improve both quality and efficiency of testing.

Test protocols

Understand how Quality Spy's unique features make test protocols the workhorse for any software testing activities.

Lightweight bug tracking

Discover the holy grail of super-efficient tester-to-developer collaboration and why Quality Spy forces you to work that way.