5 Tips for Managing Tests

If you’re anything like me, you’ve seen a different process for managing tests everywhere you’ve worked. Usually, I treat a development process as something that should flex to fit the organization and the personalities within it, because a process is only effective if people believe in it and adhere to it, and no two organizations are exactly alike.

But there are some areas of your process that need some rigor if you are going to produce quality software on a regular basis. One of those areas is test management. It doesn’t matter how deeply and how well you’ve tested your software if you cannot repeat the tests or accurately describe what they were.

How often have you heard a developer tell a quality engineer that they can’t reproduce a reported bug and ask for the steps they used when they found it? And how many times have you heard a QE reply with words like “I think I…”? And, as a manager myself, I have often looked at a production bug and asked “Did we miss this in testing?”

Here are some tips for managing your tests so those conversations happen less frequently.

1. Clearly itemize each step in the test.

Nothing could be more tedious than this part. But it’s critical to write tests that can be repeated exactly by any member of the team, including the developers. Often a bug surfaces when you follow one path through a feature but not another, so having the path of the testing clearly outlined eliminates any communication problems between team members.
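
To make that concrete, here is a minimal sketch of what an itemized test case might look like as structured data. The field names and the sample steps are hypothetical and not tied to any particular test-management tool; the point is simply that every step and its expected result are spelled out explicitly.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TestStep:
    action: str    # exactly what the tester does
    expected: str  # exactly what should happen

@dataclass
class TestCase:
    case_id: str
    title: str
    steps: List[TestStep] = field(default_factory=list)

# A hypothetical test case with every step spelled out, so any
# team member (including a developer) can repeat the same path.
login_test = TestCase(
    case_id="TC-042",
    title="Login with a valid account",
    steps=[
        TestStep("Open the login page", "Login form is displayed"),
        TestStep("Enter a valid username and password", "Fields accept the input"),
        TestStep("Click the 'Sign in' button", "User lands on the dashboard"),
    ],
)

for i, step in enumerate(login_test.steps, start=1):
    print(f"{i}. {step.action} -> expect: {step.expected}")
```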

2. Send your tests through a review cycle.

Avoid wasting precious time and resources testing a feature inaccurately. The developer who wrote the code is best qualified to know whether a test makes sense and will actually exercise the code thoroughly. And don’t forget the designer and product owner, both of whom know the intent of the feature better than anyone. I have witnessed many times when a test plan becomes the vehicle for discovering that a feature’s intent was not clearly communicated or thought out.

3. Clearly specify pass/fail for each step in the test, not just the test itself.

By breaking your test into discrete and detailed steps, you not only ensure that the test is repeatable but you also create an audit trail for exactly where failures occur. If the tester updates each step in the test run with its pass/fail status, there is a clear record for the developer to identify the code that is called between the last passing step and the step that failed.
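
As a rough illustration (building on the hypothetical structure sketched above), the snippet below records a pass/fail status for each step of a run and points the developer at the gap between the last passing step and the first failure. The names and the sample run are made up for the example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class StepResult:
    step_number: int
    action: str
    passed: bool
    notes: str = ""

def first_failure(results: List[StepResult]) -> Optional[StepResult]:
    """Return the first failing step, or None if the whole run passed."""
    return next((r for r in results if not r.passed), None)

# A hypothetical execution record for TC-042.
run = [
    StepResult(1, "Open the login page", True),
    StepResult(2, "Enter a valid username and password", True),
    StepResult(3, "Click the 'Sign in' button", False, "Spinner never resolves"),
]

failed = first_failure(run)
if failed:
    last_pass = failed.step_number - 1
    print(f"Failure at step {failed.step_number}: {failed.notes}")
    print(f"Inspect the code exercised between step {last_pass} and step {failed.step_number}.")
```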

4. Store your tests in a common area where everyone can see/use/share them.

Ah, well, that audit trail I just mentioned is only good if the developer can see the tests and their steps. This is a common mistake: nobody looks past the bug report to the test execution report. Hopefully, the tester creates clear and detailed bug reports, but why not leverage the tests themselves by sharing the execution report with the developer? Another benefit to keeping your tests in a shared repository is that each member of your testing staff can re-use and/or re-run those same tests for subsequent releases or commonly shared functionality across features.

5. Save those tests and their execution reports.

Quality reporting doesn’t end when you deploy your code to production. In fact, those test results turn to gold when something unexpected occurs in production or a user reports a bug. Reviewing the tests that were executed and their audit trail of failures can tell you if you missed a path through the feature or promoted code that didn’t pass the test execution.
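
For example, if execution reports are archived as plain files, a small script like the hypothetical one below can answer two questions after a production bug: did any saved run cover this feature, and did every step of that run actually pass? The report format and directory layout here are assumptions for the sake of the sketch, not a prescribed standard.

```python
import json
from pathlib import Path

def review_reports(report_dir: str, feature: str) -> None:
    """Scan archived execution reports for tests touching a feature
    and flag any that were promoted despite failing steps."""
    covered = False
    for path in Path(report_dir).glob("*.json"):
        report = json.loads(path.read_text())
        if feature.lower() not in report.get("feature", "").lower():
            continue
        covered = True
        failed = [s for s in report.get("steps", []) if not s.get("passed")]
        if failed:
            print(f"{report['case_id']}: shipped with {len(failed)} failing step(s) ({path.name})")
        else:
            print(f"{report['case_id']}: all steps passed ({path.name})")
    if not covered:
        print(f"No archived test run covers '{feature}': a missed path is likely.")

# Hypothetical usage after a production bug report against the login feature:
# review_reports("reports/release-2.3", "login")
```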

We might as well admit that test planning and management are the most tedious, and sometimes grueling, parts of the development cycle. But if you take the time to find the right tool and the right process for documenting, reviewing, and assessing your tests on a regular basis, you can significantly improve not only the quality of your code but also the quality of your testing.


Comments

  1. Konstantin Lyovin says:

Well, indeed you have described a very classical approach that works fine in a Waterfall SDLC. However, Scrum turns this into nothing. You’ll never get appropriate time for such detailed steps and cross-review due to tight project schedules in the modern world.

  2. Lorinda Brandon says:

    Hi Konstantin, this is actually a process I have used successfully within a Scrum methodology. By involving QA early, you can start creating your test cases based on the user stories. We often did a quick test case review of at least the Minimum Acceptance Tests during the first week of our three-week sprint. It is no different from creating your one-page dev specs and having the team review them during the sprint cycle. I will grant you that your step details will vary depending on the clarity of your user stories and the length of your sprint. But in essence, you can still accommodate creating and managing test cases in Scrum.

  3. Konstantin says:

    Hi Lorinda! 
    Creating such detailed test steps means you have to have a full vision of the User Interface design, including exact control names and their layout. Did you have such detailed User Stories? A User Story is usually just a single-sentence thing. It all just contradicts, in my head, the main Scrum principle: “People over tools and processes.” From my point of view, in Scrum projects we still need to keep a balance between the level of detail in test cases and the professionalism of QC engineers, so that writing minimal detail will still guarantee that every other team member understands the steps in exactly the same way. Anyway, thank you for the article!

  4. Thank you for posting these tips in managing quality control tests. The strategies that you have provided are indeed very helpful. Cheers!

  5. These tips are really helpful especially to those that manage quality control testing. Thank you for sharing.
