Software Test Documentation: How Much Detail Do You Really Need?

When I started my first job in software testing, there wasn’t any formal training available, but there was plenty of advice from management that more detail is always better.

So we made ourselves useful while waiting for new features and builds, writing detailed descriptions of what we would test, called “test cases.”

When these cases were reviewed, there was almost always a call for more details:

  • More expected results
  • More cases to check failures
  • More explicit instructions

Why so much detail?

The people a few pay grades above me wanted those documents to be so detailed that they could be picked up and run by anyone in the company.

It’s worth saying here formally: that’s a terrible goal. If people are asking you to do that, print this out and give it to them. You’re welcome.

Once we’re past that, the question is what is the minimum for responsible documentation?

How do we create something that has the most power per paragraph, is unlikely to go out of date, and is fit for a variety of readers?

Let’s look at how we can get away with less detail and less documentation to start testing sooner.

Why do you need detail?

Every time I have tried to kill a test case, two questions have come up:

  1. How will people know what to test?
  2. What will we do when we need to perform pre-release testing?

These seem like good questions at a superficial level, but don’t be fooled. Customers don’t need explicit instructions to find new bugs, and neither do you. The call for expected results is a red herring; it distracts us from the mission of finding new and important problems.

Many companies are doing away with detailed product specifications in favor of a small set of criteria that must work in order to ship the product at the end of the release. I don’t like thinking of these as acceptance criteria, because other things can fail that would stop a release. But they are fun to use as a guided tour through what is important in the feature.

Those helpful sentences, my colleagues at work, and lots of questions have always been enough to figure out how to do good enough testing. Usually, that is where the information for test cases comes from anyway.

When release time comes, we have a stack of stories to review, plus stories from developers about their worries to help us focus testing. As a bonus, less documentation introduces variation between testers and test runs, which actually increases the number of features and combinations we test over time.

There are a few business domains, like medical hardware and the financial sector, that require special consideration for how we document. That doesn’t necessarily have to be a detailed, step-by-step test case though.

My colleague Griffin Jones has done a large amount of work in the regulated healthcare space. Generally, what is required there is information on what was tested and how, along with a definition of how testing will occur and proof that it did. Those last two requirements are often interpreted as calling for mountains of records, but they don’t have to be.

Video recording of testing sessions, for example, combined with a high-level overview and tags, can do the trick nicely. Another colleague, Matt Heusser, was required to produce “design documents”; instead, he took a video camera, did the design and review at a whiteboard, and checked the recording into version control.

An automated testing tool like TestComplete gives you the ability to record automated tests, which enables you to append the recorded operations to existing tests instead of recording new tests.

Shifting away

I was in a meeting talking about what to do about our test case maintenance problem. Every sprint that went by would make the existing test cases a little less relevant, and when release time came we were in a crunch to get things in order again. Setting aside more time to fix old test documentation clearly wasn’t the answer. That meant we had less test time, and that would grow with every release.

I had an idea: what if, instead of documenting a test for each value that needed to be tested, we just created a sort of checklist with sets of values we might use or, even better, some ideas on what types of data might be interesting to look at? This idea stopped hundreds of test cases from ever existing.
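
Where some of those checks end up automated, the same compression works in code. Here is a minimal sketch using pytest; the edit_product_name function and the values in it are hypothetical stand-ins for your own feature, but the idea is the same: one short table of interesting inputs instead of dozens of individually documented cases.

```python
# A minimal sketch, assuming pytest. edit_product_name is a hypothetical
# placeholder for the real product-edit call.
import pytest

def edit_product_name(name):
    # Stand-in for the real behavior under test.
    if not name:
        raise ValueError("name required")
    return name.strip()

# One checklist of interesting values instead of one test case per value.
@pytest.mark.parametrize("name", [
    "Widget",      # plain happy path
    " padded ",    # leading/trailing whitespace
    "a" * 255,     # long boundary value
    "名前",         # non-ASCII input
])
def test_edit_accepts_interesting_names(name):
    assert edit_product_name(name)

def test_edit_rejects_empty_name():
    with pytest.raises(ValueError):
        edit_product_name("")
```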

At the end of the next release, when we were having the same meeting talking about the same things, I pushed further. What if, instead of explicitly talking about the setup involved (navigation steps, required data, user accounts), we just described that as part of a story?

“To test this, you will need a user with edit privileges and a store with 20 products added.”

Each release, we chipped away at what we traditionally thought was required, eventually getting close to the essence of the test. The end result was something that looked like this:

The goal of this feature is to allow someone to edit multiple products in their store at a time.

  • User privileges are important; we want to know that users without edit privileges cannot perform this action.
  • Test with none, some, and all products that currently exist for a store.
  • What is the upper limit on products that can be edited?
  • Are there performance implications for editing more at a time?
  • Does product type matter?

Instead of explicit instructions, we had test ideas that would hopefully generate more questions and deeper exploration of the new software.

Mind maps and sessions

My next step was even further away from what most people will recognize as a test case: mind maps and test sessions.

One of the main goals of test documentation is to share our ideas with other people. Mind maps help us do that in a visual, easy-to-edit format. At the center of the map is a high-level theme, and from there I break it down into categories until it doesn’t seem useful to go further.

There are categories for where the field might appear — desktop, web, mobile. There are other categories for the type of field, data considerations, and tooling that might help. The end of each branch has a set of test ideas that someone could use to guide how they work through testing a field.
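
Written out flat, a map for a field might break down something like this (the branch tips here are illustrative, not from a real project):

  • Text field (central theme)
      • Appears on: desktop, web, mobile
      • Field type: free text, numeric, autocomplete
      • Data: empty, very long, unicode, markup and escape characters
      • Tooling: data generators, browser dev tools
      • Test ideas: paste vs. type, leading/trailing spaces, save and reload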

Test sessions are a story to match up with the test ideas outlined in your mind map. Test cases are an idea of what and how you might test something. Often those ideas are born before the software even exists, so they might not match up very well with reality. Sessions, on the other hand, are a description of the testing you actually performed.

Here is an example:

I spent 30 minutes testing the new product edit feature on 11/03/2015.

To set up data, I got exports of customer databases that already had product data set up.  I started by editing a single product, and then did several edits in the middle of the group, and then did an edit of all at the same time. Editing all products got a little slow, so I might be concerned for customers that have more than ~200 products. After this I looked at our 3 product types. Edit fails intermittently for type 2, but I’m not sure why yet.

If I had more time, I would look into user types, and do more investigation on editing of product type 2.

I documented the following issues:

  • 1345
  • 1346
  • 1458

This short write-up covers my concerns, what went well, and what I missed, and it only took a few minutes to put together.

Nothing at all

Test documentation is usually short-lived.

That information might be useful for this release and maybe the next, but past that, the feature and the other parts of the product it integrates with have probably changed a lot. When no one is demanding proof of my test work, I just like to get on with it. There will probably be lots of notes in text files and chat threads with coworkers, and probably some documentation in the form of code that gets rerun with every build, but nothing long-form.

This is even more possible when I can pair with other people on the development and test team on a feature. In addition to helping bring up new questions and test ideas, they learn the same lessons about how the feature does and doesn’t work.

What about coverage?

You’re probably asking how I can possibly know what test coverage looks like if I’m constantly chipping away at the things we can easily count.

Counting test cases is a pretty bad way to measure test coverage for lots of reasons, so think of it as freeing yourself from a bad habit and misleading data. One alternative is to document actual things in your product, like menu items, features, pages, and variables. Using these, you can talk about test coverage from a few different perspectives and get a more holistic view of how much of the product has been tested.
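
As a rough sketch of what that might look like, a few lines of code can report against a product inventory instead of a test-case count. The inventory and “touched” data below are made up for illustration:

```python
# A minimal sketch: report coverage against a product inventory rather than
# a test-case count. All names here are hypothetical.
inventory = {
    "pages": ["login", "store", "product edit", "checkout"],
    "user types": ["admin", "editor", "read-only"],
}
touched = {
    "pages": {"login", "product edit"},
    "user types": {"editor"},
}

for dimension, items in inventory.items():
    seen = touched.get(dimension, set())
    missing = [item for item in items if item not in seen]
    print(f"{dimension}: {len(items) - len(missing)}/{len(items)} touched; "
          f"untested: {missing}")
```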

Another option is to run a tool on the server side that reports on classes that have been accessed. These tools guide us to the places in a product that are lacking test coverage. They act like a big neon arrow pointing to neglected parts of your software.
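
If the server side happens to be Python, for instance, the coverage.py library can play that role. A minimal sketch, with a hypothetical package name standing in for your product:

```python
# A minimal sketch, assuming a Python server instrumented with coverage.py.
# "myapp" is a hypothetical package name.
import coverage

cov = coverage.Coverage(source=["myapp"])
cov.start()

# ... run the server here and let testers exercise the product ...

cov.stop()
cov.save()
cov.report(show_missing=True)  # modules no one touched stand out immediately
```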

There are a lot of different ways to document, or not document, your testing work. I’ve done most of them over the years, and each is appropriate somewhere.

The important thing is to make sure that you are doing enough to satisfy the needs of your project, but not so much that it drags you down.

Interested in learning more?

In our newest eBook, Test Management in an Agile World: Implementing a Robust Test Management Strategy in Excel and Beyond, we look at how teams are implementing test management and the steps you can take to bring your strategy to the next level.

Get your copy today!

