How Testing Fits Into DevOps, Because It’s Here to Stay

When DevOps first appeared on the scene, no one really knew what it meant. Books defined the term in completely different ways; conference speakers sent out conflicting messages about the tools you absolutely must use (or avoid) to do “real” DevOps. I distinctly remember seeing a job advertisement or two hiring a DevOps person to “dev all the ops.”

We all know better now.

Or, at least, some of us know (a little) better now. Having some time to experiment taught us that DevOps is a lot like “agile”: it describes a set of methods and tools that help programmers deliver software faster. It isn’t something one person does; it is part of daily life as a programmer. There is still one big question in all of this: How do testers fit into development groups? How can testers continue to make a difference as software companies drive further and further into technical practices?

Teaching Developers To Test

Anyone can test software. Tell them to “play with it,” or give them a spec, and they’ll find a bug or two. If a problem is big and obvious (say, login is broken), they’ll probably find that too.

Even programmers can do this sort of testing. Programmer testing tends to be verification that what they thought they built works; it tends to miss the difference between what the customer needed and what the programmer understood, verifying only the programmer’s understanding. Because their work is low-level code, a programmer might write a small test before writing the production code to check that some value gets set, then write that production code, and finally run the code and test in concert. This usually comes in the form of TDD, or BDD, or unit testing. These tools are a nice way to help a person check their own work, but they often aren’t enough: anything surprising, anything the programmer didn’t guess ahead of time might be a problem, is still a mystery. One way to go beyond this is to have the programmer test someone else’s code.
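
As a rough illustration of that test-first rhythm, here is a minimal sketch using Python’s built-in unittest module. The Cart class and its discount rule are invented for this example; the point is that the check is written from the programmer’s own expectation, so it can only ever confirm that expectation.

```python
import unittest


class Cart:
    """Toy production code, written after the test below existed."""

    def __init__(self):
        self.items = []

    def add(self, price):
        self.items.append(price)

    def total(self):
        # Apply a 10% discount once the cart passes 100: the rule the
        # programmer believes the customer asked for.
        subtotal = sum(self.items)
        return subtotal * 0.9 if subtotal > 100 else subtotal


class CartTest(unittest.TestCase):
    def test_discount_applied_over_100(self):
        cart = Cart()
        cart.add(60)
        cart.add(60)
        # This verifies the programmer's understanding; it says nothing
        # about whether the customer meant 100 inclusive, or after tax.
        self.assertAlmostEqual(cart.total(), 108.0)


if __name__ == "__main__":
    unittest.main()
```

Run with `python -m unittest`, the check goes green, and the questions the customer actually cares about remain untested.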

Jesse Alford, who is on the technical staff at Pivotal Labs, gave a talk at CAST2015 about how he spends his time at Pivotal teaching programmers to test software. Through a combination of pairing with programmers and then talking about the work, and playing games with software testing themes built in (like Zen Do and the infamous dice game), the programmers learn more about what skilled software testing looks like, and Jesse learns more about writing good code.

Pivotal has created a stronger team through this process of programmer/tester pairing and teaching exercises.

What Can’t Be Done In CI

Honestly, I think Continuous Integration is pretty cool. Why wouldn’t you want some baseline feedback on the software for every single build? Why not build every few minutes, or every few hours at the slowest?

Yet there are other quality questions that CI just can’t answer, where delivering faster might not help.

Stability and Reliability:

This is the question of how well your product runs over time. A quick one-hour exercise of the software won’t find a memory leak. If something goes wrong, and eventually it will (this is software we’re talking about), how does the product recover, and can I continue using it without intervention from some sort of administrator?

Continuous Integration runs on the short term. Get the latest code from the repository, mash it up into something testable, run the automated checks, and spin things back down. These environments don’t exist long enough to get a meaningful feel for how stable or reliable the product will be.

Performance:

Most of us understand the idea behind performance testing: run the software with a great number of simultaneous users for an extended period of time and see if it slows down. Yet how to do performance testing, and what to make of the results, is an interesting combination of technical skill, mathematics, and social science. Single-user performance testing can be as easy as sitting at a computer with a stopwatch to see how long a page takes to load or a form takes to submit. More complicated versions involve running a series of HTTP requests, measuring various aspects of each call, comparing those measurements to previous runs in different environments, and then trying to decide whether the results matter.
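
As a rough sketch of that more complicated version, the snippet below times a handful of HTTP requests against a hypothetical endpoint and compares the median against a made-up baseline. Real performance work involves much more care about environment, warm-up, and statistics; this only shows the mechanics.

```python
import statistics
import time

import requests  # third-party: pip install requests

URL = "https://staging.example.test/api/orders"  # hypothetical endpoint
BASELINE_MS = 180.0                              # median from a previous run
THRESHOLD_MS = 25.0                              # how much slower we tolerate

timings = []
for _ in range(20):
    start = time.perf_counter()
    response = requests.get(URL, timeout=10)
    timings.append((time.perf_counter() - start) * 1000)
    response.raise_for_status()

median_ms = statistics.median(timings)
print(f"median: {median_ms:.1f} ms (baseline {BASELINE_MS:.1f} ms)")

# The script can collect the numbers; deciding whether a regression of
# this size matters is the tester's judgment call, not the script's.
if median_ms - BASELINE_MS > THRESHOLD_MS:
    print("Slower than baseline by more than the threshold; worth a conversation.")
```

Even here, the script can only report the numbers; whether the difference matters is the judgment call described next.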

The important thing to note here is that the tester is the most important part of the equation. The performance tester needs to observe differences, then decide whether a 25 millisecond difference in one HTTP call between versions is important enough to do something about, or whether the fact that one button click triggers 30 HTTP POSTs should be reported. The context is important, and the tester has it. The CI system never will.

Usability:

The term ‘usability’ encompasses a great number of factors, including utility (can it do the job?), usability (does it work for me?), and identity (do I think of it as compatible with my sense of self?). Figuring out whether software is usable, and how to improve it, is ‘soft’, but it is still science. Ideas like the affordances of devices and user interviews, which both come from the ‘soft’ science of anthropology, can help us answer these questions and improve our product. There is also the intuition of the user, which is even harder to understand and measure. That intuition shows itself when customers use the software and rub their foreheads trying to figure out what to do next, become frustrated and ask someone for help, or give up altogether. (When customers abandon a request for vacation time, or a reimbursement submission, that system has problems.)

In some cases, usability studies and design are carefully handled ahead of time and then forgotten. More than once, I’ve worked on a product that was immaculately designed, yet after performing one task many times, I found it very tedious. That feeling of tedium is a hint that something is going on, and it is not good.

Compatibility:

No software is an island. Even very small programs like games that run on your phone or tablet have to play well with their environment and the other software running there. Business systems integrate with user accounts and often send and receive data with other pieces of software. Healthcare software is constantly sending patient information for health records and insurance information for billing.

Often the fastest way to learn whether your software is sending the right medical billing codes in the right format for a patient to get insurance coverage is to create the scenario in your product and then send the output to your test system. That might be a third-party test system (when millions of dollars are involved, don’t worry, they’ll have one you can use). You might be able to do this on every build, but by the time you had built the file and the scripts, you could already have discovered what was broken and started fixing it.
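
If you did want to script that hand-off, it might look something like the sketch below: export a claim from your product and post it to the partner’s sandbox. The export function, endpoint URL, and acknowledgement handling are hypothetical stand-ins; a real clearinghouse defines its own formats and responses.

```python
import requests  # third-party: pip install requests

# Hypothetical stand-ins for your product's export and the partner's
# sandbox endpoint; a real clearinghouse defines its own formats.
PARTNER_TEST_URL = "https://sandbox.partner-example.test/claims"


def export_claim(patient_id: str) -> bytes:
    """Pretend export; in reality this comes out of your product."""
    return f"CLM*{patient_id}*99213*...".encode("ascii")


claim = export_claim("TEST-PATIENT-001")
response = requests.post(
    PARTNER_TEST_URL,
    data=claim,
    headers={"Content-Type": "application/octet-stream"},
    timeout=30,
)

# The partner's acknowledgement tells you whether the codes and format
# were acceptable; interpreting it is still a human judgment call.
print(response.status_code, response.text[:200])
```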

How Continuous?

Where Continuous Integration gets every new line of code into a build and checked against the unit tests, Continuous Delivery (CD) takes it to the next level. CD takes the latest build and automatically deploys it to an environment, along with whatever database and frameworks go with that build. Some companies have pushed this concept to its logical conclusion and push new code to production on every commit, something they call Continuous Deployment. These terms are used so interchangeably, and are so confusing, that I prefer “Continuous Delivery (to where)”: for example, continuous delivery to a staging server, or continuous delivery to production. CD to production (“true” Continuous Deployment) takes a variety of engineering practices designed to enable partial features, turn features on and off, send new features out “dark”, run database changes simultaneously so you can cut back if needed, and more.
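
One of those practices, turning features on and off per environment, can be as small as a lookup the code consults before showing anything new. The flag name and environment handling below are invented for illustration; many teams use a dedicated flag service rather than a hard-coded table.

```python
import os

# Hypothetical flag table; real teams usually pull this from a flag
# service or configuration store rather than hard-coding it.
FEATURE_FLAGS = {
    "new_checkout_flow": {"staging": True, "production": False},
}


def is_enabled(flag: str) -> bool:
    env = os.environ.get("APP_ENV", "production")
    return FEATURE_FLAGS.get(flag, {}).get(env, False)


def checkout_page():
    if is_enabled("new_checkout_flow"):
        return "render the new checkout"   # on in staging for testers
    return "render the old checkout"       # what production users still see
```

A flag like this is also what makes the “on in staging, off in production” exploration described later in this piece possible.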

The first step is usually CD to staging, and even then only if all the automated checks run green. Deploying every build automatically to a staging environment gives the benefit of fast visibility, but also protects users from big, unanticipated, black swan problems that would ruin their day. Deploying continuously to staging has the added benefit of allowing testers to control their own test environments.

One other strategy I’ve had work well is deploying automatically to a test environment after getting a green light from suites of tests that cover multiple layers of the product: unit, service, and UI. Although these are just checks and usually won’t show unexpected problems, they will show that certain aspects of the software still function the way you think they do. Having a second test environment to control and compare against the latest build is a nice touch, too.

Not Everything Is Functional

One important aspect of DevOps is defining when a feature or code change is officially done. When a company releases quickly and often, that definition can be as light as a green light from all the automated checks run on a given build. This method treats software as a simple set of functions: I can enter a value in this text field, select one of these radio buttons, click a button, and get a value out. With automated checks at levels higher than unit, we can string these functions together into something a little more complicated.
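
A check at one of those higher levels might look something like this sketch, written with Selenium in Python. The page URL and element IDs are hypothetical; what matters is how fixed and linear the path is, which is exactly the limitation described next.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://staging.example.test/calculator")  # hypothetical page

    # A fixed, predefined path: fill a field, pick an option, press a button.
    driver.find_element(By.ID, "amount").send_keys("5")
    driver.find_element(By.ID, "option-standard").click()
    driver.find_element(By.ID, "submit").click()

    # Assert the single value this check was written to look for.
    result = driver.find_element(By.ID, "result").text
    assert result == "5.00", f"unexpected result: {result}"
finally:
    driver.quit()
```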

That isn’t enough, though, and it certainly doesn’t represent how people use software.

The main problem with relying on this type of testing is how simple and linear it is. When we use software, we don’t take perfectly predefined, clean paths. Instead of performing a fixed series (submit 5, assert value, select a check box, assert value, check for NULL, assert value), testers take a loosely guided path. We meander here and there looking for hints of something interesting and then strike when a clue shows itself. This kind of activity can happen all the time, both at the macro level (“what new features could use a little more attention on staging, or even in production, right now?”) and at the micro level, exploring a story just a little bit more before the code goes live. That micro-exploring work can even happen with continuous delivery to production, by turning the feature on in staging and “off” in production until the tester has completed an exploration run.

DevOps tends to treat testing as an activity to be completely automated. Over time, as DevOps gains maturity, I see that changing. Human, thinking, in-the-moment testing might be different each time and needs to be done by someone: a tester, a developer, or someone else. The things that run the same way every time according to an algorithm, the checking, can be automated. Cutting out the exploring causes us to lose perspective in a way that was probably unanticipated.

The push toward DevOps can be scary for testers; it isn’t hard to imagine that the methods and tools in the wrong hands could squeeze our special role out of development groups. The best way to stay relevant is by understanding your unique contribution, being able to explain it, and excelling at it.

So keep calm and Excel On.

Comments

  1. Very nicely written, thank you.

    It seems to me that the terminology changes but the job remains the same. I know I can break it, but how?

    I find that automating tests (APIs mostly for me) gives me an insight into what I might be able to break when I start exploratory testing. Being new to testing with Dev Ops is a bit head scratchy but the principles are the same.

  2. If you want all your developers to change companies in a while, then yes, you can “Teach Developers To Test.”

    Why not teach developers to wash the floor? And, more seriously, why not teach QAs to develop? I think the answer is obvious 😉

  3. Hello Justin,

    Can I use this article to write a paper and publish it?
    Please let me know if it’s copyright protected.

    Thanks
