Why Quality is Eating Software


By now, we’ve all read about Marc Andreessen’s idea that “software is eating the world,” and most of us recognize that we live this every day. Our daily lives are powered by software in ways we don’t even bother to think about anymore.

If you’re tapped into the API industry, you may also be familiar with Steven Willmott’s “APIs are Eating Software” talk, in which he points out that in this connected world, software itself is being reinvented in ways we could not have imagined a decade ago. APIs are the muscle behind so many of these everyday interactions, as well as much of the newer technology being built into cars and appliances.

Let’s take my day today, for example: I stumbled downstairs while it was still dark out and was relieved to find my coffee already brewed and waiting for me. That’s good, because I had an early appointment in the city and needed to hit the road early to avoid traffic. Checking my phone on the way to the car, I saw there was a chance of rain (grabbed my umbrella) and that there was a backup on the main highway (that second cup of coffee would have to be in a travel mug on the way then).

In my car, I dropped the address into my navigation system, picked an alternate route that kept me off the clogged highway, and settled in with my favorite satellite radio station. At different points throughout the day, I checked my email, transferred money for my son in college via my banking app, checked the news online, and did a little social networking. It was a normal day for a normal business person and parent, all of it powered by software… software that I expect to work – and work flawlessly.

Few people take the time to process how software has revolutionized so many of their activities and what would happen if that software stopped working.

Why Should Testers Care?

This is the kind of conversation that normally takes place among developers and business leaders as they create new ways to build intelligence and automation into our daily lives. But what is the corresponding conversation we should be having on the testing side? With more people relying on software to power critical needs like navigation systems in planes, trains and automobiles or doctor-patient systems within hospitals and healthcare facilities, how do we make sure those systems work on Day 1 and continue to work on Day 100?

Add in the complication of how rapidly these applications are being built and deployed, especially with the availability of APIs that make the software development process that much faster. There’s no avoiding the fact that today’s tester is a critical step in the success of these systems, yet testers work in an environment that puts a high value on speed of deployment. We need to take more seriously the need for testers to make API testing a regular part of the application testing process – if the API is the strength behind an application, it’s critical that the API meets the application’s needs.

Modern Testing Conversations

Certainly, there are new methodologies being developed these days to deal with the realities of modern software development, such as Agile processes and distributed teams. Rapid Software Testing is designed to keep testers fast on their feet as features come out and they are asked to test simultaneously with development. The key concept here is to know how to approach a feature when you have very little documentation to rely on and when the feature itself is only partially built when you start testing it. That’s the new world of testing, and it’s imperative that testers learn how to get this right – after all, our daily transportation and communications rely on these products.

Context-Driven Testing is also a way to provide intelligence to project stakeholders by understanding what is vital in the context of the application and the project rather than approaching each project the same way. Testers provide important insights into the state of the product without relying on “best practices” and “standard metrics” that may not carry over from one project to another. I think we would all agree that our bug tolerance is a bit higher for our digital recording devices than for our on-board navigation systems. Asking a tester to adhere to the same priorities and practices when testing those applications doesn’t make sense.

Then, there’s the mobile testing conundrum. How do you accurately test mobile applications whose network connectivity, device settings, and application interoperability needs are so unpredictable? The industry is still experimenting with testing tools and methodologies in this space, but we don’t have much more time to figure this whole thing out. Already, the reliance on smartphone applications for POS payments, navigation, banking, and healthcare is a reality for some people, and all predictions point to this reliance continuing to expand to more and more sectors of the population. This is another area where testing APIs can have a much more fundamental impact than testing at the GUI level, for example.

In some cases, the rapid deployment process leaves testing behind completely… at least until after the fact. Some companies are using a methodology called Testing in Production (TiP), opting to push their products to production with only developer testing as their barometer. They then keep an eye on production data from their monitoring systems and react to errors that are generated there. While this gets your code to your users faster, you need to decide whether your application fits a business model that can incur this level of risk.

Think, Think, Think

But, more important than any specific approach is the fact that the testing industry is thinking again. Just as the API Revolution has injected a new vitality and creativity into the software development process, the “Software Eating the World” concept has injected a new urgency and philosophy into the testing industry. We have some hard problems ahead of us and we need to find the right balance between speed, innovation, and quality – not an easy trio to manage. As someone who has been involved in this industry for decades, it is refreshing to hear the new testing philosophers whose strong voices are helping to lead the way.

This article was originally posted on Ole Lensmar’s column in Network World

  1. Keith Tyler says:

    The whole premise you’re operating from is flawed. It’s not the job of software testers to test the networks the software will operate on. Besides, these issues are usually hardware, not software.

    Blaming software testing for the issues of network and hardware realities is scapegoating.

    Should software testers test every single fabric in which a phone running its software might be hidden? Should software testers test every gait of people who walk with a phone running their software in their pocket? Should software testers test cell phone software while sitting on the dash of ten different vehicle types on five different road types in three different conditions?

    The first rule of good testing is know what it is you are testing, and stop testing things that aren’t that.

    Robustness and load testing aside, you can’t fault software testers for hardware and infrastructure problems. You’ll end up with overworked testers who realize their job has become stupid and pointless, not better software.

    • lindybrandon says:

      Hi Keith,
      Thanks for your comment. Of course, you are exactly right and that is actually part of the complexity – knowing where the issues you see in your testing are occurring. There’s no intent here to “blame” anyone – but the reality is that testing software is not the same as it was 20 years ago or even 10 years ago, because we do have so many more interoperabilities to consider when troubleshooting. I agree with you completely that you need to focus on the functionality your team is working on – but that also means you need to be able to discern which issues are within your charter and which are not.

    • Hi Keith,

This point of view reminds me of a programmer who used to work with us at a previous company. Her code was never tested, and every time we told her she should test her code, she would say: “It’s not the programmer’s job to test the code.” We were a small company, and that drove us insane.

I completely disagree with you; the programmer must handle everything, even network latency issues. What if the network goes down in the middle of a transaction – what should the code that the programmer wrote do? Shouldn’t it address the issue and revert the part of the transaction that was already completed?

What if a programmer is building a VoIP app for an Android device, and that VoIP app works over both 3G and wireless? If the wireless connection drops out in the middle of a call, shouldn’t the app automatically and seamlessly switch to 3G?

It is programmers with the “it’s not my job” mentality who are destroying programmers’ reputation. As long as the programmer can do something about it, it is his or her job.

      • Keith Tyler says:

No, you’re blurring the lines of the division of labor that is essential to a successful project. Each member of the project team needs to own and be responsible for their role in the SD process. I will agree that developers should do some level of QA, to a certain extent, and that extent is unit tests. QA likewise should do some level of robustness testing, but that extent is negative/boundary/edge/robustness/load testing. Beyond that, however, it is the responsibility of the product owner to set the expectations and acceptance criteria. The product owner needs to understand the circumstances and challenges that the application will need to handle, and communicate that need via specific targets and criteria, which developers will write code to implement and handle, and QA will write and execute tests to verify. That’s how a well-oiled SDLC project runs. Most projects fail because team members from PO to Dev to QA and elsewhere fail to perform those roles sufficiently… instead it all falls on QA’s head, and that’s asking for disaster.

  2. Roosevelt P says:

I agree 100% with some of the concerns you laid out in your post. I don’t know about you, but I don’t recall reading much about “software testing” in school. As a matter of fact, whether it’s unit testing, acceptance testing, or penetration testing, I learned all of that at work or on my own. You’ll also notice that most managers of an IT project are not programmers/developers themselves. So, at the end of the day, all they want is a product that customers can use, but they may not push hard enough for testing. It is the developer’s responsibility to take care of development and testing, but when neither the developer nor the manager is aware of the importance of further testing… we see interesting issues popping up!
