
There Ain’t No ROI in Software Testing

Using a return on investment calculation for testing (or test automation) is an error. It’s misuse. It’s kinda like my misuse of English grammar in the title up there … But I doubt you had any trouble understanding my meaning, and you may already understand my “joke.” Regardless, it’s still wrong.

Let’s clear something up first: There’s no ROI in testing unless you are selling testing services like outsourced testing. Only then does testing generate tangible, traceable, 100% correlated revenue. And therein lies a rub. ROI calculations are based on revenue-generating activities and, by definition, are a quantitative analysis method. We should use quantitative algorithms to assess tangible things like number of items sold, or amount of profit, and qualitative measures to evaluate intangible things, like quality (or the “total” value of testing).
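For concreteness, here is the standard ROI formula as a tiny Python sketch. This is my illustration, not part of the original argument, and the dollar figures are invented:

```python
def roi(gain_from_investment: float, cost_of_investment: float) -> float:
    """Classic ROI: net gain divided by the cost of the investment."""
    return (gain_from_investment - cost_of_investment) / cost_of_investment

# ROI presumes a revenue-generating activity: some tangible "gain" came back.
# A $100k investment that brings in $150k of revenue has a 50% ROI.
print(roi(150_000, 100_000))  # 0.5
```

Note what the formula demands: a tangible gain to plug into the numerator. Testing has no such number to offer, which is the crux of the argument that follows.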

So am I making a big deal out of nothing? I don’t think so.

Let’s use a common example: coupons and discounts. Marketers want you to believe that you’ll “save” money using coupons and buying when promotional discounts are in effect. In reality, you’d save money by not spending any. When you make a discounted purchase, you are not “saving” money, you are simply spending less. Splitting hairs, you say? Look at the bank account at the end of a shopping trip and you tell me. Did you spend or did you save?

Testing costs you, just like a shopping trip, and you might justify your actions by listing a set of benefits obtained or needs that were met, but not in money RETURNED. (A nitpicker’s note: a rebate is just a coupon, money returned after the fact.) The fundamental purpose of an ROI calculation is to determine how much revenue or profit will likely be generated as a result of particular activities and their associated investments. It helps people make business decisions, like whether a project is worth pursuing, or whether an effort is going to break even or lose the company money.

So back to the big deal. After a big shopping trip, do you give status reports and brag about how much you spent, or do you put a spotlight on how much you saved? Why? Simply put, either choice is manipulative. You are manipulating how the information will be perceived, and possibly used. Finance is all about manipulating numbers to tell a story. Which story? Whichever one you want to tell. This is the number one reason I changed my major in college from Finance to Management of Information Systems halfway through the pursuit of my Bachelor’s degree. I wanted to major in fact-finding and reporting, not storytelling.

Information about testing is predominantly used by stakeholders to make decisions. So I think it’s important to consider what decisions about testing could be influenced by twisted information. The most common misuse of ROI in testing is with regard to the marketing of testing tools and the use of test automation in general. Why do they do it? To get you to buy tools and invest in automation. What’s really happening? Your company is spending money on tools and investing in automation. What’s the return (revenue potential)? There is none. It costs. Sunk costs, in accounting terms. So am I saying don’t buy test tools or automate? NO! I’m suggesting we stop lying about the costs of testing and help stakeholders and teams figure out how to balance the investment in testing with the benefits it can provide. It’s called a cost/benefit (or cost-based) analysis. And while still quantitative in nature, it’s accurate about what testing actually is: a cost that delivers benefits, not a generator of revenue.
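To illustrate the distinction, here is a hypothetical cost/benefit tally for a test-automation effort. The line items and dollar figures are invented for illustration; the point is that a cost/benefit view can carry intangible benefits that the ROI formula has no slot for:

```python
# Hypothetical cost/benefit tally for a test-automation effort.
costs = {"tool licenses": 20_000, "automation engineer time": 80_000}
benefits = {
    "defects caught before release": 60_000,  # estimated avoided rework
    "shorter regression cycles": 25_000,      # estimated time saved
    "stakeholder confidence": None,           # real, but intangible: no dollar slot
}

total_cost = sum(costs.values())
quantified_benefit = sum(v for v in benefits.values() if v is not None)
intangibles = [k for k, v in benefits.items() if v is None]

print(total_cost, quantified_benefit, intangibles)
# 100000 85000 ['stakeholder confidence']
```

Nothing here is revenue. The discussion this table provokes, what do we expect for what we spend, is the one the ROI framing shuts down.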

Today there is so much pressure in software development projects to reduce costs and timeframes, and a common target is testing. Testing is expensive and time consuming, and often the benefits aren’t understood, aren’t seen, don’t have dollars associated with them, or they truly aren’t there. When this happens, executives don’t want to fix testing, they want it to impact the schedule and the bottom line less. So we get mandates to do less testing, leaving the scope and quality goals intact. There’s no such thing as doing more (or even the same) amount of testing with less. You do less with less. But when you can be more efficient by doing the key testing first, reducing redundancy, or automating effectively, the benefit received from doing that testing may cost less in terms of time and expenditure. There’s the real story.

Testing costs! More testing costs more. And less testing often costs more than more testing! If you follow that logic, help me stomp out the misnomer of the ROI of testing. Then you can have a real conversation about what testing needs to deliver and what the company or project team is willing to do to get the benefits of the testing they fund.

  • Pawel Dolega

    Bollocks! Obviously there is ROI in testing. Lots of organizations have already calculated the average cost per defect and the average % of defects found for different kinds of software and different types of testing activity. On top of that, there are values to be calculated in your environment (which may or may not differ from industry standards).

    The point is: most things in companies can, in the end, be calculated in money (and ROI), and testing is obviously one of them (though if you don’t have metrics you may have a hard time calculating approximate values).

    • dhtester

      I didn’t suggest at all there’s no benefit related to the investment in testing. I’m advocating for different language to describe it because testing provides benefits that are not as flat as the pure financial ROI calculation. There are tangible and intangible benefits of testing. I’m simply suggesting that testing is a “cost” decision, not a revenue generating proposition. And therefore, it makes more sense to use the COST/BENEFIT calculation than the ROI calculation when talking about investing in testing.

      And I absolutely agree with you that most things can be calculated in terms of money somehow, some way. But when project teams are trying to figure out how much to “spend” on testing, I really, really, really want them to be able to describe what they expect from the testing in terms of the benefits it can provide (which I’d like them to describe with something other than “please spend less on it”).

    • http://about.me/scott.barber Scott Barber

      Dude, you’re missing the point. It’s not $ and value – it’s about misuse of terminology. What you are talking about (I think) maps to a thing called Social ROI (SROI) — look it up… I suspect that would help to clarify the distinction here.

  • waddle

    Quality is measurable: % of bug per line is measurable, the C.R.A.P. index exists, code coverage is measurable, the duration of acceptance relative to testing duration is measurable, and the costs of bug fixes in each environment are measurable.
    With all this, you can calculate an ROI of testing.
    You’re mixing planning and budget issues with ROI calculation. Sometimes it’s better to have poorer quality and a shorter TTM, so the “overall” ROI will be better with nearly no testing. Fortunately, most of the time, the *calculated* ROI is better with widespread testing.

    • greivin

      % of bug per line? -_-

      • Waddle

        Yes: after development, on a significant project, and for a specific language, you can count the bugs in your trackers and the number of LoC, and then compare that with test effort.

        • Baustin213

          But each bug is different. You could have a great % of bug per line, but you missed a huge, costly bug. That percentage alone is not a good indication of ROI or quality of your product.

    • dhtester

      Thanks for your thoughts. I certainly did not intend to imply that quality is not measurable. Just simply pointing out that ROI means something VERY different to executives funding projects and testing (therefore THEY mix up and misinterpret the notion of ROI in testing related to planning & budget – so I completely agree with you there!). And that’s why I prefer to use cost/benefit analysis (instead of ROI) to describe the value of testing (simply put, the measurable benefits realized from the expenditures related to quality initiatives and testing efforts).

    • http://about.me/scott.barber Scott Barber

      No. You can calculate a real or perceived *value* (i.e., benefit) of testing, but that is not the same as ROI. ROI is a business/accounting term with a particular meaning that does not apply here. Don’t believe us? Start a business of your own, try to convince your accountant to classify your “investment” in test automation tools your way, and see what happens.

  • Josifov Gjorgi

    I dare you to ship an untested billing module for your system, and then you will see what the ROI of software testing is.
    What you don’t understand is that building software isn’t just coding a web page and shipping it to clients; it’s a complex process in which coding is just 7% of all the work.
    Please read some introductory-level books on Software Engineering and you will never write blog posts like these.

    • dhtester

      Yikes!! I never suggested not testing!! Not sure what I wrote that implied that. I do understand quite a lot about building, testing, and supporting software systems – 30 years of experience. I also understand how testing efforts get squeezed and how test teams get pressured to PROVE their worth. In some organizations, management wants to see tangible ROI. Well, I don’t believe that testing delivers only tangible, countable ROI (which is a dollar value calculation). I believe that many of the benefits of testing are intangible (or qualitative), which makes them much harder (but not impossible) to measure when someone is looking for proof. I’m trying to address this issue by helping folks expose the VALUE and BENEFIT proposition of testing to help management teams decide how much to spend on testing and quality efforts. When stakeholder teams understand the value and benefits they are getting from testing, I believe (or at least hope!) they will manage testing efforts differently.

      • Josifov Gjorgi

        With such experience, you can’t judge how much testing some software system needs?
        Also, in books about Software Engineering, there are ratios of development/testing depending on the type of system. For example, in a critical system this ratio is 40%/60%; in a business system it is 60%/40%.
        Testing is the process of proving that the software you deliver is what the client wants and works correctly as specified in the requirements, and the VALUE and BENEFIT is that the client is happy with the product and wants more of it :)
        How much to spend on testing is a question that can be answered only by software veterans who work in the IT industry. To know how to estimate how long testing (or any work) will take, you need experience in that area. You can’t ask a manager from Toyota/GM or another auto company how much to spend on testing your company’s ERP software; you ask a software engineer who has spent 5+ years in software development :)
        But if management doesn’t want to spend money on veterans and only likes juniors, then that is another issue.

        • dhtester

          You are so right – judging how much testing is needed is a very complex endeavor. And to that end, I would never use an external ratio or rule of thumb as my only input, even if it were an industry norm. I would perform an analysis of all the relevant context factors – which include industry/domain, technical, staff (skill level, knowledge, related experience, etc.), client/user, competition (if any), compliance, key risks, and ultimately the complexity of the deliverables. Then all of that has to be weighed against the risk the team is willing to accept and the constraints of the project (financial and time related). Even when I have “judged” what testing I’m advocating for, I’m still not going to describe it in terms of ROI calculations. I will describe it in terms of costs and benefits (for these testing costs, we are looking to realize these benefits). For me, it changes the face of the discussion and enhances the engagement of the stakeholder team enough to make the distinction worth the effort. So I’m just sharing my experience. If helpful, great! If not, please ignore. :)

        • http://about.me/scott.barber Scott Barber

          Dude, if you want to quote books, why not quote the one that says that testing can never prove the absence of a bug, only the presence of one?

          Regardless, how much to spend on testing is a stupid question in the first place as it implies that developing and testing are separate and distinguishable activities. If you’ve ever written a line of code, you know that isn’t the case — for example, every time I execute code as a developer, I’m testing whether or not it did what I intended. What is the cost of that? What is the cost of *never* doing that? How often should I do it? blah, blah, blah… who cares?!?

          Hire competent, responsible people, make them aware of and accountable for achieving a mission, give them parameters to work with (like budget and timeline) and get out of the way. If we gave all the time back to the teams that we take trying to calculate things like ROI, we’d end up with better software anyway.

          • Josifov Gjorgi

            I agree 100%, but from what I have read on this blog, my conclusion is that testing isn’t very valuable because there is no ROI, so we can skip some types of testing.
            This is why documentation for software isn’t updated, and why many software systems lack it: because someone many years ago (and now in Agile) said there isn’t any value in documentation. Some people in management will take this for granted, and then you have buzzwords like “testing doesn’t have any ROI,” and then you have millions of blogs saying that testing doesn’t have ROI. (Examples of these are RoR, NodeJS, you name it.)
            A good QA knows how to test software and knows how much time it will take.
            And I would like to see blogs saying that testing vehicles has no ROI.

  • Gregory Mooney

    I think the issue here is semantics.

    ROI to me means there is an actual (down to the penny) amount of cash returned over time on cash invested.

    ROI in testing to me is the future cost savings of not having a defect make it to release since we all know that bugs after release are 10 times or more costly than they are before release. But that’s just guessing.

    In my opinion, the point is that there is no “real” dollar value for the ROI in testing. Of course, we could argue that there is ROI for testing if you believe in imaginary numbers, but then you are creating a formula based on what you think the bug you just found would have cost your company after release. It’s not a real ROI, which would consist of a real number.

    Either way, I don’t think Dawn is devaluing the job of the tester or the testing itself.

    • dhtester

      Thanks Greg! Yes, it is really a semantic issue. Most qualitative things are very difficult to quantitatively measure. That doesn’t mean they can’t be measured … or valued. There’s big return for investing in testing, it’s just not the same return financial folks are thinking about. I used to be a Finance major and worked in an accounting office for 5 years running financials for a non-profit board of directors. I have some insight into the real difference between ROI and Cost-Benefit Analysis. And I have my preference about which one I believe test teams should use when communicating with stakeholders and executives.

  • Robert Hodges

    I’m having some difficulty following this argument. Testing ROI is hard to calculate in some cases, just as it’s hard to compute ROI for written designs, code reviews, and many other things software engineers do. However, that’s different from saying that there is no return on investment. If there is truly no return on investment, why do so many businesses invest so much in testing?

    • dhtester

      Thanks for sharing your thoughts Robert. I’m not suggesting there’s no value or benefit to investing in testing, I’m suggesting we use more accurate financial terminology to describe it. ROI is a very specialized quantitative analysis calculation targeted at determining if a financial expenditure will provide enough revenue or profit to be worth pursuing. Testing doesn’t provide revenue or profit directly. But it can contribute to revenue/profit margins indirectly. So I’m advocating the use of the Cost/Benefit calculation (instead of ROI) to describe/illustrate/measure the value and “benefit” received from the “costs” incurred (or invested) in testing.

      ROI vs Cost/Benefit Analysis may be a trivial distinction to those of us in testing groups, but at the senior executive and stakeholder level – I believe the distinction matters. It mostly has to do with appropriate expectation setting and alignment of responsibility and accountability. So I’m eager to hear from folks who have shifted their terminology – did it help? Hurt? Not matter at all? I’m curious too!

      • Agile Lasagna

        I thought the point of automated testing is to reduce the amount of time it takes to release the software. A company invests in automated testing to reduce the cost of a release. So the investment results in reduced costs, which could be considered an ROI.

        • http://about.me/scott.barber Scott Barber

          Reducing costs does not increase revenue. It *might* increase profit margin, but that is a different calculation entirely.

          Remember… Accounting isn’t actually math, it just looks like math. The rules are strange. ROI is an accounting term. Whether or not you ultimately “spend less” by “investing in testing” is a classic Cost/Benefit Calculation, not an ROI calculation.

          Before I got into IT, I got a degree in Civil Engineering. What we are talking about here is the same as asking “What’s the ROI of more expensive bolts for the bridge?”

          *If* the more expensive bolts mean that the bridge will last longer, or can be built faster, or support more cars, there may be *benefit* to using more expensive bolts. HOWEVER, no matter what bolts I use, my company will get paid the same for successful completion of the project, thus no ROI. (Conversely, if I try to *save* money by getting super cheap bolts that are of poor quality and the bridge falls, kills people, gets the company sued, and my licence revoked, ultimately, I’ll have lost more than I “saved”). Still no ROI. Just Costs & Benefits (or not).

  • http://www.pmhut.com/ PM Hut

    If programmers tested the software (and not just their code) before handing it over to the testers, like in the good old days, then testing wouldn’t cost as much and projects would definitely finish faster!

    • dhtester

      That’s the theory for sure! Typically when developers and programmers find and fix issues related to functionality and code stability before handoffs, downstream testing tends to go more smoothly. From a cost/benefit perspective however, this may simply be a cost shift. When it matters who pays for testing, sometimes there is a lot of resistance to testing in development — but there are real, tangible benefits to doing so. Those are precisely the things I hope people are sharing with their project managers and stakeholders to get more buy-in for developer/unit testing (or in my nirvana testing universe – get teams to allocate time and budget for testing AND development together – as it rarely serves users, quality, or teams to separate them).

  • Martin Hynie

    Interesting article Dawn, and even more interesting reactions in the comments. It is always quite fascinating to me how so many responses to articles that discuss value and testing carry a very personal belief system. They often speak of personal experiences and points of view, but present them as factual. In some ways, it is somewhat encouraging to me that our community is searching for patterns that can be shared and applied across multiple sectors, but to argue that value, and the very nature of ROI, can be generalized is a very, very difficult hypothesis to defend. Let’s look at some of the elements we would use in our “standard calculations”:

    - Average cost of bug… when one bug can effectively cost more than the previous 1000 bugs combined.
    - Average cost of check implemented… which of course depends not only on the automation framework, but also the complexity/age of the product in test, the skills of the coders and scripters, the sustainability of the code under test… etc
    - Average cost of test executed… I think we know how many variables can impact this range
    - Average cost of released defect and associated fix… a spelling mistake vs a space program being cancelled.
    - Average cost to reputation and potential business for the companies at risk (usually more than one).
    - Average number of bugs or tests against lines of code… probably need to then create categories based on age of code, frameworks, design patterns, maintenance plan commitments, etc

    In other words, I agree with Dawn. It is important to consider how you spend your money (by all means, if you have found some predictive measurements that help you make decisions about how to spend your money then YES… use these), but know that you are choosing to
    - spend money *here*
    - because you are worried about *this*
    - and you think that by applying *this approach*
    - you can possibly help discover potential problems *there*
    - and help protect *this investment*…

    If you believe in the scientific method, then do not seek out magic formulas and shortcuts. Be a scientist! Learn and study your unique system under test. Become an expert on what information is available and also how that information tends to evolve with each iteration. Is it predictable?

    Testing is all about learning about something that is valuable to someone. Help them make informed decisions… and one of those decisions is “Do I need to learn more to make my decisions? Do I need to test more to make informed decisions?”

    Cheers and happy testing

    • dhtester

      Martin, thanks for extending the thread. Nothing about this is simple or should have blanket ratios or rules of thumb applied to them. I love your scientific approach to the problem and the model you shared. I will refer to it in future discussions … thanks!

      I suggest a similar model when folks focus on counting test cases. Test cases can’t be normalized (in terms of size, span-depth-coverage-scope, level of effort to document, execute, analyze, debug, maintain, etc.), therefore counting them for the purpose of determining testing time or costs is risky and likely to be error prone. Further it’s quantitative instead of qualitative, so it can’t answer any quality related questions about the testing. Like, are there enough tests? Are they the right tests? Will they expose bugs related to … ? Lots of people get wrapped up in trying to answer the eternal question:

      “How many test cases does it take to cover a requirement?”

      Hmmm. Sounds like a great title for a blog post!

      • Griffin Jones

        Thank you for publishing this. For me, you (and Scott) have been the best people to articulate the misunderstanding / problem / conflict / misuse / obliviousness of: ROI versus cost/benefit; quantitative versus qualitative analysis; and Social ROI. Yes, it is a semantics problem – but bad semantics leads to bad communication and analysis. This is exactly why I have to clash with badly directed and implemented metrics programs.

        • dhtester

          Thanks for validating the message Griffin!


  • Rick Brannon CSM

    Hi all, I found this to be an interesting conversation on a difficult topic. However, my question is: why ROI and not opportunity cost?
    It would appear that both are subjective measures in this particular case and hard to quantify. I also find Scott Barber’s views on accounting refreshing, as GAAP isn’t a standard, and even though ROI is ratio analysis it is still not an easy (non-subjective) measure.
    I believe the cost of testing is truly a variable cost and not a fixed cost as it is depicted here, so the measure of true ROI is a range and not a succinct number.
    Just my two cents, worth anything from .01 – .03 depending on the cost of money :).

    • http://about.me/scott.barber Scott Barber

      I like to compare it to trying to calculate the ROI of auto insurance. Auto insurance is a total waste of $ if you never have an accident, but can save you *crazy* $ if you’re accident prone (or end up at fault when there are injuries/fatalities involved). I *guess* if you had sufficient data, you could calculate the “statistical likelihood of achieving an ROI of x% based on age, gender, car type, zip code, and driving history…” Oh, wait, that’s what insurance companies do to set your rate — and somehow, I think if we had that kind of time and $ available, we’d get far more benefit by spending that $ on development/testing than we would by spending it on trying to calculate the ROI of testing.

      Just sayin’.

    • dhtester

      Hi Rick,

      Thanks for joining in. Opportunity cost is an interesting calculation to consider, but again, it’s often tied to income or revenue generated by choosing A over B (like ROI). However, it *can* be tied to benefits, but they may be very intangible/subjective. For example, what’s the opportunity cost of spending $1 on an apple vs. an orange? Well, you get the apple and forgo the orange. That’s the opportunity cost -> no orange. A cost/benefit analysis (CBA) ties the expenditure to the benefits received or perceived. So the apple gets us X and the orange gets us Y. Opportunity cost pits X against Y, whereas CBA doesn’t. If you want X benefit, spend money on apples. If you want Y benefit, spend money on oranges. If you want both, spend money on both.

      My goal with provoking this discussion is to highlight the fact that these types of calculations are fundamentally imperfect for the task (due to things like variability and subjectivity, etc.), but CBA at least gets the *benefits* discussion on the table – and that helps everyone understand the value/benefit proposition the funded testing needs to target. I think that’s a wonderful starting point.


  • Kobi Halperin

    Now – if I find a bug, and Dev fixes it and inserts 2 new bugs along with it – what would the ROI be??? :-)

    • dhtester

      Ha! Right!

      Testing = Bug
      Bug Fix = Bug+2
      Quality = (Quality+1)+(Quality-2)
      Time = ((FindBugTime)+(FileBugTime)+(DebugTime)+(FixTime))x3

      So, Negative ROI in terms of time/$$$/quality. Just one reason not to use ROI. If one of the “benefits” of testing is *information* – then we have:

      Information = (OrigBug)+(NewBug1)+(NewBug2)+(BugFixAdded2Bugs)
      Quality improvement/risk reduction opportunities = 3
      Maintainability Metrics = (1Fix == Bug*2)

      We could have fun with this! ;-)

  • http://www.kiwiqa.com/ kiwiqa

    Yes, I agree with your statement, but not everything is the same as you describe…

  • Martin K. Schröder

    It’s like saying there is no ROI in software development. Testing is an integral part of software development. And it reduces future spending on bugs that could have been avoided by having a comprehensive set of tests that the software must satisfy before a release.

    You may think that there is no ROI in testing, but even the best programmers make mistakes – they forget something or miscalculate. Tests are a tool for catching errors that may later lead to difficult-to-track bugs, which may be even more costly than whatever you save by omitting the tests.

    • Baustin213

      I don’t think anyone is arguing that testing is not important or valuable (or at least I hope not). The point is that you can’t put an exact price tag or financial gain on running a test or finding a bug.