37 responses

  1. Pawel Dolega
    December 4, 2013

    Bollocks! Obviously there is ROI in testing. Lots of organizations have already calculated the average cost per defect and the average % of defects found for different kinds of software and different types of testing activity. On top of that, there are values to be calculated in your own environment (which may or may not differ from industry standards).

    The point is – most things in companies can ultimately be expressed in money (and ROI), and testing is obviously one of them (though if you don’t have metrics, you may have a hard time calculating even approximate values).

    • dhtester
      December 5, 2013

      I didn’t suggest at all there’s no benefit related to the investment in testing. I’m advocating for different language to describe it because testing provides benefits that are not as flat as the pure financial ROI calculation. There are tangible and intangible benefits of testing. I’m simply suggesting that testing is a “cost” decision, not a revenue generating proposition. And therefore, it makes more sense to use the COST/BENEFIT calculation than the ROI calculation when talking about investing in testing.

      And I absolutely agree with you that most things can be calculated in terms of money somehow, some way. But when project teams are trying to figure out how much to “spend” on testing, I really, really, really want them to be able to describe what they expect from the testing (in terms of the benefits it can provide — which I’d like them to describe with something other than “please spend less on it”).

    • Scott Barber
      December 7, 2013

      Dude, you’re missing the point. It’s not $ and value – it’s about misuse of terminology. What you are talking about (I think) maps to a thing called Social ROI (SROI) — look it up… I suspect that would help to clarify the distinction here.

  2. waddle
    December 4, 2013

    Quality is measurable: the % of bugs per line is measurable, the C.R.A.P. index exists, code coverage is measurable, the duration of acceptance relative to testing duration is measurable, and the cost of bug fixes in each environment is measurable.
    With all of this, you can calculate an ROI for testing.
    You’re mixing planning and budget issues with the ROI calculation. Sometimes it’s better to have poorer quality and a shorter TTM, so the “overall” ROI will be better with nearly no testing. Fortunately, most of the time, the *calculated* ROI is better with widespread testing.

    • greivin
      December 4, 2013

      % of bug per line? -_-

      • Waddle
        December 5, 2013

        Yes, after development, on a significant project, and for a given language, you can count the bugs in your tracker and the number of LoC, and then compare that with the test effort.

      • Baustin213
        December 5, 2013

        But each bug is different. You could have a great % of bugs per line but still have missed a huge, costly bug. That percentage alone is not a good indication of the ROI or quality of your product.

    • dhtester
      December 5, 2013

      Thanks for your thoughts. I certainly did not intend to imply that quality is not measurable. I’m simply pointing out that ROI means something VERY different to the executives funding projects and testing (and therefore THEY mix up and misinterpret the notion of ROI in testing related to planning & budget – so I completely agree with you there!). And that’s why I prefer to use cost/benefit analysis (instead of ROI) to describe the value of testing (simply put, the measurable benefits realized from the expenditures related to quality initiatives and testing efforts).

    • Scott Barber
      December 7, 2013

      No. You can calculate a real or perceived *value* (i.e. benefit) of testing, but that is not the same as ROI. ROI is a business/accounting term with a particular meaning that does not apply here. Don’t believe us? Start a business of your own, try to convince your accountant to classify your “investment” in test automation tools your way, and see what happens.

  3. Josifov Gjorgi
    December 4, 2013

    I dare you to ship an untested billing module for your system, and then you will see what the ROI of software testing is.
    What you don’t understand is that building software isn’t just coding a web page and shipping it to clients; it’s a complex process in which coding is just 7% of all the work.
    Please, read some introductory books on Software Engineering and you will never write blog posts like this again

    • dhtester
      December 5, 2013

      Yikes!! I never suggested not testing!! Not sure what I wrote that implied that. I do understand quite a lot about building, testing, and supporting software systems – 30 years of experience. I also understand how testing efforts get squeezed and how test teams get pressured to PROVE their worth. In some organizations, management wants to see tangible ROI. Well, I don’t believe that testing delivers only tangible, countable ROI (which is a dollar value calculation). I believe that many of the benefits of testing are intangible (or qualitative), which makes them much harder (but not impossible) to measure when someone is looking for proof. I’m trying to address this issue by helping folks expose the VALUE and BENEFIT proposition of testing to help management teams decide how much to spend on testing and quality efforts. When stakeholder teams understand the value and benefits they are getting from testing, I believe (or at least hope!) they will manage testing efforts differently.

      • Josifov Gjorgi
        December 5, 2013

        With that much experience, can’t you judge how much testing a software system needs?
        Also, in books about Software Engineering there are development/testing ratios depending on the type of system. For example, in a critical system the ratio is 40%/60%; in a business system it is 60%/40%.
        Testing is the process of proving that the software you deliver is what the client wants and works correctly as specified in the requirements, and the VALUE and BENEFIT is that the client is happy with the product and wants more of it :)
        How much to spend on testing is a question that can only be answered by software veterans who work in the IT industry. To know how to estimate how long testing (or any work) will take, you need to have experience in that area. You can’t ask a manager from Toyota/GM or another auto company how much to spend on testing your company’s ERP software; you ask a software engineer who has spent 5+ years in software development :)
        But if management doesn’t want to spend money on veterans and only likes juniors, then that is another issue

      • dhtester
        December 5, 2013

        You are so right – judging how much testing is needed is a very complex endeavor. And to that end, I would never use an external ratio or rule of thumb as my only input, even if it were an industry norm. I would perform an analysis of all the relevant context factors – which include industry/domain, technical, staff (skill level, knowledge, related experience, etc.), client/user, competition (if any), compliance, key risks and ultimately, the complexity of the deliverables. Then all that has to be weighed against the risk the team is willing to accept and the constraints of the project (financial and time related). When I have “judged” what testing I’m advocating for, I’m still not going to describe it in terms of ROI calculations. I will describe it in terms of costs and benefits (for these testing costs, we are looking to realize these benefits). For me, it changes the face of the discussion and enhances the engagement of the stakeholder team enough to make the distinction worth the effort. So I’m just sharing my experience. If helpful, great! If not, please ignore. :)

      • Scott Barber
        December 7, 2013

        Dude, if you want to quote books, why not quote the one that says that testing can never prove the absence of a bug, only the presence of one?

        Regardless, how much to spend on testing is a stupid question in the first place as it implies that developing and testing are separate and distinguishable activities. If you’ve ever written a line of code, you know that isn’t the case — for example, every time I execute code as a developer, I’m testing whether or not it did what I intended. What is the cost of that? What is the cost of *never* doing that? How often should I do it? blah, blah, blah… who cares?!?

        Hire competent, responsible people, make them aware of and accountable for achieving a mission, give them parameters to work with (like budget and timeline) and get out of the way. If we gave all the time back to the teams that we take trying to calculate things like ROI, we’d end up with better software anyway.

      • Josifov Gjorgi
        December 7, 2013

        I agree 100%, but from what I have read on this blog, I get the conclusion that testing isn’t very valuable because there is no ROI, so we can skip some types of testing.
        This is why documentation for software isn’t updated, and why many software systems lack it: because someone many years ago (and now in Agile) said there isn’t any value in documentation. Some stupid people in management will take this for granted, and then you have buzzwords like “testing doesn’t have any ROI” or something like that, and then you have millions of blogs saying that testing doesn’t have ROI. (Examples of these are RoR, NodeJS, you name it.)
        A good QA knows how to test software and knows how much time it will take.
        And I would like to see blogs saying that testing vehicles doesn’t have ROI

  4. Gregory Mooney
    December 4, 2013

    I think the issue here is semantics.

    ROI to me means there is an actual, down-to-the-penny number: cash returned over time on cash invested.

    ROI in testing, to me, is the future cost savings of not having a defect make it to release, since we all know that bugs found after release are 10 times or more as costly as they were before release. But that’s just guessing.

    In my opinion, the point is that there is no “real” dollar value for the ROI in testing. Of course, we could argue that there is ROI for testing if you believe in imaginary numbers, but you are creating a formula based on what you think that bug you just found would have cost your company after release. It’s not a real ROI, which would consist of a real number.
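
    To make that concrete, here’s a rough sketch of the kind of formula I mean, where every number is a guess (which is exactly the problem):

      # A sketch of the "imaginary" testing ROI formula described above.
      # Every input is an assumption, not an accounting fact.
      bugs_found_before_release = 40      # assumed count from the bug tracker
      avg_fix_cost_pre_release = 500      # assumed dollars per bug fixed before release
      post_release_multiplier = 10        # the "10x more costly after release" rule of thumb
      testing_cost = 60_000               # assumed total spend on testing

      # Estimated cost avoided by catching those bugs before release
      avoided_cost = bugs_found_before_release * avg_fix_cost_pre_release * post_release_multiplier

      # The resulting "ROI" is only as real as the guesses that feed it
      estimated_roi = (avoided_cost - testing_cost) / testing_cost
      print(f"Estimated 'ROI': {estimated_roi:.0%}")  # ~233%, built entirely on assumptions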

    Either way, I don’t think Dawn is devaluing the job of the tester or the testing itself.

    • dhtester
      December 5, 2013

      Thanks Greg! Yes, it is really a semantic issue. Most qualitative things are very difficult to measure quantitatively. That doesn’t mean they can’t be measured … or valued. There’s a big return for investing in testing; it’s just not the same return financial folks are thinking about. I was a Finance major and worked in an accounting office for 5 years running financials for a non-profit board of directors, so I have some insight into the real difference between ROI and Cost-Benefit Analysis. And I have my preference about which one I believe test teams should use when communicating with stakeholders and executives.

  5. Robert Hodges
    December 4, 2013

    I’m having some difficulty following this argument. Testing ROI is hard to calculate in some cases, just as it’s hard to compute ROI for written designs, code reviews, and many other things software engineers do. However, that’s different from saying that there is no return on investment. If there is truly no return on investment, why do so many businesses invest so much in testing?

    • dhtester
      December 5, 2013

      Thanks for sharing your thoughts Robert. I’m not suggesting there’s no value or benefit to investing in testing, I’m suggesting we use more accurate financial terminology to describe it. ROI is a very specialized quantitative analysis calculation targeted at determining if a financial expenditure will provide enough revenue or profit to be worth pursuing. Testing doesn’t provide revenue or profit directly. But it can contribute to revenue/profit margins indirectly. So I’m advocating the use of the Cost/Benefit calculation (instead of ROI) to describe/illustrate/measure the value and “benefit” received from the “costs” incurred (or invested) in testing.

      ROI vs Cost/Benefit Analysis may be a trivial distinction to those of us in testing groups, but at the senior executive and stakeholder level – I believe the distinction matters. It mostly has to do with appropriate expectation setting and alignment of responsibility and accountability. So I’m eager to hear from folks who have shifted their terminology – did it help? Hurt? Not matter at all? I’m curious too!

      • Agile Lasagna
        December 7, 2013

        Dawn,

        I thought the point of automated testing is to reduce the amount of time it takes to release the software. A company invests in automated testing to reduce the cost of a release. So the investment results in reduced costs, which could be considered an ROI.

      • Scott Barber
        December 7, 2013

        Reducing costs does not increase revenue. It *might* increase profit margin, but that is a different calculation entirely.

        Remember… Accounting isn’t actually math, it just looks like math. The rules are strange. ROI is an accounting term. Whether or not you ultimately “spend less” by “investing in testing” is a classic Cost/Benefit Calculation, not an ROI calculation.

        Before I got into IT, I got a degree in Civil Engineering. What we are talking about here is the same as asking “What’s the ROI of more expensive bolts for the bridge?”

        *If* the more expensive bolts mean that the bridge will last longer, or can be built faster, or support more cars, there may be *benefit* to using more expensive bolts. HOWEVER, no matter what bolts I use, my company will get paid the same for successful completion of the project, thus no ROI. (Conversely, if I try to *save* money by getting super cheap bolts that are of poor quality and the bridge falls, kills people, gets the company sued, and my licence revoked, ultimately, I’ll have lost more than I “saved”). Still no ROI. Just Costs & Benefits (or not).
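
        To put made-up numbers on the bolt example (a toy sketch, nothing more):

          # Toy numbers for the bridge-bolt example; every value here is invented for illustration.
          contract_price = 10_000_000  # the company gets paid this no matter which bolts it uses

          options = {
              "cheap bolts": {"bolt_cost": 50_000, "expected_failure_loss": 2_000_000},  # assumed lawsuit/rework exposure
              "good bolts": {"bolt_cost": 150_000, "expected_failure_loss": 0},
          }

          for name, o in options.items():
              revenue = contract_price  # identical either way, so no extra "return" to attribute to the bolts
              total_cost = o["bolt_cost"] + o["expected_failure_loss"]  # but the costs (and expected losses) differ
              print(f"{name}: revenue={revenue:,}  expected total cost={total_cost:,}")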

  6. PM Hut
    December 5, 2013

    If programmers tested the software (and not just their own code) before handing it over to the testers, like in the good old days, then testing wouldn’t cost as much and projects would definitely finish faster!

    • dhtester
      December 5, 2013

      That’s the theory for sure! Typically when developers and programmers find and fix issues related to functionality and code stability before handoffs, downstream testing tends to go more smoothly. From a cost/benefit perspective however, this may simply be a cost shift. When it matters who pays for testing, sometimes there is a lot of resistance to testing in development — but there are real, tangible benefits to doing so. Those are precisely the things I hope people are sharing with their project managers and stakeholders to get more buy-in for developer/unit testing (or in my nirvana testing universe – get teams to allocate time and budget for testing AND development together – as it rarely serves users, quality, or teams to separate them).

  7. Martin Hynie
    December 7, 2013

    Interesting article Dawn, and even more interesting reactions in the comments. It is always quite fascinating to me how so many responses to articles that discuss value and testing carry a very personal belief system. They often speak of personal experiences and points of view, but present them as factual. In some ways, it is encouraging to me that our community is searching for patterns that can be shared and applied across multiple sectors, but to argue that value, and the very nature of ROI, can be generalized is a very, very difficult hypothesis to defend. Let’s look at some of the elements we would use in our “standard calculations”:

    - Average cost of bug… when one bug can effectively cost more than the previous 1000 bugs combined.
    - Average cost of check implemented… which of course depends not only on the automation framework, but also on the complexity/age of the product under test, the skills of the coders and scripters, the sustainability of the code under test… etc
    - Average cost of test executed… I think we know how many variables can impact this range
    - Average cost of released defect and associated fix… a spelling mistake vs a space program being cancelled.
    - Average cost to reputation and potential business for the companies at risk (usually more than one).
    - Average number of bugs or tests against lines of code… probably need to then create categories based on age of code, frameworks, design patterns, maintenance plan commitments, etc

    In other words, I agree with Dawn. It is important to consider how you spend your money (by all means, if you have found some predictive measurements that help you make decisions about how to spend your money then YES… use these), but know that you are choosing to
    - spend money *here*
    - because you are worried about *this*
    - and you think that by applying *this approach*
    - you can possibly help discover potential problems *there*
    - and help protect *this investment*…

    If you believe in the scientific method, then do not seek out magic formulas and shortcuts. Be a scientist! Learn and study your unique system under test. Become an expert on what information is available and also on how that information tends to evolve with each iteration. Is it predictable?

    Testing is all about learning about something that is valuable to someone. Help them make informed decisions… and one of those decisions is “Do I need to learn more to make my decisions? Do I need to test more to make informed decisions?”

    Cheers and happy testing
    Martin

    • dhtester
      December 7, 2013

      Martin, thanks for extending the thread. Nothing about this is simple or should have blanket ratios or rules of thumb applied to them. I love your scientific approach to the problem and the model you shared. I will refer to it in future discussions … thanks!

      I suggest a similar model when folks focus on counting test cases. Test cases can’t be normalized (in terms of size, span-depth-coverage-scope, level of effort to document, execute, analyze, debug, maintain, etc.), so counting them for the purpose of determining testing time or costs is risky and likely to be error prone. Further, it’s quantitative instead of qualitative, so it can’t answer any quality-related questions about the testing, like: are there enough tests? Are they the right tests? Will they expose bugs related to … ? Lots of people get wrapped up in trying to answer the eternal question:

      “How many test cases does it take to cover a requirement?”

      Hmmm. Sounds like a great title for a blog post!
      Cheers,
      Dawn

      • Griffin Jones
        December 8, 2013

        Dawn,

        Thank you for publishing this. For me, you (and Scott) have been the best people at articulating the misunderstanding / problem / conflict / misuse / obliviousness around ROI versus cost/benefit, quantitative versus qualitative analysis, and Social ROI. Yes, it is a semantics problem – but bad semantics leads to bad communication and analysis. This is exactly why I have to clash with badly directed and implemented metrics programs.
        Griffin

      • dhtester
        December 12, 2013

        Thanks for validating the message Griffin!
        Cheers,
        Dawn

  8. Rick Brannon CSM
    December 9, 2013

    Hi all, I found this to be an interesting conversation on a difficult topic. However, my question is: why ROI and not opportunity cost?
    It would appear that both are subjective measures in this particular case and hard to quantify. I also find Scott Barber’s views on accounting refreshing, as GAAP isn’t a standard, and even though ROI is a ratio analysis, it is still not an easy (non-subjective) measure.
    I believe the cost of testing is truly a variable cost and not a fixed cost as it is depicted here, so the measure of true ROI is a range and not a succinct figure.
    Just my two cents, worth anything from .01 to .03 depending on the cost of money :).

    • Scott Barber
      December 9, 2013

      I like to compare it to trying to calculate the ROI of auto insurance. Auto insurance is a total waste of $ if you never have an accident, but can save you *crazy* $ if you’re accident prone (or end up at fault when there are injuries/fatalities involved). I *guess* if you had sufficient data, you could calculate the “statistical likelihood of achieving an ROI of x% based on age, gender, car type, zip code, and driving history…” Oh, wait, that’s what insurance companies do to set your rate — and somehow, I think if we had that kind of time and $ available, we’d get far more benefit by spending that $ on development/testing than we would by spending it on trying to calculate the ROI of testing.

      Just sayin’.

    • dhtester
      December 12, 2013

      Hi Rick,

      Thanks for joining in. Opportunity cost is an interesting calculation to consider, but again, it’s often tied to income or revenue generated by choosing A over B (like ROI). However, it *can* be tied to benefits, though they may be very intangible/subjective. For example, what’s the opportunity cost of spending $1 on an apple vs. an orange? Well, you get the apple and forgo the orange. That’s the opportunity cost -> no orange. A Cost/Benefit Analysis (CBA) ties the expenditure to the benefits received or perceived. So the apple gets us X and the orange gets us Y. Opportunity cost pits X against Y, whereas CBA doesn’t. If you want benefit X, spend money on apples. If you want benefit Y, spend money on oranges. If you want both, spend money on both.
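
      Here’s a toy sketch of the difference (the benefit numbers are completely made up):

        # Made-up numbers; the point is the shape of the two calculations, not the values.
        cost = 1.00             # each fruit costs $1
        benefit_apple = 3.0     # assumed benefit "X" from the apple
        benefit_orange = 5.0    # assumed benefit "Y" from the orange

        # Cost/Benefit Analysis: evaluate each option on its own terms
        net_benefit_apple = benefit_apple - cost    # 2.0 -> "spend on apples if you want X"
        net_benefit_orange = benefit_orange - cost  # 4.0 -> "spend on oranges if you want Y"

        # Opportunity cost: choosing the apple forgoes the orange's net benefit
        opportunity_cost_of_apple = net_benefit_orange  # 4.0 -> i.e., "no orange"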

      My goal with provoking this discussion is to highlight the fact that these types of calculations are fundamentally imperfect for the task (due to things like variability and subjectivity, etc.), but CBA at least gets the *benefits* discussion on the table – and that helps everyone understand the value/benefit proposition the funded testing needs to target. I think that’s a wonderful starting point.

      Cheers,
      Dawn

  9. Kobi Halperin
    December 10, 2013

    Now – if I find a bug, and Dev fixes it and inserts 2 new bugs along with it – what would the ROI be??? :-)

    • dhtester
      December 12, 2013

      Ha! Right!

      Testing = Bug
      Bug Fix = Bug+2
      Quality = (Quality+1)+(Quality-2)
      Time = ((FindBugTime)+(FileBugTime)+(DebugTime)+(FixTime))x3

      So, Negative ROI in terms of time/$$$/quality. Just one reason not to use ROI. If one of the “benefits” of testing is *information* – then we have:

      Information = (OrigBug)+(NewBug1)+(NewBug2)+(BugFixAdded2Bugs)
      Quality improvement/risk reduction opportunities = 3
      Maintainability Metrics = (1Fix == Bug*2)

      We could have fun with this! ;-)

  10. kiwiqa
    December 23, 2013

    Yes, I agree with your statement, but not everything is the same as you describe…

  11. Martin K. Schröder
    January 31, 2014

    It’s like saying there is no ROI in software development. Testing is an integral part of software development, and it reduces future spending on bugs that could have been avoided by having a comprehensive set of tests that the software must satisfy before a release.

    You may think that there is no ROI in testing, but even the best programmers make mistakes – they forget something or miscalculate. Tests are a tool for catching errors that may later lead to difficult-to-track bugs, which may be even more costly than whatever you save by omitting the tests.

    • Baustin213
      January 31, 2014

      I don’t think anyone is arguing that testing is not important or valuable (or at least I hope not). The point is that you can’t put an exact price tag or financial gain on running a test or finding a bug.

  12. Agile Lasagna
    June 20, 2014

    To get technical, ROI is based on cash flows. In this case, we’re talking about changes in cash flow. This is known as differential or incremental cash flow.

    When you use the time value of money to discount the differential cash flows, you calculate a more accurate ROI. It’s also more useful, because you usually aren’t investing from scratch; you’re interested in the potential financial impact on cash flows of an investment outflow.

    These differentials can include the reduction of costs, as the net cash flow goes up if costs go down.
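
    A minimal sketch of that calculation, with purely hypothetical cash flows and discount rate:

      # Hypothetical numbers: an up-front testing investment followed by yearly cost savings.
      investment = 100_000                               # year-0 outflow on test automation
      yearly_savings = [40_000, 40_000, 40_000, 40_000]  # assumed reduction in release costs, years 1-4
      discount_rate = 0.10                               # assumed cost of capital

      # Present value of the incremental (differential) cash flows; cost reductions raise net cash flow
      pv_of_savings = sum(cf / (1 + discount_rate) ** year
                          for year, cf in enumerate(yearly_savings, start=1))

      roi = (pv_of_savings - investment) / investment
      print(f"PV of savings: {pv_of_savings:,.0f}  ROI: {roi:.1%}")  # roughly 126,795 and 26.8%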

    ROI in the context of investing is a bit different. While it draws on the same principles, you’re working from different assumptions. In the investing case, you can buy or sell the investment at a moment’s notice. You generate additional return via diversification.

    In the case of a going concern like a company, calculating the ROI of an investment project can’t assume that. Also diversification tends to depress company profitability, as you have too many projects/products competing for the same resources.

    Not sure if that helps?
