There Ain’t No ROI in Software Testing

Using a return on investment calculation for testing (or test automation) is an error. It’s misuse. It’s kinda like my misuse of English grammar in the title up there … But I doubt you had any trouble understanding my meaning, and you may already understand my “joke.” Regardless, it’s still wrong.

Let’s clear something up first: There’s no ROI in testing unless you are selling testing services like outsourced testing. Only then does testing generate tangible, traceable, 100% correlated revenue. And therein lies a rub. ROI calculations are based on revenue-generating activities and, by definition, are a quantitative analysis method. We should use quantitative algorithms to assess tangible things, like the number of items sold or the amount of profit, and qualitative measures to evaluate intangible things, like quality (or the “total” value of testing).

So am I making a big deal out of nothing? I don’t think so.

Let’s use a common example: coupons and discounts. Marketers want you to believe that you’ll “save” money by using coupons and buying when promotional discounts are in effect. In reality, you save money by not spending any. When you make a discounted purchase, you are not “saving” money, you are simply spending less. Splitting hairs, you say? Look at the bank account at the end of a shopping trip and you tell me. Did you spend or did you save?

Testing costs you, just like a shopping trip, and you might justify your actions by listing a set of benefits obtained or needs that were met, but not in money RETURNED. (A nitpicker’s note: a rebate is just a coupon, money returned after the fact.) The fundamental purpose of an ROI calculation is to determine how much revenue or profit will likely be generated as a result of particular activities and their associated investments. It helps people make business decisions, like whether a project is worth pursuing or whether an effort is going to break even or lose the company money.
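
For reference, the calculation itself is simple; the catch is that it requires a revenue or profit figure to plug in. Here’s a minimal sketch, with every number invented:

    # Textbook ROI on a revenue-generating activity (figures invented).
    def roi(total_return, investment):
        """Net gain on an investment, as a fraction of the amount invested."""
        return (total_return - investment) / investment

    # Hypothetical: spend $10,000 on an ad campaign that brings in
    # $14,000 of traceable revenue.
    print(f"ROI: {roi(14_000, 10_000):.0%}")  # ROI: 40%

Testing has no revenue line to feed that formula, which is the whole point.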

So back to the big deal. After a big shopping trip, do you give status reports and brag about how much you spent, or do you put a spotlight on how much you saved? Why? Simply put, either choice is manipulative. You are manipulating how the information will be perceived, and possibly used. Finance is all about manipulating numbers to tell a story. Which story? Whichever one you want to tell. This is the number one reason I changed my major in college from Finance to Management of Information Systems halfway through the pursuit of my Bachelor’s degree. I wanted to major in fact-finding and reporting, not storytelling.

Information about testing is predominantly used by stakeholders to make decisions. So I think it’s important to consider what decisions about testing could be influenced by twisted information. The most common misuse of ROI in testing is in the marketing of testing tools and the use of test automation in general. Why do they do it? To get you to buy tools and invest in automation. What’s really happening? Your company is spending money on tools and investing in automation. What’s the return (revenue potential)? There is none. It costs. Sunk costs, in accounting terms. So am I saying don’t buy test tools or automate? NO! I’m suggesting we stop lying about the costs of testing and help stakeholders and teams figure out how to balance the investment in testing with the benefits it can provide. It’s called a cost/benefit (or cost-based) analysis. And while still quantitative in nature, it’s accurate about what is actually being measured.
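
To make that concrete, here’s roughly what a cost/benefit tally for an automation effort might look like. It’s only a sketch, and every figure is invented:

    # Hypothetical cost/benefit tally for a test-automation effort.
    # All figures are invented for illustration.
    costs = {
        "tool licenses": 20_000,
        "script development": 35_000,
        "maintenance (year 1)": 15_000,
    }
    benefits = {
        "regression hours avoided": 45_000,
        "earlier defect detection (est. rework avoided)": 30_000,
    }

    total_costs = sum(costs.values())
    total_benefits = sum(benefits.values())
    print(f"Benefit/cost ratio: {total_benefits / total_costs:.2f}")  # 1.07
    print(f"Net benefit: ${total_benefits - total_costs:,}")          # $5,000

Notice there’s no revenue anywhere in that tally, just expenditures weighed against the benefits they buy, and the intangible benefits still need to be argued for alongside the numbers.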

Today there is so much pressure in software development projects to reduce costs and timeframes, and a common target is testing. Testing is expensive and time consuming, and often the benefits aren’t understood, aren’t seen, don’t have dollars associated with them, or truly aren’t there. When this happens, executives don’t want to fix testing, they want it to impact the schedule and the bottom line less. So we get mandates to do less testing while leaving the scope and quality goals intact. There’s no such thing as doing more (or even the same amount of) testing with less. You do less with less. But when you can be more efficient by doing the key testing first, reducing redundancy, or automating effectively, the benefit received from that testing may cost less in terms of time and expenditure. There’s the real story.

Testing costs! More testing costs more. And less testing often costs more than more testing! If you follow that logic, help me stomp out the misnomer of the ROI of testing. Then you can have a real conversation about what testing needs to deliver and what the company or project team is willing to do to get the benefits of the testing they fund.

Comments

  1. Pawel Dolega says:

bollocks! Obviously there is ROI in testing. Lots of organizations have already calculated the average cost per defect and the average % of defects found for different kinds of software and different types of testing activity. On top of that, there are values to be calculated in your environment (which may or may not differ from industry standards).

Point is, most things in companies can in the end be calculated in money (and ROI), and testing is obviously one of them (though if you don’t have metrics, you may have a hard time calculating approximate values).

    • I didn’t suggest at all there’s no benefit related to the investment in testing. I’m advocating for different language to describe it because testing provides benefits that are not as flat as the pure financial ROI calculation. There are tangible and intangible benefits of testing. I’m simply suggesting that testing is a “cost” decision, not a revenue generating proposition. And therefore, it makes more sense to use the COST/BENEFIT calculation than the ROI calculation when talking about investing in testing.

      And I absolutely agree with you that most things can be calculated in terms of money somehow, some way. But when project teams are trying to figure out how much to “spend” on testing, I really, really, really want them to be able to describe what they expect from the testing (in terms of the benefits it can provide — which I’d like them to describe with something other than “please spend less on it”).

    • Dude, you’re missing the point. It’s not $ and value – it’s about misuse of terminology. What you are talking about (I think) maps to a thing called Social ROI (SROI) — look it up… I suspect that would help to clarify the distinction here.

  2. Quality is measurable, % of bugs per line is measurable, the C.R.A.P. index exists, code coverage is measurable, the duration of acceptance relative to testing duration is measurable, and the costs of bug fixes in each environment are measurable.
    With all this, you can calculate an ROI of testing.
    You’re mixing planning and budget issues with the ROI calculation. Sometimes it’s better to have poor quality and a shorter TTM, so the “overall” ROI will be better with nearly no testing. Fortunately, most of the time, the *calculated* ROI is better with widespread testing.

    • % of bugs per line? -_-

      • Yes, after development, on a significant project, and for a given language, you can count the bugs in your trackers and the number of LoC, and then compare that with test effort.

        • Baustin213 says:

          But each bug is different. You could have a great % of bugs per line but still have missed a huge, costly bug. That percentage alone is not a good indication of the ROI or the quality of your product.

    • Thanks for your thoughts. I certainly did not intend to imply that quality is not measurable. I’m simply pointing out that ROI means something VERY different to executives funding projects and testing (therefore THEY mix up and misinterpret the notion of ROI in testing related to planning & budget – so I completely agree with you there!). And that’s why I prefer to use cost/benefit analysis (instead of ROI) to describe the value of testing (simply put, the measurable benefits realized from the expenditures related to quality initiatives and testing efforts).

    • No. You can calculate a real or perceived *value* (i.e. benefit) of testing, but that is not the same as ROI. ROI is a business/accounting term with a particular meaning that does not apply here. Don’t believe us? Start a business of your own, try to convince your accountant to classify your “investment” in test automation tools your way, and see what happens.

  3. Josifov Gjorgi says:

    I dare you to ship an untested billing module for your system, and then you will see what the ROI of software testing is.
    What you don’t understand is that building software isn’t just coding a web page and shipping it to clients; it’s a complex process in which coding is just 7% of all the work.
    Please read some introductory books on Software Engineering and you will never write blog posts like this.

    • Yikes!! I never suggested not testing!! Not sure what I wrote that implied that. I do understand quite a lot about building, testing, and supporting software systems – 30 years of experience. I also understand how testing efforts get squeezed and how test teams get pressured to PROVE their worth. In some organizations, management wants to see tangible ROI. Well, I don’t believe that testing delivers only tangible, countable ROI (which is a dollar value calculation). I believe that many of the benefits of testing are intangible (or qualitative), which makes them much harder (but not impossible) to measure when someone is looking for proof. I’m trying to address this issue by helping folks expose the VALUE and BENEFIT proposition of testing to help management teams decide how much to spend on testing and quality efforts. When stakeholder teams understand the value and benefits they are getting from testing, I believe (or at least hope!) they will manage testing efforts differently.

      • Josifov Gjorgi says:

        With such experience, you can’t judge how much testing some software system needs?
        Also, in books about Software Engineering there are development/testing ratios depending on the type of system. For example, in critical systems the ratio is 40%/60%; in business systems it’s 60%/40%.
        Testing is the process of proving that the software you deliver is what the client wants and works correctly as specified in the requirements, and the VALUE and BENEFIT is that the client is happy with the product and wants more of it :)
        How much to spend on testing is a question that can only be answered by software veterans who work in the IT industry. To estimate how long testing (or any work) will take, you need experience in that area. You can’t ask a manager from Toyota/GM or another auto company how much to spend on testing your company’s ERP software; you ask a software engineer who has spent 5+ years in software development :)
        But if management doesn’t want to spend money on veterans and only likes juniors, then that is another issue.

        • You are so right – judging how much testing is needed is a very complex endeavor. And to that end, I would never use an external ratio or rule of thumb as my only input. Even if it were an industry norm. I would perform an analysis of all the relevant context factors – which include industry/domain, technical, staff (skill level, knowledge, related experience, etc.), client/user, competition (if any), compliance, key risks and ultimately, the complexity of the deliverables. Then all that has to be weighed against the risk the team is willing to accept and the constraints of the project (financial and time related). When I have “judged” what testing I’m advocating for, I’m still not going to describe it in terms of ROI calculations. I will describe it in terms of Costs/Benefits (for these testing costs, we are looking to realize these benefits). For me, it changes the face of the discussion and enhances the engagement of the stakeholder team enough to make the distinction worth the effort. So I’m just sharing my experience. If helpful, great! If not, please ignore. :)

        • Dude, if you want to quote books, why not quote the one that says that testing can never prove the absence of a bug, only the presence of one?

          Regardless, how much to spend on testing is a stupid question in the first place as it implies that developing and testing are separate and distinguishable activities. If you’ve ever written a line of code, you know that isn’t the case — for example, every time I execute code as a developer, I’m testing whether or not it did what I intended. What is the cost of that? What is the cost of *never* doing that? How often should I do it? blah, blah, blah… who cares?!?

          Hire competent, responsible people, make them aware of and accountable for achieving a mission, give them parameters to work with (like budget and timeline) and get out of the way. If we gave all the time back to the teams that we take trying to calculate things like ROI, we’d end up with better software anyway.

          • Josifov Gjorgi says:

            I agree 100%, but from what I have read on this blog, I get the conclusion that testing isn’t very valuable because there is no ROI, so we can skip some types of testing.
            This is why software documentation isn’t updated and many software systems lack it: someone many years ago (and now in Agile) said there isn’t any value in documentation. Some people in management take this for granted, and then you have buzzwords like “testing doesn’t have any ROI” or something like that, and then you have millions of blogs saying that testing doesn’t have ROI. (Examples of this are RoR, NodeJS, you name it.)
            A good QA knows how to test software and knows how much time it will take.
            And I would like to see blogs saying that testing vehicles doesn’t have ROI

  4. Gregory Mooney says:

    I think the issue here is semantics.

    ROI to me means there is an actual, down-to-the-penny number for cash returned over time on cash invested.

    ROI in testing to me is the future cost savings of not having a defect make it to release, since we all know that bugs are 10 times or more as costly to fix after release than before. But that’s just guessing.

    In my opinion, the point is there is no “real” dollar value for the ROI in testing. Of course, we could argue that there is ROI for testing if you believe in imaginary numbers, but you are creating a formula based on what you think the bug you just found would have cost your company after release. It’s not a real ROI, which would consist of a real number.
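
    Something like this, for instance, where every number is a guess:

        # Back-of-the-envelope "cost avoidance" of the kind described above.
        # Every figure here is a guess -- which is exactly the point.
        cost_to_fix_before_release = 500   # hypothetical dollars per bug
        escape_multiplier = 10             # the oft-quoted "10x" rule of thumb
        bugs_caught_in_test = 40           # hypothetical count

        avoided = bugs_caught_in_test * cost_to_fix_before_release * (escape_multiplier - 1)
        print(f"Estimated rework avoided: ${avoided:,}")  # $180,000 of imaginary money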

    Either way, I don’t think Dawn is devaluing the job of the tester or the testing itself.

    • Thanks Greg! Yes, it is really a semantic issue. Most qualitative things are very difficult to measure quantitatively. That doesn’t mean they can’t be measured … or valued. There’s a big return for investing in testing, it’s just not the same return financial folks are thinking about. I used to be a Finance major and worked in an accounting office for 5 years running financials for a non-profit board of directors. I have some insight into the real difference between ROI and Cost-Benefit Analysis. And I have my preference about which one I believe test teams should use when communicating with stakeholders and executives.

  5. Robert Hodges says:

    I’m having some difficulty following this argument. Testing ROI is hard to calculate in some cases, just as it’s hard to compute ROI for written designs, code reviews, and many other things software engineers do. However, that’s different from saying that there is no return on investment. If there is truly no return on investment, why do so many businesses invest so much in testing?

    • Thanks for sharing your thoughts Robert. I’m not suggesting there’s no value or benefit to investing in testing, I’m suggesting we use more accurate financial terminology to describe it. ROI is a very specialized quantitative analysis calculation targeted at determining if a financial expenditure will provide enough revenue or profit to be worth pursuing. Testing doesn’t provide revenue or profit directly. But it can contribute to revenue/profit margins indirectly. So I’m advocating the use of the Cost/Benefit calculation (instead of ROI) to describe/illustrate/measure the value and “benefit” received from the “costs” incurred (or invested) in testing.

      ROI vs Cost/Benefit Analysis may be a trivial distinction to those of us in testing groups, but at the senior executive and stakeholder level – I believe the distinction matters. It mostly has to do with appropriate expectation setting and alignment of responsibility and accountability. So I’m eager to hear from folks who have shifted their terminology – did it help? Hurt? Not matter at all? I’m curious too!

      • Agile Lasagna says:

        Dawn,

        I thought the point of automated testing is to reduce the amount of time it takes to release the software. A company invests in automated testing to reduce the cost of a release. So the investment results in reduced costs, which could be considered an ROI.

        • Reducing costs does not increase revenue. It *might* increase profit margin, but that is a different calculation entirely.

          Remember… Accounting isn’t actually math, it just looks like math. The rules are strange. ROI is an accounting term. Whether or not you ultimately “spend less” by “investing in testing” is a classic Cost/Benefit Calculation, not an ROI calculation.
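
          A toy illustration of the distinction, with every figure invented:

              # Hypothetical: automation trims costs; revenue doesn't move.
              revenue = 1_000_000
              costs_before, costs_after = 900_000, 850_000  # $50k of cost savings

              margin_before = (revenue - costs_before) / revenue  # 10%
              margin_after = (revenue - costs_after) / revenue    # 15%
              # Revenue is unchanged, so there's no "return" to feed an ROI
              # formula; the $50k shows up as cost savings (improved margin).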

          Before I got into IT, I got a degree in Civil Engineering. What we are talking about here is the same as asking “What’s the ROI of more expensive bolts for the bridge?”

          *If* the more expensive bolts mean that the bridge will last longer, or can be built faster, or support more cars, there may be *benefit* to using more expensive bolts. HOWEVER, no matter what bolts I use, my company will get paid the same for successful completion of the project, thus no ROI. (Conversely, if I try to *save* money by getting super cheap bolts that are of poor quality and the bridge falls, kills people, gets the company sued, and gets my license revoked, ultimately I’ll have lost more than I “saved.”) Still no ROI. Just Costs & Benefits (or not).

  6. If programmers tested the software (and not just their code) before handing it over to the testers, like in the good old days, then testing wouldn’t cost as much and projects would definitely finish faster!

    • That’s the theory for sure! Typically when developers and programmers find and fix issues related to functionality and code stability before handoffs, downstream testing tends to go more smoothly. From a cost/benefit perspective however, this may simply be a cost shift. When it matters who pays for testing, sometimes there is a lot of resistance to testing in development — but there are real, tangible benefits to doing so. Those are precisely the things I hope people are sharing with their project managers and stakeholders to get more buy-in for developer/unit testing (or in my nirvana testing universe – get teams to allocate time and budget for testing AND development together – as it rarely serves users, quality, or teams to separate them).

  7. Martin Hynie says:

    Interesting article Dawn, and even more interesting reactions in the comments. It is always quite fascinating to me how so many responses to articles that discuss value and testing carry a very personal belief system. They often speak of personal experiences and points of view, but present them as factual. In some ways, it is somewhat encouraging to me that our community is searching for patterns that can be shared and applied across multiple sectors, but to argue that value, and the very nature of ROI, can be generalized is a very, very difficult hypothesis to defend. Let’s look at some of the elements we would use in our “standard calculations”:

    – Average cost of a bug… when one bug can effectively cost more than the previous 1000 bugs combined.
    – Average cost of a check implemented… which of course depends not only on the automation framework, but also the complexity/age of the product in test, the skills of the coders and scripters, the sustainability of the code under test… etc.
    – Average cost of a test executed… I think we know how many variables can impact this range.
    – Average cost of a released defect and its associated fix… a spelling mistake vs a space program being cancelled.
    – Average cost to reputation and potential business for the companies at risk (usually more than one).
    – Average number of bugs or tests against lines of code… probably need to then create categories based on age of code, frameworks, design patterns, maintenance plan commitments, etc.

    In other words, I agree with Dawn. It is important to consider how you spend your money (by all means, if you have found some predictive measurements that help you make decisions about how to spend your money then YES… use these), but know that you are choosing to
    – spend money *here*
    – because you are worried about *this*
    – and you think that by applying *this approach*
    – you can possibly help discover potential problems *there*
    – and help protect *this investment*…

    If you believe in the scientific method, then do not seek out magic formulas and shortcuts. Be a scientist! Learn and study your unique system under test. Become an expert on what information is available and also how that information tends to evolve with each iteration. Is it predictable?

    Testing is all about learning about something that is valuable to someone. Help them make informed decisions… and one of those decisions is “Do I need to learn more to make my decisions? Do I need to test more to make informed decisions?”

    Cheers and happy testing
    Martin

    • Martin, thanks for extending the thread. Nothing about this is simple or should have blanket ratios or rules of thumb applied to it. I love your scientific approach to the problem and the model you shared. I will refer to it in future discussions … thanks!

      I suggest a similar model when folks focus on counting test cases. Test cases can’t be normalized (in terms of size, span-depth-coverage-scope, level of effort to document, execute, analyze, debug, maintain, etc.), therefore counting them for the purpose of determining testing time or costs is risky and likely to be error prone. Further, it’s quantitative instead of qualitative, so it can’t answer any quality-related questions about the testing. Like, are there enough tests? Are they the right tests? Will they expose bugs related to … ? Lots of people get wrapped up in trying to answer the eternal question:

      “How many test cases does it take to cover a requirement?”

      Hmmm. Sounds like a great title for a blog post!
      Cheers,
      Dawn

      • Griffin Jones says:

        Dawn,

        Thank you for publishing this. For me, you (and Scott) have been the best people to articulate the misunderstanding / problem / conflict / misuse / obliviousness around ROI versus cost/benefit, quantitative versus qualitative analysis, and Social ROI. Yes, it is a semantics problem – but bad semantics leads to bad communication and analysis. This is exactly why I have to clash with badly directed and implemented metrics programs.
        Griffin

  8. Rick Brannon CSM says:

    Hi all, I found this to be an interesting conversation on a difficult topic. However, my question is: why ROI and not opportunity cost?
    It would appear that both are subjective measures in this particular case and hard to quantify. I also find Scott Barber’s views on accounting refreshing, as GAAP isn’t a strict standard, and even though ROI is ratio analysis, it is still not an easy (non-subjective) measure.
    I believe the cost of testing is truly a variable cost and not a fixed cost as it is depicted here, so the measure of true ROI is a range and not a succinct figure.
    Just my two cents, worth anything from $.01 to $.03 depending on the cost of money :).

    • I like to compare it to trying to calculate the ROI of auto insurance. Auto insurance is a total waste of $ if you never have an accident, but can save you *crazy* $ if you’re accident prone (or end up at fault when there are injuries/fatalities involved). I *guess* if you had sufficient data, you could calculate the “statistical likelihood of achieving an ROI of x% based on age, gender, car type, zip code, and driving history…” Oh, wait, that’s what insurance companies do to set your rate — and somehow, I think if we had that kind of time and $ available, we’d get far more benefit by spending that $ on development/testing than we would by spending it on trying to calculate the ROI of testing.

      Just sayin’.

    • Hi Rick,

      Thanks for joining in. Opportunity cost is an interesting calculation to consider, but again, it’s often tied to income or revenue generated by choosing A over B (like ROI). However, it *can* be tied to benefits, but they may be very intangible/subjective. For example, what’s the opportunity cost of spending $1 on an apple vs. an orange? Well, you get the apple and forgo the orange. That’s the opportunity cost -> no orange. A Cost/Benefit analysis (CBA) ties the expenditure to the benefits received or perceived. So the apple gets us X and the orange gets us Y. Opportunity cost pits X against Y whereas CBA doesn’t. If you want X benefit, spend money on apples. If you want Y benefit, spend money on oranges. If you want both, spend money on both.

      My goal in provoking this discussion is to highlight the fact that these types of calculations are fundamentally imperfect for the task (due to things like variability and subjectivity, etc.), but CBA at least gets the *benefits* discussion on the table – and that helps everyone understand the value/benefit proposition the funded testing needs to target. I think that’s a wonderful starting point.

      Cheers,
      Dawn

  9. Kobi Halperin says:

    Now, if I find a bug, and Dev fixes it and inserts 2 new bugs along with it, what would the ROI be??? :-)

    • Ha! Right!

      Testing = Bug
      Bug Fix = Bug+2
      Quality = (Quality+1)+(Quality-2)
      Time = ((FindBugTime)+(FileBugTime)+(DebugTime)+(FixTime))x3

      So, Negative ROI in terms of time/$$$/quality. Just one reason not to use ROI. If one of the “benefits” of testing is *information* – then we have:

      Information = (OrigBug)+(NewBug1)+(NewBug2)+(BugFixAdded2Bugs)
      Quality improvement/risk reduction opportunities = 3
      Maintainability Metrics = (1Fix == Bug*2)

      We could have fun with this! ;-)

  10. Yes, I agree with your statement, but not all of it is as you describe…

  11. Martin K. Schröder says:

    It’s like saying there is no ROI in software development. Testing is an integral part of software development. And it reduces future spending on bugs that could have been avoided by having a comprehensive set of tests that the software must satisfy before a release.

    You may think that there is no ROI in testing, but even the best programmers make mistakes – forget something or miscalculate. Tests are a tool for catching errors that may later lead to difficult-to-track bugs that can cost more than what you save by omitting the tests.

    • Baustin213 says:

      I don’t think anyone is arguing that testing is not important or valuable (or at least I hope not). The point is that you can’t put an exact price tag or financial gain on running a test or finding a bug.

  12. Agile Lasagna says:

    To get technical, ROI is based on cash flows. In this case, we’re talking about changes in cash flow. This is known as differential or incremental cash flow.

    When you use the time value of money to discount the differential cash flows, you calculate a more accurate ROI. It’s also more useful, because you usually aren’t investing from scratch; you’re interested in the potential financial impact of an investment outflow on cash flows.

    These differentials can include the reduction of costs, as the net cash flow goes up if costs go down.
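
    For example (every figure invented), discounting a stream of cost reductions against an up-front outflow:

        # Hypothetical differential cash flows from a testing investment:
        # a $50k outflow today, then $20k/year of cost reductions for 4 years.
        outflow = -50_000
        annual_savings = [20_000, 20_000, 20_000, 20_000]   # years 1-4
        discount_rate = 0.10                                # assumed hurdle rate

        npv = outflow + sum(cf / (1 + discount_rate) ** t
                            for t, cf in enumerate(annual_savings, start=1))
        print(f"NPV: ${npv:,.0f}")   # NPV: $13,397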

    ROI in the context of investing is a bit different. While it draws on the same principles, you’re working from different assumptions. In the investing case, you can buy or sell the investment at a moment’s notice. You generate additional return via diversification.

    In the case of a going concern like a company, calculating the ROI of an investment project can’t assume that. Also diversification tends to depress company profitability, as you have too many projects/products competing for the same resources.

    Not sure if that helps?

    • Dawn Haynes says:

      Thanks for adding to the thread. I’m not actually sure that getting more “technical” about the intricacies of financial ROI calculations will help. Instead, I’ll suggest that most stakeholders, and even some CFOs I’ve seen, misuse and misrepresent financial calculations like ROI. Because of that, I think it’s too easy to confuse the real issues. And the point I’m trying to make is about the terminology everyone “thinks” they understand – all the way up to the C-level. My goal is to drive the conversation toward cost-benefit or cost-saving concerns. CBA instead of ROI, as illustrated in the comparison here:

      http://www.stuart-hall.com/2013/03/05/cba-vs-roi-cost-benefit-analysis-vs-return-on-investment/

      Your point about cash flows is an interesting one though. I don’t think organizations should make decisions about how much to test or how much to invest in testing based on the need for increased cash flows. While that might be an operating reality, I would prefer that teams determine the need for testing based on context factors and risk … targeting tangible goals for quality, project outcomes, business objectives, compliance criteria, contractual guidelines, etc. To sum it up: decide what’s important and test it. Driving decisions through the lens of purely quantitative financial algorithms feels risky to me.

      For now, I’ll hang with CBA and avoid ROI whenever possible.

      Cheers,
      Dawn

  13. I can see your attempt here to be ‘witty’ but I just don’t buy it.

    In its simplest form, creating a piece of code or an application with or without testing will certainly affect the cost, as, at a minimum, time itself carries a cost.

    As it is, companies have a hard time understanding the importance of quality, and strange articles like this simply don’t help.

    Why an article like this is on SmartBear’s site, whose tools are very good, I have no idea, and it makes me question my involvement with their products in general.

    Quality is no laughing matter, and not something to be toyed with or made into a quirky joke. As it is, quality can be one of the most difficult battles to win. Then you put your little article out, and it’s all some VP or leadership person needs to put the brakes on supporting quality initiatives in automation, by taking part of what you say, using it out of context, and setting it against the logical and well-proven historical evidence of the effectiveness of automation.

    I don’t mean to be grouchy or a jerk here, but this kind of article is dangerous and I don’t find it humorous in any way.

    At the most basic form of the ROI calculation here with automation:
    1. Money will be spent to create the product.
    2. Automated testing, given the proper coverage and reusability, can (but not always) save time and allow bugs to be fixed earlier.

    That is why the ROI is crucial.

    You calculate the times looking into the future over the life the automation can be used, and compare manual vs. automated times.

    One will cost more than the other, period.
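
    In its simplest form (all numbers below invented), that comparison looks something like this:

        # Hypothetical manual-vs-automated cost comparison over a suite's life.
        manual_cost_per_run = 4_000        # invented figures throughout
        automation_build_cost = 30_000
        automated_cost_per_run = 500       # execution + upkeep per run

        for runs in (5, 10, 20, 50):
            manual = manual_cost_per_run * runs
            automated = automation_build_cost + automated_cost_per_run * runs
            cheaper = "automated" if automated < manual else "manual"
            print(f"{runs:>3} runs: manual ${manual:,} vs automated ${automated:,} -> {cheaper}")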

    In a culture where testing is one of the first things to get cut, in the nicest way possible, I suggest you post your theories in Mad Magazine or another suitable place where your ideas have less of a chance to hurt a company’s bottom line.

    I don’t mean to be a jerk here, I really don’t, but as a person who has fixed many companies’ quality by using automation and turned around their quality-based revenue-generating disasters, I just can’t go for what you are posting here.

    I’m really sorry I even posted this, but this is a very serious matter, and it takes all of the people in quality to spread the truth and that message together, which is basically this…

    – Sometimes automation saves time, and time is money.
    – Sometimes it doesn’t, so don’t use it unless you need the precision of computer-based accuracy for validations; then decide if it’s worth it (i.e., military applications sometimes use multiple computer validations to check the same information in different ways to give the highest accuracy in determining whether the systems are working correctly or not).

    But to say there is no return on investment just because you aren’t paying for testing is wrong and ridiculous. Are the testers working for free? Is the development team volunteering their time when it comes to bug fixing, etc…?

    Think deeper, be real.

    – Trevor Chandler

    • Dawn Haynes says:

      Hi Trevor,
      Thanks for your thoughtful comments. Your read of my post is a bit unfortunate. I’m not being witty for fun, I don’t think quality is a joke, and I am being very, very real about this. I have a degree in management, years of experience in finance, and worked for an automated tools company for more than 8 years. And in my experience, one of the most poisonous ways to try and get stakeholders to invest in testing or automation is to use ROI to pitch it. Instead, I’m simply suggesting we use a more accurate tool: CBA. Cost-benefit analysis allows us to more accurately monetize the cost savings of testing efforts and/or automated tools implementations. And in your words, “sometimes automation saves time, and time is money.” I agree. I’m not debating that. But savings isn’t ROI, it’s cost savings. Different place on the corporate balance sheet.

      This web page shows a nice comparison of the two methods: http://www.stuart-hall.com/2013/03/05/cba-vs-roi-cost-benefit-analysis-vs-return-on-investment/

      My ultimate goal with this post is to try and drive conversations about testing in the direction of the demonstrable benefits testing can provide. And many of those benefits are qualitative and difficult to monetize or measure. A pure focus on ROI tends to have the side effect of warping the conversation. Testing has value beyond dollars saved, and I’m trying to highlight that. A small step would be a simple change in our vocabulary. If and when testing saves money, please refer to it as “cost savings,” not ROI. The next step would be to shift the discussion to “cost benefits,” and while cost savings might be on the list, my ultimate hope is that other benefits would get the spotlight too!

      • Dawn,

        Thanks for your reply.

        I do understand what you are saying, and I don’t disagree on the technicality of naming. But the real problem is being very specific and detailed in a way that only QA people will really understand. The person who wants to stop automation because they think it takes too long won’t understand, will grab the part they want to use to try to stop automation on their project, use it out of context, and interpret it as ‘There is no ROI for Automation.’ They take the title of the posting and say they saw it on SmartBear, and SmartBear knows all about automation, so it must be true.

        Is that fair for them to do, does it make sense, is it truthful? No on all counts. Yet still, this is the kind of thing we commonly face as we try to push automation forward.

        Automation, in many companies, is at a crucial point in time: without an outpouring of proof and support, little hope remains of automation being implemented and succeeding.

        To me, the biggest automation killer of all is when automation is used and fails, throwing constant false failures that turn out to be test bugs, and more time is spent maintaining the tests than it would take to just give up and run the tests manually to get the product out the door with some kind of faith in its quality.

        To avoid that kind of thing, we generally have to…

        – Compare manual time vs. automated time for all testing activities and make sure automation will save time.
        – Design the automated tests first. Start with the functions that will be shared; define what each function will do and what arguments it takes, etc.
        – Set standards and avoid common test killers: don’t use hard-coded data, and don’t re-write the same code in tests instead of creating a shared function.
        – Use good commenting and documentation so other testers can know what already exists, saving time by using it rather than recreating it.
        – Make sure the manual test cases you are creating your automation from are good and have the proper amount of test coverage. (If you create the world’s best automated test, but it uses steps from a bad test case, it’s a bad move for quality.)
        – And as we all know, the list goes on.

        The ability to put ourselves in a bad position with automation is greatly increased these days as Selenium, and other tools like it, gain massive traction across industries, while enterprise-class tools, although still used, are in decline compared to their historical usage and growth.

        ((( Selenium is basically a set of APIs that you use to create automated tests from nothing. Without the guidance a test tool forces on you through its features, there is nothing stopping you from building the tests improperly, and things can go horribly wrong for automation. Btw… good Selenium testing can be done by using one of two design patterns: Page Objects or Page Factories, both of which can be learned with some simple Google searching; see the sketch below. )))
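
        To give a flavor of the Page Object pattern (a minimal sketch; the page, locators, and flow are all invented for illustration):

            # Minimal Page Object sketch using Selenium's Python bindings.
            # The page, locators, and flow are invented for illustration.
            from selenium.webdriver.common.by import By

            class LoginPage:
                """Wraps one page so tests don't repeat locators or click paths."""

                def __init__(self, driver):
                    self.driver = driver

                def login(self, username, password):
                    self.driver.find_element(By.ID, "username").send_keys(username)
                    self.driver.find_element(By.ID, "password").send_keys(password)
                    self.driver.find_element(By.ID, "submit").click()
                    return HomePage(self.driver)  # page objects hand off to page objects

            class HomePage:
                def __init__(self, driver):
                    self.driver = driver

        Tests written against classes like these read as intent (log in, land on the home page) instead of raw element lookups, which is what keeps maintenance manageable.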

        I admit I have gone a little off topic here, and only partially directed my content at the original idea of this posting, but I believe it’s valuable information, with automation’s perception as a success or a failure now discussed in terms of ROI vs. cost savings, along with the other common pitfalls associated with automated testing.

        Thanks again for posting publicly and allowing important conversations and ideas to be discussed that are often misunderstood, sometimes even by QA engineers, but very often by other people involved in the SDLC of any given company.

        Thanks again,

        Trevor Chandler

        • Dawn Haynes says:

          Hi Trevor,

          It’s clear you have the background and experience to implement automation and put it on the best path to success (it would have been great to work with you when I was at Rational, ha!). I only wish more folks had your insights, and thank you for sharing them here.

          Additionally, I agree that some folks might read the title of a blog post and not read any further. It’s sad when that happens, but true enough. I was going for the opposite reaction. I wanted to be provocative. I wanted people to read the headline and stop and ask, “what is that crazy lady talking about?” … and read further. For those that take the time, what you’ve added to the thread is invaluable.

          And quite honestly, that was the goal. To provoke thoughtful and deep explorations of the problem of quantifying the value of testing. Through honest, open, and rich discourse I believe we make each other better, help to clarify the issues, and crystallize the incremental changes we can make to help the industry mature and grow.

          I appreciate your time and patience to work through this with me and share it with TesterLand.

          Happy Testing!
          Dawn

  14. Thanks Dawn for the link to my page on the difference between CBA and ROI. It inspired me to add an example to better illustrate how the same dollar amount, in terms of cost, produces a different result when calculated for CBA purposes, vs ROI purposes.
