Managing Test Flakiness: How to Make Tests Stable, Scalable, & Maintainable


Building and maintaining automated tests can often be challenging and cumbersome.

In fact, the term “flaky” is often associated with tests — especially tests created at the UI layer.

Even the smallest amount of changes in the UI can cause automated tests created at the UI level to break, resulting in a lot of maintenance overhead. This problem hasn’t gotten any easier with the explosion of browsers, mobile, third-party controls, and other IoT devices.

For instance, dynamically generated controls due to the use of popular web frameworks such as AngularJS pose a serious challenge when recognizing objects on the screen. In such a case, JavaScript modifies the webpage directly, which in turn causes problems while locating controls on the app under test.

Hence, unless there is a solid test automation strategy, UI tests can result in false negatives — a situation where a UI test fails on correct code. Or even false positives — a situation where a UI test passes even on broken code.

Deciding whether or not to file a bug in such cases often involves a lot of time spent debugging and figuring out what’s wrong with the UI test.

Do you have a problem with flaky tests?

It’s common knowledge that flaky tests shouldn’t be tolerated. A test with a pass rate of 99.5% often looks impressive, but is it? In fact, a 99.5% pass rate looks far less impressive when viewed across a suite of 300 tests. Take the following example:

  • A test suite has 300 tests
  • Each test has a 0.5% failure rate
  • Pass rate for each test: 99.5%
  • The probability that the entire suite passes in this scenario is: (99.5%)^300 = 22.23%
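The arithmetic above can be reproduced in a few lines. Python is used here purely for illustration; the constants are taken from the example:

```python
# Probability that a full suite of independent tests passes,
# given each test's individual pass rate.
per_test_pass_rate = 0.995  # 99.5% per test
num_tests = 300

suite_pass_probability = per_test_pass_rate ** num_tests
print(f"{suite_pass_probability:.2%}")  # prints 22.23%
```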

As a result, a 99.5% pass rate for any individual test translates into only a 22.23% chance that the entire suite passes on a given run.

The bottom line here is that testing teams cannot, and should not, look only at high pass rates for individual tests. You instead need to look at the entire test suite.

There are other reasons why flakiness shouldn’t be tolerated, as well as ways flaky tests can be overcome.

That’s why we created this eBook: Managing Test Flakiness: How to Make Tests Stable, Scalable, & Maintainable

In this eBook, we’ll cover some important topics:

  • Root causes of flaky tests
  • Solutions to test flakiness
  • Flaky vs. healthy tests: how to find a balance
  • Tools to scale automated tests
  • How to separate signal from noise when looking at flaky tests

Get your copy!



  1. Mark Alexander says:

    Your analysis of the probability of pass rate is not accurate.

The number you are calculating (0.995^300) is the probability that every test in a set of 300 will pass. But this does not imply the pass rate across the suite of 300 tests is 22.23%. Rather it means that 77.77% of the time, at least 1 test out of 300 will fail.

    In reality, given that each test has a pass rate of 99.5% and is independent of all other tests, if you ran the set of 300 tests, say, 10,000 times, the _average_ pass rate for each set of 300 tests would be approximately 99.5%.

Consider a coin flip, where the heads rate is 50%. The odds of 300 heads in a row are exceedingly low (0.5^300), but the expected proportion of heads across 300 flips is still 50%.
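The distinction the comment draws — the probability that *all* tests pass versus the *average* per-test pass rate — can be sanity-checked with a quick Monte Carlo simulation. This is an illustrative sketch, not part of the original article; the constants match the example above:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

PASS_RATE = 0.995  # each test passes 99.5% of the time, independently
NUM_TESTS = 300
NUM_RUNS = 10_000

all_pass_runs = 0  # runs in which every single test passed
total_passes = 0   # total individual test passes across all runs

for _ in range(NUM_RUNS):
    passes = sum(random.random() < PASS_RATE for _ in range(NUM_TESTS))
    total_passes += passes
    if passes == NUM_TESTS:
        all_pass_runs += 1

# Fraction of runs where the whole suite passed: close to 0.995**300 (~22%)
print(f"suite all-pass rate: {all_pass_runs / NUM_RUNS:.1%}")
# Average per-test pass rate across all runs: close to 99.5%
print(f"average per-test pass rate: {total_passes / (NUM_RUNS * NUM_TESTS):.2%}")
```

Both numbers come out of the same simulated data, which is the commenter’s point: a suite can have a healthy-looking per-test average while still failing as a whole most of the time.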
