Monitoring UI Tests at TravelTriangle!

Every organization deals with the flakiness of UI automation tests. UI tests are brittle and the least reliable. The other side of the coin, however, is that they are crucial and significant; teams cannot live without them.

It is said that, ideally, tests should be written so that their results clearly indicate where the problem lies in the code base. In fact, developer productivity depends on the ability of tests to point out the actual problem in the code being changed or developed.

Unfortunately, this is not always the case with UI tests; we cannot interpret their results as a simple 0 or 1. How many times have we seen a success rate between 70% and 90%, only to realize after analysis that the failures were caused by infrastructure issues, third-party dependencies, dependent test cases, test-script errors, and so on?


“So, in short, it wouldn’t be wrong to say that at TravelTriangle there is flakiness of roughly 10% to 30%!”

In this article, we will discuss how TravelTriangle’s QA team mitigates the brittleness of UI tests:

  • Avoid Dependent Test Cases:

We encourage the team to write test cases that are as independent as possible. However, sometimes failures are real application failures; for instance, if the “Request Form” is not rendering, all dependent test cases will fail. In such a scenario, we use two strategies to get quick feedback.

The first is marking the dependent test step as “critical”, which breaks the test then and there, without wasting any time and effort running everything else.
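As a rough illustration of the “critical step” idea (the step names and runner below are hypothetical, not our actual framework), a step marked critical aborts the rest of the test on failure, so dependent steps are skipped instead of producing misleading failures:

```python
def run_steps(steps):
    """Run (name, fn, is_critical) steps in order.

    A failing critical step aborts the test: every later step is
    marked "skipped" rather than executed and reported as "failed".
    """
    results = {}
    aborted = False
    for name, fn, critical in steps:
        if aborted:
            results[name] = "skipped"
            continue
        try:
            fn()
            results[name] = "passed"
        except Exception:
            results[name] = "failed"
            if critical:
                # Break the test then and there; dependent steps
                # would only add noise, not signal.
                aborted = True
    return results
```

With this shape, a broken “Request Form” shows up as one failed critical step and a handful of skips, instead of a wall of red.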

The second strategy is to execute the “Smoke” test group first, to learn the health status of each module before diving deep into regression. This gives the team quick feedback on the health of the code base.
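A minimal sketch of the smoke-first plan (test names and group tags here are invented for illustration): select the “smoke” group, run it, and only proceed to the full regression if smoke is green.

```python
# Hypothetical test registry; in practice group membership usually
# comes from framework markers or annotations.
TESTS = [
    {"name": "test_home_loads", "groups": {"smoke", "regression"}},
    {"name": "test_request_form_renders", "groups": {"smoke", "regression"}},
    {"name": "test_itinerary_filters", "groups": {"regression"}},
]

def select(group):
    """Return the names of tests belonging to a group, in order."""
    return [t["name"] for t in TESTS if group in t["groups"]]

def run_plan(run_test):
    """Run smoke first; continue to regression only if smoke is green.

    `run_test(name)` returns True on pass; it is injected so the plan
    itself stays framework-agnostic.
    """
    if not all(run_test(n) for n in select("smoke")):
        return "smoke failed: fix module health before regression"
    for n in (t for t in select("regression") if t not in select("smoke")):
        run_test(n)
    return "full regression executed"
```

In pytest, for example, the same idea is commonly expressed with markers and `pytest -m smoke` before the full run.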

Also, test cases are written to be independent of the environment.
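Environment independence usually means resolving the target environment at runtime instead of hard-coding it. A small sketch, assuming a hypothetical `TEST_ENV` variable and placeholder URLs:

```python
import os

# Placeholder URLs; real deployments would map to actual hosts.
ENV_URLS = {
    "staging": "https://staging.example.com",
    "preprod": "https://preprod.example.com",
    "production": "https://www.example.com",
}

def base_url():
    """Resolve the base URL from the TEST_ENV variable (default: staging).

    Tests call base_url() instead of embedding a host, so the same
    suite runs unchanged against any environment.
    """
    env = os.environ.get("TEST_ENV", "staging")
    return ENV_URLS[env]
```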

  • Dedicated Analysis of Each Execution Run, Keeping Them Green:

We put focused effort into reading patterns in the test results. We use the capabilities of various open-source libraries and Jenkins plugins (Plot plugin, Groovy executors, custom reporting, the Test Analyser plugin, elasticdump, and so on) to automatically track detailed patterns of test results across builds, which quickly raises an alarm for the team to work on test-case stability if the success rate drops. Some real example patterns are pasted below.
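The core of this trend analysis can be sketched in a few lines (the threshold and build data below are illustrative, not our actual alerting rules): compute the per-build success rate and flag any build that falls below a chosen bar.

```python
def flag_unstable(build_results, threshold=0.9):
    """Flag builds whose pass rate falls below `threshold`.

    `build_results` maps build id -> (passed, total). Returns a list of
    (build_id, pass_rate) pairs, in build order, that need attention.
    """
    alarms = []
    for build, (passed, total) in sorted(build_results.items()):
        rate = passed / total
        if rate < threshold:
            # A drop below the bar is the team's cue to investigate
            # stability before trusting further runs.
            alarms.append((build, round(rate, 2)))
    return alarms
```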






  • Dev-QA Collaboration in devising UI Locator strategies:

Devising a proper locator strategy while building a product plays a significant role in the stability of UI tests. Dev and QA collaborate to come up with proper locators and DOM structures across the application: placing proper unique ids and classes on DOM elements, no UI element without tags, minimal dependence on sibling and parent elements, and so on. Refer to UI locator best practices here.
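One way to encode such a guideline (the preference order and locator names below are an illustrative sketch, not a Selenium API) is to rank candidate locators by stability, preferring unique ids and data attributes over brittle positional XPath:

```python
# Most stable first: unique id, then data attribute, then class,
# then positional XPath as a last resort.
PREFERENCE = ["id", "css_data_attr", "css_class", "xpath_positional"]

def best_locator(candidates):
    """Pick the most stable locator available for an element.

    `candidates` maps strategy name -> locator string. Raises if only
    nothing usable exists, nudging devs to add an id or data attribute.
    """
    for strategy in PREFERENCE:
        if strategy in candidates:
            return strategy, candidates[strategy]
    raise ValueError("no locator available; add an id or data attribute")
```

A test author offered both `//div[2]/span` and `id="request-form-submit"` would thus always land on the id.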

  • Rerunning Failed Test Cases:

Our test framework is configured to rerun failed test cases automatically, which helps control the flakiness of UI tests.
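The rerun mechanism can be sketched as a simple retry decorator (a hand-rolled illustration; in pytest, for instance, the `pytest-rerunfailures` plugin provides this via `--reruns`):

```python
import functools

def rerun_on_failure(retries=2):
    """Rerun a failing test up to `retries` extra times.

    Only a test that fails on every attempt is reported as failed,
    which filters out one-off flaky failures.
    """
    def deco(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_exc = None
            for _ in range(retries + 1):
                try:
                    return fn(*args, **kwargs)
                except AssertionError as exc:
                    last_exc = exc
            raise last_exc
        return wrapper
    return deco
```

The trade-off is worth stating: reruns hide genuine intermittent bugs as well as flakiness, so rerun counts should themselves be monitored.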

  • Setting Up Basic Health Check Tests:

The purpose of health-check tests is to ensure that all third-party dependencies are up and running. This reduces test failures caused by third-party dependencies.
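A minimal pre-suite health check might look like the following sketch (the dependency names and health-check URLs are placeholders; the fetcher is injectable so the check is testable without the network):

```python
from urllib.request import urlopen
from urllib.error import URLError

# Hypothetical third-party dependencies and their health endpoints.
DEPENDENCIES = {
    "payment_gateway": "https://payments.example.com/health",
    "maps_api": "https://maps.example.com/health",
}

def check_dependencies(fetch=None):
    """Return the names of dependencies that are down.

    `fetch(url)` returns True if the endpoint is healthy; the default
    does a real HTTP GET with a short timeout.
    """
    def default_fetch(url):
        try:
            return urlopen(url, timeout=5).status == 200
        except URLError:
            return False
    fetch = fetch or default_fetch
    return [name for name, url in DEPENDENCIES.items() if not fetch(url)]
```

If this returns a non-empty list, the suite can be skipped or the run annotated, so a dead dependency is not misread as dozens of UI regressions.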

  • Setting Up Production Like Environment:

This solves the majority of issues. The team generally targets stabilizing the test scripts on this environment, and uses its test results as a benchmark for the stability of other environments.

  • Minimum UI Tests:

Since UI tests are not very reliable, we keep their count to a minimum.

Of course, none of the mitigation measures practiced at TravelTriangle above is a hard-and-fast rule, but together they can reduce the uncertainty of UI tests to a large extent. Try them 🙂

Hope this helps!