3 Practices to Reduce Flakiness in Cypress Tests

Flaky tests pass sometimes and fail at other times, even though nothing in the code changed between test runs. This unpredictability makes flaky tests frustrating to troubleshoot: failures are often hard to reproduce, which makes their causes hard to uncover. Flaky tests also create confusion and reduce confidence in whether your test suite is failing for legitimate reasons. The following are a few common causes of flakiness our team has found in our Cypress tests, along with practices that help reduce it.

Reduce Inconsistent Interactions in the DOM

A frequent source of flakiness in our test suite has been interacting with elements in the DOM that do not render consistently. In these cases, the element causing failures seems fine when you test manually, but Cypress sometimes produces timeout errors such as "Expected to find element: [some-element] but never found it." or "cy.command() failed because this element is detached from the DOM".

A few practices will reduce flakiness related to DOM elements. The first is understanding how Cypress retries commands and assertions. Under the hood, Cypress retries commands like .get() and .find() when an assertion chained off them fails. In retrying, Cypress re-queries the DOM until it either finds the element or times out. However, Cypress only retries the command immediately before the failed assertion, not the whole chain. So, if I have a chain of commands with an assertion like:

cy.get("[some-element]").find("[the-child-element]").assert("has", "something")

and the assertion fails, Cypress will only retry the .find() command, instead of starting from the beginning of the chain and retrying .get(). To prevent this from creating flakiness when waiting for elements to render, I need to ensure that I assert on each command that I want Cypress to retry.
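
For example, a version of the chain above that asserts after each query (the selectors are just placeholders) could look like:

cy.get("[some-element]")
  .should("exist")
  .find("[the-child-element]")
  .should("contain", "something")

With an assertion right after .get(), Cypress re-queries for the parent until it exists, and then re-runs .find() until the child assertion passes.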

We saw this issue manifest in one of our tests that always passed locally but frequently failed in our CI pipeline. One way we fixed it was by combining our use of .get() and .find() into a single .get() command and making the assertion on that. This way, we knew the .get() command would retry if the assertion failed. Our new command and assertion looked like:

cy.get("[some-element the-child-element]").assert("has", "something").

Sometimes the timing of events like keypresses causes DOM-related flakiness. In one of our flaky tests, we needed to enter text into a form field, but the keypresses didn't always complete before we tried submitting the form. For this case, we implemented a custom Cypress command that used cy.wait() and other commands to wait until the field blurred before submitting the form. While this was a viable solution for our scenario, reaching for cy.wait() any time DOM flakiness occurs isn't always the best approach.
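
For illustration, here is a minimal sketch of a command along those lines (the command name and selectors are made up, and our real command also leaned on cy.wait()):

Cypress.Commands.add("typeAndBlur", (selector, text) => {
  // Type into the field, then blur it so the form sees the completed input.
  cy.get(selector).type(text).blur()
  // Assert on the value so Cypress retries until the keypresses have landed.
  cy.get(selector).should("have.value", text)
})

// In the test:
cy.typeAndBlur("[some-form-field]", "some text")
cy.get("[submit-button]").click()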

There are several reasons to use cy.wait() with caution when addressing flakiness. First, your test suite may still be flaky, because an arbitrary number of milliseconds won't always be long enough. Introducing cy.wait() willy-nilly can also significantly slow down your test suite (more on that below). Most importantly, though, it can hide real bugs in your code. For example, say you force Cypress to wait until an element is added to the DOM. You may be missing a real-world issue where a user on a slow connection interacts with an element whose data hasn't loaded yet. The fix might be to disallow interaction with the element until loading is complete, but by forcing Cypress to wait, you may never see the bug.
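
As a sketch of the difference (the selectors are placeholders), asserting on the state you actually care about lets Cypress retry until the app is ready instead of sleeping for a guessed duration:

// Arbitrary wait: may still be too short, and hides whether the app is ready.
cy.wait(5000)
cy.get("[some-element]").click()

// Assertion-based wait: Cypress retries until these conditions hold.
cy.get("[loading-spinner]").should("not.exist")
cy.get("[some-element]").should("not.be.disabled").click()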

Reduce Inconsistency in Requests

Another cause of flakiness is inconsistent responses from network requests. This can occur when a request hasn't returned a response yet but the test continues executing. You may see network errors, or DOM-related error messages if the DOM updates based on the response from the request.

If you want to ensure a successful response, you can use cy.intercept() to stub the responses to your requests. This came in handy for us when we were testing a page that made a request to an external API whose response we expected to change over time in ways that would make the test flaky. Using Cypress fixtures and cy.intercept() to stub the response, we were able to test the page without worrying about whether changes in the API response would introduce flakiness.
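
A sketch of the pattern (the route, fixture name, and selectors here are placeholders, not our real ones):

// Stub the external API with a fixture so the response never changes under us.
cy.intercept("GET", "**/external-api/items*", { fixture: "items.json" }).as("getItems")
cy.visit("/some-page")
cy.wait("@getItems")
cy.get("[items-list]").should("contain", "an item from the fixture")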

Timing can also cause flakiness in network requests. You can use cy.wait() with an aliased request to force Cypress to wait for that request to finish before continuing with test execution. However, enforced waiting can increase the time it takes for the test suite to run. To offset the increased time, we introduced parallelization in our CI pipeline, which reduced our run time. In turn, we were able to wait more within our test suite without worrying too much about slowing down our pipeline execution.
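
A sketch of waiting on an aliased request rather than a fixed number of milliseconds (the route and selectors are placeholders):

// Alias the request, trigger it, then wait for that specific request to resolve.
cy.intercept("POST", "**/api/save").as("saveRequest")
cy.get("[submit-button]").click()
cy.wait("@saveRequest")
cy.get("[success-message]").should("be.visible")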

Utilize the Cypress Dashboard

If your budget allows, you can also buy Cypress's premium Dashboard feature. Our team uses it to gain insight and create transparency around flakiness in our test suite. The Dashboard can identify flaky tests, and it quantifies the severity of flakiness based on how many retries a test required before it passed. It also provides insights into historical failures, which can help you spot recurring patterns in your tests that introduce flakiness.
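
One assumption here: the Dashboard can only report on retries if test retries are enabled in your configuration. On the Cypress versions we're assuming (configured via cypress.json), that looks something like:

{
  "retries": {
    "runMode": 2,
    "openMode": 0
  }
}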

Reducing Flakiness in Cypress Tests

Troubleshooting and reducing flakiness in test suites can be a frustrating process. Hopefully some of these tips have helped. If you have any additional tips you’ve used in dealing with flakiness, please leave a comment below!