Flaky Frontend Tests? #39010
Hi, I wasn't sure where to ask or whether I should open a bug. Are the tests flaky? We get a lot of test timeouts, and we sometimes see similar failures in this repo's failed actions. We are struggling to get all the tests to pass because of the numerous timeouts. We have tried `npm run tests`, and also something similar to the GitHub build machines with:

`mkdir -p superset-frontend/coverage`

`docker run -v ./superset-frontend/coverage:/app/superset-frontend/coverage --rm test01042026:latest bash -c "npm run test -- --coverage --coverageReporters=json-summary"`

For reference, we are using the 6.0 branch. Example failure:

`FAIL src/dashboard/components/EmbeddedModal/EmbeddedModal.test.tsx (101.488 s)`

Any solution for this would be great.
Replies: 2 comments
Hi @ZeroWiggliness! I'm Dosu and I'm helping the superset team.

Yes, frontend test flakiness (especially timeouts) is a well-documented problem in Superset. The team has been actively fixing these across multiple PRs. Here's what's going on and what you can try:

**Root cause for most timeouts:** The most common culprit is unmocked API endpoints: when a component makes an API call on mount but the test doesn't mock that endpoint, Jest hangs waiting for a response until it hits the 20-second timeout [1]. The global timeout is set to 20000ms in `jest.config.js` [2].

**Specifically for `EmbeddedModal.test.tsx`:** There's a race condition in the test setup where `resetMockApi()` is called after `render()`, …

**Why sharding helps:** The CI uses 8-way Jest sharding to parallelize tests, which reduces memory pressure and timer pollution between tests, both known causes of intermittent failures.

**What the team has been fixing recently:**
Quick things you can try:
Modal components in particular are prone to this kind of flakiness: `FiltersConfigModal.test.tsx` was actually skipped after 11 fix attempts due to complex Redux + API mock coordination issues.

Since you're on the 6.0 branch, you may not have all the recent flakiness fixes. It might be worth cherry-picking, or checking whether the fixes from the PRs above apply to your branch.

To reply, just mention @dosu.
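The unmocked-endpoint failure mode described above can be sketched in isolation. Below is a minimal, hypothetical stub: the endpoint path, payload, and function names are invented for illustration (Superset's own tests wire this up through their mock helpers and libraries like `fetch-mock`, not this code):

```typescript
// Sketch of why unmocked endpoints hang: a test-side fetch stub that
// answers known routes immediately instead of waiting on the network.
// The route and payload below are hypothetical, for illustration only.
type Json = Record<string, unknown>;

const routes: Record<string, Json> = {
  "/api/v1/embedded_dashboard/": { result: { uuid: "abc-123" } },
};

// Resolves instantly for mocked routes; an unmocked route rejects loudly.
// Without a stub like this, a component fetching on mount would leave the
// test awaiting a response until Jest's 20-second global timeout fires.
function stubFetch(
  url: string,
): Promise<{ ok: boolean; json: () => Promise<Json> }> {
  const body = routes[url];
  if (body === undefined) {
    return Promise.reject(new Error(`unmocked endpoint: ${url}`));
  }
  return Promise.resolve({ ok: true, json: async () => body });
}
```

In a real Jest setup you would install this (or a library such as `fetch-mock`) over `global.fetch` in `beforeEach` and reset it in `afterEach`, so mocked routes don't leak between tests.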
Flaky timeout failures in CI are really common in large frontend test suites; you're definitely not alone. The root cause is usually one of two things: runner resource contention (a slower runner means more timeouts) or test-isolation issues where shared state makes tests order-dependent.

One thing that helps is analyzing your CI history to see which specific tests are flaky versus consistently slow. If a test fails 30% of runs but passes on retry, that's a different problem than a test that consistently times out.

I built Kleore specifically to surface this: it connects to GitHub, analyzes your CI run history, and ranks tests by failure rate and cost. The free tier shows you the full picture. It might help identify which tests are genuinely flaky vs. just slow.
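The flaky-vs-consistently-failing distinction above can be expressed as a tiny classifier. This is an illustrative sketch only; the function names and categories are invented here, not taken from Kleore or Superset's tooling:

```typescript
// Illustrative sketch: classify a test from its recent CI outcomes.
// Names and categories are hypothetical, not any real tool's API.
type Outcome = "pass" | "fail";

function classify(outcomes: Outcome[]): "stable" | "flaky" | "broken" {
  const fails = outcomes.filter((o) => o === "fail").length;
  if (fails === 0) return "stable";
  // Fails every run: a real regression or a consistently-too-slow test,
  // not flakiness; retries won't help here.
  if (fails === outcomes.length) return "broken";
  // Fails some runs, passes others: order-dependence, shared state, or
  // runner resource contention are the usual suspects.
  return "flaky";
}

// Failure rate is what you'd rank by when deciding what to fix first.
function failureRate(outcomes: Outcome[]): number {
  if (outcomes.length === 0) return 0;
  return outcomes.filter((o) => o === "fail").length / outcomes.length;
}
```

Fed with per-test pass/fail history pulled from your CI provider, a ranking like this separates the tests worth retrying from the ones worth debugging.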