Experienced Devs
A community for discussion amongst professional software developers.

End-to-end and smoke tests give a really valuable angle on what the app is doing and can warn you about failures before your users hit them. However, because they work against a live app and a live database over a live network, they can introduce a lot of flakiness: beyond changes to the app itself, different data in the environment or other transient issues can cause a smoke test to fail.

How do you handle the inherent flakiness of testing against a live app?

When do you run smokes? On every feature branch? Pre-prod? Prod only?

Who fixes the issues that the smokes find?

[–] [email protected] 1 points 1 year ago

You run E2E tests before each merge. So you don't merge very often?

How about running integration tests before each merge instead of a full-fledged E2E run, mocking out external dependencies (other services) during the test, and then doing E2E testing on a schedule, like nightly?
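
Something like this is what I have in mind (just a sketch using pytest conventions and the standard library's unittest.mock; `orders.service`, `create_order`, and `InventoryClient` are made-up names standing in for your own service code and its external dependency):

```python
# A sketch of the pre-merge integration test described above. The mocked
# dependency is an external service owned by another team.
from unittest.mock import patch

from orders.service import create_order  # hypothetical module under test


def test_create_order_reserves_stock():
    # Replace the call to the external inventory service with a canned
    # response, so the test exercises our logic rather than the network.
    with patch("orders.service.InventoryClient.reserve") as reserve:
        reserve.return_value = {"reserved": True}

        order = create_order(sku="ABC-123", quantity=2)

        assert order.status == "confirmed"
        reserve.assert_called_once_with(sku="ABC-123", quantity=2)
```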

I prefer it this way because mocking out external dependencies cuts out network instability and bugginess from the dependencies, so we can merge faster. I agree that the test scenarios overlap, and if your E2E suite is very stable it's probably not worth it, but unfortunately it's not so stable in my environment.
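
For splitting the two suites, one way to wire it up (a sketch, assuming pytest; the "e2e" marker name is my own convention, nothing built in):

```python
# conftest.py: register the marker so pytest doesn't warn about it.
def pytest_configure(config):
    config.addinivalue_line("markers", "e2e: tests that talk to live services")

# test_checkout.py: tag the tests that need the live environment.
import pytest

@pytest.mark.e2e
def test_checkout_against_live_environment():
    ...  # drives the real deployment; only runs on the nightly schedule
```

The merge pipeline then runs `pytest -m "not e2e"` and the nightly job runs `pytest -m e2e`.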

[–] [email protected] 2 points 1 year ago

Luckily, our e2e tests are pretty stable. And unfortunately we are not given the time to write integration tests as you describe. The nice side effect would be that, with those mocks in place, we could also load test single services instead of the whole product.
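
To illustrate the load-testing idea (a rough sketch; the URL, worker count, and request count are placeholders, not values from our setup):

```python
# Load test one service in isolation once its dependencies are mocked out:
# hammer a single endpoint concurrently and report latency percentiles.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

SERVICE_URL = "http://localhost:8080/health"  # the one service under test

def timed_request(_):
    start = time.perf_counter()
    with urlopen(SERVICE_URL) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(timed_request, range(1000)))

print(f"p50={latencies[len(latencies) // 2] * 1000:.1f}ms "
      f"p95={latencies[int(len(latencies) * 0.95)] * 1000:.1f}ms")
```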

We merge multiple times a day and run only the e2e tests we think are relevant. Of course, this is not optimal, and it is not too rare that one of the teams merges a regression; some teams are more talented at that than others.

You see, we have issues, and we realize we have them. Our management just doesn't think they are important enough to justify spending time on writing integration tests. Money and developer time are two of the reasons, but the lack of feature documentation, the lack of experts for parts of the codebase (some have already left for other employers), and the amount of spaghetti code and infrastructure we have are other important ones.

[–] [email protected] 2 points 1 year ago

Reading the third paragraph, I see myself 😄. Glad that you and the team managed to successfully add another layer of testing.