At Atomic Object, TDD (test-driven development) and automated tests are part of the culture. Be it unit tests, integration tests, or UI tests, there’s almost always a test. Almost.
What if you don’t have the budget for tests? Or say the app is an experiment, and tests are impractical when its lifespan is TBD. I will always argue that testing is a worthwhile investment, but sometimes it’s not possible. Here are some tips my team picked up working in a multi-frontend environment with no tests.
Add and update manual testing scripts regularly.
Manual testing scripts are written-out user journeys through the app or apps. Basically, how does a user interact with the app? What are the steps they take to accomplish their goal?
Let’s take a library app as an example:
Library App User flow
Here’s how you’d search for a book and borrow it.
- Open the Library app, and authenticate.
- Click search, and enter `Watership Down`.
- Click on the first result, and navigate to the book details page.
- The user should see the book details page, and the title should be `Watership Down`.
- Scroll to the bottom of the page, and see the “Borrow” button is visible and enabled.
- Click the “Borrow” button, and see the confirmation modal that displays pick up location.
Normally, automated frontend tests would bear the burden of both describing this behavior and verifying that it hasn’t regressed due to changes a developer made.
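One nice side effect of writing scripts this way: if budget for automation ever materializes, a script translates almost line-for-line into an automated UI test. As a sketch, here’s the flow above expressed as a Maestro flow (Maestro is just one example of an E2E tool that works with Expo apps; the app ID and on-screen labels are assumptions):

```yaml
# Hypothetical Maestro flow mirroring the manual script above.
appId: com.example.libraryapp  # assumed bundle identifier
---
- launchApp
- tapOn: "Search"
- inputText: "Watership Down"
- tapOn:
    text: "Watership Down"
    index: 0                          # first search result
- assertVisible: "Watership Down"     # book details page shows the title
- scrollUntilVisible:
    element:
      text: "Borrow"
- tapOn: "Borrow"
- assertVisible: "Pick up location"   # confirmation modal
```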
Not having automated tests means we have to do this manually. For our purposes, we added them to the git repository as a Markdown file. This serves two purposes: it allows us to keep track and update the expected behavior of the app’s different workflows, and puts it in a low-resistance spot to be referenced by developers as they make changes to the app.
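Because the scripts live in the repo as Markdown, you can even wrap them in a tiny helper that walks a tester through each step. Here’s a minimal Python sketch of that idea (the `manual-tests/` path and file name are hypothetical, not from our actual repo):

```python
import re
import sys

def parse_steps(markdown: str) -> list[str]:
    """Extract `- ` bullet lines from a manual-testing script as steps."""
    steps = []
    for line in markdown.splitlines():
        match = re.match(r"^\s*-\s+(.*)", line)
        if match:
            steps.append(match.group(1).strip())
    return steps

def run_checklist(markdown: str) -> None:
    """Prompt the tester to confirm each step; report any failures at the end."""
    failures = []
    for number, step in enumerate(parse_steps(markdown), start=1):
        answer = input(f"{number}. {step}  pass? [y/n] ")
        if answer.strip().lower() != "y":
            failures.append(step)
    if failures:
        print("Failed steps:")
        for step in failures:
            print(f"  - {step}")
        sys.exit(1)
    print("All steps passed.")

# Usage sketch: load a script from the repo and walk through it.
#   with open("manual-tests/borrow-a-book.md") as f:
#       run_checklist(f.read())
```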
Pain Points: It’s easy both to forget to update the manual testing scripts and to forget to run through them. Plus, manual testing is error-prone—it’s easy to miss something.
Commit to a review process.
Not having automated tests on the frontend also means the team needs to come to a consensus on how to keep the app working as expected. Regressions can be difficult to track down, especially if no one remembers testing a certain feature. You end up playing the when-did-this-break game, which bloats the amount of time it takes to find and fix a bug.
Instead, we started the habit of reviewing the main manual testing scripts as part of the PR review process and/or after a merge. Our mobile apps are Expo managed, so creating a simulator or preview build was relatively painless. This allowed us to easily say, “Please use this build to quickly test our feature/bug fix, and make sure basic functionality remains the same.”
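For context on why those builds were painless: with EAS Build, a short `eas.json` profile is all it takes to produce shareable internal builds. A generic sketch (the profile name and settings are the common documented defaults, not our actual config):

```json
{
  "build": {
    "preview": {
      "distribution": "internal"
    }
  }
}
```

Then `eas build --profile preview --platform ios` (or `--platform android`) produces a build teammates can install directly and run the scripts against.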
Individually, I made a habit of running through the main testing scripts before starting and after completing a new feature branch or bug fix—all in an effort to establish a baseline of expected behavior (e.g., what’s working and what’s not).
Does this bloat the amount of time it takes to get the feature work or bug fix done? Yeah, but without the peace of mind I would usually get from having automated tests, I needed to find a practical alternative.
Pain Points: Running through all the manual testing scripts feels tedious, and I’m not one for monotony. I said “main” manual testing scripts because some workflows were minor, wholly disconnected from the work I was completing at the time, or under the purview of another team. Manually testing the entirety of the app before and after feature work or bug fixes can be impractical when it’s being worked on by multiple teams.
Leverage your Tester, a.k.a. Bug Magnet.
All hail invaluable testers! The overhead I described above? They’re great at sharing the cognitive load—they know how the app is supposed to work, and when it broke. There are times they know the app functionality better than the developers do.
In our situation, we have multiple web and mobile apps, so it’s not always feasible for developers to run through all the manual testing scripts of each and every app. Working with multiple teams adds the challenge and complexity of tracking changes.
Having a Master Tester, a.k.a. Bug Magnet? They can quickly find regressions and unexpected behavior in all the apps.
We made a habit of producing mobile app preview builds or directing our tester to `please go test this functionality on `. Between our manual testing during feature work and the PR review process, and the tester’s periodic checks, we were able to catch many (not all) regressions and bugs before they went out to production.
You should still have tests.
Having no automated frontend tests isn’t my preferred way to work — it’s a reality to contend with. These strategies have made the process of developing without UI tests much more maintainable and helped us move more confidently. But it’s still a slow, painful, and error-prone process.
Tests are far more valuable. They are living documentation of the app’s expected behavior, they reduce the amount of error-prone manual testing, and they prevent painful regressions. Adding them, even belatedly, allows developers to move more swiftly and confidently.
The monotony of running manual scripts can also be lessened by thinking up other tests as you go (e.g., “Hmm, I should check that the screens still look right when the phone’s text size is set to extra large”). And some people take pride in knowing an app so well that they can blast through the scripts quickly, then get on to exploring the ideas that came up.