Building the Right Acceptance Tests

Automated acceptance tests are valuable, but they’re also easy to build badly. So to make sure you don’t end up with a frustrating pile of automated headaches, first strive to understand why you’re writing the tests. Different goals often call for different tools and a different approach to instrumenting the tests. Done correctly, system tests are a powerful tool that can help you:

  • Think through how a feature is going to work before you start implementing it.
  • Document how your software is supposed to work. (The best documentation is executable.)
  • Show that your application works as a whole.
  • Explore unknown behavior in third-party services and document your assumptions.
  • Verify that future changes don’t have unintended consequences.
  • Test things you couldn’t test well at other levels.

These are all situations where acceptance tests provide particular value. We use them to document, to learn, and to improve the quality of our test suites. They’re important. They’re useful.

And here are a few tidbits I’ve picked up while working through the tough parts of writing acceptance tests.

Picking System Testing Tools

When possible, pick tools that live close to the rest of your development tools. Being able to run your system and unit tests from within your IDE, or from the same command line using the same Rakefile, means they’re more likely to get run. And if they get run, there’s a better chance they’ll be kept up to date. Acceptance testing tools that run on a desktop computer near Joe’s desk, accessible to only two other people, are far less likely to be used.

Also be mindful of the performance of your chosen tools. Nobody likes a slow test suite: the slower it runs, the less value it provides and the less often it gets run. There will always be performance tradeoffs; prefer tests that pass quickly, even if that means failures take a bit longer to surface (hopefully tests pass more often than they fail).

Pick tools that fit your goals for your system tests. If documentation is a goal, then maybe Cucumber or a similar tool is a good fit, but remember that it’s not a requirement for an acceptance test. If you’re testing a web application, then pick a tool that drives a web browser; but if you’re testing a JSON service, pick something faster.
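
For example, exercising a JSON endpoint directly is dramatically faster than driving a browser. Here’s a minimal sketch assuming NUnit and .NET’s HttpClient; the endpoint URL and the orderId field are hypothetical placeholders for your own service:

    using System.Net;
    using System.Net.Http;
    using NUnit.Framework;

    [TestFixture]
    public class OrdersApiTests
    {
        [Test]
        public void Orders_endpoint_returns_ok_with_json()
        {
            using (var client = new HttpClient())
            {
                // Hit the service directly; no browser required.
                var response = client
                    .GetAsync("http://localhost:8080/api/orders/42")
                    .Result;

                Assert.AreEqual(HttpStatusCode.OK, response.StatusCode);

                var body = response.Content.ReadAsStringAsync().Result;
                StringAssert.Contains("\"orderId\"", body);
            }
        }
    }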

Asking the Right Questions

Should I stub third-party services?

Should you use the real third-party services your software integrates with when running system tests, or should you stub them out instead? It depends. The primary deciding factors for my recent projects have been:

  • Can the real service be instrumented well enough to set up the test cases you want to test?
  • Can the service be kept active for future developers to test against? How much will it cost?
  • Is it feasible to build a stub version of the service?
  • Do you understand how the service works well enough to stub the behavior? Or are you still learning how it works?

I have stubbed out geocoding services and other relatively simple APIs that were well documented. But I’ve also used real Outlook SOAP services in acceptance tests because we were still learning some details of how they worked and wanted to explore using our system tests to document the behavior along the way. Both worked out very well.
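
To make the stubbing option concrete, here’s a minimal sketch of what a stubbed geocoder might look like; the IGeocoder interface, Coordinates type, and coordinates themselves are hypothetical stand-ins for your own abstraction:

    // The stub returns canned, predictable results so acceptance tests
    // don't depend on a live third-party API.
    public interface IGeocoder
    {
        Coordinates Geocode(string address);
    }

    public class Coordinates
    {
        public double Latitude { get; set; }
        public double Longitude { get; set; }
    }

    public class StubGeocoder : IGeocoder
    {
        public Coordinates Geocode(string address)
        {
            // Every address resolves to the same known point, so tests
            // can assert against it without network access.
            return new Coordinates { Latitude = 44.97, Longitude = -93.26 };
        }
    }

Wire a stub like this in through whatever dependency injection you already use; you can still keep a separate configuration that hits the real service for an occasional smoke test.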

How many cases should I test?

Again, there are a number of factors that lead to an appropriate answer. The goal is to balance the rigor of testing of different types (unit tests, acceptance or system tests, exploratory testing, manual testing) with the criticality of each feature.

Recently, my pattern has been to start with a happy-path system test to drive initial development of a feature, implement the feature top-down driven by unit tests, and then revisit other important cases that might deserve acceptance tests. My unit tests stretch to cover the less-likely edge cases in detail, so I don’t feel pressure to cover every last case in acceptance tests.

The more critical a feature is, the more attention you’ll want to give it at every level of testing.
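
To make that happy-path starting point concrete, here’s a minimal sketch using the Selenium WebDriver C# bindings and NUnit; the URL, element IDs, and confirmation text are hypothetical placeholders for your own application:

    using NUnit.Framework;
    using OpenQA.Selenium;
    using OpenQA.Selenium.Firefox;

    [TestFixture]
    public class CreateOrderHappyPath
    {
        [Test]
        public void User_can_submit_a_new_order()
        {
            using (IWebDriver driver = new FirefoxDriver())
            {
                // Walk the single most important path through the feature.
                driver.Navigate().GoToUrl("http://localhost:8080/orders/new");
                driver.FindElement(By.Id("ProductName")).SendKeys("Widget");
                driver.FindElement(By.Id("Quantity")).SendKeys("3");
                driver.FindElement(By.Id("submit-order")).Click();

                Assert.IsTrue(driver.PageSource.Contains("Order received"));
            }
        }
    }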

Where is the greatest risk in the application?

If you can answer this question, then you know where testing of all kinds is especially important. Spend more time and effort on these places.

My Usual Pattern

This is a brief description of my development pattern on my current ASP.NET MVC 3 web application.

  1. Write a happy-path acceptance test for the story I’m about to start working on.
  2. Start development at the controller.
    • Write a unit test for the action that should be invoked.
    • Use mocks to set up the context and verify the result (see the sketch after this list).
    • Create new classes and methods for the controller action to use as needed — unimplemented ones throw a NotImplementedException.
    • Implement the controller action.
  3. Run the acceptance test and see where the first NotImplementedException comes from.
  4. Write tests and implement the method that threw the NotImplementedException.
  5. Repeat until the acceptance test passes.
  6. Revisit other interesting cases for acceptance tests, and repeat steps 2 through 5 as necessary.
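
Here’s a minimal sketch of what steps 2 through 4 might look like in code, assuming NUnit and Moq; the OrdersController, IOrderService, and Order types are hypothetical placeholders:

    using System;
    using System.Web.Mvc;
    using Moq;
    using NUnit.Framework;

    public class Order
    {
        public int Id { get; set; }
    }

    public interface IOrderService
    {
        Order GetOrder(int id);
    }

    public class OrderService : IOrderService
    {
        public Order GetOrder(int id)
        {
            // Step 2: leave new collaborators unimplemented so the
            // acceptance test (step 3) points at the next piece of work.
            throw new NotImplementedException();
        }
    }

    public class OrdersController : Controller
    {
        private readonly IOrderService _orders;

        public OrdersController(IOrderService orders)
        {
            _orders = orders;
        }

        public ViewResult Details(int id)
        {
            return View(_orders.GetOrder(id));
        }
    }

    [TestFixture]
    public class OrdersControllerTests
    {
        [Test]
        public void Details_returns_view_with_order_from_service()
        {
            // Use a mock to set up the context...
            var service = new Mock<IOrderService>();
            var order = new Order { Id = 42 };
            service.Setup(s => s.GetOrder(42)).Returns(order);

            var controller = new OrdersController(service.Object);

            // ...and verify the result.
            var result = controller.Details(42);
            Assert.AreSame(order, result.ViewData.Model);
        }
    }

Running the acceptance test at this point fails at OrderService.GetOrder, which tells you exactly which method to test-drive next.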

I hope this helps you improve your own acceptance testing.