Let’s say you’ve been given ownership of a software component, and you’ve found that the quality is not where you need it to be. The defect backlog is a mile long, and every bug fix or new feature your team implements creates regressions.
What do you do? Well, you ask your development team to write a test plan to flush out remaining defects and ensure that you don’t run into more regressions. Problem solved! That is, until your development team comes back with those terrible words: “This code is untestable.”
What does that even mean? Surely all code is testable? You can run it manually and validate that it works. Or you could write an automated suite that brings up the application, walks through your user stories, and makes sure that everything checks out. So what’s the problem?
Manual testing is endless.
Well, the problem with manual testing is fairly obvious — it’s expensive. People today expect feature-rich software, and having someone manually validate that every operation behaves correctly under all possible scenarios is a huge undertaking. Add in that any testing performed loses its value as the software is modified, and the cost explodes. Pair those realities with a poor codebase prone to regressions, and you have a perfect storm.
Most likely, your team won't be able to manually validate all functionality for each release, you'll be reluctant to fix some defects (since you won't be sure what you might break in the process), and you'll end up shipping software without particularly high confidence in its quality.
Automated integration testing is time-consuming and inexact.
So how about a nice automated integration suite? I’ll start by saying that full, end-to-end integration suites are very important and should be a part of your test plan. However, they are also a prohibitively expensive way to exhaustively test software.
Full, black-box integration tests tend to run very slowly. Early on, you may not see a problem: the tests are exercising the full software stack as the user would, so everyone is happy! Soon, however, you'll find that the tests are taking too long to run, so you'll run them only nightly. Before long, even that will prove too slow, so your developers will start to write larger tests, or bring the system up once and run multiple tests against it to save on the cost of starting up and shutting down the system.
Eventually you'll run a subset of the tests for each build, and run the full suite weekly, or only prior to release. And then the larger problems will set in: test failures for the latest build may be due to a week's worth of code changes, and issues become expensive to debug. Tests that haven't run in several weeks will have diverged from the application under test, and they require a lot of maintenance just when the team is trying to release a build! It's not uncommon to start seeing bleed-over between tests, where an innocent change by one developer causes failures that require significant triage time from the organization. Your team will start to think and say very nasty things about the test suite as it becomes an anchor on the team rather than something that boosts productivity.
What I’m getting at is that automated integration tests are a great way to test integration issues, but they’re a very expensive way to test your core application logic.
Unit testing is the answer.
So what is testable code? When a developer says that code is “testable,” they mean that it’s written in a way that allows them to efficiently write unit tests.
Unit tests exercise individual classes and methods in isolation, so they are cheap to write, they run very quickly, and their failures are generally very easy to understand. Unit test suites are fast enough to be run continually as a developer modifies code, and they can be run as part of every server build, even for the largest components.
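To make that concrete, here's a minimal sketch of what a unit test looks like. The PriceCalculator class is hypothetical, invented purely for this illustration and defined inline so the example is self-contained:

```python
# test_price_calculator.py -- a minimal unit-test sketch in pytest style.
# PriceCalculator is a hypothetical class; in a real codebase it would
# live in production code, not next to its tests.
import pytest


class PriceCalculator:
    """Applies a percentage discount to an order subtotal."""

    def __init__(self, discount_rate: float):
        if not 0.0 <= discount_rate <= 1.0:
            raise ValueError("discount_rate must be between 0 and 1")
        self.discount_rate = discount_rate

    def total(self, subtotal: float) -> float:
        return round(subtotal * (1.0 - self.discount_rate), 2)


def test_total_applies_discount():
    # Exercises one class in isolation: no database, no network, no UI.
    calc = PriceCalculator(discount_rate=0.10)
    assert calc.total(100.00) == 90.00


def test_invalid_discount_rate_is_rejected():
    # A failure here points directly at the constructor's validation logic.
    with pytest.raises(ValueError):
        PriceCalculator(discount_rate=1.5)
```

Notice what isn't here: no application startup, no environment setup, no walking through screens. That's why a suite of thousands of these can run in seconds.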
One last note: when you have confidence in your unit test suite, you can safely limit your integration test layer to focus on integration issues, keep it lightweight, and run it regularly. Good unit tests keep your integration tests healthy.
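One lightweight way to keep the layers separate, assuming pytest is your runner (any test framework with tagging works similarly), is to mark the slow full-stack tests so the fast suite can run on every build:

```python
# A sketch of separating test layers with a pytest marker. Register the
# marker in pytest.ini ("markers = integration: full-stack tests") so
# pytest doesn't warn about an unknown mark.
import pytest


@pytest.mark.integration
def test_checkout_end_to_end():
    """Brings up the whole stack, so it's slow; run on a schedule, not per-commit."""
    ...  # start the application, walk through a user story, assert on the results


# Typical invocations:
#   pytest -m "not integration"   # fast unit suite, every build
#   pytest -m integration         # full-stack suite, nightly or pre-release
```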
What should you do with untestable code?
So can you just add unit tests to your "untestable code"? Maybe, but maybe not. If your code also has poor architecture (functionality in the system is badly organized, it's tightly coupled to the rest of the system, it has side effects that can't be validated, its role is poorly defined, and so on), it can be extremely difficult to unit test.
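To give a quick taste of what tight coupling looks like in practice (a hypothetical sketch with invented classes; my next post goes deeper), compare a method that constructs its own dependencies with one that has them injected:

```python
# Hypothetical sketch: the same calculation written two ways.
import sqlite3


# Hard to unit test: the method builds its own database connection, so a
# test of the arithmetic has to stand up a real database first.
class CoupledOrderService:
    def order_total(self, order_id: int) -> float:
        conn = sqlite3.connect("production.db")  # hidden, hard-wired dependency
        rows = conn.execute(
            "SELECT price, qty FROM line_items WHERE order_id = ?", (order_id,)
        ).fetchall()
        return sum(price * qty for price, qty in rows)


# Easier to unit test: the data source is injected, so a test can hand in
# a trivial fake and validate the logic with no database at all.
class OrderService:
    def __init__(self, line_items_source):
        self._source = line_items_source  # anything with .line_items(order_id)

    def order_total(self, order_id: int) -> float:
        return sum(price * qty for price, qty in self._source.line_items(order_id))


class FakeSource:
    def line_items(self, order_id):
        return [(10.0, 2), (5.0, 1)]  # two line items: 20.0 + 5.0


def test_order_total():
    assert OrderService(FakeSource()).order_total(order_id=42) == 25.0
```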
When you have untestable, poorly-constructed code, validating correctness at runtime is only one of your problems. Untestable code will be difficult for new team members to learn, expensive to maintain, and expensive to modify. As a manager, hearing that you own untestable code should be a huge red flag. You need to think about sunsetting the software if possible or coming up with a plan to address the structural problems in the codebase.
Untestable code, allowed to remain untestable, grows more and more expensive for your team over time, and it is likely to cost your organization as a whole when defects leak to your customers.
My next post will focus on the technical details of what makes code untestable, but the simple answer is “poor architecture.”